Richard S. Sutton
Springer Science & Business Media, May 31, 1992 - Computers - 172 pages
Reinforcement learning is the learning of a mapping from situations to actions so as to maximize a scalar reward or reinforcement signal. The learner is not told which action to take, as in most forms of machine learning, but instead must discover which actions yield the highest reward by trying them. In the most interesting and challenging cases, actions may affect not only the immediate reward, but also the next situation, and through that all subsequent rewards. These two characteristics -- trial-and-error search and delayed reward -- are the most important distinguishing features of reinforcement learning.
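To make these two features concrete, here is a minimal, illustrative sketch (not taken from the book): tabular Q-learning on a hypothetical five-state corridor in which only the final transition pays off. Actions are chosen by trial and error, and the single delayed reward is gradually propagated back through the value estimates; the environment, parameter values, and variable names are all assumptions for illustration.

```python
import random

N_STATES = 5            # states 0..4; state 4 is terminal (hypothetical corridor)
ACTIONS = (-1, +1)      # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Action-value estimates, initialised to zero.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Toy environment: the only reward arrives at the far end of the corridor."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

for episode in range(500):
    state, done = 0, False
    while not done:
        # Trial and error: mostly exploit current estimates, occasionally explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # Bootstrapped update: the delayed reward flows back to earlier states.
        target = reward if done else reward + GAMMA * max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        state = next_state

# Greedy policy for the non-terminal states (expected: always move right).
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```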
Reinforcement learning is both a new and a very old topic in AI. The term appears to have been coined by Minsky (1961), and independently in control theory by Waltz and Fu (1965). The earliest machine learning research now viewed as directly relevant was Samuel's (1959) checkers player, which used temporal-difference learning to manage delayed reward much as it is used today. Of course, learning and reinforcement have been studied in psychology for almost a century, and that work has had a very strong impact on the AI/engineering work. One could in fact consider all of reinforcement learning to be simply the reverse engineering of certain psychological learning processes (e.g. operant conditioning and secondary reinforcement).
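The temporal-difference idea mentioned above can be summarised in its modern tabular TD(0) form; the sketch below is an assumption-laden illustration (not Samuel's actual program): each state's value estimate is nudged toward the reward received plus the estimate of the state that follows, so credit for a delayed reward flows backward one step per visit.

```python
# Tabular TD(0) value update; step size, discount, and the toy episode are
# illustrative assumptions, not taken from the book.
def td0_update(V, state, reward, next_state, alpha=0.1, gamma=1.0, terminal=False):
    target = reward + (0.0 if terminal else gamma * V[next_state])
    V[state] += alpha * (target - V[state])

# Example: a three-state episode ending with a single delayed reward of +1.
V = {"s0": 0.0, "s1": 0.0, "s2": 0.0}
trajectory = [("s0", 0.0, "s1", False), ("s1", 0.0, "s2", False), ("s2", 1.0, None, True)]
for _ in range(50):
    for state, reward, next_state, terminal in trajectory:
        td0_update(V, state, reward, next_state, terminal=terminal)
print(V)  # all values approach 1.0 as the delayed reward propagates back
```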
Reinforcement Learning is an edited volume of original research, comprising seven invited contributions by leading researchers.
Common terms and phrases
action model action representation AHCON AHCON-M approach architecture backgammon backpropagation Barto behavior characteristic eligibility composite task computed configuration connectionist convergence CQ-L decomposition defined deterministic distribution dynamic programming elemental tasks encode environmental reinforcement episode equation error estimate evaluation function expected value experience replay Figure frameworks gating module Gaussian unit goal gradient heuristic hidden units learning agent learning rate learning system Lemma lookup table Machine Learning Markov chain Markov process move number of hidden obstacles optimal policy outcome output units parameter path finding problem path-finder payoff performance positions prediction probability proof Q-learning Q-module QCON QCON-M qo-path random real process REINFORCE algorithm reinforcement baseline reinforcement learning reinforcement signal relaxation planning reward robot path finding Section semilinear unit sequence simulation situations step strategy supervised learning Sutton TD learning TD(λ) temporal difference temporal difference learning terminal value theorem utility network vector Watkins weights