Learning to represent reward structure: a key to adapting to complex environments

Neurosci Res. 2012 Dec;74(3-4):177-83. doi: 10.1016/j.neures.2012.09.007. Epub 2012 Oct 13.

Abstract

Predicting outcomes is a critical ability of humans and animals. The dopamine reward prediction error hypothesis, the driving force behind recent progress in neural "value-based" decision making, states that dopamine activity encodes the learning signal for reward prediction, namely the difference between the actual and the predicted reward, called the reward prediction error. However, this hypothesis and its underlying assumptions treat the prediction and its error as reactively triggered by momentary environmental events. Reviewing these assumptions and some of the latest findings, we suggest that the internal state representation is learned so as to reflect the reward structure of the environment, and we propose a new hypothesis, the dopamine reward structural learning hypothesis, in which dopamine activity encodes multiplex signals for learning to represent reward structure in the internal state, leading to better reward prediction.
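
For readers unfamiliar with the formulation the abstract refers to, the reward prediction error is conventionally cast as the temporal-difference (TD) error of reinforcement learning. The sketch below is a minimal tabular TD(0) illustration of that standard textbook formulation, not the authors' proposed model; the toy chain environment and the parameter values (alpha, gamma) are assumptions chosen only for illustration.

```python
import numpy as np

# Minimal sketch of the standard TD(0) formulation often used to model
# the dopamine reward prediction error (RPE). The environment is a toy
# chain of states with a single reward at the end; all parameters are
# illustrative assumptions, not values from the paper.

n_states = 5            # states 0..4; reward delivered after the last state
alpha = 0.1             # learning rate (assumed)
gamma = 0.9             # temporal discount factor (assumed)
V = np.zeros(n_states)  # learned value, i.e., the reward prediction per state

for episode in range(500):
    for s in range(n_states):
        s_next = s + 1
        r = 1.0 if s_next == n_states else 0.0          # reward only at the end
        v_next = V[s_next] if s_next < n_states else 0.0
        # Reward prediction error: actual outcome (immediate reward plus
        # discounted future prediction) minus the current prediction V[s].
        delta = r + gamma * v_next - V[s]
        V[s] += alpha * delta                           # update prediction toward outcome

print(np.round(V, 3))   # each value converges toward gamma**(n_states - 1 - s)
```

In this conventional picture, `delta` is the quantity the classic hypothesis maps onto phasic dopamine activity; the abstract's critique is that such a scheme is reactively triggered by momentary events and presupposes a fixed state representation, which motivates the proposed structural learning account.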

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Animals
  • Brain / physiology*
  • Dopamine / metabolism*
  • Humans
  • Learning / physiology*
  • Models, Neurological*
  • Reinforcement, Psychology*
  • Reward*

Substances

  • Dopamine