Stimulus representation and the timing of reward-prediction errors in models of the dopamine system

Neural Comput. 2008 Dec;20(12):3034-54. doi: 10.1162/neco.2008.11-07-654.

Abstract

The phasic firing of dopamine neurons has been theorized to encode a reward-prediction error, as formalized by the temporal-difference (TD) algorithm in reinforcement learning. Most TD models of dopamine have assumed a stimulus representation, known as the complete serial compound, in which each moment in a trial is distinctly represented. We introduce a more realistic temporal stimulus representation for the TD model. In our model, all external stimuli, including rewards, spawn a series of internal microstimuli, which grow weaker and more diffuse over time. These microstimuli are used by the TD learning algorithm to generate predictions of future reward. This new stimulus representation injects temporal generalization into the TD model and enhances the correspondence between model and data in several experiments, including those in which rewards are omitted or received early. The improved fit derives mostly from the absence of large negative errors in the new model, suggesting that dopamine alone can encode the full range of TD errors in these situations.
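
The abstract does not specify the functional form of the microstimuli, so the following Python sketch is only illustrative: it assumes an exponentially decaying memory trace for each stimulus and Gaussian basis functions over the trace height, combined with TD(lambda) and linear function approximation. All parameter values (number of microstimuli, decay rate, basis width, learning rate, trial timing) are hypothetical placeholders, not the paper's.

import numpy as np

def microstimuli(t, n=10, decay=0.985, sigma=0.08):
    # Height of a stimulus's decaying memory trace, t steps after onset
    # (assumed exponential decay).
    y = decay ** t
    # Gaussian basis functions over trace height: as the trace decays,
    # successive microstimuli peak later, are weaker (scaled by y), and
    # cover a wider span of time steps as the trace flattens -- the
    # "weaker and more diffuse" profile described in the abstract.
    centers = np.linspace(1.0 / n, 1.0, n)
    return y * np.exp(-(y - centers) ** 2 / (2 * sigma ** 2))

def run_trial(w, T=60, cs_time=5, reward_time=45, reward=1.0,
              alpha=0.05, gamma=0.98, lam=0.95, n=10):
    # One conditioning trial of TD(lambda) with linear function
    # approximation over the microstimulus features; returns the
    # per-step TD errors, the model's analogue of phasic dopamine.
    e = np.zeros_like(w)                 # eligibility traces
    v_prev, x_prev = 0.0, np.zeros_like(w)
    deltas = []
    for t in range(T):
        # Both the cue and the reward spawn their own microstimuli.
        x_cs = microstimuli(t - cs_time, n) if t >= cs_time else np.zeros(n)
        x_us = microstimuli(t - reward_time, n) if t >= reward_time else np.zeros(n)
        x = np.concatenate([x_cs, x_us])
        r = reward if t == reward_time else 0.0
        v = w @ x
        delta = r + gamma * v - v_prev   # TD (reward-prediction) error
        e = gamma * lam * e + x_prev
        w += alpha * delta * e           # updates w in place
        deltas.append(delta)
        v_prev, x_prev = v, x
    return deltas

# Usage: train on rewarded trials, then probe an omission trial. With a
# microstimulus representation, the negative error around the usual
# reward time is small and temporally smeared rather than one large dip.
w = np.zeros(20)
for _ in range(500):
    run_trial(w)
omission_errors = run_trial(w.copy(), reward=0.0)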

Publication types

  • Letter
  • Research Support, Non-U.S. Gov't

MeSH terms

  • Action Potentials / physiology
  • Algorithms
  • Animals
  • Cues
  • Dopamine / metabolism*
  • Humans
  • Models, Neurological*
  • Neurons / physiology*
  • Reaction Time / physiology*
  • Reward*
  • Time Factors

Substances

  • Dopamine