RT Journal Article
SR Electronic
T1 Learning in reverse: Dopamine errors drive excitatory and inhibitory components of backward conditioning in an outcome-specific manner
JF bioRxiv
FD Cold Spring Harbor Laboratory
SP 2022.01.10.475719
DO 10.1101/2022.01.10.475719
A1 Benjamin M. Seitz
A1 Ivy B. Hoang
A1 Aaron P. Blaisdell
A1 Melissa J. Sharpe
YR 2022
UL http://biorxiv.org/content/early/2022/01/12/2022.01.10.475719.abstract
AB For over two decades, midbrain dopamine was considered synonymous with the prediction error in temporal-difference reinforcement learning. Central to this proposal is the notion that reward-predictive stimuli become endowed with the scalar value of predicted rewards. When these cues are subsequently encountered, their predictive value is compared to the value of the actual reward received, allowing for the calculation of prediction errors. Phasic firing of dopamine neurons was proposed to reflect this computation, facilitating the backpropagation of value from the predicted reward to the reward-predictive stimulus and thus reducing future prediction errors. There are two critical assumptions of this proposal: 1) that dopamine errors can only facilitate learning about scalar value and not more complex features of predicted rewards, and 2) that the dopamine signal can only be involved in anticipatory learning in which cues or actions precede rewards. Recent work has challenged the first assumption, demonstrating that phasic dopamine signals across species are involved in learning about more complex features of predicted outcomes, in a manner that transcends this value computation. Here, we tested the validity of the second assumption. Specifically, we examined whether phasic midbrain dopamine activity would be necessary for backward conditioning, in which a neutral cue reliably follows a rewarding outcome. Using a specific Pavlovian-to-Instrumental Transfer (PIT) procedure, we show that rats learn both excitatory and inhibitory components of a backward association, and that this association entails knowledge of the specific identity of the reward and cue. We demonstrate that brief optogenetic inhibition of VTADA neurons timed to the transition between the reward and cue reduces both of these components of backward conditioning. These findings suggest that VTADA neurons are capable of facilitating associations between contiguously occurring events, regardless of the content of those events. We conclude that these data are in line with suggestions that the VTADA error acts as a universal teaching signal. This may provide insight into why dopamine function has been implicated in a myriad of psychological disorders that are characterized by very distinct reinforcement-learning deficits.
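
Note: the prediction-error computation the abstract describes corresponds to the standard temporal-difference (TD) error of reinforcement learning; the sketch below is the textbook formulation, not an equation taken from this preprint, and the symbols (state value V, learning rate alpha, discount gamma) are standard notation rather than terms used in the record above.

\[
\delta_t = r_{t+1} + \gamma V(s_{t+1}) - V(s_t),
\qquad
V(s_t) \leftarrow V(s_t) + \alpha \, \delta_t
\]

Here \(\delta_t\) is the prediction error that phasic dopamine firing was proposed to report: the received reward \(r_{t+1}\) plus the discounted value of the next state is compared against the value predicted by the current cue, and repeated updates propagate value backward from the reward to the reward-predictive stimulus, shrinking future errors.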