RT Journal Article
SR Electronic
T1 The successor representation in human reinforcement learning
JF bioRxiv
FD Cold Spring Harbor Laboratory
SP 083824
DO 10.1101/083824
A1 I Momennejad
A1 EM Russek
A1 JH Cheong
A1 MM Botvinick
A1 ND Daw
A1 SJ Gershman
YR 2017
UL http://biorxiv.org/content/early/2017/07/04/083824.abstract
AB Theories of reward learning in neuroscience have focused on two families of algorithms, thought to capture deliberative vs. habitual choice. “Model-based” algorithms compute the value of candidate actions from scratch, whereas “model-free” algorithms make choice more efficient but less flexible by storing pre-computed action values. We examine an intermediate algorithmic family, the successor representation (SR), which balances flexibility and efficiency by storing partially computed action values: predictions about future events. These pre-computation strategies differ in how they update their choices following changes in a task. The SR’s reliance on stored predictions about future states predicts a unique signature: insensitivity to changes in the task’s sequence of events, but flexible adjustment following changes to rewards. We provide evidence for such differential sensitivity in two behavioral studies with humans. These results suggest that the SR is a computational substrate for semi-flexible choice in humans, introducing a subtler, more cognitive notion of habit.
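The abstract's distinction between reward-change flexibility and transition-change insensitivity can be made concrete with a minimal sketch of the SR in a toy Markov chain. This is an illustrative example only, not the paper's actual task or code: the chain, discount factor, and reward values are all assumed here. The SR caches a matrix of discounted expected future state occupancies, so values recompute instantly when rewards change, but the cached matrix goes stale when transitions change.

```python
import numpy as np

# Minimal SR sketch on a 3-state deterministic chain: 0 -> 1 -> 2,
# with state 2 absorbing. All quantities here are illustrative.
gamma = 0.9
T = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 1]], dtype=float)  # transition matrix

# Successor matrix: M = (I - gamma * T)^-1, i.e. discounted expected
# future occupancies of each state from each starting state.
M = np.linalg.inv(np.eye(3) - gamma * T)

R = np.array([0.0, 0.0, 1.0])  # reward only at the terminal state
V = M @ R                      # state values from cached predictions

# Reward revaluation: a new reward vector yields updated values
# immediately, with no relearning of M (the SR's flexible side).
R_new = np.array([0.0, 0.0, 5.0])
V_new = M @ R_new

# Transition revaluation: if T itself changes, M is stale and must be
# relearned, which is the signature of insensitivity the paper tests.
```

The design point is that the SR factors value into a predictive map (`M`) and a reward vector (`R`), so only one factor is cached: revaluing rewards is cheap, revaluing transitions is not.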