Summary
In novel situations, behavior necessarily reduces to latent biases. How these biases interact with new experiences to enable subsequent behavior remains poorly understood. We exposed rats to a family of spatial alternation contingencies and developed a series of reinforcement learning agents to describe the behavior. The performance of these agents shows that accurately describing the learning of individual animals requires accounting for their individual dynamic preferences as well as general, shared cognitive processes. Agents that include only memory of past choices do not account for the behavior. Adding an explicit representation of biases allows agents to perform the task as rapidly as the rats, to accurately predict critical facets of behavior on which they were not fit, and to capture individual differences quantitatively. Our results illustrate the value of making explicit models of learning and highlight the importance of considering the initial state of each animal in understanding behavior.
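The modeling approach described above (value learning from past choices combined with an explicit, persistent bias term) can be sketched in a minimal form. This is an illustrative toy, not the paper's actual model: the agent class, parameter names, and the simple two-option reward environment below are all assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)


def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()


class BiasedQAgent:
    """Toy agent: Q-learning from reward history plus a fixed
    per-option bias added to the choice logits (hypothetical
    parameterization, for illustration only)."""

    def __init__(self, n_options, alpha=0.2, beta=3.0, bias=None):
        self.q = np.zeros(n_options)       # learned values
        self.alpha = alpha                  # learning rate
        self.beta = beta                    # inverse temperature
        # initial preference over options; zeros = unbiased agent
        self.bias = np.zeros(n_options) if bias is None else np.asarray(bias, float)

    def choose(self):
        # bias shifts the choice logits independently of learned value
        p = softmax(self.beta * self.q + self.bias)
        return rng.choice(len(self.q), p=p)

    def update(self, option, reward):
        # standard delta-rule value update
        self.q[option] += self.alpha * (reward - self.q[option])


def run(agent, n_trials=500):
    """Two-option task where option 1 is always rewarded;
    returns the fraction of correct (rewarded) choices."""
    correct = 0
    for _ in range(n_trials):
        a = agent.choose()
        r = 1.0 if a == 1 else 0.0
        agent.update(a, r)
        correct += a == 1
    return correct / n_trials
```

Running an unbiased agent against one with a strong initial preference for the unrewarded option shows how an initial bias can slow acquisition even when the value-learning rule is identical, which is the qualitative point the summary makes about individual initial states.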
Footnotes
Lead contact: David B. Kastner: david.kastner2@ucsf.edu