RT Journal Article
SR Electronic
T1 A neural network model for the orbitofrontal cortex and task space acquisition during reinforcement learning
JF bioRxiv
FD Cold Spring Harbor Laboratory
SP 116608
DO 10.1101/116608
A1 Zhewei Zhang
A1 Zhenbo Cheng
A1 Zhongqiao Lin
A1 Chechang Nie
A1 Tianming Yang
YR 2017
UL http://biorxiv.org/content/early/2017/07/24/116608.abstract
AB Reinforcement learning has been widely used to explain animal behavior. In reinforcement learning, the agent learns the values of the states in the task, which collectively constitute the task state space, and uses this knowledge to choose actions and acquire desired outcomes. It has been proposed that the orbitofrontal cortex (OFC) encodes the task state space during reinforcement learning. However, it is not well understood how the OFC acquires and stores task state information. Here, we propose a neural network model based on reservoir computing. Reservoir networks exhibit heterogeneous and dynamic activity patterns that are suitable for encoding task states. This information can be extracted by a linear readout trained with reinforcement learning. We demonstrate how the network acquires and stores task structures. The network exhibits reinforcement learning behavior, and aspects of it resemble experimental findings in the OFC. Our study provides a theoretical explanation of how the OFC may contribute to reinforcement learning and a new approach to understanding the neural mechanism underlying reinforcement learning.
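
The reservoir-computing scheme described in the abstract (a fixed recurrent network whose heterogeneous states are decoded by a linear readout trained with a reward signal) can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual model: the network size, the tanh dynamics, and the delta-rule reward update are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200     # reservoir size (assumed for illustration)
n_in = 3    # input dimension, e.g. task cues
n_out = 2   # readout units, e.g. action values

# Fixed random recurrent and input weights; the recurrent matrix is
# rescaled to spectral radius 0.9 so the reservoir has fading memory.
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.normal(0.0, 1.0, (N, n_in))

# The linear readout is the only part that is trained.
W_out = np.zeros((n_out, N))

def step(x, u):
    """One reservoir update with tanh units."""
    return np.tanh(W @ x + W_in @ u)

def readout(x):
    """Linear decode of the reservoir state, e.g. into action values."""
    return W_out @ x

def rl_update(x, action, reward, lr=0.01):
    """Delta-rule update of the readout from a scalar reward signal."""
    delta = reward - readout(x)[action]   # reward prediction error
    W_out[action] += lr * delta * x

# Run a few steps with random inputs and a dummy reward rule.
x = np.zeros(N)
for t in range(50):
    u = rng.normal(0.0, 1.0, n_in)
    x = step(x, u)
    a = int(np.argmax(readout(x)))
    r = 1.0 if a == 0 else 0.0            # placeholder reward, not a real task
    rl_update(x, a, r)
```

The key design point the abstract relies on is that the recurrent weights stay fixed: all task-state information is carried implicitly by the reservoir's dynamics, and learning only adjusts the linear readout.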