PT - JOURNAL ARTICLE
AU - Ruohan Zhang
AU - Shun Zhang
AU - Matthew H. Tong
AU - Yuchen Cui
AU - Constantin A. Rothkopf
AU - Dana H. Ballard
AU - Mary M. Hayhoe
TI - Modeling sensory-motor decisions in natural behavior
AID - 10.1101/412155
DP - 2018 Jan 01
TA - bioRxiv
PG - 412155
4099 - http://biorxiv.org/content/early/2018/09/16/412155.short
4100 - http://biorxiv.org/content/early/2018/09/16/412155.full
AB - Although a standard reinforcement learning model can capture many aspects of reward-seeking behavior, it may not be practical for modeling natural human behavior because of the richness of dynamic environments and limitations in cognitive resources. We propose a modular reinforcement learning model that addresses these factors. Based on this model, a modular inverse reinforcement learning algorithm is developed to estimate both the rewards and discount factors from human behavioral data, which allows human navigation behavior in virtual reality to be predicted with high accuracy across different subjects and tasks. Complex human navigation trajectories in novel environments can be reproduced by an artificial agent based on the modular model. This model provides a strategy for estimating the subjective value of actions and how they influence sensory-motor decisions in natural behavior.

Author summary: It is generally agreed that human actions can be formalized within the framework of statistical decision theory, which specifies a cost function for action choices, and that the intrinsic value of actions is controlled by the brain’s dopaminergic reward machinery. Given behavioral data, the underlying subjective reward value of an action can be estimated through a machine learning technique called inverse reinforcement learning, making it an attractive method for studying human reward-seeking behavior. Standard reinforcement learning methods were developed for artificial intelligence agents and require too much computation to be a viable model of real-time human decision making. We propose an approach called modular reinforcement learning that decomposes a complex task into independent decision modules. This model includes a frequently overlooked variable, the discount factor, which controls the degree of impulsiveness in seeking future reward. We develop an algorithm called modular inverse reinforcement learning that estimates both the reward and the discount factor. We show that modular reinforcement learning may be a useful model for natural navigation behavior. The estimated rewards and discount factors explain human walking-direction decisions in a virtual-reality environment and can be used to train an artificial agent that accurately reproduces human navigation trajectories.
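
Note: As a rough illustration of the modular decomposition described in the abstract, the Python sketch below shows modular Q-learning in which each module keeps its own Q-values, reward signal, and discount factor, and the agent acts on the sum of module Q-values. All class names, state encodings, and hyperparameters here are illustrative assumptions, not the authors' implementation; the paper's modular inverse reinforcement learning algorithm would invert this forward model to recover each module's reward and discount factor from behavior.

    # Minimal sketch of modular Q-learning (assumed structure, not the
    # paper's code): each module has its own reward and discount factor.
    import random
    from collections import defaultdict

    class Module:
        def __init__(self, gamma, alpha=0.1):
            self.gamma = gamma           # per-module discount factor
            self.alpha = alpha           # learning rate
            self.q = defaultdict(float)  # Q-values keyed by (state, action)

        def update(self, s, a, reward, s_next, actions):
            # Standard Q-learning update, but driven by this module's
            # own reward signal and its own discount factor.
            best_next = max(self.q[(s_next, b)] for b in actions)
            target = reward + self.gamma * best_next
            self.q[(s, a)] += self.alpha * (target - self.q[(s, a)])

    def select_action(modules, states, actions, epsilon=0.1):
        # Each module sees only its own task-relevant state; the agent
        # acts greedily on the summed module Q-values.
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions,
                   key=lambda a: sum(m.q[(s, a)]
                                     for m, s in zip(modules, states)))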