Model-based reinforcement learning under concurrent schedules of reinforcement in rodents

Namjung Huh, Suhyun Jo, Hoseok Kim, Jung Hoon Sul, and Min Whan Jung

Neuroscience Laboratory, Institute for Medical Sciences and Division of Cell Transformation and Restoration, Ajou University School of Medicine, Suwon 443-721, Korea

    Abstract

    Reinforcement learning theories postulate that actions are chosen to maximize the long-term sum of positive outcomes based on value functions, which are subjective estimates of future rewards. In simple reinforcement learning algorithms, value functions are updated only through trial and error, whereas in model-based reinforcement learning algorithms they are updated according to the decision-maker's knowledge, or model, of the environment. To investigate how animals update value functions, we trained rats on two different free-choice tasks. In one task, the reward probability of the unchosen target remained unchanged; in the other, it increased over time since the target was last chosen. Goal choice probability increased as a function of the number of consecutive alternative choices in the latter, but not the former, task, indicating that the animals were aware of the time-dependent increase in arming probability and used this information in choosing goals. In addition, choice behavior in the latter task was better accounted for by a model-based reinforcement learning algorithm. Our results show that rats adopt a decision-making process that cannot be accounted for by simple reinforcement learning models, even in a relatively simple binary choice task, suggesting that rats can readily improve their decision-making strategy using knowledge of their environment.
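    To make the contrast concrete, the sketch below illustrates the two update rules described above. It is a minimal illustration, not the authors' fitted model: the function names, the learning rate, and the per-trial arming probability are all assumed for the example. The key structural fact it encodes is that of the paper's second task: once a target is armed, it stays armed until chosen, so the probability that reward is available grows with each consecutive trial the target goes unchosen.

```python
def model_free_update(value, reward, alpha=0.1):
    """Simple (model-free) RL: the chosen target's value is nudged
    toward the observed reward by trial and error. The unchosen
    target's value is not updated at all."""
    return value + alpha * (reward - value)


def model_based_value(arming_prob, trials_since_chosen):
    """Model-based RL for the time-dependent task: a knowledgeable
    decision-maker can compute the probability that the target is
    armed, P(armed) = 1 - (1 - p)^(n + 1), where p is the per-trial
    arming probability and n is the number of consecutive trials the
    target has gone unchosen."""
    return 1.0 - (1.0 - arming_prob) ** (trials_since_chosen + 1)


# Example (assumed arming probability p = 0.2): after a target has been
# ignored for 5 trials, the model-based estimate of reward availability
# has risen to ~0.74, whereas a model-free learner's value for that
# target is unchanged, since it was never chosen and never updated.
print(model_based_value(0.2, 5))  # ~0.738
```

    This difference is what the behavioral analysis exploits: a model-free learner's choice probability should not depend on how long a target has gone unchosen, whereas a model-based learner's should increase with it, as the rats' choices did in the time-dependent task.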
