TY - JOUR
T1 - Model-free reinforcement learning operates over information stored in working-memory to drive human choices
JF - bioRxiv
DO - 10.1101/107698
SP - 107698
AU - Carolina Feher da Silva
AU - Yuan-Wei Yao
AU - Todd A. Hare
Y1 - 2017/01/01
UR - http://biorxiv.org/content/early/2017/02/11/107698.abstract
N2 - Model-free learning creates stimulus-response associations, but are there limits to the types of stimuli it can operate over? Most experiments on reward-learning have used discrete sensory stimuli, but there is no algorithmic reason to restrict model-free learning to external stimuli, and theories suggest that model-free processes may operate over highly abstract concepts and goals. Our study aimed to determine whether model-free learning can operate over environmental states defined by information held in working memory. We compared the data from human participants in two conditions that presented learning cues either simultaneously or as a temporal sequence that required working memory. There was a significant influence of model-free learning in the working memory condition. Moreover, both groups showed greater model-free effects than simulated model-based agents. Thus, we show that model-free learning processes operate not just in parallel, but also in cooperation with canonical executive functions such as working memory to support behavior.
ER -