Abstract
Humans can learn to perform multiple tasks in succession over the lifespan (“continual” learning), whereas current machine learning systems fail at this. Here, we investigated the cognitive mechanisms that permit successful continual learning in humans. Unlike neural networks, humans who were trained on temporally autocorrelated task objectives (focussed training) learned to perform new tasks more effectively, and performed better on a later test involving randomly interleaved tasks. Analysis of error patterns suggested that focussed learning permitted the formation of factorised task representations that were protected from mutual interference. Furthermore, individuals with a strong prior tendency to represent the task space in a factorised manner enjoyed a greater benefit from focussed over interleaved training. Building artificial agents that learn to factorise tasks appropriately may be a promising route to achieving continual task performance in machine learning.
Significance Statement
Humans learn to perform many different tasks over the lifespan, such as speaking both French and Spanish. The brain has to represent task information without mutual interference. In machine learning, this “continual learning” is a major unsolved challenge. Here, we studied the patterns of errors made by humans and state-of-the-art deep networks whilst they learned new tasks from scratch and without instruction. Humans, but not machines, seemed to benefit from training regimes that focussed on one task at a time, especially when they had a prior bias to represent stimuli in a way that facilitated task separation. Machines trained to exhibit the same prior bias suffered less interference between tasks, suggesting new avenues for solving continual learning in artificial systems.