PT  - JOURNAL ARTICLE
AU  - Herce Castañón, Santiago
AU  - Cardoso-Leite, Pedro
AU  - Altarelli, Irene
AU  - Green, C. Shawn
AU  - Schrater, Paul
AU  - Bavelier, Daphne
TI  - A mixture of generative models strategy helps humans generalize across tasks
AID - 10.1101/2021.02.16.431506
DP  - 2021 Jan 01
TA  - bioRxiv
PG  - 2021.02.16.431506
4099 - http://biorxiv.org/content/early/2021/02/25/2021.02.16.431506.short
4100 - http://biorxiv.org/content/early/2021/02/25/2021.02.16.431506.full
AB  - What role do generative models play in generalization of learning in humans? Our novel multi-task prediction paradigm—where participants complete four sequence learning tasks, each being a different instance of a common generative family—allows the separate study of within-task learning (i.e., finding the solution to each of the tasks), and across-task learning (i.e., learning a task differently because of past experiences). The very first responses participants make in each task are not yet affected by within-task learning and thus reflect their priors. Our results show that these priors change across successive tasks, increasingly resembling the underlying generative family. We conceptualize multi-task learning as arising from a mixture-of-generative-models learning strategy, whereby participants simultaneously entertain multiple candidate models which compete against each other to explain the experienced sequences. This framework predicts specific error patterns, as well as a gating mechanism for learning, both of which are observed in the data.
Competing Interest Statement: The authors have declared no competing interest.
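To make the mixture-of-generative-models idea in the abstract concrete, here is a minimal sketch in Python: several candidate models compete to explain an observed sequence via Bayesian-style weight updates, with a simple threshold standing in for the gating mechanism. The Bernoulli candidate models, the weight-update rule, and the gating threshold are all illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

class BernoulliModel:
    """Hypothetical candidate generative model: a fixed probability
    that the next item in the sequence is 1."""
    def __init__(self, p):
        self.p = p

    def likelihood(self, x):
        return self.p if x == 1 else 1.0 - self.p

def mixture_predict(models, weights):
    """Predict P(next item = 1) as the weight-averaged prediction
    of the candidate models."""
    return sum(w * m.p for m, w in zip(models, weights))

def update_weights(models, weights, x, gate=0.05):
    """Competition between models: each weight is rescaled by how well
    its model explained observation x. Models whose weight has fallen
    below `gate` are frozen -- a crude stand-in for the gating
    mechanism described in the abstract."""
    new = np.array([
        w * m.likelihood(x) if w >= gate else w
        for m, w in zip(models, weights)
    ])
    return new / new.sum()

# Usage: three candidate models compete to explain a mostly-1s sequence.
models = [BernoulliModel(p) for p in (0.2, 0.5, 0.8)]
weights = np.ones(len(models)) / len(models)  # uniform prior over models
for x in [1, 1, 0, 1, 1, 1]:
    print(f"predict P(1)={mixture_predict(models, weights):.2f}  obs={x}")
    weights = update_weights(models, weights, x)
print("posterior model weights:", np.round(weights, 3))
```

Run repeatedly over successive tasks, the surviving weights would act as the evolving prior the abstract describes; this is one plausible reading of the strategy, not the paper's fitted model.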