RT Journal Article
SR Electronic
T1 A mixture of generative models strategy helps humans generalize across tasks
JF bioRxiv
FD Cold Spring Harbor Laboratory
SP 2021.02.16.431506
DO 10.1101/2021.02.16.431506
A1 Herce Castañón, Santiago
A1 Cardoso-Leite, Pedro
A1 Altarelli, Irene
A1 Green, C. Shawn
A1 Schrater, Paul
A1 Bavelier, Daphne
YR 2021
UL http://biorxiv.org/content/early/2021/02/25/2021.02.16.431506.abstract
AB What role do generative models play in generalization of learning in humans? Our novel multi-task prediction paradigm—where participants complete four sequence learning tasks, each being a different instance of a common generative family—allows the separate study of within-task learning (i.e., finding the solution to each of the tasks), and across-task learning (i.e., learning a task differently because of past experiences). The very first responses participants make in each task are not yet affected by within-task learning and thus reflect their priors. Our results show that these priors change across successive tasks, increasingly resembling the underlying generative family. We conceptualize multi-task learning as arising from a mixture-of-generative-models learning strategy, whereby participants simultaneously entertain multiple candidate models which compete against each other to explain the experienced sequences. This framework predicts specific error patterns, as well as a gating mechanism for learning, both of which are observed in the data.
Competing Interest Statement: The authors have declared no competing interest.