RT Journal Article
SR Electronic
T1 An active inference approach to modeling concept learning
JF bioRxiv
FD Cold Spring Harbor Laboratory
SP 633677
DO 10.1101/633677
A1 Ryan Smith
A1 Philipp Schwartenbeck
A1 Thomas Parr
A1 Karl J. Friston
YR 2019
UL http://biorxiv.org/content/early/2019/05/23/633677.abstract
AB Within computational neuroscience, the algorithmic and neural basis of concept learning remains poorly understood. Concept learning requires both a type of internal model expansion process (adding novel hidden states that explain new observations) and a model reduction process (merging different states into one underlying cause and thus reducing model complexity via meta-learning). Although various algorithmic models of concept learning have been proposed within machine learning and cognitive science, many are limited to varying degrees by an inability to generalize, the need for very large amounts of training data, and/or insufficiently established biological plausibility. In this paper, we articulate a model of concept learning based on active inference and its accompanying neural process theory, with the idea that a generative model can be equipped with extra (hidden state or cause) ‘slots’ that can be engaged when an agent learns about novel concepts. This can be combined with a Bayesian model reduction process, in which any concept learning – associated with these slots – can be reset in favor of a simpler model with higher model evidence. We use simulations to illustrate this model’s ability to add new concepts to its state space (with relatively few observations) and to increase the granularity of the concepts it currently possesses. We further show that it accomplishes a simple form of ‘one-shot’ generalization to new stimuli. Although deliberately simple, these simulations suggest that active inference may offer useful resources in developing neurocomputational models of concept learning.