PT - JOURNAL ARTICLE
AU - Blake Bordelon
AU - Cengiz Pehlevan
TI - Population Codes Enable Learning from Few Examples By Shaping Inductive Bias
AID - 10.1101/2021.03.30.437743
DP - 2021 Jan 01
TA - bioRxiv
PG - 2021.03.30.437743
4099 - http://biorxiv.org/content/early/2021/04/18/2021.03.30.437743.short
4100 - http://biorxiv.org/content/early/2021/04/18/2021.03.30.437743.full
AB - Learning from a limited number of experiences requires suitable inductive biases. While inductive biases are central components of intelligence, how they are reflected in and shaped by population codes is not well understood. To address this question, we consider biologically plausible readout of arbitrary stimulus-response maps from arbitrary population codes, and develop an analytical theory that predicts the generalization error of the readout as a function of the number of examples. Our theory illustrates in a mathematically precise way how the structure of a population code allows sample-efficient learning of certain stimulus-response maps over others, and how a match between the code and the task is crucial for sample-efficient learning. We observe that many different codes can support the same inductive biases, and by analyzing recordings from the mouse primary visual cortex, we demonstrate that biological codes are metabolically more efficient than other codes with identical biases. We apply our theory to experimental recordings of mouse primary visual cortex neural responses, elucidating a bias towards sample-efficient learning of low-frequency orientation discrimination tasks. We demonstrate the emergence of this bias in a simple model of primary visual cortex, and further show how invariances in the code to stimulus variations affect learning performance. We extend our methods to time-dependent neural codes. Finally, we discuss the implications of our theory in the context of recent developments in neuroscience and artificial intelligence. Overall, our study suggests sample-efficient learning as a general normative coding principle.
Competing Interest Statement: The authors have declared no competing interest.