Abstract
The brain can learn from a limited number of experiences, an ability that requires suitable built-in assumptions about the nature of the tasks to be learned, or inductive biases. While inductive biases are central components of intelligence, how they are reflected in and shaped by population codes is not well understood. To address this question, we consider the biologically plausible readout of an arbitrary stimulus-response pattern from an arbitrary population code, and develop an analytical theory that predicts the generalization error of the readout as a function of the number of samples. We find that learning performance is controlled by the eigenspectrum of the population code’s inner-product kernel, which measures the similarity of neural responses to two different input stimuli. Many different codes can realize the same kernel; by analyzing recordings from the mouse primary visual cortex, we demonstrate that biological codes are metabolically more efficient than other codes with identical kernels. We show that the spectral properties of the kernel introduce an inductive bias toward explaining stimulus-response samples with simple functions and determine the compatibility of the population code with the learning task, and hence the sample efficiency of learning. While the tail of the spectrum is important for learning behavior at large sample sizes, the top eigenvalues of the spectrum govern generalization at small sample sizes. We apply our theory to experimental recordings of mouse primary visual cortex neural responses, elucidating a bias toward sample-efficient learning of low-frequency orientation-discrimination tasks. We demonstrate the emergence of this bias in a simple model of the primary visual cortex, and further show how invariances of the code to stimulus variations affect learning performance. Finally, we demonstrate that our methods are applicable to time-dependent neural codes. Overall, our study suggests sample-efficient learning as a general normative coding principle.
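To make the central quantity concrete, the following is a minimal sketch (not the authors' code) of how the inner-product kernel and its eigenspectrum could be computed from a matrix of neural responses. The array `responses`, its random placeholder contents, and the normalization by the number of neurons are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Hypothetical (n_stimuli x n_neurons) matrix of trial-averaged responses;
# random placeholder data stands in for recorded firing rates.
rng = np.random.default_rng(0)
n_stimuli, n_neurons = 200, 500
responses = rng.standard_normal((n_stimuli, n_neurons))

# Inner-product kernel: K[i, j] measures the similarity of the population
# responses to stimuli i and j (dividing by n_neurons is an assumed convention).
K = responses @ responses.T / n_neurons

# Eigenspectrum of the kernel: the decay of these eigenvalues reflects the
# code's inductive bias; the top eigenvalues dominate generalization at
# small sample sizes, while the tail matters at large sample sizes.
eigvals = np.linalg.eigvalsh(K)[::-1]  # sorted in descending order
print(eigvals[:5])
```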
Competing Interest Statement
The authors have declared no competing interest.