Abstract
Artificial intelligence provides algorithms that describe how neural networks learn from error, but can these algorithms help uncover the neuronal wiring of the brain? Here, I consider the cerebellum, a major site of learning that resembles a feedforward network of neurons. In the cerebellum, Purkinje cells (P-cells) are organized into populations that converge onto neurons in a deep cerebellar nucleus (DCN). The outputs of the DCN neurons are predictions that are compared with the actual observations in the inferior olive, producing prediction errors that are fed back strongly to the P-cells but only weakly to the neurons in the nucleus. Furthermore, unlike a unit in an artificial neural network, a P-cell has a very limited view of the error space: it is aware of only those errors that it can sense via its single climbing fiber. The disparity in the strength of the error signals, together with this limited view of the error space, provides critical clues about how the P-cells and their targets in the nucleus should be wired into populations. On this view, the fundamental unit of computation in the cerebellum is not a single neuron but a group of neurons that share a single teacher. To support efficient learning, the error signal organizes the P-cells, which in turn act as surrogate teachers for the neurons they project to in the nucleus. The resulting population coding may account for a number of remarkable features of behavior, including multiple timescales of learning, protection from erasure, and spontaneous recovery.
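The asymmetric error routing described above can be caricatured in a few lines of code. The following is a toy sketch, not the model developed in the paper: the linear scalar neurons, the learning rates, and all variable names are illustrative assumptions. A population of P-cells receives the shared prediction error strongly (via a climbing fiber), while the DCN neuron they converge on receives the same error only weakly and so learns largely through its P-cell inputs.

```python
# Toy sketch (illustrative assumptions only): a population of linear P-cells
# converges on one DCN neuron. The prediction error, computed by comparing
# the DCN output with a target (as in the inferior olive), is fed back
# strongly to the P-cells and only weakly to the DCN weights.

N_PCELLS = 5
ETA_PC = 0.05    # strong learning from the climbing-fiber error
ETA_DCN = 0.005  # weak direct learning from the same error

pc_weights = [0.01 * (i + 1) for i in range(N_PCELLS)]  # toy initial weights
dcn_weights = [0.01] * N_PCELLS

def forward(x):
    pc_out = [w * x for w in pc_weights]                  # P-cell responses
    dcn_out = sum(w * p for w, p in zip(dcn_weights, pc_out))
    return pc_out, dcn_out

def train_step(x, target):
    pc_out, dcn_out = forward(x)
    error = target - dcn_out          # prediction error ("inferior olive")
    for i in range(N_PCELLS):
        # every P-cell in the population sees the same shared error,
        # delivered strongly via its single climbing fiber
        pc_weights[i] += ETA_PC * error * x
        # the DCN neuron receives the error only weakly; its update is
        # shaped by the P-cell population acting as a surrogate teacher
        dcn_weights[i] += ETA_DCN * error * pc_out[i]
    return abs(error)

errors = [train_step(x=1.0, target=2.0) for _ in range(200)]
print(errors[0], errors[-1])  # error shrinks as the population learns
```

Despite the DCN neuron's weak direct access to the error, the output converges, because the strongly taught P-cell population carries most of the adaptation, which is then consolidated downstream.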
Competing Interest Statement
The author has declared no competing interest.