PT   - JOURNAL ARTICLE
AU   - Reza Shadmehr
TI   - Population coding in the cerebellum and its implications for learning from error
AID  - 10.1101/2020.05.18.102376
DP   - 2020 Jan 01
TA   - bioRxiv
PG   - 2020.05.18.102376
4099 - http://biorxiv.org/content/early/2020/07/29/2020.05.18.102376.short
4100 - http://biorxiv.org/content/early/2020/07/29/2020.05.18.102376.full
AB   - The cerebellum resembles a feedforward, three-layer network of neurons in which the “hidden layer” consists of Purkinje cells (P-cells), and the output layer consists of deep cerebellar nucleus (DCN) neurons. However, unlike an artificial network, P-cells are grouped into small populations that converge onto single DCN neurons. Why are the P-cells organized in this way, and what is the membership criterion of each population? To consider these questions, in this review I apply elementary mathematics from machine learning and assume that the output of each DCN neuron is a prediction that is compared to the actual observation, resulting in an error signal that originates in the inferior olive. This signal is sent to P-cells via climbing fibers that produce complex spikes. The same error signal from the olive must also guide learning in the DCN neurons, yet the olivary projections to the DCN are weak, particularly in adulthood. However, P-cells that form a population exhibit a special property: they can synchronize their complex spikes, which in turn suppresses activity of the DCN neuron that produced the erroneous output. Viewed in the framework of machine learning, it appears that the olive organizes the P-cells into populations so that through complex spike synchrony each population can act as a surrogate teacher for the DCN neuron it projects to. This error-dependent grouping of P-cells into populations gives rise to a number of remarkable features of behavior, including multiple timescales of learning, protection from erasure, and spontaneous recovery of memory.
Competing Interest Statement: The authors have declared no competing interest.
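
Note: the abstract's closing claim about multiple timescales of learning and spontaneous recovery of memory is often illustrated with a standard two-state (fast/slow) error-driven learning model. The sketch below is not taken from the paper; it is a minimal simulation of that generic model, and the retention factors, learning rates, and trial schedule are illustrative assumptions, not values from the review.

import numpy as np

def simulate_two_state(perturbation, a_fast=0.59, b_fast=0.21,
                       a_slow=0.992, b_slow=0.02):
    """Fast and slow adaptive states driven by the same error signal.

    NaN entries in `perturbation` mark error-clamp trials (error forced to 0).
    All parameter values are illustrative assumptions.
    """
    x_fast = x_slow = 0.0
    output = []
    for p in perturbation:
        net = x_fast + x_slow                       # combined prediction / motor output
        error = 0.0 if np.isnan(p) else p - net     # observation minus prediction
        x_fast = a_fast * x_fast + b_fast * error   # learns quickly, forgets quickly
        x_slow = a_slow * x_slow + b_slow * error   # learns slowly, retains
        output.append(net)
    return np.array(output)

# Long adaptation (+1), brief counter-perturbation (-1), then error-clamp trials.
schedule = np.concatenate([np.ones(250), -np.ones(15), np.full(100, np.nan)])
y = simulate_two_state(schedule)
print("end of adaptation:       %.2f" % y[249])
print("after counter-training:  %.2f" % y[264])
print("peak during error clamp: %.2f" % y[265:].max())

During the error-clamp phase the fast state (driven negative by the counter-perturbation) decays within a few trials while the slow state persists, so the combined output transiently rebounds toward the previously learned state, i.e., spontaneous recovery arising from two timescales of learning.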