Abstract
Perceptual learning (PL) involves long-lasting improvement in perceptual tasks following extensive training. Such improvement has been found to correlate with modifications of neuronal response properties in early as well as late sensory cortical areas. A major challenge is to dissect the causal relation between modification of the neural circuits and the behavioral changes. Previous theoretical and computational studies of PL have largely focused on single-layer model networks, and thus did not address salient characteristics of PL arising from the multistage "deep" structure of the perceptual system. Here we develop a theory of PL in a deep neuronal network architecture, addressing the questions of how changes induced by PL are distributed across the multiple stages of cortex, and how the respective changes determine performance in fine discrimination tasks. We prove that in such tasks, modifications of the synaptic weights of early sensory areas are both sufficient and necessary for PL. In addition, optimal synaptic weights in the deep network are not unique but span a large space of solutions. We postulate that, in the brain, plasticity throughout the deep network is distributed such that the resultant perturbation of prior circuit structures is minimized. In contrast to most previous models of PL, minimum perturbation (MP) learning does not change the network readout weights. Our results provide mechanistic and normative explanations for several important physiological features of PL and reconcile apparently contradictory psychophysical findings.
Competing Interest Statement
The authors have declared no competing interest.