Abstract
How the brain modifies synapses to improve the performance of complex networks remains one of the biggest mysteries in neuroscience. Existing proposals, which suppose continuous-valued synaptic weights updated according to gradient cues derived from pre- and postsynaptic activities, lack sufficient experimental support. Based on the heterosynaptic plasticity between hippocampal or cortical pyramidal neurons, mediated by diffusible nitric oxide and astrocytic calcium waves, as well as the flexible dendritic gating by somatostatin interneurons, we propose here that the brain learns by an evolutionary algorithm (EA), which trains biologically plausible binary-weight networks in a gradient-free manner. Our EA provides a framework that re-interprets the biological functions of dopamine, the meta-plasticity of dendritic spines, memory replay, and the cooperative plasticity among synapses within a dendritic neighborhood from a new and coherent perspective. The EA can train neural networks to exhibit dynamics analogous to brain dynamics in cognitive tasks, and it applies broadly, training spiking or analog neural networks with recurrent or feedforward architectures. It also trains deep networks with biologically plausible binary weights on MNIST classification and Atari game-playing tasks to performance comparable with that of continuous-weight networks trained by gradient-based methods. Overall, our work leads to a fundamentally new understanding of the learning mechanism of the brain.
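Since the abstract only names the approach, the following is a minimal, hypothetical sketch of what gradient-free evolutionary training of a binary-weight network can look like, assuming a simple (1 + λ) strategy with bit-flip mutations; the network shape, mutation rate, offspring count, and toy fitness function are illustrative assumptions, not the authors' specific EA.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(weights, x):
    """Two-layer feedforward net with binary (+1/-1) weights and sign activations."""
    h = np.sign(weights[0] @ x)
    return float(weights[1] @ h)

def fitness(weights, X, y):
    """Negative mean squared error on a toy regression set (higher is better)."""
    preds = np.array([forward(weights, x) for x in X])
    return -np.mean((preds - y) ** 2)

def mutate(weights, p_flip=0.02):
    """Flip each binary weight independently with probability p_flip."""
    return [w * np.where(rng.random(w.shape) < p_flip, -1, 1) for w in weights]

# Toy data: learn y = sum(x) from random 8-dimensional inputs (an assumption for illustration).
X = rng.standard_normal((64, 8))
y = X.sum(axis=1)

# Initialize a parent with random binary weights: 8 -> 16 -> 1.
parent = [rng.choice([-1, 1], size=(16, 8)), rng.choice([-1, 1], size=(1, 16))]
parent_fit = fitness(parent, X, y)

# (1 + lambda) evolution: keep the best of the parent and lambda mutated offspring.
for generation in range(200):
    offspring = [mutate(parent) for _ in range(16)]
    fits = [fitness(child, X, y) for child in offspring]
    best = int(np.argmax(fits))
    if fits[best] > parent_fit:  # selection uses only fitness values: no gradients anywhere
        parent, parent_fit = offspring[best], fits[best]

print(f"final fitness (neg. MSE): {parent_fit:.3f}")
```

The key property this sketch shares with the abstract's claim is that weights stay binary throughout and improvement comes purely from mutation and selection, with no gradient information flowing through the network.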
Competing Interest Statement
The authors have declared no competing interest.
Footnotes
* zedong.bi@outlook.com