Abstract
Nature endows networks of spiking neurons in the brain with innate computing capabilities, but it has remained an open problem how the genome achieves this. Experimental data imply that the genome encodes synaptic connection probabilities between neurons that depend on their genetic types and spatial distance. We show that this low-dimensional parameterization suffices for programming fundamental computing capabilities into networks of spiking neurons. However, the method is effective only if the network employs a substantial number of different neuron types. This provides an intriguing answer to the open question of why the brain employs so many neuron types, far more than have been used so far in neural network models. Neural networks whose computational function is induced through their connectivity structure, rather than through synaptic plasticity, are distinguished by short wire length and robustness to weight perturbations. These features are essential not only for the brain but also for energy-efficient neuromorphic hardware.
Significance statement Fundamental computing capabilities of neural networks in the brain are innate, i.e., they do not depend on experience-dependent plasticity. Examples are the capability to recognize odors of poisonous food and the capability to stand up and walk right after birth. But it has remained unknown how the genetic code achieves this. A prominent aspect of neural networks of the brain that is under genetic control is the connection probability between neurons of different types. We show that this low-dimensional code suffices for inducing substantial innate computing capabilities in neural networks, provided that, like the brain, they employ a fair number of different neuron types. Hence, under this condition, structure can induce computational function in neural networks.
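The low-dimensional connectivity code described above, in which a synapse's probability depends only on the genetic types of the two neurons and their spatial distance, can be sketched as follows. This is an illustrative sketch, not the paper's actual model: the type names, base-probability values, and the exponential distance kernel are all assumptions made for the example.

```python
import math
import random

# Hypothetical neuron types; real circuits use many more, which the
# paper argues is essential for this code to induce computation.
TYPES = ["E", "I1", "I2"]

# One base connection probability per ordered type pair (assumed values).
BASE_P = {
    ("E", "E"): 0.15, ("E", "I1"): 0.40, ("E", "I2"): 0.30,
    ("I1", "E"): 0.35, ("I1", "I1"): 0.10, ("I1", "I2"): 0.20,
    ("I2", "E"): 0.25, ("I2", "I1"): 0.20, ("I2", "I2"): 0.05,
}

def connection_prob(pre_type, post_type, distance, length_scale=100.0):
    """Type-pair base probability scaled by an (assumed) exponential
    decay with spatial distance, in micrometers."""
    return BASE_P[(pre_type, post_type)] * math.exp(-distance / length_scale)

def sample_network(neurons, rng):
    """neurons: list of (type, (x, y)) tuples.
    Returns a set of directed edges (i, j), each sampled independently
    with the probability given by connection_prob."""
    edges = set()
    for i, (t_i, pos_i) in enumerate(neurons):
        for j, (t_j, pos_j) in enumerate(neurons):
            if i == j:
                continue  # no self-connections in this sketch
            d = math.dist(pos_i, pos_j)
            if rng.random() < connection_prob(t_i, t_j, d):
                edges.add((i, j))
    return edges

# Usage: a tiny network of three neurons on a 2D sheet.
rng = random.Random(0)
neurons = [("E", (0.0, 0.0)), ("I1", (50.0, 0.0)), ("I2", (0.0, 120.0))]
edges = sample_network(neurons, rng)
```

Note that the entire "genome" here is the small table `BASE_P` plus one length scale, rather than a weight per synapse; this is what makes the parameterization low-dimensional, since its size grows with the number of type pairs, not with the number of neurons.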
Competing Interest Statement
The authors have declared no competing interest.
Footnotes
More details have been added. The supplement has been restructured.