Abstract
The genetic code endows neural networks of the brain with innate computing capabilities, but how it achieves this has remained unknown. Experimental data indicate that the genome encodes the architecture of neocortical circuits through pairwise connection probabilities for a fairly large set of genetically distinct neuron types. We introduce a mathematical model for this style of indirect encoding, the probabilistic skeleton, and show that it suffices for programming a repertoire of quite demanding computing capabilities into neural networks. These computing capabilities emerge without learning, yet are likely to provide a powerful platform for subsequent rapid learning. They are engraved into neural networks through architectural features at the statistical level, rather than through individual synaptic weights. Hence they are specified in a much lower-dimensional parameter space, providing enhanced robustness and generalization capabilities, as predicted by preceding work.
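The encoding style described above can be illustrated with a minimal sketch (hypothetical code, not the authors' implementation): a type-level matrix of connection probabilities is expanded to a concrete network by drawing each potential synapse independently with the probability assigned to its pair of neuron types. All names, sizes, and probability values below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical probabilistic skeleton: entry P[i, j] is the probability
# that a neuron of type i connects to a neuron of type j.
n_types = 3
neurons_per_type = [4, 5, 3]          # assumed population sizes
P = np.array([[0.1, 0.6, 0.0],
              [0.3, 0.1, 0.5],
              [0.0, 0.4, 0.2]])

# Assign each neuron its type.
type_of = np.repeat(np.arange(n_types), neurons_per_type)
n = type_of.size

# Expand type-level probabilities to a neuron-level matrix,
# forbid self-connections, and sample one concrete network.
prob = P[type_of[:, None], type_of[None, :]].copy()
np.fill_diagonal(prob, 0.0)
adjacency = rng.random((n, n)) < prob
```

Note the dimensionality gap the abstract refers to: the skeleton has only `n_types * n_types` parameters (9 here), while the sampled network has `n * n` potential connections (144 here), and a weight-based specification would need even more numbers.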
Competing Interest Statement
The authors have declared no competing interest.