Abstract
Individual neurons, and the circuits they collectively form in the brain, have been subject to joint evolutionary pressure to produce system-level functions. Considerable effort has been invested in understanding how single-neuron input-output mechanisms, such as diversity in f-I curves and spike-frequency adaptation, shape network computations. Yet how goal-driven requirements at the network level influence single-neuron coding properties remains largely unexplored. To address this, we systematically investigate single-neuron input-output adaptive mechanisms, optimized end-to-end in artificial recurrent neural networks. We achieve this with interconnected Adaptive Recurrent Units (ARUs), which perform online control of a novel two-parameter family of activation functions mimicking the diversity of f-I curves found in common neural types in the brain. Our network of ARUs shows markedly improved robustness to noise and to changes in input statistics. Importantly, we find that ARUs recover precise biological coding strategies such as gain scaling and fractional-order differentiation. Using tools from dynamical systems theory, we elucidate the role of these emergent single-neuron properties and argue that neural diversity and adaptation likely play an active regularization role, enabling neural circuits to optimally propagate information across time. Finally, we discuss how goal-driven optimization approaches, while not themselves biologically plausible, can reveal neural mechanisms consistent with evolutionary pressures on the brain.
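The abstract does not specify the exact parameterization of the two-parameter activation family. As a purely illustrative sketch (not the authors' actual formulation; see the linked repository for the real implementation), one could imagine a family with a gain parameter `n` and a shape parameter `s` that interpolates between a non-saturating, softplus-like f-I curve and a saturating, sigmoid-like one:

```python
import math

def adaptive_activation(x: float, n: float = 1.0, s: float = 0.0) -> float:
    """Hypothetical two-parameter activation family (illustrative only).

    n : gain, scales the slope of the input-output curve.
    s : shape, interpolates between a softplus-like curve (s=0)
        and a saturating sigmoid-like curve (s=1).
    """
    softplus = math.log1p(math.exp(n * x)) / n   # non-saturating branch
    sigmoid = 1.0 / (1.0 + math.exp(-n * x))     # saturating branch
    return (1.0 - s) * softplus + s * sigmoid

# An ARU-like unit would update (n, s) online, per neuron,
# as a function of its recent input statistics.
```

Varying `(n, s)` per neuron over time would let a recurrent unit reshape its effective f-I curve online, which is the kind of control the abstract attributes to ARUs.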
Competing Interest Statement
The authors have declared no competing interest.
Footnotes
Code available at: https://github.com/vgeadah/NonlinMod