Abstract
The neocortex is composed of spiking neuronal units interconnected in a sparse, recurrent network, whose activity transforms sensory inputs into appropriate behavioral outputs. In this study, we train biologically realistic spiking neural network (SNN) models to identify the architectural changes that enable task-appropriate computations. Specifically, we employ a binary state change detection task in which each state is defined by its motion entropy, mirroring behavioral paradigms that mice perform in the laboratory. The SNNs are composed of excitatory and inhibitory units randomly interconnected with connection probabilities and strengths matched to observations from mouse neocortex. Following training, we discover that the SNNs selectively adjust their firing rates depending on state, and that excitatory and inhibitory connectivity between the input and recurrent layers changes in accordance with this rate modulation. Input channels biased toward one motion entropy state develop stronger connections to recurrent excitatory units during training, whereas channels biased toward the other state develop stronger connections to inhibitory units. Furthermore, recurrent inhibitory units whose firing rates are positively modulated by one input strengthen their connections to recurrent units with the opposite modulation. This specific pattern of cross-modulation inhibition emerges as the optimal solution when Dale's law is imposed throughout training of the SNNs; removing this constraint prevents this architectural solution from emerging. This work highlights the critical role of interneurons, and of specific architectural patterns of inhibition, in shaping dynamics and information processing within neocortical circuits.
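As a minimal sketch of the Dale's law constraint described above (not the authors' implementation), the following NumPy snippet shows one common way to parameterize a sparsely connected recurrent weight matrix so that each unit keeps a fixed excitatory or inhibitory sign throughout training; the unit count, the excitatory/inhibitory split, and the connection probability are illustrative assumptions, not values taken from this work.

    # Sketch of a Dale's-law-constrained recurrent weight matrix.
    # Each presynaptic unit has a fixed sign (+1 excitatory, -1 inhibitory);
    # effective weights are non-negative magnitudes times that sign,
    # gated by a fixed sparse connectivity mask.
    import numpy as np

    rng = np.random.default_rng(0)

    n_units = 100                 # recurrent units (assumed size)
    n_exc = 80                    # assumed 80% excitatory, 20% inhibitory
    signs = np.ones(n_units)
    signs[n_exc:] = -1.0          # remaining units are inhibitory

    p_connect = 0.1               # assumed sparse connection probability
    mask = rng.random((n_units, n_units)) < p_connect
    np.fill_diagonal(mask, False) # no self-connections

    # Trainable free parameters; taking their absolute value below keeps
    # every outgoing weight of a unit at a single sign (Dale's law).
    w_free = rng.normal(scale=0.1, size=(n_units, n_units))

    def effective_weights(w_free):
        """Sign- and sparsity-constrained recurrent weights (rows: post, cols: pre)."""
        return np.abs(w_free) * signs[np.newaxis, :] * mask

    W = effective_weights(w_free)
    # Columns from excitatory units are >= 0; from inhibitory units <= 0.
    assert (W[:, :n_exc] >= 0).all() and (W[:, n_exc:] <= 0).all()

Relaxing the constraint simply amounts to using w_free directly (no sign projection), which is the comparison the abstract refers to when Dale's law is removed during training.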
Competing Interest Statement
The authors have declared no competing interest.