PT - JOURNAL ARTICLE
AU - Costacurta, Julia C.
AU - Bhandarkar, Shaunak
AU - Zoltowski, David M.
AU - Linderman, Scott W.
TI - Structured flexibility in recurrent neural networks via neuromodulation
AID - 10.1101/2024.07.26.605315
DP - 2024 Jan 01
TA - bioRxiv
PG - 2024.07.26.605315
4099 - http://biorxiv.org/content/early/2024/07/26/2024.07.26.605315.short
4100 - http://biorxiv.org/content/early/2024/07/26/2024.07.26.605315.full
AB - The goal of theoretical neuroscience is to develop models that help us better understand biological intelligence. Such models range broadly in complexity and biological detail. For example, task-optimized recurrent neural networks (RNNs) have generated hypotheses about how the brain may perform various computations, but these models typically assume a fixed weight matrix representing the synaptic connectivity between neurons. From decades of neuroscience research, we know that synaptic weights are constantly changing, controlled in part by chemicals such as neuromodulators. In this work we explore the computational implications of synaptic gain scaling, a form of neuromodulation, using task-optimized low-rank RNNs. In our neuromodulated RNN (NM-RNN) model, a neuromodulatory subnetwork outputs a low-dimensional neuromodulatory signal that dynamically scales the low-rank recurrent weights of an output-generating RNN. In empirical experiments, we find that the structured flexibility in the NM-RNN allows it to both train and generalize with a higher degree of accuracy than low-rank RNNs on a set of canonical tasks. Additionally, via theoretical analyses we show how neuromodulatory gain scaling endows networks with gating mechanisms commonly found in artificial RNNs. We end by analyzing the low-rank dynamics of trained NM-RNNs to show how task computations are distributed. Competing Interest Statement: The authors have declared no competing interest.
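The abstract describes the NM-RNN architecture at a high level: a small neuromodulatory subnetwork emits a low-dimensional signal that rescales the rank-1 components of an output-generating low-rank RNN. The sketch below illustrates that idea from the abstract alone; it is not the paper's exact parameterization. All names, dimensions, the positive-gain form, and the Euler discretization are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of the NM-RNN idea: an output RNN whose low-rank
# recurrent weights U @ diag(s(t)) @ V.T have each rank-1 component
# rescaled by a gain s(t) produced by a neuromodulatory subnetwork.
rng = np.random.default_rng(0)
N, r, M = 100, 3, 10        # output-RNN size, rank, neuromodulatory-RNN size
dt, tau = 0.1, 1.0          # Euler step and time constant (assumed)

U = rng.normal(size=(N, r)) / np.sqrt(N)     # left low-rank factor
V = rng.normal(size=(N, r)) / np.sqrt(N)     # right low-rank factor
W_nm = rng.normal(size=(M, M)) / np.sqrt(M)  # neuromodulatory recurrence
W_out = rng.normal(size=(r, M))              # maps NM state to r gains

def step(x, z):
    """One Euler step of the coupled system (assumed discretization)."""
    # Neuromodulatory subnetwork evolves and emits one gain per
    # rank-1 component; exp keeps gains positive (an assumed choice).
    z = z + (dt / tau) * (-z + np.tanh(W_nm @ z))
    s = np.exp(W_out @ np.tanh(z))
    # Output-generating RNN: U @ (s * (V.T @ h)) equals
    # U diag(s) V^T h, i.e., the dynamically rescaled low-rank weights.
    x = x + (dt / tau) * (-x + U @ (s * (V.T @ np.tanh(x))))
    return x, z

x, z = rng.normal(size=N), rng.normal(size=M)
for _ in range(50):
    x, z = step(x, z)
print("final state norm:", np.linalg.norm(x))
```

Because the gains multiply each rank-1 component separately, setting a gain near zero switches that component off while leaving the others intact, which is one way such gain scaling could implement the gating behavior the abstract attributes to the model.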