RT Journal Article
SR Electronic
T1 Brain-like learning with exponentiated gradients
JF bioRxiv
FD Cold Spring Harbor Laboratory
SP 2024.10.25.620272
DO 10.1101/2024.10.25.620272
A1 Cornford, Jonathan
A1 Pogodin, Roman
A1 Ghosh, Arna
A1 Sheng, Kaiwen
A1 Bicknell, Brendan A.
A1 Codol, Olivier
A1 Clark, Beverley A.
A1 Lajoie, Guillaume
A1 Richards, Blake A.
YR 2024
UL http://biorxiv.org/content/early/2024/10/26/2024.10.25.620272.abstract
AB Computational neuroscience relies on gradient descent (GD) for training artificial neural network (ANN) models of the brain. The advantage of GD is that it is effective at learning difficult tasks. However, it produces ANNs that are a poor phenomenological fit to biology, making them less relevant as models of the brain. Specifically, it violates Dale’s law, by allowing synapses to change from excitatory to inhibitory, and leads to synaptic weights that are not log-normally distributed, contradicting experimental data. Here, starting from first principles of optimisation theory, we present an alternative learning algorithm, exponentiated gradient (EG), that respects Dale’s Law and produces log-normal weights, without losing the power of learning with gradients. We also show that in biologically relevant settings EG outperforms GD, including learning from sparsely relevant signals and dealing with synaptic pruning. Altogether, our results show that EG is a superior learning algorithm for modelling the brain with ANNs.
Competing Interest Statement: The authors have declared no competing interest.