PT  - JOURNAL ARTICLE
AU  - Guillaume Bellec
AU  - Franz Scherr
AU  - Anand Subramoney
AU  - Elias Hajek
AU  - Darjan Salaj
AU  - Robert Legenstein
AU  - Wolfgang Maass
TI  - A solution to the learning dilemma for recurrent networks of spiking neurons
AID - 10.1101/738385
DP  - 2019 Jan 01
TA  - bioRxiv
PG  - 738385
4099 - http://biorxiv.org/content/early/2019/08/31/738385.short
4100 - http://biorxiv.org/content/early/2019/08/31/738385.full
AB  - Recurrently connected networks of spiking neurons underlie the astounding information processing capabilities of the brain. But in spite of extensive research, it has remained open how learning through synaptic plasticity could be organized in such networks. We argue that two pieces of this puzzle were provided by experimental data from neuroscience. A new mathematical insight tells us how they need to be combined to enable network learning through gradient descent. The resulting learning method – called e-prop – approaches the performance of BPTT (backpropagation through time), the best known method for training recurrent neural networks in machine learning. But in contrast to BPTT, e-prop is biologically plausible. In addition, it elucidates how brain-inspired new computer chips – that are drastically more energy efficient – can be enabled to learn.