RT Journal Article
SR Electronic
T1 Going Beyond the Point Neuron: Active Dendrites and Sparse Representations for Continual Learning
JF bioRxiv
FD Cold Spring Harbor Laboratory
SP 2021.10.25.465651
DO 10.1101/2021.10.25.465651
A1 Karan Grewal
A1 Jeremy Forest
A1 Benjamin P. Cohen
A1 Subutai Ahmad
YR 2021
UL http://biorxiv.org/content/early/2021/10/26/2021.10.25.465651.abstract
AB Biological neurons integrate their inputs on dendrites using a diverse range of non-linear functions. However, the majority of artificial neural networks (ANNs) ignore biological neurons' structural complexity and instead use simplified point neurons. Can dendritic properties add value to ANNs? In this paper we investigate this question in the context of continual learning, an area where ANNs suffer from catastrophic forgetting (i.e., ANNs are unable to learn new information without erasing what they previously learned). We propose that dendritic properties can help neurons learn context-specific patterns and invoke highly sparse context-specific subnetworks. Within a continual learning scenario, these task-specific subnetworks interfere minimally with each other and, as a result, the network remembers previous tasks significantly better than standard ANNs. We then show that by combining dendritic networks with Synaptic Intelligence (a biologically motivated method for complex weights) we can achieve significant resilience to catastrophic forgetting, more than either technique can achieve on its own. Our neuron model is directly inspired by the biophysics of sustained depolarization following dendritic NMDA spikes. Our research sheds light on how biological properties of neurons can be used to address scenarios that are typically impossible for traditional ANNs to solve.
CI The authors have declared no competing interest.
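The abstract describes neurons whose dendritic segments match a context vector and gate the feedforward response, with a sparsity constraint selecting a small task-specific subnetwork. A minimal sketch of that idea is below; this is an illustration written for this record, not the authors' released code, and all function names, shapes, and the 5% sparsity default are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def active_dendrites_layer(x, context, W, b, U, sparsity=0.05):
    """Sketch: point-neuron layer augmented with dendritic segments.

    x       : feedforward input, shape (d_in,)
    context : context (task) vector, shape (d_ctx,)
    W, b    : feedforward weights/bias, shapes (n_units, d_in) and (n_units,)
    U       : dendritic segment weights, shape (n_units, n_segments, d_ctx)
    """
    feedforward = W @ x + b               # standard point-neuron response
    segment_acts = U @ context            # each segment matches the context: (n_units, n_segments)
    dendrite = segment_acts.max(axis=1)   # strongest-responding segment per neuron
    modulated = feedforward * sigmoid(dendrite)  # dendritic gating of the soma
    # k-Winner-Take-All: only the top-k neurons stay active, yielding a
    # highly sparse, context-specific subnetwork.
    k = max(1, int(sparsity * modulated.size))
    out = np.zeros_like(modulated)
    winners = np.argsort(modulated)[-k:]
    out[winners] = np.maximum(modulated[winners], 0.0)
    return out
```

Because different context vectors excite different dendritic segments, different subsets of neurons win the k-WTA competition per task, which is how the sketch realizes the minimally interfering subnetworks the abstract refers to.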