Abstract
The human brain readily learns tasks in sequence without forgetting previously learned ones. Artificial neural networks (ANNs), in contrast, must be modified to achieve comparable performance when trained on tasks sequentially. While effective, many algorithms that accomplish this rely on weight-importance methods that do not correspond to known biological mechanisms. Here we introduce a simple, biologically plausible method for enabling effective continual learning in ANNs. We show that it is possible to learn a weight-dependent plasticity function that prevents catastrophic forgetting across multiple tasks. We demonstrate the effectiveness of our method by evaluating it on a set of MNIST classification tasks. We further find that our method promotes synaptic multi-modality, similar to that observed in biological synapses.
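As a rough illustration of the general idea (not the authors' exact formulation, whose parameterization is not given in this abstract), a weight-dependent plasticity function can be realized by scaling each weight's gradient update with a learned multiplier that depends on the weight's current value. The sketch below is a minimal PyTorch example under that assumption; the binned lookup-table form of the plasticity function and the names PlasticLinear and plastic_sgd_step are hypothetical.

import torch
import torch.nn as nn

class PlasticLinear(nn.Module):
    # Linear layer with a weight-dependent plasticity function g(w),
    # modeled here (as an assumption) as a lookup table over weight-value bins.
    def __init__(self, in_features, out_features, n_bins=8):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.plasticity = nn.Parameter(torch.ones(n_bins))  # g(w), one value per bin
        self.register_buffer("bin_edges", torch.linspace(-1.0, 1.0, n_bins + 1))

    def forward(self, x):
        return x @ self.weight.t() + self.bias

    def plasticity_of(self, w):
        # Map each weight to the plasticity value of its bin; clamping keeps
        # bin indices in range and plasticity values non-negative.
        idx = torch.bucketize(w.detach().clamp(-0.999, 0.999), self.bin_edges) - 1
        return self.plasticity[idx].clamp(min=0.0)

def plastic_sgd_step(layer, lr=0.1):
    # SGD step in which each weight's update is scaled by g(w);
    # how g itself is learned (e.g., across tasks) is outside this sketch.
    with torch.no_grad():
        if layer.weight.grad is not None:
            layer.weight -= lr * layer.plasticity_of(layer.weight) * layer.weight.grad
        if layer.bias.grad is not None:
            layer.bias -= lr * layer.bias.grad

# Usage: one training step on a random batch for a 10-way MNIST-style classifier.
layer = PlasticLinear(784, 10)
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(layer(x), y)
loss.backward()
plastic_sgd_step(layer)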
Competing Interest Statement
The authors have declared no competing interest.
Footnotes
Email addresses: ghosh.185@osu.edu (Romik Ghosh), dana.mastrovito@alleninstitute.org (Dana Mastrovito), stefanm@alleninstitute.org (Stefan Mihalas)