Abstract
Recent studies show that, even in constant environments, the tuning of single neurons changes over time in a variety of brain regions. This representational drift has been suggested to be a consequence of continuous learning under noise, but its properties are still not fully understood. To uncover the underlying mechanism, we trained an artificial network to perform a predictive coding task. After the loss converged, the activity slowly became sparser. We verified the generality of this phenomenon across modeling choices. The sparsification is a manifestation of drift within the solution space toward a flatter region, and it is consistent with recent experimental results demonstrating that the CA1 spatial code becomes sparser as an environment becomes familiar. We conclude that learning is divided into three overlapping phases: fast familiarity with the environment, slow implicit regularization, and a steady state of null drift. These findings open the possibility of inferring learning algorithms from observations of drift statistics.
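The core ingredients of the simulation summarized above are simple: train a network past the point of loss convergence while the updates remain noisy, and track how sparse the hidden activity becomes. The sketch below illustrates that setup in plain NumPy. It is not the authors' code; the architecture, task, noise model, and hyperparameters are illustrative assumptions, but under noisy updates of this kind the fraction of active units can keep dropping long after the loss has flattened.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a fixed environment: static inputs and a smooth target.
n_samples, n_in, n_hidden = 100, 10, 200
X = rng.standard_normal((n_samples, n_in))
y = np.sin(X @ rng.standard_normal(n_in))

# Two-layer ReLU network.
W1 = rng.standard_normal((n_in, n_hidden)) / np.sqrt(n_in)
b1 = np.zeros(n_hidden)
w2 = rng.standard_normal(n_hidden) / np.sqrt(n_hidden)

lr, label_noise = 0.05, 0.1
for step in range(50_001):
    # Fresh label noise every step keeps the updates stochastic even after
    # the mean loss has converged ("continuous learning under noise").
    y_noisy = y + label_noise * rng.standard_normal(n_samples)

    h_pre = X @ W1 + b1
    h = np.maximum(h_pre, 0.0)          # hidden-layer activity
    err = h @ w2 - y_noisy              # prediction error on the noisy targets

    # Plain gradient descent on the noisy mean-squared error.
    grad_w2 = h.T @ err / n_samples
    grad_h = np.outer(err, w2) * (h_pre > 0)
    W1 -= lr * (X.T @ grad_h / n_samples)
    b1 -= lr * grad_h.mean(axis=0)
    w2 -= lr * grad_w2

    if step % 5_000 == 0:
        loss = np.mean((np.maximum(X @ W1 + b1, 0.0) @ w2 - y) ** 2)
        active = np.mean(h > 0)         # mean fraction of active hidden units
        print(f"step {step:>6}: loss {loss:.4f}, active fraction {active:.3f}")
```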
Competing Interest Statement
The authors have declared no competing interest.
Footnotes
* Aviv.Ratzon@campus.technion.ac.il