RT Journal Article
SR Electronic
T1 Representational drift as a result of implicit regularization
JF bioRxiv
FD Cold Spring Harbor Laboratory
SP 2023.05.04.539512
DO 10.1101/2023.05.04.539512
A1 Ratzon, Aviv
A1 Derdikman, Dori
A1 Barak, Omri
YR 2023
UL http://biorxiv.org/content/early/2023/05/09/2023.05.04.539512.abstract
AB Recent studies show that, even in constant environments, the tuning of single neurons changes over time in a variety of brain regions. This representational drift has been suggested to be a consequence of continuous learning under noise, but its properties are still not fully understood. To uncover the underlying mechanism, we trained an artificial network to perform a predictive coding task. After the loss converged, the activity slowly became sparser. We verified the generality of this phenomenon across modeling choices. This sparseness is a manifestation of drift in the solution space to a flatter area. It is consistent with recent experimental results demonstrating that CA1 spatial code becomes sparser after familiarity. We conclude that learning is divided into three overlapping phases: Fast familiarity with the environment, slow implicit regularization, and a steady state of null drift. These findings open the possibility of inferring learning algorithms from observations of drift statistics.
Competing Interest Statement: The authors have declared no competing interest.