PT  - JOURNAL ARTICLE
AU  - Ratzon, Aviv
AU  - Derdikman, Dori
AU  - Barak, Omri
TI  - Representational drift as a result of implicit regularization
AID - 10.1101/2023.05.04.539512
DP  - 2023 Jan 01
TA  - bioRxiv
PG  - 2023.05.04.539512
4099 - http://biorxiv.org/content/early/2023/05/09/2023.05.04.539512.short
4100 - http://biorxiv.org/content/early/2023/05/09/2023.05.04.539512.full
AB  - Recent studies show that, even in constant environments, the tuning of single neurons changes over time in a variety of brain regions. This representational drift has been suggested to be a consequence of continuous learning under noise, but its properties are still not fully understood. To uncover the underlying mechanism, we trained an artificial network to perform a predictive coding task. After the loss converged, the activity slowly became sparser. We verified the generality of this phenomenon across modeling choices. This sparseness is a manifestation of drift in the solution space to a flatter area. It is consistent with recent experimental results demonstrating that the CA1 spatial code becomes sparser after familiarity. We conclude that learning is divided into three overlapping phases: fast familiarity with the environment, slow implicit regularization, and a steady state of null drift. These findings open the possibility of inferring learning algorithms from observations of drift statistics. Competing Interest Statement: The authors have declared no competing interest.