PT - JOURNAL ARTICLE
AU - Linxing Preston Jiang
AU - Dimitrios C. Gklezakos
AU - Rajesh P. N. Rao
TI - Dynamic Predictive Coding with Hypernetworks
AID - 10.1101/2021.02.22.432194
DP - 2021 Jan 01
TA - bioRxiv
PG - 2021.02.22.432194
4099 - http://biorxiv.org/content/early/2021/02/22/2021.02.22.432194.1.short
4100 - http://biorxiv.org/content/early/2021/02/22/2021.02.22.432194.1.full
AB - The original predictive coding model of Rao & Ballard [1] focused on spatial prediction to explain spatial receptive fields and contextual effects in the visual cortex. Here, we introduce a new dynamic predictive coding model that achieves spatiotemporal prediction of complex natural image sequences using time-varying transition matrices. We overcome the limitations of static linear transition models (as in, e.g., Kalman filters) by using a hypernetwork to adjust the transition matrix dynamically at every time step, allowing the model to predict using a time-varying mixture of possible transition dynamics. We developed a single-level model with recurrent modulation of transition weights by a hypernetwork and a two-level hierarchical model with top-down modulation based on a hypernetwork. At each time step, the model predicts the next input and estimates a sparse neural code by minimizing prediction error. When exposed to natural movies, the model learned localized, oriented spatial filters as well as both separable and inseparable (direction-selective) space-time receptive fields at the first level, similar to those found in the primary visual cortex (V1). Longer-timescale responses and stability at the second level also emerged naturally from minimizing prediction errors for the first-level dynamics. Our results suggest that the multiscale temporal response properties of cortical neurons could be the result of the cortex learning a hierarchical generative model of the visual world, with higher-order areas predicting the transition dynamics of lower-order areas. Competing Interest Statement: The authors have declared no competing interest.
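
The abstract describes the core mechanism: a hypernetwork modulates the transition matrix at every time step, and a sparse latent code is estimated by minimizing prediction error. Below is a minimal, illustrative NumPy sketch of that idea, not the authors' implementation. It assumes the effective transition matrix is a mixture over a small bank of candidate matrices, with mixture weights produced from the current state by a toy hypernetwork, and an L1 sparsity penalty on the latent code; all names and dimensions (K, state_dim, V_bank, W_hyper, U) are assumptions for illustration. The actual model would learn these parameters and infer the sparse code by gradient-based minimization of the prediction error, which this sketch omits.

    # Illustrative sketch (not the paper's code) of hypernetwork-modulated transition dynamics.
    import numpy as np

    rng = np.random.default_rng(0)

    state_dim, input_dim, K = 32, 64, 4  # latent size, input patch size, number of transition modes (assumed)

    U = rng.normal(scale=0.1, size=(input_dim, state_dim))          # spatial dictionary: latent state -> predicted input
    V_bank = rng.normal(scale=0.1, size=(K, state_dim, state_dim))  # bank of candidate transition matrices
    W_hyper = rng.normal(scale=0.1, size=(K, state_dim))            # toy "hypernetwork": state -> mixture weights

    def softmax(z):
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()

    def step(r_t):
        """One prediction step: mix transition modes for this time step, then predict the next input."""
        w_t = softmax(W_hyper @ r_t)              # time-varying mixture weights
        V_t = np.tensordot(w_t, V_bank, axes=1)   # effective transition matrix at this time step
        r_next = np.tanh(V_t @ r_t)               # predicted next latent state
        x_hat = U @ r_next                        # predicted next input
        return r_next, x_hat

    def prediction_error(x_next, x_hat, r_next, sparsity=0.1):
        """Reconstruction error plus a sparsity penalty on the latent code (assumed objective)."""
        return 0.5 * np.sum((x_next - x_hat) ** 2) + sparsity * np.sum(np.abs(r_next))

    # Roll the model forward over a short synthetic sequence.
    r = rng.normal(scale=0.1, size=state_dim)
    x_seq = rng.normal(size=(5, input_dim))
    for x_next in x_seq:
        r, x_hat = step(r)
        print(prediction_error(x_next, x_hat, r))

The sketch shows only the single-level, recurrent variant; in the two-level hierarchical model described in the abstract, the mixture weights would instead be modulated top-down by the second-level state.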