PT - JOURNAL ARTICLE
AU - Kion Fallah
AU - Adam A. Willats
AU - Ninghao Liu
AU - Christopher J. Rozell
TI - Learning sparse codes from compressed representations with biologically plausible local wiring constraints
AID - 10.1101/2020.10.23.352443
DP - 2020 Jan 01
TA - bioRxiv
PG - 2020.10.23.352443
4099 - http://biorxiv.org/content/early/2020/10/23/2020.10.23.352443.short
4100 - http://biorxiv.org/content/early/2020/10/23/2020.10.23.352443.full
AB - Sparse coding is an important method for unsupervised learning of task-independent features in theoretical neuroscience models of neural coding. While a number of algorithms exist to learn these representations from the statistics of a dataset, they largely ignore the information bottlenecks present in fiber pathways connecting cortical areas. For example, the visual pathway has far fewer neurons transmitting visual information to cortex than there are photoreceptors. Both empirical and analytic results have recently shown that sparse representations can be learned effectively after performing dimensionality reduction with randomized linear operators, producing latent coefficients that preserve information. Unfortunately, current proposals for sparse coding in the compressed space require a centralized compression process (i.e., a dense random matrix) that is biologically unrealistic given the local wiring constraints observed in neural circuits. The main contribution of this paper is to leverage recent results on structured random matrices to propose a theoretical neuroscience model of randomized projections for communication between cortical areas that is consistent with the local wiring constraints observed in neuroanatomy. We show analytically and empirically that unsupervised learning of sparse representations can be performed in the compressed space despite significant local wiring constraints in compression matrices of varying forms (corresponding to different local wiring patterns). Our analysis verifies that even with significant local wiring constraints, the learned representations remain qualitatively similar, achieve similar quantitative performance in both training and generalization error, and are consistent, across many measures, with measured macaque V1 receptive fields. Competing Interest Statement: The authors have declared no competing interest.
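
Note: to make the idea summarized in the abstract concrete, the short NumPy sketch below illustrates sparse coding performed directly on compressed measurements y = Phi x, where Phi is a block-diagonal random projection used here as one simple stand-in for a local wiring pattern. All dimensions, the synthetic data, the ISTA inference step, and the gradient dictionary update are illustrative assumptions for this sketch and are not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper).
N = 64          # ambient dimension (e.g., pixels in an image patch)
M = 32          # compressed dimension (M < N models the fiber bottleneck)
K = 128         # number of dictionary atoms
n_blocks = 4    # local wiring: each compressed unit sees only N // n_blocks inputs

# Block-diagonal Gaussian projection: a crude stand-in for local wiring.
Phi = np.zeros((M, N))
rb, cb = M // n_blocks, N // n_blocks
for b in range(n_blocks):
    Phi[b*rb:(b+1)*rb, b*cb:(b+1)*cb] = rng.normal(size=(rb, cb)) / np.sqrt(cb)

# Synthetic sparse data: x = D_true @ a with a few active coefficients per sample.
D_true = rng.normal(size=(N, K))
D_true /= np.linalg.norm(D_true, axis=0)

def sample_batch(n):
    A = np.zeros((K, n))
    for j in range(n):
        idx = rng.choice(K, size=4, replace=False)
        A[idx, j] = rng.normal(size=4)
    return D_true @ A

def ista(Y, D, lam=0.1, n_iter=100):
    # Sparse inference: argmin_A 0.5*||Y - D A||^2_F + lam*||A||_1 via ISTA.
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    A = np.zeros((D.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        G = A - (D.T @ (D @ A - Y)) / L
        A = np.sign(G) * np.maximum(np.abs(G) - lam / L, 0.0)
    return A

# Dictionary learning on compressed measurements: the coder only ever sees
# y = Phi x, so the effective dictionary during inference is Phi @ D.
D = rng.normal(size=(N, K))
D /= np.linalg.norm(D, axis=0)
eta = 0.05
for step in range(200):
    X = sample_batch(64)
    Y = Phi @ X                              # compressed data (the bottleneck)
    A = ista(Y, Phi @ D)                     # infer codes in the compressed space
    R = Y - Phi @ D @ A                      # residual in the compressed space
    D += eta * Phi.T @ R @ A.T / A.shape[1]  # gradient step on D through Phi
    D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)

print("final relative residual in compressed space:",
      np.linalg.norm(R) / np.linalg.norm(Y))

With a dense Phi this reduces to the usual compressed-space sparse coding setup; restricting Phi to block-diagonal (or other banded) structure is the kind of locally wired compression the paper argues still supports learning useful sparse representations.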