PT  - JOURNAL ARTICLE
AU  - Khosla, Meenakshi
AU  - Williams, Alex H
AU  - McDermott, Josh
AU  - Kanwisher, Nancy
TI  - Privileged representational axes in biological and artificial neural networks
AID - 10.1101/2024.06.20.599957
DP  - 2024 Jan 01
TA  - bioRxiv
PG  - 2024.06.20.599957
4099 - http://biorxiv.org/content/early/2024/06/20/2024.06.20.599957.short
4100 - http://biorxiv.org/content/early/2024/06/20/2024.06.20.599957.full
AB  - How do neurons code information? Recent work emphasizes properties of population codes, such as their geometry and decodable information, using measures that are blind to the native tunings (or 'axes') of neural responses. But might these representational axes matter, with some systematically privileged over others? To find out, we developed methods to test for alignment of neural tuning across brains and deep convolutional neural networks (DCNNs). Across both vision and audition, both brains and DCNNs consistently favored certain axes for representing the natural world. Moreover, the representational axes of DCNNs trained on natural inputs were aligned to those in perceptual cortices, such that axis-sensitive model-brain similarity metrics better differentiated competing models of biological sensory systems. We further show that coding schemes that privilege certain axes can reduce downstream wiring costs and improve generalization. These results motivate a new framework for understanding neural tuning in biological and artificial networks and its computational benefits. Competing Interest Statement: The authors have declared no competing interest.