RT Journal Article
SR Electronic
T1 Convolutional neural networks do not develop brain-like transformation tolerant visual representations
JF bioRxiv
FD Cold Spring Harbor Laboratory
SP 2020.08.11.246934
DO 10.1101/2020.08.11.246934
A1 Yaoda Xu
A1 Maryam Vaziri-Pashkam
YR 2021
UL http://biorxiv.org/content/early/2021/05/11/2020.08.11.246934.abstract
AB Forming transformation-tolerant object representations is critical for high-level primate vision. Although single-cell neural recording studies predict the existence of a highly consistent object representational structure across transformations in high-level vision, this prediction has not been tested at the population level. Here, using fMRI pattern analysis, we show that high representational consistency across position and size changes indeed exists in human higher visual regions. Moreover, consistency is lower in early visual areas and increases as information ascends the ventral visual processing pathway. Such an increase in consistency over the course of visual processing, however, is not found in 14 different convolutional neural networks (CNNs) trained for object categorization that varied in architecture, depth, and the presence/absence of recurrent processing. If anything, consistency decreases from lower to higher CNN layers. All tested CNNs thus do not appear to develop brain-like transformation-tolerant visual representations during visual processing, despite their ability to classify objects under transformations.
This brain-CNN difference could potentially contribute to the large amount of data required to train CNNs and their limited ability to generalize to objects not included in training.
Impact Statement: Convolutional neural networks capable of object categorization do not develop brain-like transformation-tolerant visual representations during the course of visual processing, potentially accounting for some of their current performance limitations.
Competing Interest Statement: The authors have declared no competing interest.