RT Journal Article
SR Electronic
T1 Instance-level contrastive learning yields human brain-like representation without category-supervision
JF bioRxiv
FD Cold Spring Harbor Laboratory
SP 2020.06.15.153247
DO 10.1101/2020.06.15.153247
A1 Konkle, Talia
A1 Alvarez, George A.
YR 2020
UL http://biorxiv.org/content/early/2020/06/16/2020.06.15.153247.abstract
AB Humans learn object categories without millions of labels, but to date the models with the highest correspondence to primate visual systems are all category-supervised. This paper introduces a new self-supervised learning framework, instance-prototype contrastive learning (IPCL), and compares the internal representations learned by this model and other instance-level contrastive learning systems to the structure of human brain responses. We present the first evidence that self-supervised systems can show more brain-like representation than category-supervised models. Further, we find that recent substantial gains in top-1 accuracy from instance-wise contrastive learning models do not result in more brain-like representation; instead, the architecture and normalization scheme are critical. Finally, this dataset reveals substantial representational structure in intermediate and late stages of the human visual system that is not accounted for by any model, whether self-supervised or category-supervised. Considering both neuroscience and machine vision perspectives, these results provide promise for instance-level representation as a key objective of visual system encoding, and highlight the room to grow towards more robust, efficient, human-like object representation. Competing Interest Statement: The authors have declared no competing interest.