PT - JOURNAL ARTICLE
AU - Talia Konkle
AU - George A. Alvarez
TI - Instance-level contrastive learning yields human brain-like representation without category-supervision
AID - 10.1101/2020.06.15.153247
DP - 2020 Jan 01
TA - bioRxiv
PG - 2020.06.15.153247
4099 - http://biorxiv.org/content/early/2020/06/16/2020.06.15.153247.short
4100 - http://biorxiv.org/content/early/2020/06/16/2020.06.15.153247.full
AB - Humans learn object categories without millions of labels, but to date the models with the highest correspondence to primate visual systems are all category-supervised. This paper introduces a new self-supervised learning framework, instance-prototype contrastive learning (IPCL), and compares the internal representations learned by this model and other instance-level contrastive learning systems to the structure of human brain responses. We present the first evidence to date that self-supervised systems can show more brain-like representation than category-supervised models. Further, we find that recent substantial gains in top-1 accuracy from instance-wise contrastive learning models do not result in more brain-like representation; instead, we find the architecture and normalization scheme are critical. Finally, this dataset reveals substantial representational structure in intermediate and late stages of the human visual system that is not accounted for by any model, whether self-supervised or category-supervised. Considering both neuroscience and machine vision perspectives, these results provide promise for instance-level representation as a key objective of visual system encoding, and highlight the room to grow towards more robust, efficient, human-like object representation.
CI - The authors have declared no competing interest.