Incorporating natural language into vision models improves prediction and understanding of higher visual cortex

Aria Y. Wang (1,2), Kendrick Kay (3), Thomas Naselaris (3,4), Michael J. Tarr (1,2,5), Leila Wehbe (1,2,5)
doi: https://doi.org/10.1101/2022.09.27.508760
1 Neuroscience Institute, Carnegie Mellon University
2 Machine Learning Department, Carnegie Mellon University
3 Center for Magnetic Resonance Research (CMRR), Department of Radiology, University of Minnesota
4 Department of Neuroscience, University of Minnesota
5 Department of Psychology, Carnegie Mellon University

Correspondence: lwehbe@cmu.edu

ABSTRACT

We hypothesize that high-level visual representations contain more than the representation of individual categories: they represent complex semantic information inherent in scenes that is most relevant for interaction with the world. Consequently, multimodal models such as Contrastive Language-Image Pre-training (CLIP), which constructs image embeddings to best match embeddings of image captions, should better predict neural responses in visual cortex, since image captions typically contain the most semantically relevant information in an image for humans. We extracted image features using CLIP, which encodes visual concepts with supervision from natural language captions, and then fit voxelwise encoding models based on these features to predict brain responses to real-world images from the Natural Scenes Dataset. CLIP explains up to R² = 78% of the variance in stimulus-evoked responses from individual voxels in held-out test data. CLIP also explains greater unique variance in higher-level visual areas than models trained only on image/label pairs (an ImageNet-trained ResNet) or on text alone (BERT). Visualizations of model embeddings and Principal Component Analysis (PCA) reveal that, through the use of captions, CLIP captures both global and fine-grained semantic dimensions represented within visual cortex. Based on these results, we suggest that humans' understanding of their environment forms an important dimension of visual representation.
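
The pipeline sketched below mirrors the analysis the abstract describes: embed each stimulus image with CLIP's image encoder, fit a ridge regression per voxel from embeddings to responses, score held-out R², and run PCA over the learned weights. This is a minimal sketch, not the authors' code: it assumes the Hugging Face `openai/clip-vit-base-patch32` checkpoint and scikit-learn's `RidgeCV` (the paper's exact CLIP variant and regularization scheme may differ), and the `images` and `voxel_responses` arrays are random placeholders standing in for Natural Scenes Dataset stimuli and beta estimates.

```python
import numpy as np
import torch
from PIL import Image
from sklearn.decomposition import PCA
from sklearn.linear_model import RidgeCV
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device).eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_image_features(pil_images, batch_size=64):
    """Embed PIL images with CLIP's image tower; returns (n_images, 512)."""
    feats = []
    with torch.no_grad():
        for i in range(0, len(pil_images), batch_size):
            inputs = processor(images=pil_images[i:i + batch_size],
                               return_tensors="pt").to(device)
            feats.append(model.get_image_features(**inputs).cpu().numpy())
    return np.concatenate(feats)

# Placeholder data: swap in NSD stimuli and voxelwise beta responses here.
rng = np.random.default_rng(0)
images = [Image.fromarray((rng.random((224, 224, 3)) * 255).astype(np.uint8))
          for _ in range(200)]
voxel_responses = rng.standard_normal((200, 1000))  # (n_images, n_voxels)

X = clip_image_features(images)
X_train, X_test, y_train, y_test = train_test_split(
    X, voxel_responses, test_size=0.2, random_state=0)

# One linear map per voxel (multi-output ridge), with the penalty chosen
# by cross-validation over a log-spaced grid.
ridge = RidgeCV(alphas=np.logspace(-2, 5, 8)).fit(X_train, y_train)

# Held-out R^2 per voxel, the same style of test-set score the abstract reports.
r2_per_voxel = r2_score(y_test, ridge.predict(X_test), multioutput="raw_values")
print("best voxel R^2:", r2_per_voxel.max())

# PCA over the per-voxel weight vectors (shape: n_voxels x n_features) gives
# the kind of low-dimensional semantic axes the paper visualizes.
pca = PCA(n_components=5).fit(ridge.coef_)
print("variance explained by top PCs:", pca.explained_variance_ratio_)
```

With random placeholder data the held-out R² will hover around or below zero; the point of the sketch is the shape of the pipeline, not the numbers. The paper's unique-variance comparison would additionally require fitting a joint model over CLIP plus ResNet or BERT features and subtracting the reduced model's R², which is omitted here.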

Competing Interest Statement

The authors have declared no competing interest.

Copyright 
The copyright holder for this preprint is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. All rights reserved. No reuse allowed without permission.
Posted September 29, 2022.

Subject Area: Neuroscience