Diverse deep neural networks all predict human IT well, after training and fitting

Katherine R. Storrs, Tim C. Kietzmann, Alexander Walther, Johannes Mehrer, Nikolaus Kriegeskorte
doi: https://doi.org/10.1101/2020.05.07.082743
Katherine R. Storrs (1) — correspondence: katherine.storrs@gmail.com
Tim C. Kietzmann (2, 3)
Alexander Walther (3)
Johannes Mehrer (3)
Nikolaus Kriegeskorte (4)

1. Justus Liebig University Giessen, Germany
2. Donders Institute for Brain, Cognition and Behaviour, Nijmegen, Netherlands
3. MRC Cognition and Brain Sciences Unit, Cambridge, UK
4. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, USA

ABSTRACT

Deep neural networks (DNNs) trained on object recognition provide the best current models of high-level visual areas in the brain. What remains unclear is how strongly network design choices, such as architecture, task training, and subsequent fitting to brain data, contribute to the observed similarities. Here we compare a diverse set of nine DNN architectures on their ability to explain the representational geometry of 62 isolated object images in human inferior temporal (hIT) cortex, as measured with functional magnetic resonance imaging. We compare untrained networks to their task-trained counterparts, and assess the effect of fitting them to hIT using a cross-validation procedure. To best explain hIT, we fit a weighted combination of the principal components of the features within each layer, and subsequently a weighted combination of layers. We test all models across all stages of training and fitting for their correlation with the hIT representational dissimilarity matrix (RDM) using an independent set of images and subjects. We find that trained models significantly outperform untrained models (accounting for 57% more of the explainable variance), suggesting that features representing natural images are important for explaining hIT. Model fitting further improves the alignment of DNN and hIT representations (by 124%), suggesting that the relative prevalence of different features in hIT does not readily emerge from the particular ImageNet object-recognition task used to train the networks. Finally, all DNN architectures tested achieved equivalently high performance once trained and fitted. Similar ability to explain hIT representations appears to be shared among deep feedforward hierarchies of nonlinear features with spatially restricted receptive fields.
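The fitting-and-evaluation logic described in the abstract can be sketched in miniature. The toy below is a simplification and uses hypothetical stand-in data: random activations replace real DNN layers, the "hIT" target is a noisy copy of one layer, and unconstrained ordinary least squares over layer RDMs replaces the authors' weighted principal-component and layer fitting. It illustrates only the general recipe: build RDMs, fit layer weights on one set of stimuli, and evaluate rank correlation on held-out stimuli whose pairs were never seen during fitting.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

def rdm(features):
    """Representational dissimilarity matrix (condensed form):
    correlation distance between every pair of stimulus feature vectors."""
    return pdist(features, metric="correlation")

# Hypothetical stand-ins: 62 stimuli and two DNN "layers" of unit
# activations; the target "hIT" representation is a noisy copy of
# layer 2, so a good fit should weight layer 2 heavily.
n_stim = 62
layer1 = rng.normal(size=(n_stim, 100))
layer2 = rng.normal(size=(n_stim, 100))
hit_feats = layer2 + 0.5 * rng.normal(size=layer2.shape)

# Cross-validate over stimuli: fit layer weights on 40 images,
# evaluate on the held-out 22.
train, test = np.arange(40), np.arange(40, n_stim)

def design(idx):
    # One column of RDM entries per layer.
    return np.stack([rdm(layer1[idx]), rdm(layer2[idx])], axis=1)

# Ordinary least squares over layer RDMs (a simplification of the
# authors' fitting procedure).
w, *_ = np.linalg.lstsq(design(train), rdm(hit_feats[train]), rcond=None)

# Rank correlation between the reweighted model RDM and the target
# RDM on held-out stimuli.
rho, _ = spearmanr(design(test) @ w, rdm(hit_feats[test]))
print(f"layer weights: {w.round(2)}, held-out Spearman rho: {rho:.2f}")
```

Because the held-out stimuli contribute no dissimilarities to the fit, the test correlation measures generalization of the learned weighting rather than overfitting to particular image pairs, which is the role cross-validation plays in the study.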

Competing Interest Statement

The authors have declared no competing interest.

Copyright 
The copyright holder for this preprint is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made available under a CC-BY-NC-ND 4.0 International license.
Posted May 08, 2020.
Subject Area

  • Neuroscience