RT Journal Article
SR Electronic
T1 Transferring and Generalizing Deep-Learning-based Neural Encoding Models across Subjects
JF bioRxiv
FD Cold Spring Harbor Laboratory
SP 171017
DO 10.1101/171017
A1 Haiguang Wen
A1 Junxing Shi
A1 Wei Chen
A1 Zhongming Liu
YR 2017
UL http://biorxiv.org/content/early/2017/08/01/171017.abstract
AB Recent studies have shown the value of using deep learning models for mapping and characterizing how the brain represents and organizes information for natural vision. However, modeling the relationship between deep learning models and the brain (i.e., encoding models) requires measuring cortical responses to large and diverse sets of natural visual stimuli from single subjects. This requirement has limited prior studies to a few subjects, making it difficult to generalize findings across subjects or to a population. In this study, we developed new methods to transfer and generalize encoding models across subjects. To train encoding models specific to a subject, the models trained for other subjects were used as the prior models and were refined efficiently through Bayesian inference with a limited amount of data from the specific subject. To train encoding models for a population, the models were progressively trained and updated with incremental data from different subjects. As a proof of principle, we applied these methods to functional magnetic resonance imaging (fMRI) data from three subjects watching tens of hours of naturalistic videos, while a deep residual neural network trained for image recognition was used to model visual cortical processing. Results demonstrate that the methods developed herein provide an efficient and effective strategy to establish subject-specific or population-wide predictive models of cortical representations of high-dimensional and hierarchical visual features.