RT Journal Article
SR Electronic
T1 Lexical semantic content, not syntactic structure, is the main contributor to ANN-brain similarity of fMRI responses in the language network
JF bioRxiv
FD Cold Spring Harbor Laboratory
SP 2023.05.05.539646
DO 10.1101/2023.05.05.539646
A1 Carina Kauf
A1 Greta Tuckute
A1 Roger Levy
A1 Jacob Andreas
A1 Evelina Fedorenko
YR 2023
UL http://biorxiv.org/content/early/2023/05/06/2023.05.05.539646.abstract
AB Representations from artificial neural network (ANN) language models have been shown to predict human brain activity in the language network. To understand what aspects of linguistic stimuli contribute to ANN-to-brain similarity, we used an fMRI dataset of responses to n=627 naturalistic English sentences (Pereira et al., 2018) and systematically manipulated the stimuli for which ANN representations were extracted. In particular, we i) perturbed sentences’ word order, ii) removed different subsets of words, or iii) replaced sentences with other sentences of varying semantic similarity. We found that the lexical semantic content of the sentence (largely carried by content words) rather than the sentence’s syntactic form (conveyed via word order or function words) is primarily responsible for the ANN-to-brain similarity. In follow-up analyses, we found that perturbation manipulations that adversely affect brain predictivity also lead to more divergent representations in the ANN’s embedding space and decrease the ANN’s ability to predict upcoming tokens in those stimuli. Further, results are robust to whether the mapping model is trained on intact or perturbed stimuli, and whether the ANN sentence representations are conditioned on the same linguistic context that humans saw. The critical result—that lexical semantic content is the main contributor to the similarity between ANN representations and neural ones—aligns with the idea that the goal of the human language system is to extract meaning from linguistic strings. Finally, this work highlights the strength of systematic experimental manipulations for evaluating how close we are to accurate and generalizable models of the human language network.
Competing Interest Statement: The authors have declared no competing interest.
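
The abstract describes three stimulus manipulations. Below is a minimal, illustrative Python sketch of the first two (word-order perturbation and word-subset removal); it is not the authors' actual pipeline. The whitespace tokenization and the small function-word list are simplifying assumptions, and the third manipulation (replacing sentences with others of varying semantic similarity) is omitted because it requires a sentence-similarity model.

```python
import random

# Toy function-word list (an illustrative assumption, not the study's
# actual lexicon) used to split content words from function words.
FUNCTION_WORDS = {
    "the", "a", "an", "of", "in", "on", "to", "and", "or", "is",
    "was", "that", "it", "for", "with", "as", "by", "at", "be",
}

def perturb_word_order(sentence: str, seed: int = 0) -> str:
    """Manipulation (i): randomly scramble the sentence's word order."""
    words = sentence.split()
    random.Random(seed).shuffle(words)
    return " ".join(words)

def remove_function_words(sentence: str) -> str:
    """Manipulation (ii), one variant: keep only content words."""
    return " ".join(w for w in sentence.split()
                    if w.lower().strip(".,!?") not in FUNCTION_WORDS)

def remove_content_words(sentence: str) -> str:
    """Manipulation (ii), complementary variant: keep only function words."""
    return " ".join(w for w in sentence.split()
                    if w.lower().strip(".,!?") in FUNCTION_WORDS)

if __name__ == "__main__":
    s = "The dog chased the ball across the park."
    print(perturb_word_order(s))     # scrambled syntactic form
    print(remove_function_words(s))  # lexical semantic content only
    print(remove_content_words(s))   # syntactic frame only
```

In the study's design, ANN representations of such perturbed stimuli are compared against representations of the intact sentences in their ability to predict fMRI responses; the sketch above only shows how the perturbed stimulus strings themselves might be generated.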