TY - JOUR
T1 - Neural evidence for the prediction of animacy features during language comprehension: Evidence from MEG and EEG Representational Similarity Analysis
JF - bioRxiv
DO - 10.1101/709394
SP - 709394
AU - Lin Wang
AU - Edward Wlotko
AU - Edward Alexander
AU - Lotte Schoot
AU - Minjae Kim
AU - Lena Warnke
AU - Gina R. Kuperberg
Y1 - 2019/01/01
UR - http://biorxiv.org/content/early/2019/07/22/709394.abstract
N2 - It has been proposed that people generate probabilistic predictions at multiple levels of linguistic representation during language comprehension. Here we used magnetoencephalography (MEG) and electroencephalography (EEG) in combination with Representational Similarity Analysis (RSA) to seek neural evidence for the prediction of animacy features. In two studies, MEG and EEG activity was measured as participants read three-sentence scenarios in which the verbs in the final sentences constrained for either animate or inanimate semantic features of upcoming nouns. The broader context constrained for either a specific noun or for multiple nouns belonging to the same animacy category. We quantified the spatial similarity pattern of the brain activity measured by MEG and EEG following the verbs until just before the presentation of the nouns. We found clear and converging evidence across the MEG and EEG datasets that the spatial pattern of neural activity following animate-constraining verbs was more similar than the spatial pattern following inanimate-constraining verbs. This effect could not be explained by lexical-semantic processing of the verbs themselves. We therefore suggest that it reflects the inherent difference in the semantic similarity structure of the predicted animate and inanimate nouns. Moreover, the effect was present regardless of whether it was possible to predict a specific word on the basis of the prior discourse context. This provides strong evidence for the prediction of coarse-grained semantic features that goes beyond the prediction of individual words. Significance statement: Language inputs unfold very quickly during real-time communication. By predicting ahead, we can give our brains a “head-start”, so that language comprehension is faster and more efficient. While most contexts do not constrain strongly for a specific word, they do allow us to predict some upcoming information. For example, following the context “they cautioned the…”, we know that the next word will be animate rather than inanimate (we can caution a person, but not an object). Here we used EEG and MEG to show that the brain uses these contextual constraints to predict the animacy of upcoming words during sentence comprehension, and that these predictions are associated with specific spatial patterns of neural activity.
ER -