RT Journal Article
SR Electronic
T1 A hierarchy of linguistic predictions during natural language comprehension
JF bioRxiv
FD Cold Spring Harbor Laboratory
SP 2020.12.03.410399
DO 10.1101/2020.12.03.410399
A1 Micha Heilbron
A1 Kristijan Armeni
A1 Jan-Mathijs Schoffelen
A1 Peter Hagoort
A1 Floris P. de Lange
YR 2021
UL http://biorxiv.org/content/early/2021/05/27/2020.12.03.410399.abstract
AB Understanding spoken language requires transforming ambiguous acoustic streams into a hierarchy of representations, from phonemes to meaning. It has been suggested that the brain uses prediction to guide the interpretation of incoming input. However, the role of prediction in language processing remains disputed, with disagreement about both the ubiquity and representational nature of predictions. Here, we address both issues by analysing brain recordings of participants listening to audiobooks, and using a deep neural network (GPT-2) to precisely quantify contextual predictions. First, we establish that brain responses to words are modulated by ubiquitous, probabilistic predictions. Next, we disentangle model-based predictions into distinct dimensions, revealing dissociable signatures of syntactic, phonemic and semantic predictions. Finally, we show that high-level (word) predictions inform low-level (phoneme) predictions, supporting hierarchical predictive processing. Together, these results underscore the ubiquity of prediction in language processing, showing that the brain spontaneously predicts upcoming language at multiple levels of abstraction.
Competing Interest Statement: The authors have declared no competing interest.