Abstract
Recent research has shown that the internal dynamics of an artificial neural network model of sentence comprehension display a pattern similar to the amplitude of the N400 across several conditions known to modulate this event-related potential. These results led Rabovsky, Hansen, and McClelland (2018) to suggest that the N400 might reflect change in an implicit predictive representation of meaning, corresponding to semantic prediction error. This explanation stands as an alternative to the hypothesis that the N400 reflects lexical prediction error as estimated by word Surprisal (Frank, Otten, Galli, & Vigliocco, 2015). In the present study, we directly model the amplitude of the N400 elicited during naturalistic sentence processing, using as a predictor the update of the distributed representation of sentence meaning generated by a Sentence Gestalt (SG) model (McClelland, St. John, & Taraban, 1989) trained on a large-scale text corpus. This approach enables quantitative prediction of N400 amplitudes from a cognitively motivated model, as well as quantitative comparison of this model against alternative models of the N400. Specifically, we compare the update measure from the SG model to Surprisal estimated by a comparable language model trained on next-word prediction. The results reported in this paper corroborate the hypothesis that N400 amplitudes correspond to the change in an implicit predictive representation of meaning after every word presentation. Furthermore, we argue that a comparison of SG update and Surprisal may also uncover two distinct, though probably closely related, sub-processes that contribute to sentence processing.
Competing Interest Statement
The authors have declared no competing interest.