Abstract
The emergence of artificial neural networks (ANNs) that seem capable of emulating the remarkable human capacity for language has raised fundamental questions about complex cognition in humans and machines. This debate has taken place, however, in the context of limited empirical evidence about how the internal operations of ANNs relate to dynamic processes in the human brain as listeners understand language. Using Representational Similarity Analysis, we conducted a set of in-depth comparisons between these two types of systems, focusing on a core aspect of human language: the building of a structured and meaningful interpretation as each word in a sentence is heard in sequence. We extracted incremental structural measures of unfolding sentences from a deep language ANN and tested these against the spatiotemporally resolved brain activity recorded by electro/magnetoencephalography when human participants were listening to the same sentences. These uniquely neurocomputationally specific comparisons revealed strong behavioral and neural alignments between humans and ANNs in the use of multi-dimensional probabilistic constraints to build word-by-word structural interpretations, suggesting important commonalities in basic computational strategies for finding structure in time.
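The core comparison described above can be illustrated with a minimal Representational Similarity Analysis sketch. This is a generic illustration of the RSA technique, not the paper's actual pipeline: the matrices below are random stand-ins, whereas the study used incremental structural measures from a language ANN and source-resolved electro/magnetoencephalography responses. All variable names and data dimensions here are hypothetical.

```python
# Minimal RSA sketch with hypothetical data.
# Rows = conditions (e.g., words or sentence positions);
# columns = features (model measures or neural sensors/sources).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
ann_measures = rng.standard_normal((20, 50))     # stand-in for ANN structural measures
brain_activity = rng.standard_normal((20, 300))  # stand-in for EMEG activity patterns

# First-order step: build each system's representational dissimilarity
# matrix (RDM) as pairwise correlation distances between conditions.
# pdist returns the condensed upper triangle of the RDM.
ann_rdm = pdist(ann_measures, metric="correlation")
brain_rdm = pdist(brain_activity, metric="correlation")

# Second-order step: rank-correlate the two RDMs. A reliably positive
# rho indicates shared representational geometry between the systems.
rho, p = spearmanr(ann_rdm, brain_rdm)
print(f"RSA similarity (Spearman rho): {rho:.3f}")
```

In the spatiotemporally resolved version of this analysis, the brain RDM would be recomputed within a sliding window over time and cortical location, yielding a map of when and where the ANN measures align with neural activity.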
Competing Interest Statement
The authors have declared no competing interest.