The domain-general multiple demand (MD) network does not support core aspects of language comprehension: a large-scale fMRI investigation

E Diachek, I Blank, M Siegelman, J Affourtit… - Journal of …, 2020 - Soc Neuroscience
Aside from the language-selective left-lateralized frontotemporal network, language
comprehension sometimes recruits a domain-general bilateral frontoparietal network implicated in …

Composition is the core driver of the language-selective network

F Mollica, M Siegelman, E Diachek… - Neurobiology of …, 2020 - direct.mit.edu
The frontotemporal language network responds robustly and selectively to sentences. But
the features of linguistic input that drive this response and the computations that these …

An attempt to conceptually replicate the dissociation between syntax and semantics during sentence comprehension

M Siegelman, IA Blank, Z Mineroff, E Fedorenko - Neuroscience, 2019 - Elsevier
Is sentence structure processed by the same neural and cognitive resources that are recruited
for processing word meanings, or do structure and meaning rely on distinct resources? …

'Constituent length' effects in fMRI do not provide evidence for abstract syntactic processing

C Shain, H Kean, B Lipkin, J Affourtit, M Siegelman… - BioRxiv, 2021 - biorxiv.org
How are syntactically and semantically connected word sequences, or constituents, represented
in the human language system? An influential fMRI study, Pallier et al. (2011, PNAS), …

Probabilistic atlas for the language network based on precision fMRI data from >800 individuals

…, O Jouravlev, L Rakocevic, B Pritchett, M Siegelman… - Scientific Data, 2022 - nature.com
Two analytic traditions characterize fMRI language research. One relies on averaging activations
across individuals. This approach has limitations: because of inter-individual variability …

Lack of selectivity for syntax relative to word meanings throughout the language network

E Fedorenko, IA Blank, M Siegelman, Z Mineroff - Cognition, 2020 - Elsevier
To understand what you are reading now, your mind retrieves the meanings of words and
constructions from a linguistic knowledge store (lexico-semantic processing) and identifies the …

Testing the limits of natural language models for predicting human language judgements

T Golan, M Siegelman, N Kriegeskorte… - Nature Machine …, 2023 - nature.com
Neural network language models appear to be increasingly aligned with how humans process
and generate language, but identifying their weaknesses through adversarial examples …

Graded sensitivity to structure and meaning throughout the human language network

…, H Kean, C Casto, B Lipkin, J Affourtit, M Siegelman… - 2021 - europepmc.org
Human language has a remarkable capacity to encode complex ideas. This capacity arises
because language is compositional: the form and arrangement of words in sentences (…

Functional identification of language-responsive channels in individual participants in MEG investigations

…, A Pongos, C Shain, B Lipkin, M Siegelman… - bioRxiv, 2023 - ncbi.nlm.nih.gov
Making meaningful inferences about the functional architecture of the language system
requires the ability to refer to the same neural units across individuals and studies. Traditional …

An attempt to replicate a dissociation between syntax and semantics during sentence comprehension reported by Dapretto & Bookheimer (1999, Neuron)

M Siegelman, Z Mineroff, I Blank, E Fedorenko - bioRxiv, 2017 - biorxiv.org
Does processing the meanings of individual words vs. assembling words into phrases and
sentences rely on distinct pools of cognitive and neural resources? Many have argued for …