Abstract
Speech comprehension entails the neural mapping of the acoustic speech signal onto learned linguistic units. This acousto-linguistic transformation is bi-directional, whereby higher-level linguistic processes (e.g., semantics) modulate the acoustic analysis of individual linguistic units. Here, we investigated the cortical topography and linguistic modulation of the most fundamental linguistic unit, the phoneme. We presented natural speech and ‘phoneme quilts’ (pseudo-randomly shuffled phonemes) in either a familiar (English) or unfamiliar (Korean) language to native English speakers while recording fMRI. This design dissociates the contribution of acoustic and linguistic processes towards phoneme analysis. We show that (1) the four main phoneme classes (vowels, nasals, plosives, fricatives) are differentially and topographically encoded in human auditory cortex, and that (2) their acoustic analysis is modulated by linguistic analysis. These results suggest that the linguistic modulation of cortical sensitivity to phoneme classes minimizes prediction error during natural speech perception, thereby aiding speech comprehension in challenging listening situations.
Competing Interest Statement
The authors have declared no competing interests.
Footnotes
Figure 1C is corrected.
https://osf.io/zgj3m/?view_only=cd4942f9ea674d79a5644796d5498e3c