A multisensory cortical network for understanding speech in noise

J Cogn Neurosci. 2009 Sep;21(9):1790-805. doi: 10.1162/jocn.2009.21118.

Abstract

In noisy environments, listeners can often hear a speaker's voice yet struggle to understand what is said. The most effective way to improve intelligibility in such conditions is to watch the speaker's mouth movements. Here we identify the neural networks that distinguish understanding from merely hearing speech, and determine how the brain applies visual information to improve intelligibility. Using functional magnetic resonance imaging, we show that understanding speech-in-noise is supported by a network of brain areas including the left superior parietal lobule, the motor/premotor cortex, and the left anterior superior temporal sulcus (STS), a likely apex of the acoustic processing hierarchy. Multisensory integration likely improves comprehension by enhancing communication between the left temporal-occipital boundary, the left medial temporal lobe, and the left STS. This demonstrates how the brain uses information from multiple modalities to improve speech comprehension in naturalistic, acoustically adverse conditions.

Publication types

  • Research Support, N.I.H., Extramural

MeSH terms

  • Acoustic Stimulation / methods
  • Adolescent
  • Adult
  • Analysis of Variance
  • Brain Mapping*
  • Cerebral Cortex / anatomy & histology
  • Cerebral Cortex / blood supply
  • Cerebral Cortex / physiology*
  • Comprehension / physiology*
  • Female
  • Hearing / physiology
  • Humans
  • Image Processing, Computer-Assisted / methods
  • Magnetic Resonance Imaging / methods
  • Male
  • Neural Pathways / blood supply
  • Neural Pathways / physiology
  • Noise*
  • Oxygen / blood
  • Photic Stimulation / methods
  • Reaction Time / physiology
  • Speech / physiology*
  • Speech Perception / physiology*
  • Young Adult

Substances

  • Oxygen