TY  - JOUR
T1  - Auditory and Language Contributions to Neural Encoding of Speech Features in Noisy Environments
JF  - bioRxiv
DO  - 10.1101/377838
SP  - 377838
AU  - Jiajie Zou
AU  - Jun Feng
AU  - Tianyong Xu
AU  - Peiqing Jin
AU  - Cheng Luo
AU  - Feiyan Chen
AU  - Jianfeng Zhang
AU  - Nai Ding
Y1  - 2018/01/01
UR  - http://biorxiv.org/content/early/2018/07/26/377838.abstract
N2  - Recognizing speech in noisy environments is a challenging task that involves both auditory and language mechanisms. Previous studies have demonstrated noise-robust neural tracking of the speech envelope, i.e., fluctuations in sound intensity, in human auditory cortex, which provides a plausible neural basis for noise-robust speech recognition. The current study aims to tease apart auditory and language contributions to noise-robust envelope tracking by comparing 2 groups of listeners, i.e., native listeners of the testing language and foreign listeners who do not understand the testing language. In the experiment, speech is mixed with spectrally matched stationary noise at 4 intensity levels, and the neural responses are recorded using electroencephalography (EEG). When the noise intensity increases, an increase in neural response gain is observed for both groups of listeners, demonstrating auditory gain control mechanisms. Language comprehension creates no overall boost in the response gain or the envelope-tracking precision, but instead modulates the spatial and temporal profiles of envelope-tracking activity. Based on the spatio-temporal dynamics of envelope-tracking activity, the 2 groups of listeners and the 4 levels of noise intensity can be jointly decoded by a linear classifier. Altogether, the results show that without feedback from language processing, auditory mechanisms such as gain control can lead to a noise-robust speech representation. High-level language processing, however, further modulates the spatio-temporal profiles of the neural representation of the speech envelope.
ER  -