PT - JOURNAL ARTICLE
AU - Laura Gwilliams
AU - Tal Linzen
AU - David Poeppel
AU - Alec Marantz
TI - In spoken word recognition the future predicts the past
AID - 10.1101/150151
DP - 2017 Jan 01
TA - bioRxiv
PG - 150151
4099 - http://biorxiv.org/content/early/2017/06/14/150151.short
4100 - http://biorxiv.org/content/early/2017/06/14/150151.full
AB - Speech is an inherently noisy and ambiguous signal. To derive meaning fluently, a listener must integrate contextual information to guide interpretation of the sensory input. While many studies have demonstrated the influence of prior context, the neural mechanisms supporting the integration of subsequent information remain unknown. Using magnetoencephalography, we analysed responses to spoken words with a variably ambiguous onset phoneme, the identity of which is later disambiguated at the lexical uniqueness point. Our results uncover a three-level processing network. Subphonemic detail is preserved in primary auditory cortex over long timescales and re-evoked at subsequent phoneme positions. Commitments to phonological categories occur in parallel, resolving on the shorter timescale of ~450 ms. Finally, predictions are formed over likely lexical items. These findings provide evidence that future input determines the perception of earlier speech sounds by maintaining sensory features until they can be optimally integrated with top-down information.