RT Journal Article
SR Electronic
T1 In spoken word recognition the future predicts the past
JF The Journal of Neuroscience
JO J. Neurosci.
FD Society for Neuroscience
SP 0065-18
DO 10.1523/JNEUROSCI.0065-18.2018
A1 Laura Gwilliams
A1 Tal Linzen
A1 David Poeppel
A1 Alec Marantz
YR 2018
UL http://www.jneurosci.org/content/early/2018/07/16/JNEUROSCI.0065-18.2018.abstract
AB Speech is an inherently noisy and ambiguous signal. In order to fluently derive meaning, a listener must integrate contextual information to guide interpretations of the sensory input. While many studies have demonstrated the influence of prior context on speech perception, the neural mechanisms supporting the integration of subsequent context remain unknown. Using magnetoencephalography (MEG) to record from human auditory cortex, we analysed responses to spoken words with a varyingly ambiguous onset phoneme, the identity of which is later disambiguated at the lexical uniqueness point. Fifty participants (both male and female) were recruited across two MEG experiments. Our findings suggest that primary auditory cortex is sensitive to phonological ambiguity very early during processing, at just 50 ms after onset. Subphonemic detail is preserved in auditory cortex over long timescales, and re-evoked at subsequent phoneme positions. Commitments to phonological categories occur in parallel, resolving on the shorter timescale of ∼450 ms. These findings provide evidence that future input determines the perception of earlier speech sounds by maintaining sensory features until they can be integrated with top-down lexical information.
Significance Statement: The perception of a speech sound is determined by its surrounding context, in the form of words, sentences, and other speech sounds. Often, such contextual information becomes available later than the sensory input. The present study is the first to unveil how the brain uses this subsequent information to aid speech comprehension. Concretely, we find that the auditory system actively maintains the acoustic signal in auditory cortex, while concurrently making guesses about the identity of the words being said. Such a processing strategy allows the content of the message to be accessed quickly, while also permitting re-analysis of the acoustic signal to minimise parsing mistakes.