RT Journal Article
SR Electronic
T1 Predictive Brain Mechanisms in Sound-to-Meaning Mapping during Speech Processing
JF The Journal of Neuroscience
JO J. Neurosci.
FD Society for Neuroscience
SP 10813
OP 10822
DO 10.1523/JNEUROSCI.0583-16.2016
VO 36
IS 42
A1 Bingjiang Lyu
A1 Jianqiao Ge
A1 Zhendong Niu
A1 Li Hai Tan
A1 Jia-Hong Gao
YR 2016
UL http://www.jneurosci.org/content/36/42/10813.abstract
AB Spoken language comprehension relies not only on the identification of individual words, but also on the expectations arising from contextual information. A distributed frontotemporal network is known to facilitate the mapping of speech sounds onto their corresponding meanings. However, how prior expectations influence this efficient mapping at the neuroanatomical level, especially in terms of individual words, remains unclear. Using fMRI, we addressed this question in the framework of the dual-stream model by scanning native speakers of Mandarin Chinese, a language highly dependent on context. We found that, within the ventral pathway, the violated expectations elicited stronger activations in the left anterior superior temporal gyrus and the ventral inferior frontal gyrus (IFG) for the phonological–semantic prediction of spoken words. Functional connectivity analysis showed that expectations were mediated by both top-down modulation from the left ventral IFG to the anterior temporal regions and enhanced cross-stream integration through strengthened connections between different subregions of the left IFG. By further investigating the dynamic causality within the dual-stream model, we elucidated how the human brain accomplishes sound-to-meaning mapping for words in a predictive manner.
SIGNIFICANCE STATEMENT In daily communication via spoken language, one of the core processes is understanding the words being used. Effortless and efficient information exchange via speech relies not only on the identification of individual spoken words, but also on the contextual information giving rise to expected meanings. Despite accumulating evidence for the bottom-up perception of auditory input, it is still not fully understood how top-down modulation is achieved in the extensive frontotemporal cortical network. Here, we provide a comprehensive description of the neural substrates underlying sound-to-meaning mapping and demonstrate how the dual-stream model functions in the modulation of expectations, allowing for a better understanding of how the human brain accomplishes sound-to-meaning mapping in a predictive manner.