PT - JOURNAL ARTICLE
AU - Antoine J. Shahin
AU - Kristina C. Backer
AU - Lawrence D. Rosenblum
AU - Jess R. Kerlin
TI - Neural mechanisms underlying cross-modal phonetic encoding
AID - 10.1523/JNEUROSCI.1566-17.2017
DP - 2017 Dec 20
TA - The Journal of Neuroscience
PG - 1566-17
4099 - http://www.jneurosci.org/content/early/2017/12/20/JNEUROSCI.1566-17.2017.short
4100 - http://www.jneurosci.org/content/early/2017/12/20/JNEUROSCI.1566-17.2017.full
AB - Audiovisual (AV) integration is essential for speech comprehension, especially in adverse listening situations. Divergent, but not mutually exclusive, theories have been proposed to explain the neural mechanisms underlying AV integration. One theory advocates that this process occurs via interactions between the auditory and visual cortices, as opposed to fusion of AV percepts in a multisensory integrator. Building upon this idea, we proposed that AV integration in spoken language reflects visually induced weighting of phonetic representations at the auditory cortex. EEG was recorded while male and female human subjects watched and listened to videos of a speaker uttering consonant-vowel (CV) syllables /ba/ and /fa/, presented in auditory-only, AV congruent, or AV incongruent contexts. Subjects reported whether they heard /ba/ or /fa/. We hypothesized that vision alters phonetic encoding by dynamically weighting which phonetic representation in the auditory cortex is strengthened or weakened. That is, when subjects are presented with visual /fa/ and acoustic /ba/ and hear /fa/ (illusion-fa), the visual input strengthens the weighting of the phone /f/ representation. When subjects are presented with visual /ba/ and acoustic /fa/ and hear /ba/ (illusion-ba), the visual input weakens the weighting of the phone /f/ representation. Indeed, we found an enlarged N1 auditory evoked potential when subjects perceived illusion-ba, and a reduced N1 when they perceived illusion-fa, mirroring the N1 behavior for /ba/ and /fa/ in auditory-only settings. These effects were especially pronounced in individuals with more robust illusory perception. These findings provide evidence that visual speech modifies phonetic encoding at the auditory cortex. SIGNIFICANCE STATEMENT: The current study presents evidence that audiovisual integration in spoken language occurs when one modality (vision) acts on representations of a second modality (audition). Using the McGurk illusion, we show that visual context primes phonetic representations at the auditory cortex, altering the auditory percept, as evidenced by changes in the N1 auditory evoked potential. This finding reinforces the theory that audiovisual integration occurs via visual networks influencing phonetic representations in the auditory cortex. We believe that this will lead to the generation of new hypotheses regarding cross-modal mapping, particularly whether it occurs via direct or indirect routes, e.g., via a multisensory mediator.