Neural processes underlying perceptual enhancement by visual speech gestures

Neuroreport. 2003 Dec 2;14(17):2213-8. doi: 10.1097/00001756-200312020-00016.

Abstract

This fMRI study explores brain regions involved in the perceptual enhancement afforded by observation of visual speech gestures. Subjects passively identified words presented in the following conditions: audio-only, audiovisual, audio-only with noise, audiovisual with noise, and visual-only. The brain may use concordant audio and visual information to enhance perception by integrating the information at a converging multisensory site. Consistent with the response properties of multisensory integration sites, enhanced activity in the middle and superior temporal gyrus/sulcus was greatest when concordant audiovisual stimuli were presented with acoustic noise. Activity in brain regions involved in the planning and execution of speech production, observed in response to visual speech presented with degraded or absent auditory stimulation, is consistent with an additional pathway through which speech perception is facilitated by internally simulating the intended speech act of the observed speaker.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Acoustic Stimulation / methods*
  • Adult
  • Brain / physiology*
  • Humans
  • Magnetic Resonance Imaging / methods
  • Male
  • Middle Aged
  • Neural Pathways / physiology
  • Photic Stimulation / methods*
  • Psychomotor Performance / physiology
  • Speech Perception / physiology*
  • Visual Perception / physiology*