Polysensory interactions along lateral temporal regions evoked by audiovisual speech

Cereb Cortex. 2003 Oct;13(10):1034-43. doi: 10.1093/cercor/13.10.1034.

Abstract

Many socially significant biological stimuli are polymodal, and information processing is enhanced for polymodal over unimodal stimuli. The human superior temporal sulcus (STS) region has been implicated in processing socially relevant stimuli, particularly those derived from biological motion such as mouth movements. Single-unit studies in monkeys have demonstrated that regions of the STS are polysensory, responding to visual, auditory and somatosensory stimuli, and human neuroimaging studies have shown that lip-reading activates auditory regions of the lateral temporal lobe. We evaluated whether concurrent speech sounds and mouth movements were more potent activators of the STS than either speech sounds or mouth movements alone. In an event-related fMRI study, subjects observed an animated character that produced audiovisual speech and the audio and visual components of speech alone. Strong activation of the STS region was evoked in all three conditions, with the greatest levels of activity elicited by audiovisual speech. Subsets of activated voxels within the STS region demonstrated overadditivity (audiovisual > audio + visual) and underadditivity (audiovisual < audio + visual). These results confirm the polysensory nature of the STS region and demonstrate for the first time that polymodal interactions may both potentiate and inhibit activation.
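The over/underadditivity criterion used in the abstract compares the audiovisual (AV) response of a voxel against the sum of its unimodal audio (A) and visual (V) responses. A minimal sketch of that classification, assuming hypothetical percent-signal-change values and an illustrative tolerance (neither is taken from the study), might look like:

```python
# Hedged sketch: classify a voxel as overadditive or underadditive by
# comparing its audiovisual (AV) response to the sum of its audio (A)
# and visual (V) responses. The function name, input values, and
# tolerance are illustrative assumptions, not from the study.

def classify_voxel(av, a, v, tol=1e-6):
    """Return 'overadditive', 'underadditive', or 'additive'."""
    total = a + v
    if av > total + tol:
        return "overadditive"   # AV > A + V
    if av < total - tol:
        return "underadditive"  # AV < A + V
    return "additive"           # AV ~= A + V within tolerance

# Hypothetical responses for three voxels (e.g. percent signal change)
voxels = [
    {"av": 1.8, "a": 0.6, "v": 0.5},  # exceeds the unimodal sum
    {"av": 0.7, "a": 0.6, "v": 0.5},  # falls short of the unimodal sum
    {"av": 1.1, "a": 0.6, "v": 0.5},  # matches the unimodal sum
]
print([classify_voxel(**vx) for vx in voxels])
# -> ['overadditive', 'underadditive', 'additive']
```

The tolerance stands in for whatever statistical threshold an actual analysis would apply; in practice, over- and underadditivity would be assessed with a significance test rather than a raw comparison.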

Publication types

  • Research Support, Non-U.S. Gov't
  • Research Support, U.S. Gov't, Non-P.H.S.
  • Research Support, U.S. Gov't, P.H.S.

MeSH terms

  • Acoustic Stimulation / methods*
  • Adult
  • Analysis of Variance
  • Brain Mapping / methods
  • Female
  • Humans
  • Magnetic Resonance Imaging / methods
  • Male
  • Photic Stimulation / methods*
  • Speech / physiology
  • Speech Perception / physiology*
  • Temporal Lobe / physiology*
  • Visual Perception / physiology*