A comparison of bound and unbound audio-visual information processing in the human cerebral cortex

Brain Res Cogn Brain Res. 2002 Jun;14(1):129-38. doi: 10.1016/s0926-6410(02)00067-8.

Abstract

Human speech has auditory (heard speech) and visual (seen speech) qualities. The neural representation of audio-visual integration in speech was investigated using functional magnetic resonance imaging (fMRI). Ten subjects were imaged while viewing a face under four conditions: with speech and mouth movements synchronized, with speech and mouth movements desynchronized, during silent speech, and while viewing a static face. Subtractions of the different sets of images showed that lipreading primarily activated the superior temporal gyrus and sulcus (STG/STS). Synchronized and desynchronized audio-visual speech activated similar areas. Regions activated more strongly in the synchronized than in the desynchronized condition were considered to be those involved in cross-modal integration. One dominant activation focus was found near the left claustrum, a subcortical region. A region-of-interest analysis of the STS and parietal areas found no difference between the audio-visual conditions; however, synchronized audio-visual stimuli produced a greater signal change in the claustrum region. This study extends previous results, obtained with other sensory combinations and other tasks, that indicate involvement of the claustrum in sensory integration.
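To make the two analysis steps described in the abstract concrete, the sketch below illustrates a voxelwise condition subtraction (synchronized minus desynchronized) and a region-of-interest comparison of percent signal change. It uses synthetic NumPy arrays and a hypothetical ROI mask; none of the array shapes, names, or values come from the study, and this is not the authors' actual processing pipeline.

# Illustrative sketch only (assumed data, not the study's pipeline):
# a voxelwise condition subtraction and an ROI percent-signal-change
# comparison, using synthetic arrays in place of real fMRI maps.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-condition mean signal maps, shape (x, y, z),
# centered near a typical raw BOLD signal level.
sync_map = rng.normal(loc=100.0, scale=1.0, size=(64, 64, 30))     # synchronized audio-visual
desync_map = rng.normal(loc=100.0, scale=1.0, size=(64, 64, 30))   # desynchronized audio-visual
baseline_map = rng.normal(loc=100.0, scale=1.0, size=(64, 64, 30)) # static-face baseline

# Subtraction analysis: voxels more active for synchronized than for
# desynchronized speech are candidates for cross-modal integration.
contrast = sync_map - desync_map

# Hypothetical ROI mask (e.g., a claustrum or STS region) as a boolean array.
roi_mask = np.zeros(contrast.shape, dtype=bool)
roi_mask[30:34, 28:32, 12:16] = True

def roi_percent_signal_change(condition_map, baseline, mask):
    """Mean percent signal change within an ROI relative to baseline."""
    cond = condition_map[mask].mean()
    base = baseline[mask].mean()
    return 100.0 * (cond - base) / base

sync_psc = roi_percent_signal_change(sync_map, baseline_map, roi_mask)
desync_psc = roi_percent_signal_change(desync_map, baseline_map, roi_mask)
print(f"ROI signal change: synchronized={sync_psc:.2f}%, desynchronized={desync_psc:.2f}%")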

Publication types

  • Comparative Study
  • Research Support, U.S. Gov't, P.H.S.

MeSH terms

  • Acoustic Stimulation / methods*
  • Adolescent
  • Adult
  • Analysis of Variance
  • Auditory Perception / physiology*
  • Cerebral Cortex / physiology*
  • Female
  • Humans
  • Magnetic Resonance Imaging / methods
  • Photic Stimulation / methods
  • Visual Perception / physiology*