Audiovisual integration during speech comprehension: an fMRI study comparing ROI-based and whole brain analyses

Hum Brain Mapp. 2009 Jul;30(7):1990-9. doi: 10.1002/hbm.20640.

Abstract

Visual information (lip movements) contributes significantly to speech comprehension, raising the question of how audiovisual (AV) integration is implemented neurally during speech processing. To replicate and extend earlier neuroimaging findings, we compared two different analysis approaches in a slow event-related fMRI study of healthy native speakers of German who were exposed to AV speech stimuli (disyllabic nouns) whose auditory and visual signals were either congruent or incongruent. First, the data were subjected to a whole-brain general linear model analysis after transformation of all individual data sets into standard space. Second, a region-of-interest (ROI) approach based on individual anatomy was used, with ROIs defined in areas previously identified as important for AV processing. The standard-space analysis revealed a widespread cortical network, including the posterior part of the left superior temporal sulcus, Broca's region, and its right-hemispheric counterpart, showing increased activity for incongruent stimuli. The ROI approach made it possible to identify differences in activity between Brodmann areas 44 and 45 within Broca's area for incongruent stimulation, and also to study the activity of subdivisions of the superior temporal regions. The complementary strengths and weaknesses of the two analysis approaches are discussed.
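To make the two analysis strategies summarized above concrete, the following is a minimal sketch in Python using the nilearn library. It is not the authors' actual pipeline; all file names, ROI labels, event timings, and design details are hypothetical placeholders, and it assumes a simple congruent-vs-incongruent event design.

# Minimal sketch of the two strategies: (1) whole-brain GLM on data normalized
# to standard space, (2) ROI-based analysis on anatomically defined regions.
# File names, ROI labels, and timings below are hypothetical placeholders.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel
from nilearn.maskers import NiftiLabelsMasker

# --- (1) Whole-brain GLM on spatially normalized (standard-space) data ---
events = pd.DataFrame({
    "onset":      [10.0, 40.0, 70.0, 100.0],          # hypothetical onsets (s)
    "duration":   [2.0, 2.0, 2.0, 2.0],
    "trial_type": ["congruent", "incongruent", "congruent", "incongruent"],
})
glm = FirstLevelModel(t_r=2.0, hrf_model="spm", smoothing_fwhm=8)
glm = glm.fit("sub-01_task-av_bold_mni.nii.gz", events=events)  # hypothetical file
# Contrast testing for greater activity during incongruent than congruent trials
zmap = glm.compute_contrast("incongruent - congruent", output_type="z_score")

# --- (2) ROI approach based on individual anatomy ---
# "sub-01_rois.nii.gz" is assumed to label regions such as BA44, BA45, and
# subdivisions of the superior temporal cortex drawn on the subject's own anatomy.
masker = NiftiLabelsMasker(labels_img="sub-01_rois.nii.gz", standardize=True)
roi_timeseries = masker.fit_transform("sub-01_task-av_bold_native.nii.gz")
# roi_timeseries has shape (n_volumes, n_rois); each column can then be entered
# into a per-region GLM or a simple congruent-vs-incongruent comparison.

In this sketch the whole-brain route tests every voxel after normalization, whereas the ROI route averages the signal within each individually defined region first, trading spatial coverage for anatomical specificity.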

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Adult
  • Auditory Perception / physiology*
  • Brain / physiology*
  • Brain Mapping
  • Comprehension / physiology*
  • Female
  • Humans
  • Image Processing, Computer-Assisted / methods
  • Linear Models
  • Magnetic Resonance Imaging
  • Male
  • Speech
  • Speech Perception / physiology*
  • Time Factors
  • Visual Perception / physiology*
  • Young Adult