Time course of multisensory interactions during audiovisual speech perception in humans: a magnetoencephalographic study
Section snippets
Acknowledgments
We thank Professor Riitta Hari and Dr Iiro Jääskeläinen for their comments on the manuscript and Mika Seppä for help with the data analysis. The MRIs were obtained at the Department of Radiology, Helsinki University Central Hospital. This study was supported by the Academy of Finland and the Sigrid Juselius Foundation.
References (20)
- et al., Response amplification in sensory-specific cortices during crossmodal binding, NeuroReport (1999)
- et al., Evidence from functional magnetic resonance imaging of crossmodal binding in the human heteromodal cortex, Curr. Biol. (2000)
- et al., Detection of audio-visual integration sites in humans by application of electrophysiological criteria to the BOLD effect, Neuroimage (2001)
- et al., Auditory-visual integration during multimodal object recognition in humans: a behavioral and electrophysiological study, J. Cogn. Neurosci. (1999)
- et al., Electrophysiological indicators of phonetic and non-phonetic multisensory interactions during audiovisual speech perception, Cogn. Brain Res. (2003)
- et al., Selective minimum-norm solution of the biomagnetic inverse problem, IEEE Trans. Biomed. Eng. (1995)
- et al., Multisensory auditory-visual interactions during early sensory processing in humans: a high-density electrical mapping study, Cogn. Brain Res. (2002)
- et al., Processing of changes in visual speech in the human auditory cortex, Cogn. Brain Res. (2002)
- et al., Audiovisual speech synthesis for Finnish
- et al., Audiovisual integration of letters in the human brain, Neuron (2000)
Cited by (65)
The relationship between multisensory associative learning and multisensory integration
2022, Neuropsychologia

Somatosensory contribution to audio-visual speech processing
2021, Cortex
Citation Excerpt: Previous studies using magnetoencephalography have detailed the time course of audio-visual speech processing for the McGurk effect (Hertrich et al., 2007; Möttönen et al., 2004). The early component (<200 msec) is processed in the sensory-specific areas and the later component (>250 msec) is processed in multisensory regions of the human temporal cortex (Möttönen et al., 2004). A similar time course can be seen in our results.
Adult dyslexic readers benefit less from visual input during audiovisual speech processing: fMRI evidence
2018, Neuropsychologia
Citation Excerpt: In typical readers, a number of brain regions have been repeatedly implicated in audiovisual speech processing. These regions include high-level associative areas such as the superior temporal gyrus (including the planum temporale) (e.g., Beauchamp et al., 2004a; Beauchamp et al., 2010; Calvert et al., 2000; Stevenson et al., 2010; Stevenson et al., 2011) and the supramarginal gyrus (e.g., Skipper et al., 2005), as well as other more primary sensory regions such as Heschl's gyrus (e.g., Calvert et al., 1999; Callan et al., 2003; Möttönen et al., 2004; Pekkola et al., 2005) and the superior temporal gyrus (e.g., Beauchamp et al., 2004a, 2010; Calvert et al., 2000) for auditory information, and the middle temporal gyrus (e.g., Callan et al., 2003; Calvert et al., 1999; Calvert and Campbell, 2003) and the fusiform gyrus (e.g., Calvert and Campbell, 2003; Macaluso et al., 2004; Stevenson et al., 2010; Wyk et al., 2010) for visual information processing. Some studies have shown that the regions involved in multisensory processing exhibit enhanced responses to audiovisual stimuli, when compared to the sum of the responses to unisensory auditory and visual stimuli (Giard and Peronnet, 1999; Calvert et al., 2000, 2001; Macaluso et al., 2000; Bushara et al., 2001; Klucharev et al., 2003; Wright et al., 2003; Molholm et al., 2004).