Journal of Neuroscience

Articles, Behavioral/Cognitive

Crossmodal Adaptation in Right Posterior Superior Temporal Sulcus during Face–Voice Emotional Integration

Rebecca Watson, Marianne Latinus, Takao Noguchi, Oliver Garrod, Frances Crabbe and Pascal Belin
Journal of Neuroscience 14 May 2014, 34 (20) 6813-6821; https://doi.org/10.1523/JNEUROSCI.4478-13.2014
Rebecca Watson
1Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht 6229 EV, The Netherlands,
2Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, G12 8QB, United Kingdom,
Marianne Latinus
2Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, G12 8QB, United Kingdom,
3Institut de Neurosciences de la Timone, Unité Mixte de Recherche 7289, Centre National de la Recherche Scientifique–Aix-Marseille University, F-13284 Marseille, France,
Takao Noguchi
4Department of Psychology, University of Warwick, Coventry, CV4 7AL, United Kingdom, and
Oliver Garrod
2Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, G12 8QB, United Kingdom,
Frances Crabbe
2Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, G12 8QB, United Kingdom,
Pascal Belin
2Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, G12 8QB, United Kingdom,
3Institut de Neurosciences de la Timone, Unité Mixte de Recherche 7289, Centre National de la Recherche Scientifique–Aix-Marseille University, F-13284 Marseille, France,
5International Laboratories for Brain, Music, and Sound, University of Montreal and McGill University, Montreal, Quebec, Canada, H2V 4P3

Figures & Data

Figures

Figure 1.

Experimental stimuli: two sets (one per actor) of dynamic, time-synchronized audiovisual stimuli were presented within a continuous carryover design (Aguirre, 2007) in interleaved type 1–index 1 sequences over two experimental sessions (sequential presentation indicated by dotted lines in the left panels). Every face morph (ranging from 90% angry to 90% happy) was paired with every voice morph (ranging from 90% angry to 90% happy) within actor, so as to create two sets of 25 face–voice stimuli varying parametrically in congruence (examples in colored rectangles). In a carryover design, every stimulus precedes and follows every other stimulus, such that each stimulus serves as an adaptor for the one that follows. The right panels show parts of an example type 1–index 1 within-block sequence; in each experimental run, sequences were blocked by actor. Participants performed a two-alternative forced-choice emotion task (angry or happy). ISI, interstimulus interval.
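
As a rough illustration of the stimulus construction described above, the hypothetical Python sketch below crosses the five face morph levels with the five voice morph levels for each actor, yielding the two sets of 25 audiovisual stimuli. The names and the simple congruence index are illustrative assumptions, not the authors' code.

    # Hypothetical sketch of the 5 x 5 face-voice crossing in Figure 1.
    # Morph levels run from 10% to 90% happy in 20% steps
    # (90% angry corresponds to 10% happy).
    from itertools import product

    MORPH_LEVELS = [10, 30, 50, 70, 90]  # percentage "happy" in the morph

    def build_stimulus_set(actor):
        """Pair every face morph with every voice morph for one actor."""
        stimuli = []
        for face, voice in product(MORPH_LEVELS, MORPH_LEVELS):
            stimuli.append({
                "actor": actor,
                "face_morph": face,    # % happy in the face
                "voice_morph": voice,  # % happy in the voice
                # Assumed congruence index: 0 = fully congruent, 80 =
                # maximally incongruent (90% happy face, 90% angry voice).
                "incongruence": abs(face - voice),
            })
        return stimuli

    for actor in ("actor_1", "actor_2"):
        assert len(build_stimulus_set(actor)) == 25  # 25 stimuli per set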

Figure 2.

Behavioral results: direct effects of face and voice emotion morph. a, Categorization results. Categorization (percentage of angry responses) as a function of face morph (middle), voice morph (right), and both (left). b, Reaction time results. Reaction time (in milliseconds) as a function of face morph (middle), voice morph (right), and both (left). Faces and voices were morphed between 10% and 90% happy in 20% steps. Both categorization and reaction time results are averaged across actors. Note the greater influence of facial than vocal emotional cues on behavioral responses. A, angry; H, happy.
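
For readers who want to recreate this kind of summary, a minimal sketch of the Figure 2a aggregation follows: percentage of "angry" responses per morph level, computed from two-alternative forced-choice trials. The trial table and column names are invented for illustration.

    # Minimal sketch: % "angry" responses per face and voice morph level,
    # from hypothetical 2AFC trial data (column names are assumptions).
    import pandas as pd

    trials = pd.DataFrame({
        "face_morph":  [10, 10, 90, 90, 50, 50],  # % happy in the face morph
        "voice_morph": [10, 90, 10, 90, 50, 50],  # % happy in the voice morph
        "response":    ["angry", "angry", "happy", "happy", "angry", "happy"],
    })
    trials["angry"] = (trials["response"] == "angry").astype(float)

    # Marginal curves, averaging over the other modality (and over actors).
    pct_angry_by_face = trials.groupby("face_morph")["angry"].mean() * 100
    pct_angry_by_voice = trials.groupby("voice_morph")["angry"].mean() * 100
    print(pct_angry_by_face, pct_angry_by_voice, sep="\n")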

Figure 3.

Imaging results: multimodal localizer. Left, A cluster in the right STG/STS responding more to audiovisual information than to either visual or auditory information alone, localized using a conjunction analysis (AV > A ∩ AV > V; conjunction null hypothesis; Nichols et al., 2005). Results were thresholded at p < 0.05 (cluster corrected). Right, Condition effects at the peak voxel of the cluster.
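
Under the conjunction null hypothesis, a voxel counts as conjointly active only if both contrasts are individually significant, which reduces to thresholding the voxelwise minimum of the two t-maps (Nichols et al., 2005). A minimal numpy sketch, with stand-in maps and threshold, is below.

    # Minimum-statistic conjunction (conjunction null; Nichols et al., 2005):
    # a voxel survives only if BOTH AV > A and AV > V exceed the critical t,
    # i.e., threshold the voxelwise minimum of the two t-maps.
    import numpy as np

    rng = np.random.default_rng(0)
    t_av_gt_a = rng.normal(size=(64, 64, 40))  # stand-in for the AV > A t-map
    t_av_gt_v = rng.normal(size=(64, 64, 40))  # stand-in for the AV > V t-map
    t_crit = 3.1                               # stand-in critical t value

    conjunction = np.minimum(t_av_gt_a, t_av_gt_v) > t_crit
    print("conjointly active voxels:", int(conjunction.sum()))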

Figure 4.

    Imaging results. a, Unimodal face adaptation. Activation in left putamen and right FG in response to varying percentage difference in face morph between consecutive stimuli. Left and right, Parameter estimate at the peak activated voxel of left putamen and right FG, respectively, as a result of varying percentage difference in face morph. Results were thresholded at p < 0.05 (FWE voxel corrected) and a minimum cluster size of more than five contiguous voxels. b, Unimodal voice adaptation. Activation in bilateral STG/STS in response to varying percentage difference in voice morph between consecutive stimuli. Left and right, Parameter estimate at the peak activated voxel in left and right STG/STS, respectively, as a result of varying percentage difference in voice morph. Results were thresholded at p < 0.05 (FWE voxel corrected) and a minimum cluster size of more than five contiguous voxels. c, Crossmodal adaptation. Red, Results from the independent functional multimodal localizer. An ROI analysis showed that voice-to-face emotion morph difference evoked a significant response in this region (p < 0.025, t = 2.12). Green, Activation in right pSTS as a result of varying percentage difference between voice and the following face morph of consecutive stimuli. Results were thresholded at p < 0.001 (voxel uncorrected) with a minimum cluster size of more than five contiguous voxels. Yellow, Overlap between the localizer and voice-to-face morph difference activation. Left and right, Parameter estimate at the peak activated voxel of right pSTS as a result of varying percentage difference in face-to-voice morph, and voice-to-face morph, respectively. It should be noted that face-to-voice morph difference did not evoke a significant response in this region.
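
In a continuous carryover analysis, adaptation effects like those above are typically modeled with parametric regressors coding the morph distance between each stimulus and its predecessor, within and across modalities; larger distances should evoke larger (less adapted) responses. A sketch under that assumption, with invented variable names, follows.

    # Hypothetical sketch of the carryover adaptation regressors behind
    # Figure 4: per trial, the absolute morph difference from the preceding
    # trial, within modality and across modalities.
    import numpy as np

    face = np.array([10, 90, 50, 30, 70])   # % happy face morph, trial order
    voice = np.array([90, 10, 50, 70, 30])  # % happy voice morph, trial order

    face_adapt = np.abs(np.diff(face))             # face(n) vs. face(n-1)
    voice_adapt = np.abs(np.diff(voice))           # voice(n) vs. voice(n-1)
    voice_to_face = np.abs(face[1:] - voice[:-1])  # voice(n-1) -> face(n)
    face_to_voice = np.abs(voice[1:] - face[:-1])  # face(n-1) -> voice(n)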

Tables

Table 1. Imaging results

Brain regions                             Coordinates (mm)        k    t statistic
                                           x      y      z
a, Multimodal localizer
    STG/STS                               48    −40     13      153    5.23
b, Unimodal adaptation
    i, Adaptation to face emotion
        Putamen                          −21      8     10       14    7.46
        FG                                30    −52    −23        8    6.40
    ii, Adaptation to voice emotion
        STG/STS                           54    −22     10       51    7.98
        STG/STS                          −60    −37      7       24    7.37
        Inferior frontal gyrus            48     23     22       11    6.32
c, Crossmodal adaptation
    i, Adaptation to voice-to-face emotion
        STS                               66    −46      4        9    4.20
    ii, Adaptation to face-to-voice emotion
        No significant voxels
a, Results from the multisensory functional localizer experiment. Contrasts were cluster thresholded at p < 0.05 (FWE corrected). MNI coordinates and t scores are from the peak voxel of a cluster. b, Unimodal adaptation results: bi, adaptation to face emotion; bii, adaptation to voice emotion. Contrasts were thresholded to display voxels reaching a significance level of p < 0.05 (FWE corrected) and a minimum cluster size of more than five contiguous voxels, and were also masked by an AV versus baseline contrast thresholded at p < 0.001 (voxel uncorrected). MNI coordinates and t scores are from the peak voxel of a cluster. c, Crossmodal adaptation results: ci, voice-to-face adaptation; cii, face-to-voice adaptation. Contrasts were thresholded to display voxels reaching a significance level of p < 0.001 (uncorrected) and a minimum cluster size of more than five contiguous voxels, and were masked by an AV versus baseline contrast thresholded at p < 0.001 (voxel uncorrected). MNI coordinates and t scores are from the peak voxel of a cluster.


Keywords

  • emotion perception
  • functional magnetic resonance adaptation
  • multisensory integration
