Journal of Neuroscience
Research Articles, Behavioral/Cognitive

Phonological representations of auditory and visual speech in the occipito-temporal cortex and beyond

Alice Van Audenhaege, Stefania Mattioni, Filippo Cerpelloni, Remi Gau, Arnaud Szmalec and Olivier Collignon
Journal of Neuroscience 30 April 2025, e1415242025; https://doi.org/10.1523/JNEUROSCI.1415-24.2025
Alice Van Audenhaege,1 Stefania Mattioni,1,2 Filippo Cerpelloni,1,3 Remi Gau,1 Arnaud Szmalec,1,2 and Olivier Collignon1,4

1Institute of Psychology (IPSY) & Institute of Neuroscience (IoNS), UCLouvain, Louvain-la-Neuve 1348, Belgium
2Department of Experimental Psychology, Ghent University, Ghent 9000, Belgium
3Research Unit Brain and Cognition, Leuven Brain Institute, KU Leuven, Leuven 3000, Belgium
4The Sense Innovation and Research Center, Lausanne 1007 & Sion 1950, Switzerland

Abstract

Speech is a multisensory signal that can be extracted from the voice and the lips. Previous studies suggested that occipital and temporal regions encode both auditory and visual speech features, but their location and nature remain unclear. We characterized brain activity using fMRI (in male and female participants) to functionally and individually define the bilateral Fusiform Face Areas (FFA), the left word-selective ventral occipito-temporal cortex (word-VOTC), an audio-visual speech region in the left Superior Temporal Sulcus (lSTS), and control regions in the bilateral scene-selective Parahippocampal Place Areas (PPA). In these regions, we performed multivariate pattern classification of corresponding phonemes (speech sounds) and visemes (lip movements). We observed that the word-VOTC and lSTS represent phonological information from both vision and sound. The multisensory nature of phonological representations appeared selective to the word-VOTC: we found viseme but not phoneme representations in the adjacent FFA, while the PPA did not encode phonology in either modality. Interestingly, cross-modal decoding revealed phonological representations that are aligned across the senses in the lSTS, but not in the word-VOTC. A whole-brain cross-modal searchlight analysis additionally revealed aligned audio-visual phonological representations in the bilateral pSTS and in the left somato-motor cortex overlapping with the oro-facial articulators. Altogether, our results demonstrate that auditory and visual phonology are represented in the word-VOTC, extending its functional coding beyond orthography. The geometries of auditory and visual representations do not align in the word-VOTC as they do in the STS and left somato-motor cortex, suggesting distinct multisensory representations across a distributed phonological network.

Significance statement Speech is a multisensory signal that can be extracted from the voice and the lips. Which brain regions encode both visual and auditory speech representations? We show that the written word-selective ventral occipito-temporal cortex (word-VOTC) and the left Superior Temporal Sulcus (lSTS) both process phonological information from speech sounds and lip movements. However, while the lSTS aligns these representations across the senses, the word-VOTC does not, indicating different encoding mechanisms. These findings extend the functional role of the word-VOTC beyond reading. An additional whole-brain approach reveals shared representations in the bilateral superior temporal cortex and left somato-motor cortex, indicating a distributed network for multisensory phonology.

Footnotes

  • The project was funded in part by a Mandat d’Impulsion Scientifique awarded to OC, the Belgian Excellence of Science (EOS) program (Project No. 30991544) awarded to OC, and a Flagship ERA-NET grant SoundSight (FRS-FNRS PINT-MULTI R.8008.19) awarded to OC. AV is a research fellow and OC is a senior research associate at the Fonds National de la Recherche Scientifique de Belgique (FRS-FNRS). OC thanks Andrea Collignon for promoting the importance of visual speech in language development.

  • The authors declare no competing financial interests.

SfN exclusive license.


Copyright © 2025 by the Society for Neuroscience.
JNeurosci Online ISSN: 1529-2401
