Abstract
The common marmoset (Callithrix jacchus) is known for its highly vocal nature, displaying a diverse range of calls. Functional imaging in marmosets has shown that the processing of conspecific calls activates a brain network that includes fronto-temporal areas. It is currently unknown whether different call types activate the same or different networks. In this study, nine adult marmosets (four females) were exposed to four common vocalizations (phee, chatter, trill, and twitter), and their brain responses were recorded using event-related fMRI at 9.4T. We found robust activations in the auditory cortices, encompassing core, belt, and parabelt regions, and in subcortical areas such as the inferior colliculus, medial geniculate nucleus, and amygdala in response to these calls. Although a common network was engaged, each vocalization evoked a distinct activity pattern that could be distinguished by a 3D convolutional neural network, indicating unique neural processing for each call type. Our findings also indicate the involvement of the cerebellum and medial prefrontal cortex (mPFC) in distinguishing particular vocalizations from others.
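The abstract does not specify the decoding architecture; as a rough illustration only, the minimal PyTorch sketch below shows how a small 3D convolutional network could classify single-trial activation volumes into the four call types. The input dimensions, layer sizes, and class ordering are hypothetical assumptions, not the authors' implementation.

# Minimal sketch (hypothetical architecture and input size, not the
# authors' model): a 3D CNN classifying volumetric fMRI activation
# maps into four call types (phee, chatter, trill, twitter).
import torch
import torch.nn as nn

class CallTypeCNN(nn.Module):
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1),  # one input channel: a single activation volume
            nn.ReLU(),
            nn.MaxPool3d(2),                            # halve each spatial dimension
            nn.Conv3d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                    # global average pooling -> 16 features
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, depth, height, width) activation maps
        return self.classifier(self.features(x).flatten(1))

# Usage with hypothetical 32x32x32 volumes:
model = CallTypeCNN()
logits = model(torch.randn(2, 1, 32, 32, 32))
print(logits.shape)  # torch.Size([2, 4]) -- one score per call type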
Significance Statement
This study investigates the neural processing of vocal communication in the common marmoset (Callithrix jacchus). Using event-related fMRI at 9.4T, we demonstrate that different calls (phee, chatter, trill, and twitter) elicit distinct brain activation patterns, challenging the notion of a uniform neural network for all vocalizations. Each call type engages a distinct set of regions within the auditory cortices and subcortical areas. These findings offer insights into the evolutionary mechanisms of primate vocal perception and provide a foundation for understanding the origins of human speech and language processing.
Footnotes
The authors declare no competing financial interests.
We wish to thank Cheryl Vander Tuin, Whitney Froese, Miranda Bellyou, and Hannah Pettypiece for animal preparation and care, and Dr. Alex Li for scanning assistance. Support was provided by the Canadian Institutes of Health Research (FRN 148365, S.E.), the Natural Sciences and Engineering Research Council of Canada (S.E.), and the Canada First Research Excellence Fund to BrainsCAN. We also acknowledge the support of the Government of Canada’s New Frontiers in Research Fund (NFRF) [NFRF-T-2022-00051].