Research Articles, Behavioral/Cognitive

A Neurodevelopmental Shift in Reward Circuitry from Mother's to Nonfamilial Voices in Adolescence

Daniel A. Abrams, Percy K. Mistry, Amanda E. Baker, Aarthi Padmanabhan and Vinod Menon
Journal of Neuroscience 18 May 2022, 42 (20) 4164-4173; https://doi.org/10.1523/JNEUROSCI.2018-21.2022
Author affiliations: 1Department of Psychiatry and Behavioral Sciences (D.A.A., P.K.M., A.E.B., A.P., V.M.), 2Department of Neurology and Neurological Sciences (V.M.), and 3Stanford Neuroscience Institute (V.M.), Stanford University School of Medicine, Stanford, California 94305

Abstract

The social worlds of young children primarily revolve around parents and caregivers, who play a key role in guiding children's social and cognitive development. However, a hallmark of adolescence is a shift in orientation toward nonfamilial social targets, an adaptive process that prepares adolescents for their independence. Little is known regarding neurobiological signatures underlying changes in adolescents' social orientation. Using functional brain imaging of human voice processing in children and adolescents (ages 7-16), we demonstrate distinct neural signatures for mother's voice and nonfamilial voices across child and adolescent development in reward and social valuation systems, instantiated in nucleus accumbens and ventromedial prefrontal cortex. While younger children showed greater activity in these brain systems for mother's voice compared with nonfamilial voices, older adolescents showed the opposite effect with increased activity for nonfamilial compared with mother's voice. Findings uncover a critical role for reward and social valuative brain systems in the pronounced changes in adolescents' orientation toward nonfamilial social targets. Our approach provides a template for examining developmental shifts in social reward and motivation in individuals with pronounced social impairments, including adolescents with autism.

SIGNIFICANCE STATEMENT Children's social worlds undergo a transformation during adolescence. While socialization in young children revolves around parents and caregivers, adolescence is characterized by a shift in social orientation toward nonfamilial social partners. Here we show that this shift is reflected in neural activity measured from reward processing regions in response to brief vocal samples. When younger children hear their mother's voice, reward processing regions show greater activity compared with when they hear nonfamilial, unfamiliar voices. Strikingly, older adolescents show the opposite effect, with increased activity for nonfamilial compared with mother's voice. Findings identify the brain basis of adolescents' switch in social orientation toward nonfamilial social partners and provide a template for understanding neurodevelopment in clinical populations with social and communication difficulties.

Keywords: adolescence, auditory, development, reward, voice, brain

Introduction

Children's social worlds undergo a dramatic transformation during adolescence. In younger children, socialization revolves primarily around parents and caregivers, and parent–child interactions during this stage of development play a key role in children's cognitive (Landry et al., 2006; Bernier et al., 2012), language (Liu et al., 2003; Majorano et al., 2013), and socio-emotional development (Cassidy et al., 1996; Rah and Parke, 2008). During adolescence, however, individuals increasingly engage with social targets outside the family (Larson et al., 1996). Importantly, adolescents show increased sensitivity to interactions with nonfamilial (NF) social targets (Allen et al., 2005) and seek these individuals out for social support (Furman and Buhrmester, 1992). It is thought that this shift in social orientation is adaptive and represents a key component of healthy development that prepares children for their journey toward independence (Sachser et al., 2018).

Very little is known regarding neurobiological signatures underlying changes in social orientation that occur across childhood and adolescence. Functional brain imaging of human voice processing provides a powerful approach for examining these changes during development. The human voice provides a primary channel for social engagement, and listeners are highly adept at identifying “who” is speaking from the first days of life (Kisilevsky et al., 2009). Moreover, a recent fMRI study in children (7-12 years old) identified a functional brain circuit that is selectively engaged during the processing of mother's voice (Abrams et al., 2016), a biologically salient signal associated with cognitive (Liu et al., 2003) and social function (Seltzer et al., 2012), compared with novel NF voices. Results showed that, compared with NF voices, brief samples of mother's voice elicit increased neural activity not only in auditory processing regions, including primary auditory cortex and voice-selective superior temporal sulcus (STS) (Belin et al., 2000), but also in salience (Menon and Uddin, 2010) and reward processing regions (Haber and Knutson, 2010), instantiated in the anterior insula (AI), nucleus accumbens (NAc), and orbitofrontal cortex (OFC). Results further showed that, compared with NF voices, mother's voice recruited key nodes of the default mode network (DMN) (Greicius et al., 2003), anchored in the ventromedial prefrontal cortex (vmPFC) and posterior medial cortex (posterior cingulate cortex [PCC] and precuneus), which has been implicated in self-referential (Gusnard et al., 2001) and social valuation processing (Fareri et al., 2015; Kumar et al., 2019). In contrast, when children listened to novel NF female voices compared with nonsocial environmental sounds, neural activation was prominent in voice-selective STS and the amygdala but did not elicit activity in the DMN or salience and reward processing systems.

How these patterns of brain responses change during the transition from childhood to adolescence is not understood. While previous studies in children and adolescents have shown age-related increases in neural activity in the NAc and insula in response to visual social stimuli (i.e., images of faces) (Guyer et al., 2009; Somerville et al., 2011), previous studies have not examined these neurodevelopmental effects in the context of familial and NF communication targets. Specifically, it is unknown whether the pronounced social changes that occur during adolescent development shape the neural signatures of both biologically salient voices, such as mother's voice, and NF communication targets, who become increasingly important in adolescents' social world.

Here, we investigate brain features associated with the adaptive transition toward NF social stimuli that occurs between childhood and adolescence, and test neurodevelopmental models using a cross-sectional sample of children and adolescents (7.7-16.6 years old). The primary goal of our study was to determine developmental changes in brain response to NF and mother's voice and examine whether children and adolescents show stimulus-specific preferences for these stimuli across development. Our findings reveal neurodevelopmental changes in response to mother's and NF voices and identify the neural basis of adolescents' switch in social orientation toward NF targets.

Materials and Methods

Participants

The Stanford University Institutional Review Board approved the study protocol, and all participants provided written consent for their participation in the study. Participants were recruited locally from schools near Stanford University. All children and adolescents were required to have a full-scale IQ > 80, as measured by the Wechsler Abbreviated Scale of Intelligence (Wechsler, 1999). All children and adolescents were right-handed and met the following criteria: no history of neurologic, psychiatric, or learning disorders; a negative personal and first-degree family history of developmental cognitive disorders and heritable neuropsychiatric disorders; no evidence of significant difficulty during pregnancy, labor, delivery, or the immediate neonatal period; and no abnormal developmental milestones, as determined by neurologic history and examination. Participants were the biological offspring of the mothers recorded for the fMRI and behavioral experiments and were raised in homes that included their mothers. Participants' neuropsychological characteristics are provided in Table 1.

Table 1. Demographic and IQ measures

Stimuli

Stimuli consisted of the three nonsense words, "teebudieshawlt," "keebudieshawlt," and "peebudieshawlt," produced by the participant's mother as well as by two NF females who were also mothers (Fig. 1a; for audio examples, see Extended Data Figs. 1-1, 1-2, 1-3, 1-4, 1-5, and 1-6). Nonsense words were used to avoid activating semantic neural systems, thereby enabling a focus on the neural responses to each speaker's vocal characteristics (Binder et al., 2000; Raettig and Kotz, 2008). These particular nonsense words were selected for several reasons: first, the stimuli are exemplars from a standardized behavioral test of phonological abilities (Wagner et al., 1999); second, stimulus contrasts that differ by only one phoneme (i.e., minimal pairs) enable a fine-grained assessment of phonological decoding in the auditory system; finally, four-syllable nonsense words, differentiated by word-initial, place-of-articulation contrasts, provide challenging but realistic speech-like stimuli. These vocal stimuli were used in both the fMRI task and the voice identification behavioral task. A second class of stimuli included in the fMRI task was nonspeech environmental sounds. These sounds, which included brief recordings of laundry machines, dishwashers, and other household sounds, were taken from a professional sound effects library.

Stimulus recording

Recordings of each mother were made individually for use in the voice identification and fMRI tasks. Mother's voice stimuli and NF voices were recorded in a quiet conference room using a Shure PG27-USB condenser microphone connected to a MacBook Air laptop. The audio signal was digitized at a sampling rate of 44.1 kHz and A/D converted with 16-bit resolution. Mothers were positioned in the conference room to prevent early sound wave reflections from contaminating the recordings. To provide a natural speech context for the recording of each nonsense word, mothers were instructed to repeat three sentences, each of which contained one of the nonsense words, during the recording. The first word of each of these sentences was their child's name, followed by the words "that is a," followed by one of the three nonsense words. A hypothetical example of a sentence spoken by a mother for the recording was "Johnny, that is a keebudieshawlt." Before beginning the recording, mothers were instructed on how to produce these nonsense words by repeating them to the experimenter until they had reached proficiency. Importantly, mothers were instructed to say these sentences using the tone of voice they would use when speaking with their child during an engaging and enjoyable shared learning experience (e.g., if their child asked them to identify an item at a museum). The vocal recording session resulted in digitized recordings of each mother repeating each of the three sentences ∼30 times to ensure multiple high-quality samples of each nonsense word for each mother.

Stimulus postprocessing

The goal of stimulus postprocessing was to isolate the three nonsense words from the sentences that each mother and NF speaker produced during the recording session and to normalize them for duration and RMS amplitude for inclusion in the fMRI stimulus presentation protocol and the voice identification task. First, a digital sound editor (Audacity: http://audacity.sourceforge.net) was used to isolate each utterance of the three nonsense words from the sentences spoken by each mother. The three best versions of each nonsense word were then selected based on the audio and vocal quality of the utterances (i.e., eliminating versions that were mispronounced, included vocal creak, or were otherwise not ideal exemplars of the nonsense words). These nine nonsense words were then normalized for duration to 956 ms, the mean duration of the nonsense words produced by the NF voices, using Praat software, similar to previous studies (Abrams et al., 2008). On average, speech samples were adjusted by 8.7% during normalization, and this process did not affect the naturalness of the vocal stimuli. A 10 ms linear fade (ramp and damp) was then applied to each stimulus to prevent click-like sounds at the beginning and end of the stimulus, and stimuli were then equated for RMS amplitude. These final stimuli were then evaluated for audibility and clarity to ensure that postprocessing manipulations had not introduced any artifacts into the samples. The same process was performed on the NF voices and environmental sounds to ensure that all stimuli presented in the fMRI experiment were the same duration and RMS amplitude.
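
The fade and amplitude-equalization steps are straightforward to express in code. The following is a minimal sketch in Python/NumPy, assuming hypothetical array variables rather than the actual Audacity/Praat pipeline used in the study: it applies a 10 ms linear onset/offset ramp and scales a stimulus to match the RMS amplitude of a reference sound.

```python
import numpy as np

def apply_linear_fade(signal, fs, fade_ms=10.0):
    """Apply a linear fade-in and fade-out to avoid click-like onset/offset artifacts."""
    n_fade = int(round(fs * fade_ms / 1000.0))
    faded = signal.astype(float).copy()
    ramp = np.linspace(0.0, 1.0, n_fade)
    faded[:n_fade] *= ramp          # fade in (ramp)
    faded[-n_fade:] *= ramp[::-1]   # fade out (damp)
    return faded

def match_rms(signal, target_rms):
    """Scale a signal so its RMS amplitude equals target_rms."""
    rms = np.sqrt(np.mean(signal ** 2))
    return signal * (target_rms / rms)

# Example with placeholder waveforms (not the actual stimuli)
fs = 44100                                    # recording sampling rate
stimulus = np.random.randn(int(0.956 * fs))   # stand-in for a 956 ms nonsense word
reference = np.random.randn(int(0.956 * fs))  # stand-in reference stimulus
target = np.sqrt(np.mean(reference ** 2))
processed = match_rms(apply_linear_fade(stimulus, fs), target)
```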

Voice identification behavioral task

All participants in the fMRI experiment completed an auditory behavioral test immediately following the voice processing fMRI scan. The goal of the voice identification behavioral task was to ensure that participants were able to reliably discriminate their mother's voice from NF female voices. Participants were seated in a quiet area of the brain imaging suite in front of a laptop computer and facing a wall, with noise-cancellation headphones placed over their ears to prevent distractions. Vocal stimuli were delivered via Eprime on the laptop computer. In each trial, participants were presented with a recording of a multisyllabic nonsense word spoken by either the participant's mother or an unfamiliar, NF voice, and the task was to indicate whether their mother spoke the word. The multisyllabic nonsense words used in the behavioral task were the exact same samples used in the fMRI task. Specifically, the stimuli presented in the voice identification task consisted of the three nonsense words produced 3 times each by each child's mother as well as by the two NF voices (i.e., all the vocal stimuli identified in Fig. 1a). This yielded a total of 27 stimuli (3 nonsense words × 3 repetitions × 3 different speakers). Each stimulus was presented twice in a random order during the voice identification task, for a total of 54 trials: 18 trials in which mother's voice nonsense words were presented and 18 trials for each of the two NF voices. Participants were instructed to press a button on the laptop keyboard as soon as they knew the answer.
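
The trial arithmetic (3 nonsense words × 3 repetitions × 3 speakers = 27 stimuli, each presented twice for 54 trials) can be checked with a short sketch; the speaker and word labels below are illustrative stand-ins, not the actual stimulus files or the Eprime implementation.

```python
import itertools
import random

words = ["teebudieshawlt", "keebudieshawlt", "peebudieshawlt"]
speakers = ["mother", "NF1", "NF2"]
repetitions = [1, 2, 3]  # three selected recordings of each word per speaker

# 3 words x 3 repetitions x 3 speakers = 27 unique stimuli
stimuli = list(itertools.product(speakers, words, repetitions))

# Each stimulus is presented twice in random order -> 54 trials
trials = stimuli * 2
random.shuffle(trials)

assert len(trials) == 54
assert sum(1 for spk, _, _ in trials if spk == "mother") == 18  # 18 mother's voice trials
```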

fMRI data acquisition parameters

All fMRI data were acquired in a single session at the Richard M. Lucas Center for Imaging at Stanford University. Functional images were acquired on a 3-T Signa scanner (General Electric) using a custom-built head coil. Participants were instructed to stay as still as possible during scanning, and head movement was further minimized by placing memory-foam pillows around the participant's head. A total of 31 axial slices (4.0 mm thickness, 0.5 mm skip) parallel to the anterior/posterior commissure line and covering the whole brain were imaged by using a T2*-weighted gradient-echo spiral in-out pulse sequence (Glover and Law, 2001) with the following parameters: TR, 3576 ms; TE, 30 ms; flip angle, 80°; one interleave. This TR can be calculated as the sum of the stimulus duration (956 ms), a 300 ms silent interval buffering the beginning and end of each stimulus presentation to avoid backward and forward masking effects, the 2000 ms volume acquisition time, and an additional 22 ms silent interval which helped the stimulus computer maintain precise and accurate timing during stimulus presentation. The FOV was 22 cm, and the matrix size was 64 × 64, providing an in-plane spatial resolution of 3.4375 mm. Reduction of blurring and signal loss arising from field inhomogeneities was accomplished using an automated high-order shimming method before data acquisition.

fMRI task

Auditory stimuli were presented in 10 separate runs, each lasting 4 min. One run consisted of 56 randomized trials of mother's voice and NF voices producing the three nonsense words, environmental sounds, and catch trials. Randomizing the order of presentation for all stimuli, including both mother's and NF vocal samples, during fMRI data collection creates the same level of (in)dependence between iterations of the nonsense words for both vocal sources included in the data analysis. Each stimulus was 956 ms in duration. Before each run, participants were instructed to play the "kitty cat" game during the fMRI scan. While lying down in the scanner, participants were first shown a brief video of a cat and were told that the goal of the cat game was to listen to a variety of sounds, including "voices that may be familiar," and to push a button on a button box only when they heard cat meows (catch trials). During each run, four or five exemplars of each stimulus type, including three speakers producing three nonsense words, multiple environmental sounds, and three catch trials, were presented. Silent trials were not included in the fMRI task. At the end of each run, participants were shown another engaging video of a cat. Across the 10 runs, a total of 48 exemplars of each stimulus condition were presented to each subject. Auditory stimuli were presented to participants in the scanner using Eprime version 1.0 (Psychological Software Tools, 2002). Participants wore custom-built headphones designed to reduce the background scanner noise to ∼70 dBA (Abrams et al., 2011, 2013a). Headphone sound levels were calibrated before each data collection session, and all stimuli were presented at a sound level of 75 dBA. Participants were scanned using an event-related design. Auditory stimuli were presented during silent intervals between volume acquisitions to eliminate the effects of scanner noise on auditory discrimination. One stimulus was presented every 3.576 s.

fMRI preprocessing

fMRI data collected in each of the 10 functional runs were subjected to the following preprocessing procedures. The first five volumes were not analyzed to allow for signal equilibration. A linear shim correction was applied separately for each slice during reconstruction by using a magnetic field map acquired automatically by the pulse sequence at the beginning of the scan. Translational movement in millimeters (x, y, z) was calculated based on the SPM12 parameters for motion correction of the functional images in each subject. To correct for deviant volumes resulting from spikes in movement, we used a de-spiking procedure (Iuculano et al., 2014) similar to those implemented in the Analysis of Functional Neuroimages toolkit maintained by the National Institute of Mental Health (Cox, 1996). Volumes with movement exceeding 0.5 voxels (1.562 mm) or spikes in global signal exceeding 5% were interpolated using adjacent scans. The majority of repaired volumes occurred in isolation. After the interpolation procedure, images were spatially normalized to standard MNI space, resampled to 2 mm isotropic voxels, and smoothed with a 6 mm FWHM Gaussian kernel.
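
A simplified sketch of this volume-repair criterion is given below, assuming hypothetical framewise-displacement and global-signal inputs; the actual procedure followed Iuculano et al. (2014) and AFNI-style de-spiking, whereas this illustration simply averages the nearest clean neighboring volumes.

```python
import numpy as np

def flag_and_repair(volumes, framewise_displacement, global_signal,
                    move_thresh_mm=1.562, spike_thresh=0.05):
    """Flag volumes with excessive movement or global-signal spikes and
    replace them by interpolating between the nearest clean neighbors.
    `volumes` is a (T, X, Y, Z) array; simplified illustration only."""
    gs_change = np.abs(np.diff(global_signal, prepend=global_signal[0])) / np.mean(global_signal)
    bad = (framewise_displacement > move_thresh_mm) | (gs_change > spike_thresh)
    repaired = volumes.copy()
    good_idx = np.where(~bad)[0]
    for t in np.where(bad)[0]:
        prev_t = good_idx[good_idx < t].max() if np.any(good_idx < t) else good_idx.min()
        next_t = good_idx[good_idx > t].min() if np.any(good_idx > t) else good_idx.max()
        repaired[t] = 0.5 * (volumes[prev_t] + volumes[next_t])  # average adjacent clean scans
    return repaired, bad
```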

Movement criteria for inclusion in fMRI analysis

For inclusion in the fMRI analysis, we required that each functional run had a maximum scan-to-scan movement of <6 mm and no more than 15% of volumes were corrected in the de-spiking procedure. Moreover, we required that all individual subject data included in the analysis consisted of at least seven functional runs that met our criteria for scan-to-scan movement and percentage of volumes corrected; subjects who had fewer than seven functional runs that met our movement criteria were not included in the data analysis. All 46 participants included in the analysis had at least 7 functional runs that met our movement criteria; 32 of the participants had 10 runs of data that met these movement criteria; 6 subjects had 9 runs of data that met movement criteria, 6 subjects had 8 runs of data, and 2 subjects had 7 runs that met criteria.
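
The run- and subject-level inclusion rules can be summarized in a few lines; this is a minimal sketch with assumed inputs, not the original quality-control script.

```python
def run_is_usable(max_scan_to_scan_mm, fraction_repaired):
    """A run is retained if scan-to-scan movement stays under 6 mm and no more
    than 15% of its volumes required de-spiking interpolation."""
    return max_scan_to_scan_mm < 6.0 and fraction_repaired <= 0.15

def subject_is_included(runs):
    """A subject is included if at least 7 of their runs are usable.
    `runs` is a list of (max_movement_mm, fraction_repaired) tuples."""
    usable = [run_is_usable(m, f) for m, f in runs]
    return sum(usable) >= 7
```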

Voxel-wise analysis of fMRI activation

The goal of this analysis was to identify brain regions that showed differential activity levels in response to mother's voice, NF voices, and environmental sounds. Brain activation related to each task condition was first modeled at the individual subject level using boxcar functions with a canonical HRF and a temporal derivative to account for voxel-wise latency differences in hemodynamic response. Low-frequency drifts at each voxel were removed using a high-pass filter (0.5 cycles/min), and serial correlations were accounted for by modeling the fMRI time series as a first-degree autoregressive process (Friston et al., 1997). Voxel-wise t-statistics maps for each condition were generated for each participant using the GLM, along with the respective contrast images. Group-level activation was determined using individual subject contrast images and a second-level ANOVA. The contrasts of interest were as follows: (1) [NF voices – environmental sounds]; (2) [mother's voice – environmental sounds]; and (3) [NF voices – mother's voice]. To ensure that the inclusion of two NF voices compared with the (one) mother's voice stimulus did not bias fMRI contrast betas and T-maps, a value of 1 was entered into the contrast matrix for mother's voice, while a value of 0.5 was entered into the contrast matrix for each of the two NF voices. A fourth contrast of interest [NF voice #1 – NF voice #2] served as a control analysis to examine whether brain activation differences in response to the two NF voices used in the study vary as a function of age.
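
As an illustration of this weighting scheme, the contrasts can be written as weight vectors over the task conditions; the column order below is an assumption for display purposes only.

```python
import numpy as np

# Hypothetical column order of the first-level design matrix:
# [mother's voice, NF voice 1, NF voice 2, environmental sounds]
contrast_mother_vs_env = np.array([1.0, 0.0, 0.0, -1.0])
contrast_nf_vs_env     = np.array([0.0, 0.5, 0.5, -1.0])   # two NF voices weighted 0.5 each
contrast_nf_vs_mother  = np.array([-1.0, 0.5, 0.5, 0.0])   # weights sum to zero
contrast_nf1_vs_nf2    = np.array([0.0, 1.0, -1.0, 0.0])   # control contrast
```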

For all voxel-wise analyses, including the age covariate analysis (Figs. 2-4) and main effects of all stimulus contrasts (Fig. 5), significant clusters of activation were determined using a voxel-wise statistical height threshold of p < 0.005, with family-wise error correction for multiple spatial comparisons (p < 0.05; 67 voxels) determined by Monte Carlo simulations implemented in a custom MATLAB script. Significant correlations are inherent to all scatterplots included in the age covariate analysis (Figs. 2-4) because they are based on results from the whole-brain GLM analysis (Vul et al., 2009); however, the results provide important information regarding the distributions and covariation of activity strength in response to voices and age. For age covariate analyses, effect sizes were computed as Cohen's f according to Equation 1, where t is the mean t-score within a cluster and N is the sample size:

Cohen's f = t / √(N − 2)     (1)
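
A small numeric example of this effect-size conversion, using the equation as reconstructed above and the study's sample size (N = 46) with an illustrative cluster t-score:

```python
import numpy as np

def cohens_f(mean_t, n):
    """Effect size for the age covariate from a cluster's mean t-score,
    following Equation 1 as reconstructed (f = t / sqrt(N - 2) for a single predictor)."""
    return mean_t / np.sqrt(n - 2)

# Illustrative values only: a mean cluster t-score of 3.5 with N = 46 gives f ≈ 0.53
print(cohens_f(mean_t=3.5, n=46))
```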

To examine GLM results in the NAc, a small subcortical brain structure, we used a small volume correction at p < 0.005 using the Harvard-Oxford probabilistic maps of the NAc thresholded at 25%.

Acoustical analysis

The goal of the acoustical analysis was to examine whether physical attributes of the mother's voice stimuli may have contributed to age-related increases in brain activity for the NF voice versus mother's voice contrast (Fig. 4). Acoustical features of each mother's voice stimulus were extracted, including mean pitch, pitch SD, pitch slope, spectral center of gravity, and spectral SD, using Praat software, similar to previous studies (Abrams et al., 2016, 2019). These acoustical values were then averaged across pseudoword stimuli for each mother. We examined the relation between mean acoustical values for each mother and signal levels measured in ROIs identified in Figure 4, including left-hemisphere NAc and right-hemisphere vmPFC, for the [NF – mother's voice] contrast. GLM betas were extracted from left-hemisphere NAc and right-hemisphere vmPFC. Coordinates for these ROIs were based on the voxel with the peak T-score in that region for each of these whole-brain age covariate maps. The cortical ROI (i.e., vmPFC) was defined as a 4 mm sphere and the subcortical ROI (i.e., NAc) as a 2 mm sphere; signal level was calculated by extracting the β value from individual subjects' contrast maps within each ROI and computing the mean β value for each ROI. Results were FDR-corrected for multiple comparisons.
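
A hedged sketch of this feature extraction using the praat-parselmouth Python interface (rather than the Praat scripts used in the study) is shown below; the filename is hypothetical, and the pitch-slope estimate is simplified to a linear fit of F0 over time.

```python
import numpy as np
import parselmouth
from parselmouth.praat import call

def voice_features(wav_path):
    """Extract basic pitch and spectral features from one vocal stimulus."""
    snd = parselmouth.Sound(wav_path)
    pitch = snd.to_pitch()
    f0 = pitch.selected_array['frequency']
    times = pitch.xs()
    voiced = f0 > 0                      # Praat codes unvoiced frames as 0 Hz
    f0_voiced, t_voiced = f0[voiced], times[voiced]

    spectrum = snd.to_spectrum()
    return {
        "mean_pitch_hz": float(np.mean(f0_voiced)),
        "pitch_sd_hz": float(np.std(f0_voiced)),
        # simple linear fit of F0 over time as a stand-in for pitch slope
        "pitch_slope_hz_per_s": float(np.polyfit(t_voiced, f0_voiced, 1)[0]),
        "spectral_cog_hz": call(spectrum, "Get centre of gravity", 2.0),
        "spectral_sd_hz": call(spectrum, "Get standard deviation", 2.0),
    }

# features = voice_features("mother_keebudieshawlt_01.wav")  # hypothetical filename
```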

Sex difference analysis

A regression analysis was performed to examine the effect of sex on age-related changes in ROI signal level measured in response to the [NF voices > mother's voice] GLM contrast. GLM betas were extracted from left-hemisphere NAc and right-hemisphere vmPFC as described previously (see Acoustical analysis). We then built a regression model with age as the dependent variable and sex, mean β value for each ROI, and their interaction as predictors in the regression equation. Significant results for the [sex × mean β value] interaction are reported (p < 0.05).
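
A minimal sketch of this interaction model using the statsmodels formula interface is shown below; the data frame and column names are illustrative assumptions, not the study data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-subject table: age (years), sex, and the mean GLM beta for
# the [NF voices > mother's voice] contrast in one ROI (e.g., left NAc).
df = pd.DataFrame({
    "age": [8.2, 10.5, 12.1, 13.9, 15.4, 16.2],
    "sex": ["F", "M", "F", "M", "F", "M"],
    "roi_beta": [-0.4, -0.1, 0.0, 0.2, 0.5, 0.6],
})

# Age ~ sex + roi_beta + sex:roi_beta; the sex x beta interaction term tests whether
# the age-related change in ROI response differs between males and females.
model = smf.ols("age ~ sex * roi_beta", data=df).fit()
print(model.summary())
```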

Support vector regression (SVR) analysis: brain activity levels and age prediction

The robustness and replicability of fMRI data remain a crucial concern for the field, and an established approach for addressing this issue is to perform a confirmatory cross-validation (CV) analysis (Cohen et al., 2010). We therefore used SVR to perform a confirmatory CV analysis that uses a machine-learning approach with balanced fourfold CV combined with linear regression (Cohen et al., 2010). In this analysis, we extracted individual subject activation β values from the [NF voices > environmental sounds], [mother's voice > environmental sounds], and [NF voices > mother's voice] GLM contrasts as described above. Mean β values for each ROI were entered as independent variables in a linear regression analysis with age as the dependent variable. r(predicted, observed), a measure of how well the independent variable predicts the dependent variable, was first estimated using a balanced fourfold CV procedure. Data were divided into four folds so that the distributions of dependent and independent variables were balanced across folds. Data were randomly assigned to four folds, and the independent and dependent variables were tested in one-way ANOVAs, repeating this assignment as necessary until both ANOVAs were nonsignificant, to ensure balance across the folds. A linear regression model was built using three folds, leaving out the fourth, and this model was then used to predict the data in the left-out fold. This procedure was repeated 4 times to compute a final r(predicted, observed) representing the correlation between the data predicted by the regression model and the observed data. Finally, the statistical significance of the model was assessed using a nonparametric testing approach: the empirical null distribution of r(predicted, observed) was estimated by generating 1000 surrogate datasets under the null hypothesis that there was no association between age and brain activity levels.
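
The following is a simplified sketch of this confirmatory analysis in Python with scikit-learn: fourfold cross-validated linear regression predicting age from ROI betas, followed by a permutation-based null distribution. The ANOVA-based fold balancing described above is replaced here by plain shuffled folds for brevity.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression

def cv_prediction_r(betas, age, n_folds=4, seed=0):
    """Predict age from ROI activation betas with fourfold CV and return
    r(predicted, observed). Fold balancing is simplified (shuffled KFold
    rather than the ANOVA-based balancing used in the study)."""
    betas = np.asarray(betas, dtype=float).reshape(len(age), -1)
    age = np.asarray(age, dtype=float)
    predicted = np.empty_like(age)
    kf = KFold(n_splits=n_folds, shuffle=True, random_state=seed)
    for train_idx, test_idx in kf.split(betas):
        model = LinearRegression().fit(betas[train_idx], age[train_idx])
        predicted[test_idx] = model.predict(betas[test_idx])
    return np.corrcoef(predicted, age)[0, 1]

def permutation_p(betas, age, n_perm=1000, seed=0):
    """Nonparametric p value: compare the observed r(predicted, observed)
    with a null distribution obtained by shuffling age labels."""
    rng = np.random.default_rng(seed)
    observed = cv_prediction_r(betas, age)
    null = [cv_prediction_r(betas, rng.permutation(age)) for _ in range(n_perm)]
    return observed, float(np.mean(np.asarray(null) >= observed))
```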

Results

Voice identification in children and adolescents

We first examined behavioral sensitivity for recognizing mother's voice in child and adolescent participants. Results from the voice identification behavioral task revealed that child and adolescent participants showed high levels of accuracy for identifying mother's voice, with a mean accuracy of 97.7% across all participants (SD = 0.05; Fig. 1b). There was neither a relationship between age and mother's voice identification accuracy (R = 0.09; p = 0.54) nor a group difference in accuracy when splitting the sample into younger participants (7- to 12-year-olds) and older participants (13- to 16-year-olds; two-sample t test, p = 0.73). Reaction time (RT) results from the voice identification task revealed a negative relationship between age and RT (Fig. 1c; R = −0.32; p = 0.03), which is consistent with previous studies reporting reductions in RT as a function of age (Kail, 1991a,b; Kwon et al., 2002). Results from the voice identification task show that all participants were highly accurate at identifying the vocal sources included in the fMRI voice processing task.

Figure 1. fMRI experimental design and behavioral results. a, During fMRI data collection, three auditory nonsense words, produced by three different speakers, were presented to child and adolescent participants at a comfortable listening level. The three speakers consisted of each participant's mother and two NF female voices. Nonspeech environmental sounds were also presented to enable baseline comparisons for the voice contrasts of interest. All auditory stimuli were 0.95 s in duration and were equated for RMS amplitude. b, Child and adolescent participants were able to identify their mother's voice with high levels of accuracy, and there was no relation between age and identification accuracy. c, RTs during the voice identification task decreased as a function of participant age.

Extended Data Figures 1-1 to 1-6. Audio exemplars (WAV files) of the nonsense words "keebudieshawlt," "peebudieshawlt," and "teebudieshawlt" used in the fMRI task, produced by NF female 1 (Figs. 1-1 to 1-3) and NF female 2 (Figs. 1-4 to 1-6).

Age-related changes in neural response to human voices compared with nonsocial stimuli

A primary goal of our study was to identify brain areas that showed age-related changes in neural response to human voices compared with nonsocial stimuli. An influential model identifies adolescence as a "sensitive period" for social information processing, characterized by increased sensitivity to social signals (Blakemore and Mills, 2014), and a prediction of this model is that the transition from childhood to adolescence is accompanied by a general increase in sensitivity to social signals, including those produced by both NF and familial sources, compared with nonsocial stimuli. We therefore used fMRI to measure brain activity in response to brief (0.95 s) samples of three sound sources in 46 neurotypical children and adolescents (for demographic and IQ measures, see Table 1). Auditory stimuli included mother's voice, unfamiliar NF voices, and nonsocial environmental sounds, an acoustical and cognitive control condition of the same duration and intensity as the vocal stimuli (for stimulus design, see Fig. 1a).

Compared with nonsocial environmental sounds, both NF voices (Fig. 2) and mother's voice showed extensive age-related increases in neural activity (Fig. 3; for main effects for all contrasts, see Fig. 5). Age-related increases in activity in response to both vocal stimuli were evident in voice-selective STS, salience processing regions, including AI and dorsal anterior cingulate cortex (dACC), and a key node of the DMN, instantiated in PCC. A critical distinction between these response profiles is that age-related changes in response to NF voices (Fig. 2), but not mother's voice (Fig. 3), were evident in NAc and vmPFC. These results indicate that, compared with nonsocial environmental sounds, both mother's voice and NF voices show age-related increases across a wide expanse of auditory, salience, and social evaluative processing systems. Findings suggest that a large extent of the social brain increasingly “tunes in” to a range of social stimuli, including both mother's voice and NF voices, as children progress into adolescence.

Figure 2. Age-related increases in brain activity in response to NF female voices compared with nonsocial environmental sounds. Color maps represent significant effects of age for the [NF voices – environmental sounds] contrast. Compared with children, adolescents show increased neural activity in response to NF voices compared with nonsocial environmental sounds throughout a wide range of brain systems. NF voices elicit age-related increases in activity in voice-selective STS extending to middle temporal gyrus (MTG; top left), dorsolateral regions of prefrontal cortex (DLPFC; top right), nodes of the salience network, including AI and dACC (bottom left), NAc and OFC of the mesolimbic reward system (middle right), and PCC and vmPFC of the DMN (bottom right). Significant correlations are inherent to all scatterplots because they are based on results from the whole-brain age covariate GLM analysis (Vul et al., 2009); however, the results provide important information regarding the distributions and covariation of activity strength in response to NF voices and age.

Figure 3. Age-related increases in brain activity in response to mother's voice compared with nonsocial environmental sounds. Compared with children, adolescents show increased neural activity in response to mother's voice compared with nonsocial environmental sounds throughout a wide range of brain systems. Mother's voice elicits age-related increases in activity in voice-selective STS extending to middle temporal gyrus (MTG; top left), dorsolateral regions of prefrontal cortex (DLPFC; top right), nodes of the salience network, including AI and dACC (bottom left), and PCC and dorsomedial prefrontal cortex (DMPFC) of the DMN (bottom right). NAc of the mesolimbic reward system did not reveal age-related increases in activity in response to mother's voice. Color maps represent significant effects of age for the [mother's voice – environmental sounds] contrast. Precun, Precuneus.

Age-related changes in neural response to mother's versus NF voices

A critical next step in our analysis was to examine age-related changes in neural activity associated with the direct comparison between mother's and NF voices. This analysis provides an opportunity to examine whether children and adolescents show stimulus-specific neural preferences for these stimuli across different ages during child and adolescent development. Results from this analysis confirmed age-related increases in activity for NF voices compared with mother's voice in NAc of the reward processing system and vmPFC of the DMN (Fig. 4). Specifically, younger children showed a preference for mother's voice compared with NF voices in the NAc and vmPFC, whereas older adolescents revealed a preference for NF voices compared with mother's voice in these brain regions. The zero-crossings of the fitted regression lines (scatterplots, dotted blue vertical lines) show that the neural transition between mother's and NF voice preference occurs between 13 and 14 years of age in the NAc and vmPFC.

Figure 4. Age-related increases in brain activity in response to nonfamilial (NF) voices compared with mother's voice. Results reveal stimulus-specific preference for vocal sources at different ages of child and adolescent development: while younger children showed increased neural activity for mother's voice compared with NF voices in the NAc of the mesolimbic reward system (top; left side of scatterplot) and vmPFC of the DMN (bottom), older adolescents showed increased neural activity for NF voices compared with mother's voice in these particular brain regions (right side of scatterplots). Color maps represent significant effects of age for the [NF voices – mother's voice] contrast. Blue vertical lines in the scatterplots indicate the zero crossing for the regression lines, which identify the age associated with the neural transition between mother's and NF voice preference (13.8 and 13.5 years old in the NAc and vmPFC, respectively).

Figure 5. Main effects for vocal contrasts. a, Brain activation to NF female voices compared with environmental sounds. NF voices elicit greater activity throughout a wide extent of voice-selective superior temporal gyrus (STG) and STS (top left), bilateral amygdala (Amyg; top right), superior parietal lobule (SPL) and PCC of parietal cortex (bottom left), bilateral lateral occipital cortex (LOC; bottom middle), and sensorimotor cortex (bottom left). b, Brain activation to mother's voice compared with environmental sounds. Mother's voice elicits greater activity throughout a wide extent of voice-selective STG and STS (top left), BA 45 of PFC (top right), bilateral amygdala (Amyg; bottom left), OFC of the mesolimbic reward system (bottom middle), and PCC, medial prefrontal cortex (mPFC), and angular gyrus (AG) of the DMN (bottom right). c, Brain activity for NF female voices compared with mother's voice. NF voices elicited greater activity in right-hemisphere postcentral gyrus of parietal cortex. d, Brain activation to mother's voice compared with NF voices. Mother's voice elicited greater activity throughout a wide extent of voice-selective STG and STS (top left), nodes of the mesolimbic reward system, including OFC and vmPFC (top right), nodes of the salience network, including AI and dACC (bottom left), PCC of parietal cortex (bottom middle), and lateral frontoparietal regions, including AG, supramarginal gyrus, and BA 45 (bottom right).

Control analyses

We performed multiple control analyses to further probe these age-related increases in neural activity for NF voices compared with mother's voice. First, an analysis was performed to examine the influence of low-level acoustical features in the vocal stimuli on neural results. Acoustical characteristics of voices, which include pitch, harmonic, and amplitude features, are idiosyncratic between speakers, enabling listeners to rapidly discriminate between vocal sources (Hecker, 1971). Acoustical features of each mother's voice stimulus were extracted, including mean pitch, pitch SD, pitch slope, spectral center of gravity, and spectral SD, and were then averaged across pseudoword stimuli for each mother. The relation between acoustical values for each mother and signal levels measured in ROIs identified in Figure 4 for the [NF – mother's voice] contrast was then examined. Results showed that none of the correlations between acoustical features of mother's voice and brain activation in response to mother's voice survived FDR correction (q < 0.05), indicating that age-related changes in brain responses to vocal stimuli did not simply reflect low-level acoustical features of the stimuli. Rather, results suggest that neural response properties reflect the biological salience of these stimuli.

A second control analysis examined whether brain activation differences for the two NF voices vary as a function of age, potentially influencing the observed patterns of results. Results from this analysis failed to reveal a relationship between brain activation differences for the two NF voices and age in either of the brain regions highlighted in Figure 4 (p > 0.30 for all regions).

A third control analysis examined whether sex differences influence age-related increases in neural activity for NF voices compared with mother's voice, consistent with previous studies showing that sex differences are associated with behavioral (Kret and De Gelder, 2012) and neural processing (Whittle et al., 2011) of social-emotional stimuli. The regression analysis focused on the brain regions identified in the direct comparison between mother's and NF voices (Fig. 4). Regression results revealed that male and female groups showed similar age-related increases in neural response for NF compared with mother's voice (p > 0.15 for male vs female group comparisons in both NAc and vmPFC), indicating that sex differences did not contribute to age-related increases in neural activity for NF voices compared with mother's voice.

A fourth control analysis examined whether behavioral accuracy (Fig. 1b) or RT (Fig. 1c) measured during the voice identification task were related to neural activity for NF voices compared with mother's voice. Results from this analysis failed to reveal a relationship between neural activity for NF voices compared with mother's voice and either voice identification accuracy (p > 0.35) or RT (p > 0.10).

Robustness of age-related neurodevelopmental changes

To examine the robustness and reliability of brain activity levels for predicting age, we used SVR to perform a confirmatory CV analysis that uses a machine-learning approach with balanced fourfold CV combined with linear regression (Cohen et al., 2010; Abrams et al., 2016). Results showed that the strength of neural activity was a reliable predictor of age across child and adolescent participants for all regions identified in Figures 2-4 (R ≥ 0.34; p ≤ 0.006 for all regions). These results demonstrate that child and adolescent development is associated with robust age-related increases in neural activity across a wide array of brain systems in response to human voices compared with nonsocial environmental sounds, including both NF voices and mother's voice. Importantly, in a direct comparison between mother's voice and NF voices, stimulus-specific preferences in NAc and vmPFC for these stimuli were observed at different ages during development, reflecting social preference for mother's voice in young children and preference for NF voices in adolescents.

Discussion

A hallmark of children's social worlds is a focus on parents and caregivers, whereas adolescents show a shift in social orientation toward NF social targets. Here we demonstrate stimulus-specific neural preferences for mother's voice and NF voices in key nodes of the reward and default mode networks across neurodevelopment. Specifically, younger children showed greater neural activity in NAc of the reward circuit and vmPFC of the DMN, a brain network associated with social valuation, in response to mother's voice compared with NF voices, whereas older adolescents revealed greater neural activity in response to NF voices compared with mother's voice in these brain systems. Findings provide new information by highlighting neural features underlying changes in social orientation that occur during adolescent development.

Findings from the current study inform our understanding of several key principles regarding adolescent social development. First, a prominent developmental model identifies five distinct stages of social development across the lifespan, each of which is defined by a primary social target whose engagement is vigorously pursued within each developmental stage (Nelson et al., 2016). For infants, mothers and caregivers are the primary social target. During the juvenile phase, which extends from weaning to puberty, the mother/caregiver is considered a core target (with an increasing focus on peers), whereas the adolescent phase, which extends between puberty and full maturity, focuses on integration with NF peer social groups. This model further states that shifts across developmental stages are accompanied by a reduction in the motivation to engage with the target from the previous stage. For example, the model predicts that adolescence would be associated with both increased motivation to engage with NF social targets and reduced motivation to engage with parents and caregivers. Findings from the current study, which spanned the juvenile and adolescent phases from this model (Nelson et al., 2016), are consistent with this model by revealing that regional activity profiles within the NAc and vmPFC reflect the primary social targets identified for these two developmental stages, with younger children showing increased activity for mother's compared with NF voices and older adolescents showing the opposite effect. Increased activity for NF voices in the NAc is particularly relevant given extensive evidence from behavioral (Steinberg et al., 2008; Somerville et al., 2017) and neural studies (Galvan et al., 2006; Steinberg, 2008) showing heightened sensitivity to novelty in reward systems during adolescence. Current findings add to this literature by showing that brief (<1 s) auditory social cues produced by novel social targets elicit increased NAc activity in adolescents relative to mother's voice.

Findings further inform key components of a related developmental model which identifies adolescence as a "sensitive period" for social information processing characterized by increased sensitivity to social signals (Blakemore and Mills, 2014). For example, previous studies have consistently shown that adolescents are more adept at key aspects of social perception (Fuhrmann et al., 2016) and cognition (Güroğlu et al., 2009; Dumontheil et al., 2010) compared with pre-adolescent children. Therefore, a plausible hypothesis is that adolescence would be accompanied by a general increase in sensitivity to social signals, including those produced by both NF and familial sources, compared with nonsocial stimuli. Consistent with this hypothesis, results from the current study revealed age-related increases in neural activity in voice-selective STS, AI, and dACC of the salience network, and PCC of the DMN, in response to both NF and mother's voice stimuli compared with nonsocial environmental sounds (Figs. 2 and 3). While a comprehensive test of a sensitive period model requires longitudinal data and additional samples with adult participants, our findings are consistent with an important component of this model by showing a shift in neural activity profiles across sensory, salience, and default mode regions during social stimulus processing between childhood and adolescence. It is further hypothesized that "tuning into" perceptual aspects of social information, such as human vocal stimuli, during adolescence may serve as a critical precursor for increased higher-order social cognitive processing, including understanding the perspectives and intentions of others during interactions (Güroğlu et al., 2009; Dumontheil et al., 2010).

A consideration regarding the stimuli used in this study is that there are two major differences between the mother's voice and NF voice conditions: familial versus NF (social relationship) and familiar versus unfamiliar (familiarity). Therefore, there is some ambiguity regarding whether age-related changes are because of changes in processing familiarity or changes in processing social relationship. The "familial" versus "familiar" question is inherent to all mother's voice studies, of which there is an extensive behavioral literature dating back many decades (Mills and Melhuish, 1974). Because of the unique nature of mother's voice, none of these studies address, or could possibly address, the question: do infants recognize mother's voice because it is familial or because it is familiar? In the context of the current study, the crucial finding is that, consistent with mother's voice being a unique and biologically salient signal associated with social and language learning, children show increased neural selectivity in reward processing regions for mother's voice versus nonmaternal voices. Remarkably, our study is the first to show that adolescents show the opposite effect, with increased neural selectivity in reward processing regions for nonmaternal voices compared with mother's voice. We believe that our neurodevelopmental study provides an exciting starting point for future research to disentangle the contribution of these factors.

In conclusion, we have examined the developmental features underlying changes in neural sensitivity to familial and NF individuals across childhood and adolescence. Our findings demonstrate that brain systems involved in reward and social valuation processing show stimulus-specific preferences for mother's and NF voices during different ages across child and adolescent development. Findings provide a neurobiological template for understanding dynamic changes in social orientation throughout the lifespan in both neurotypical populations and clinical psychiatric populations who experience deficits in social function, such as individuals with autism (Abrams et al., 2013b, 2019).

Footnotes

  • This work was supported by National Institutes of Health Grants K01 MH102428 to D.A.A., DC011095 and MH084164 to V.M., and DC017950 and DC017950-S1 to D.A.A. and V.M.; Brain and Behavior Research Foundation NARSAD Young Investigator Grant to D.A.A.; the Singer Foundation; and Simons Foundation/SFARI 308939 to V.M. All fMRI activation maps reported in the manuscript will be made available at NeuroVault (https://neurovault.org/collections). Full single subject raw data will be made public on the National Institutes of Health NDAR repository, as per National Institutes of Health rules (procedure is ongoing). We thank all the children and their parents who participated in our study; Emma Adair and the staff at the Stanford Lucas Center for Imaging for assistance with data collection; Shelby Karraker for assistance with data processing; and Heidi Abrams and Cindy Anderson for help with stimulus production.

  • The authors declare no competing financial interests.

  • Correspondence should be addressed to Daniel A. Abrams at daa{at}stanford.edu or Vinod Menon at menon{at}stanford.edu

SfN exclusive license.

References

1. Abrams DA, Nicol T, Zecker S, Kraus N (2008) Right-hemisphere auditory cortex is dominant for coding syllable patterns in speech. J Neurosci 28:3958–3965. doi:10.1523/JNEUROSCI.0187-08.2008
2. Abrams DA, Bhatara A, Ryali S, Balaban E, Levitin DJ, Menon V (2011) Decoding temporal structure in music and speech relies on shared brain resources but elicits different fine-scale spatial patterns. Cereb Cortex 21:1507–1518. doi:10.1093/cercor/bhq198
3. Abrams DA, Ryali S, Chen T, Balaban E, Levitin DJ, Menon V (2013a) Multivariate activation and connectivity patterns discriminate speech intelligibility in Wernicke's, Broca's, and Geschwind's areas. Cereb Cortex 23:1703–1714. doi:10.1093/cercor/bhs165
4. Abrams DA, Lynch CJ, Cheng KM, Phillips J, Supekar K, Ryali S, Uddin LQ, Menon V (2013b) Underconnectivity between voice-selective cortex and reward circuitry in children with autism. Proc Natl Acad Sci USA 110:12060–12065. doi:10.1073/pnas.1302982110
5. Abrams DA, Chen T, Odriozola P, Cheng KM, Baker AE, Padmanabhan A, Ryali S, Kochalka J, Feinstein C, Menon V (2016) Neural circuits underlying mother's voice perception predict social communication abilities in children. Proc Natl Acad Sci USA 113:6295–6300. doi:10.1073/pnas.1602948113
6. Abrams DA, Padmanabhan A, Chen T, Odriozola P, Baker AE, Kochalka J, Phillips JM, Menon V (2019) Impaired voice processing in reward and salience circuits predicts social communication in children with autism. Elife 8:e39906. doi:10.7554/eLife.39906
7. Allen JP, Porter MR, McFarland FC, Marsh P, McElhaney KB (2005) The two faces of adolescents' success with peers: adolescent popularity, social adaptation, and deviant behavior. Child Dev 76:747–760. doi:10.1111/j.1467-8624.2005.00875.x
8. Belin P, Zatorre RJ, Lafaille P, Ahad P, Pike B (2000) Voice-selective areas in human auditory cortex. Nature 403:309–312. doi:10.1038/35002078
9. Bernier A, Carlson SM, Deschenes M, Matte-Gagne C (2012) Social factors in the development of early executive functioning: a closer look at the caregiving environment. Dev Sci 15:12–24. doi:10.1111/j.1467-7687.2011.01093.x
10. Binder JR, Frost JA, Hammeke TA, Bellgowan PS, Springer JA, Kaufman JN, Possing ET (2000) Human temporal lobe activation by speech and nonspeech sounds. Cereb Cortex 10:512–528. doi:10.1093/cercor/10.5.512
11. Blakemore SJ, Mills KL (2014) Is adolescence a sensitive period for sociocultural processing? Annu Rev Psychol 65:187–207. doi:10.1146/annurev-psych-010213-115202
12. Cassidy J, Kirsh SJ, Scolton KL, Parke RD (1996) Attachment and representations of peer relationships. Dev Psychol 32:892–904. doi:10.1037/0012-1649.32.5.892
13. Cohen JR, Asarnow RF, Sabb FW, Bilder RM, Bookheimer SY, Knowlton BJ, Poldrack RA (2010) Decoding developmental differences and individual variability in response inhibition through predictive analyses across individuals. Front Hum Neurosci 4:47. doi:10.3389/fnhum.2010.00047
14. Cox RW (1996) AFNI: software for analysis and visualization of functional magnetic resonance neuroimages. Comput Biomed Res 29:162–173. doi:10.1006/cbmr.1996.0014
15. Dumontheil I, Apperly IA, Blakemore SJ (2010) Online usage of theory of mind continues to develop in late adolescence. Dev Sci 13:331–338. doi:10.1111/j.1467-7687.2009.00888.x
16. Fareri DS, Chang LJ, Delgado MR (2015) Computational substrates of social value in interpersonal collaboration. J Neurosci 35:8170–8180. doi:10.1523/JNEUROSCI.4775-14.2015
17. Friston KJ, Buechel C, Fink GR, Morris J, Rolls E, Dolan RJ (1997) Psychophysiological and modulatory interactions in neuroimaging. Neuroimage 6:218–229. doi:10.1006/nimg.1997.0291
18. Fuhrmann D, Knoll LJ, Sakhardande AL, Speekenbrink M, Kadosh KC, Blakemore SJ (2016) Perception and recognition of faces in adolescence. Sci Rep 6:33497. doi:10.1038/srep33497
19. Furman W, Buhrmester D (1992) Age and sex differences in perceptions of networks of personal relationships. Child Dev 63:103–115. doi:10.1111/j.1467-8624.1992.tb03599.x
20. Galvan A, Hare TA, Parra CE, Penn J, Voss H, Glover G, Casey BJ (2006) Earlier development of the accumbens relative to orbitofrontal cortex might underlie risk-taking behavior in adolescents. J Neurosci 26:6885–6892. doi:10.1523/JNEUROSCI.1062-06.2006
21. Glover GH, Law CS (2001) Spiral-in/out BOLD fMRI for increased SNR and reduced susceptibility artifacts. Magn Reson Med 46:515–522. doi:10.1002/mrm.1222
22. Greicius MD, Krasnow B, Reiss AL, Menon V (2003) Functional connectivity in the resting brain: a network analysis of the default mode hypothesis. Proc Natl Acad Sci USA 100:253–258. doi:10.1073/pnas.0135058100
23. Güroğlu B, van den Bos W, Crone EA (2009) Fairness considerations: increasing understanding of intentionality during adolescence. J Exp Child Psychol 104:398–409. doi:10.1016/j.jecp.2009.07.002
24. Gusnard DA, Akbudak E, Shulman GL, Raichle ME (2001) Medial prefrontal cortex and self-referential mental activity: relation to a default mode of brain function. Proc Natl Acad Sci USA 98:4259–4264. doi:10.1073/pnas.071043098
25. Guyer AE, McClure-Tone EB, Shiffrin ND, Pine DS, Nelson EE (2009) Probing the neural correlates of anticipated peer evaluation in adolescence. Child Dev 80:1000–1015. doi:10.1111/j.1467-8624.2009.01313.x
26. Haber SN, Knutson B (2010) The reward circuit: linking primate anatomy and human imaging. Neuropsychopharmacology 35:4–26. doi:10.1038/npp.2009.129
27. Hecker MH (1971) Speaker recognition: an interpretive survey of the literature. ASHA Monogr 16:1–103.
28. Iuculano T, Rosenberg-Lee M, Supekar K, Lynch CJ, Khouzam A, Phillips J, Uddin LQ, Menon V (2014) Brain organization underlying superior mathematical abilities in children with autism. Biol Psychiatry 75:223–230. doi:10.1016/j.biopsych.2013.06.018
29. Kail R (1991a) Processing time declines exponentially during childhood and adolescence. Dev Psychol 27:259–266. doi:10.1037/0012-1649.27.2.259
30. Kail R (1991b) Developmental change in speed of processing during childhood and adolescence. Psychol Bull 109:490–501. doi:10.1037/0033-2909.109.3.490
  31. ↵
    1. Kisilevsky BS,
    2. Hains SM,
    3. Brown CA,
    4. Lee CT,
    5. Cowperthwaite B,
    6. Stutzman SS,
    7. Swansburg ML,
    8. Lee K,
    9. Xie X,
    10. Huang H,
    11. Ye HH,
    12. Zhang K,
    13. Wang Z
    (2009) Fetal sensitivity to properties of maternal speech and language. Infant Behav Dev 32:59–71. doi:10.1016/j.infbeh.2008.10.002 pmid:19058856
    OpenUrlCrossRefPubMed
  32. ↵
    1. Kret ME,
    2. De Gelder B
    (2012) A review on sex differences in processing emotional signals. Neuropsychologia 50:1211–1221. doi:10.1016/j.neuropsychologia.2011.12.022 pmid:22245006
    OpenUrlCrossRefPubMed
  33. ↵
    1. Kumar P,
    2. Pisoni A,
    3. Bondy E,
    4. Kremens R,
    5. Singleton P,
    6. Pizzagalli DA,
    7. Auerbach RP
    (2019) Delineating the social valuation network in adolescents. Soc Cogn Affect Neurosci 14:1159–1166. doi:10.1093/scan/nsz086 pmid:31680163
    OpenUrlCrossRefPubMed
  34. ↵
    1. Kwon H,
    2. Reiss AL,
    3. Menon V
    (2002) Neural basis of protracted developmental changes in visuo-spatial working memory. Proc Natl Acad Sci USA 99:13336–13341. doi:10.1073/pnas.162486399 pmid:12244209
    OpenUrlAbstract/FREE Full Text
  35. ↵
    1. Landry SH,
    2. Smith KE,
    3. Swank PR
    (2006) Responsive parenting: establishing early foundations for social, communication, and independent problem-solving skills. Dev Psychol 42:627–642. doi:10.1037/0012-1649.42.4.627 pmid:16802896
    OpenUrlCrossRefPubMed
  36. ↵
    1. Larson RW,
    2. Richards MH,
    3. Moneta G,
    4. Holmbeck G,
    5. Duckett E
    (1996) Changes in adolescents' daily interactions with their families from ages 10 to 18: disengagement and transformation. Dev Psychol 32:744–754. doi:10.1037/0012-1649.32.4.744
    OpenUrlCrossRef
  37. ↵
    1. Liu HM,
    2. Kuhl PK,
    3. Tsao FM
    (2003) An association between mothers' speech clarity and infants' speech discrimination skills. Dev Sci 6:F1–F10. doi:10.1111/1467-7687.00275
    OpenUrlCrossRef
  38. ↵
    1. Majorano M,
    2. Rainieri C,
    3. Corsano P
    (2013) Parents' child-directed communication and child language development: a longitudinal study with Italian toddlers. J Child Lang 40:836–859. doi:10.1017/S0305000912000323 pmid:22883628
    OpenUrlCrossRefPubMed
  39. ↵
    1. Menon V,
    2. Uddin LQ
    (2010) Saliency, switching, attention and control: a network model of insula function. Brain Struct Funct 214:655–667. doi:10.1007/s00429-010-0262-0 pmid:20512370
    OpenUrlCrossRefPubMed
  40. ↵
    1. Mills M,
    2. Melhuish E
    (1974) Recognition of mother's voice in early infancy. Nature 252:123–124. doi:10.1038/252123a0 pmid:4420642
    OpenUrlCrossRefPubMed
  41. ↵
    1. Nelson EE,
    2. Jarcho JM,
    3. Guyer AE
    (2016) Social re-orientation and brain development: an expanded and updated view. Dev Cogn Neurosci 17:118–127. doi:10.1016/j.dcn.2015.12.008 pmid:26777136
    OpenUrlCrossRefPubMed
  42. ↵
    1. Raettig T,
    2. Kotz SA
    (2008) Auditory processing of different types of pseudo-words: an event-related fMRI study. Neuroimage 39:1420–1428. doi:10.1016/j.neuroimage.2007.09.030 pmid:17981055
    OpenUrlCrossRefPubMed
  43. ↵
    1. Rah Y,
    2. Parke RD
    (2008) Pathways between parent–child interactions and peer acceptance: the role of children's social information processing. Soc Dev 17:341–357. doi:10.1111/j.1467-9507.2007.00428.x
    OpenUrlCrossRef
  44. ↵
    1. Sachser N,
    2. Hennessy MB,
    3. Kaiser S
    (2018) The adaptive shaping of social behavioural phenotypes during adolescence. Biol Lett 14:20180536.
    OpenUrlCrossRefPubMed
  45. ↵
    1. Seltzer LJ,
    2. Prososki AR,
    3. Ziegler TE,
    4. Pollak SD
    (2012) Instant messages vs. speech: hormones and why we still need to hear each other. Evol Hum Behav 33:42–45. doi:10.1016/j.evolhumbehav.2011.05.004 pmid:22337755
    OpenUrlCrossRefPubMed
  46. ↵
    1. Somerville LH,
    2. Hare T,
    3. Casey BJ
    (2011) Frontostriatal maturation predicts cognitive control failure to appetitive cues in adolescents. J Cogn Neurosci 23:2123–2134. doi:10.1162/jocn.2010.21572 pmid:20809855
    OpenUrlCrossRefPubMed
  47. ↵
    1. Somerville LH,
    2. Sasse SF,
    3. Garrad MC,
    4. Drysdale AT,
    5. Abi Akar N,
    6. Insel C,
    7. Wilson RC
    (2017) Charting the expansion of strategic exploratory behavior during adolescence. J Exp Psychol Gen 146:155–164. doi:10.1037/xge0000250 pmid:27977227
    OpenUrlCrossRefPubMed
  48. ↵
    1. Steinberg L
    (2008) A social neuroscience perspective on adolescent risk-taking. Dev Rev 28:78–106. doi:10.1016/j.dr.2007.08.002 pmid:18509515
    OpenUrlCrossRefPubMed
  49. ↵
    1. Steinberg L,
    2. Albert D,
    3. Cauffman E,
    4. Banich M,
    5. Graham S,
    6. Woolard J
    (2008) Age differences in sensation seeking and impulsivity as indexed by behavior and self-report: evidence for a dual systems model. Dev Psychol 44:1764–1778. doi:10.1037/a0012955 pmid:18999337
    OpenUrlCrossRefPubMed
  50. ↵
    1. Vul E,
    2. Harris C,
    3. Winkielman P,
    4. Pashler H
    (2009) Puzzlingly high correlations in fMRI studies of emotion, personality, and social cognition. Perspect Psychol Sci 4:274–290. doi:10.1111/j.1745-6924.2009.01125.x pmid:26158964
    OpenUrlCrossRefPubMed
  51. ↵
    1. Wagner RK,
    2. Rashotte JK,
    3. Pearson CA
    (1999) Comprehensive test of phonological processing: CTOPP. Austin, TX: Pro-Ed.
  52. ↵
    1. Wechsler D
    (1999) Wechsler Abbreviated Scale of Intelligence. San Antonio, TX: Harcourt.
  53. ↵
    1. Whittle S,
    2. Yucel M,
    3. Yap MB,
    4. Allen NB
    (2011) Sex differences in the neural correlates of emotion: evidence from neuroimaging. Biol Psychol 87:319–333. doi:10.1016/j.biopsycho.2011.05.003 pmid:21600956
    OpenUrlCrossRefPubMed
Keywords: adolescence, auditory, development, reward, voice, brain