Research Articles, Behavioral/Cognitive

A Computational Account of Optimizing Social Predictions Reveals That Adolescents Are Conservative Learners in Social Contexts

Gabriela Rosenblau, Christoph W. Korn and Kevin A. Pelphrey
Journal of Neuroscience 24 January 2018, 38 (4) 974-988; https://doi.org/10.1523/JNEUROSCI.1044-17.2017
Gabriela Rosenblau
1Autism and Neurodevelopmental Disorders Institute, George Washington University and Children's National Health System, Washington, DC 20052,
2Yale Child Study Center, Yale University, New Haven, Connecticut 06520, and
Christoph W. Korn
3Institute for Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg 20246, Germany
Kevin A. Pelphrey
1Autism and Neurodevelopmental Disorders Institute, George Washington University and Children's National Health System, Washington, DC 20052,
2Yale Child Study Center, Yale University, New Haven, Connecticut 06520, and

Abstract

As adolescents transition to the complex world of adults, optimizing predictions about others' preferences becomes vital for successful social interactions. Mounting evidence suggests that these social learning processes are affected by ongoing brain development across adolescence. A mechanistic understanding of how adolescents optimize social predictions and how these learning strategies are implemented in the brain is lacking. To fill this gap, we combined computational modeling with functional neuroimaging. In a novel social learning task, male and female human adolescents and adults predicted the preferences of peers and could update their predictions based on trial-by-trial feedback about the peers' actual preferences. Participants also rated their own preferences for the task items and similar additional items. To describe how participants optimize their inferences over time, we pitted simple reinforcement learning models against more specific “combination” models, which describe inferences based on a combination of reinforcement learning from past feedback and participants' own preferences. Formal model comparison revealed that, of the tested models, combination models best described how adults and adolescents update predictions of others. Parameter estimates of the best-fitting model differed between age groups, with adolescents showing more conservative updating. This developmental difference was accompanied by a shift in encoding predictions and the errors thereof within the medial prefrontal and fusiform cortices. In the adolescent group, encoding of own preferences and prediction errors scaled with parent-reported social traits, which provides additional external validity for our learning task and the winning computational model. Our findings thus help to specify adolescent-specific social learning processes.

SIGNIFICANCE STATEMENT Adolescence is a unique developmental period of heightened awareness about other people. Here we probe the suitability of various computational models to describe how adolescents update their predictions of others' preferences. Within the tested model space, predictions of adults and adolescents are best described by the same learning model, but adolescents show more conservative updating. Compared with adults, brain activity of adolescents is modulated less by predictions themselves and more by prediction errors per se, and this relationship scales with adolescents' social traits. Our findings help specify social learning across adolescence and generate hypotheses about social dysfunctions in psychiatric populations.

  • adolescence
  • fMRI
  • mPFC
  • mental state inference
  • preferences
  • reinforcement learning

Introduction

As social networks grow in size and complexity, adolescents have to continuously adapt to ever more challenging environments (Crone and Dahl, 2012). This adaptation requires improving Theory of Mind (ToM) abilities, that is, improving predictions about others' preferences, mental states, and behaviors (Crick and Dodge, 1994). However, a mechanistic account of ToM development over the course of adolescence is sorely needed, especially given that aberrant social development during adolescence increases the risk for psychiatric disorders (Mrazek and Haggerty, 1994).

Mounting evidence suggests that multiple decision processes, including social decision making and their neural implementation, follow a nonlinear (inverted) U-shaped trajectory (Crone and Dahl, 2012; Pfeifer and Blakemore, 2012; Hartley and Somerville, 2015). Studies with broad age ranges and longitudinal designs highlight adolescence as a sensitive period, in which individuals show more explorative behavior, cognitive flexibility, and reward sensitivity in nonsocial contexts (Cohen et al., 2010; Galvan, 2010; Hauser et al., 2015). Yet, emerging evidence indicates that adolescents are more rigid in social contexts (Jones et al., 2014).

Adolescents' reduced flexibility in social decision making and ToM capabilities may result from the dramatic changes in brain regions that support social information processing. Regions assigned to the ToM network, such as the mPFC (Yang et al., 2015; Rosenblau et al., 2016), undergo the most prominent structural and functional development during adolescence (Blakemore, 2010). Some evidence relates ToM abilities to brain plasticity over the course of adolescence (Blakemore and Mills, 2014); for instance, adolescents display greater mPFC activity during mental state inference than adults (Blakemore, 2007). But it is unclear to what extent the mPFC reflects ongoing development in making predictions about others. A precise characterization of social development, in particular ToM development, during adolescence is crucial for understanding adolescents' increased risk for developing neuropsychiatric disorders (Malti and Krettenauer, 2013; Laible et al., 2014).

Here, we combined computational modeling with functional neuroimaging to elucidate social learning in typical adolescents. Computational models, in particular variants of reinforcement learning (RL) models, provide a mechanistic description of how humans learn about rewards in the nonsocial domain; these models rely on prediction errors (PEs), the differences between expected and experienced outcomes, to update expectations about the future (Montague et al., 2006; Dayan and Niv, 2008). Recently, such models have been harnessed to describe how adults make dynamic social decisions (Hampton et al., 2008; Behrens et al., 2009; Ruff and Fehr, 2014; Garvert et al., 2015). But simple RL models fall short of capturing the complex dynamics of tracking another person's preferences (Hampton et al., 2008; Behrens et al., 2009), especially when self-related information processing and attribution biases may interfere with learning (Korn et al., 2012, 2016).

Here, we devised a novel task, in which participants had to predict other people's mental states, specifically their preferences for activities, fashion, and food items, over time, and could learn about these people's actual preferences from trial-by-trial feedback. By adapting RL models to describe learning about others, we followed a burgeoning literature (Behrens et al., 2008; Hampton et al., 2008; Ruff and Fehr, 2014). Specifically, we pitted variants of the simple RL model against more specific combination models, to describe how adolescents and adults updated predictions about a person's preference over time. The combination models assume that participants' predictions are based on trial-by-trial feedback about the person's actual preference and their own preference (OP) for a given item.

First, we expected adults to make more accurate predictions (resulting in overall lower PEs). Second, adolescents may rely more on their OPs than adults when making judgments. Third, we expected developmental changes in neural encoding of model variables, notably predictions and PEs in the mPFC. Finally, we hypothesized that adolescents' tendency to update predictions would vary with their social traits, measured with the Social Responsiveness Scale (SRS) (Constantino and Gruber, 2012), a parent-report questionnaire calibrated for typical adolescent samples and adolescents with autism (Payakachat et al., 2012). Given that the current literature suggests nonlinear development of decision-making skills, including social decision making (Casey et al., 2008; Hartley and Somerville, 2015), we tested linear as well as quadratic effects of age and social traits on adolescents' behavioral and neural responses (Shaw et al., 2012; Somerville et al., 2013; Braams et al., 2015).

Materials and Methods

Participants

Adult participants were recruited via mailing lists and advertisements at Yale University. Twenty-one adults took part in the study (12 female; mean ± SD, age = 28.4 ± 4.0 years; age range: 23–36 years). Twenty-eight adolescent participants, matched for gender, were recruited via mailing lists and existing participant databases (12 female, age = 13.8 ± 2.3 years; age range: 10–17 years). Participants met MRI inclusion criteria; they did not have any neuropsychiatric disorder and did not take psychotropic medication. Included participants did not show head motion deviation from the initial position >4.5 mm or 4.5 degrees on any of the three translational or three rotational axes at any point throughout the scan. Four adolescents were excluded from the analysis due to excessive motion or insufficient behavioral responses (we excluded participants with <50% of valid responses in any run, N = 2). This resulted in a final sample of 24 adolescents (10 female; age = 13.5 ± 2.2 years; age range: 10–17 years). The study was approved by the Human Research Protection Committee at Yale University. All participants provided written informed consent and received $100 compensation. We consented adolescent participants in the presence of at least one parent or guardian. We obtained both parental written informed consent and additional written assent from the adolescents.

Preference survey

We designed a preference survey to acquire preference profiles for the main fMRI study. None of the survey participants took part in the main fMRI study. The survey comprised pictures of activities, fashion, and food items and a short demographic questionnaire. It was available through the Yale Qualtrics Survey Tool (www.yalesurvey.qualtrics.com) and took ∼25 min to complete. Survey participants rated how much they liked each item on a 10 point Likert scale ranging from 1 (not at all) to 10 (very much) and were offered a $5.00 gift card upon completion. Six adult coworkers, who were not otherwise involved in the study, and 6 adolescents, who had participated in previous studies, came to the laboratory to complete the survey. We selected 3 adult and 3 corresponding adolescent profiles that were maximally distinguishable (i.e., the preferences of the 3 selected adults and adolescents showed the lowest absolute agreement in ratings, assessed with intraclass correlations [ICC]). The average absolute agreement of the three selected profiles in both groups was low to moderate (Hays and Revicki, 2005) (adult ICC = 0.495, 95% CI 0.395–0.581, F(343,686) = 1.987, p < 0.001; adolescent ICC = 0.713, 95% CI 0.655–0.762, F(343,686) = 3.565, p < 0.001). The participants who took part in the main fMRI study completed the same survey after the preference task (see Experimental design and statistical analysis).

Stimuli

The 343 picture items of the preference survey were assigned to one of three categories with four subcategories each. The survey included 120 activity items (arts and crafts; music; sports; toys, gadgets, and games), 106 fashion items (accessories; bags; cosmetics; shoes), and 117 food items (fast food; healthy savory food; raw fruits and vegetables; sweets). We chose categories broad enough to reflect the interests of both adolescents and adults (the stimuli are available by contacting the corresponding author). The picture stimuli either originated from validated stimulus sets (Brunyé et al., 2013) (http://ase.tufts.edu/psychology/spacelab/pubs.htm; http://cvcl.mit.edu/MM/) or were freely available online. All pictures were postprocessed with Adobe Photoshop (version CS5.1, Adobe Systems). To make the stimulus set more homogeneous, we replaced the backgrounds with a white background, removed writing such as brand names from the objects, and saved all images at a resolution of 500 × 500 pixels (5.208 × 5.208 inches).

fMRI task stimuli

For the fMRI experiment, we randomly chose 49 activity, 30 fashion, and 41 food items (120 items in total) from the survey item pool, sampling approximately equal numbers of items from each subcategory. These were divided into three equivalent item sets of 40 items each. We subsequently assigned each of the three adult and adolescent preference profiles to one of the three item sets. This way, adolescents and adults saw the same items during the fMRI experiment but rated different individuals who were part of their own peer group. The overall preference distribution did not differ between the groups (overall: F(1,119) = 1.066, p = 0.727). Furthermore, variances of preference ratings for each category across the three profiles did not differ between the groups (activities: F(1,2) = 1.142, p = 0.934; fashion: F(1,2) = 0.058, p = 0.109; food: F(1,2) = 0.359, p = 0.528).

Experimental design

We first acquired preference profiles of adolescents and adults with an online survey (see Preference survey). The survey asked for a person's preference for a number of activity, fashion, and food items depicted by pictures. In the following fMRI study, participants who had not previously completed the survey were asked to infer the preferences of 3 people from their peer group. Following their rating for an item, they received trial-by-trial feedback about the other's actual preference rating (i.e., rating outcomes). The task consisted of three functional runs (one run for each person's preference ratings). One run lasted for ∼9 min 30 s (total task duration was ∼30 min).

fMRI experiment

First, participants read a short task introduction while in the magnet. It stated that they were going to judge how much a person likes certain items and that they should provide the answer by choosing a number on a 10 point Likert scale from 1 (not at all) to 10 (very much). After rating each item, participants would see the person's actual rating. They were also instructed to memorize what people liked or disliked, but they were not given any specific memory strategies. Thereafter, participants practiced making the choices using a cylinder button box. They pressed buttons with either the index or middle finger to slide up and down the scale and selected their response by pressing the confirmation button with their thumb. Once they confirmed, a red box lit up around the number on the screen, indicating their final response. The confirmation button had to be pressed for the response to be included in the subsequent data analysis. The overall frequency of missing responses did not differ between adults and the remaining adolescent sample (adults: 3.4 ± 3.4; range: 0–14; adolescents: 3.3 ± 3.3; range: 0–12; t(43) = 0.05, p = 0.962).

Before each run, a short vignette introduced participants to the person for whom they would subsequently rate the items. The vignette contained name, age, and profession of the adult (e.g., “Mathew is 25 years old. He is a third-year medical student”) or adolescent person (e.g., “Lisa is 14 years old. She is in middle school in the ninth grade”). The rating outcomes were the true ratings of people who had participated in an initial preference survey (for more information please refer to the preference survey section). The names of the selected individuals were slightly changed to deidentify them. To ensure similar task difficulty for adults and adolescents and increase the task's resemblance to real life scenarios, participants were asked to infer preferences of individuals from their peer group: adults judged adult profiles and adolescents judged adolescent profiles (Fig. 1A). The vignettes for both age groups consisted of the same number of characters (t(4) = −1.21, p = 0.292) and did not significantly differ in their usefulness to adults and adolescents (as indicated by a lack of group difference in the three initial PEs; t(43) = −0.831, p = 0.4101).

Figure 1.

Preference task. A, Before each run, participants were introduced to the person whose preferences they would subsequently rate. Adults and adolescents rated preferences of persons from their own peer group on a 10 point Likert scale (1 = “not at all” to 10 “very much”; rating phase) and received trial-by-trial feedback about the person's actual rating for the item (feedback). B, After the preference task, participants rated their OPs for the same and similar additional items on the same rating scale. C, The reinforcement learning (RL) framework in the context of the preference task. The combination model, which best described participants' ratings in both groups, assumes that ratings rely on RL and participants' OPs for the items. γ, free parameter, which formalizes the assumption that participants use a weighted combination of RL and their OPs to predict others' preferences.

On each task trial, participants were asked how much the person to whom they were introduced liked a particular item. They rated the item on a 10 point Likert scale, as described above (i.e., rating phase; 5 s). After a jittered interval (1–5 s), they saw the person's rating for the item (outcome phase; 2 s). Participants judged 40 items per run (120 across the 3 runs). Pictures of items did not repeat. Participants could, however, take the person's rating (i.e., feedback) into account when judging the next similar item. For instance, to judge the other's preference for apples, they could take into account how much the other preferred oranges on a previous trial (for a detailed description of the models used to describe participants' ratings, see Computational models).

Post-fMRI assessments

After participants completed the fMRI task, they were asked to describe the individuals based on what they had learned from the task. We chose an open answer format to probe whether participants were able to form an impression based on a person's preferences for specific items. We summarized the attributes participants used to describe each of the 3 persons in word clouds (Fig. 2B). We also investigated whether participants formed a social impression of each person by generalizing from the items presented to our predefined categories and even further to character traits. Two raters, unfamiliar with the objective of this study, coded the frequency of classifications (i.e., categories, subcategories, and personality inferences) in both adult and adolescent groups (on average, raters agreed on 84.6% of their classifications). We computed and compared between-group differences for classifications on which both raters agreed. Neither the overall frequency of classifications nor the specific classification types differed between groups (χ²(1, N = 184) = 1.39, p = 0.238; χ²(10, N = 184) = 13.48, p = 0.198; Fig. 2A). Last, participants completed the same preference survey as the initial survey respondents whose profiles we used in the fMRI task (Fig. 1B; see Preference survey). We assessed participants' OPs after the scanner task to avoid priming them toward their OPs when judging those of their peers. Participants did not know beforehand that they would be asked for their OPs in this study. We also tested whether adolescents had more rigid preferences for the fMRI task items compared with adults. This was not the case: the variance of OP ratings for the three categories (activities, fashion, and food) did not differ between adults and adolescents (Box's M = 2.419, F(6,44080) = 0.386, p = 0.888).

Figure 2.

Free recollection of preference profiles in the adult and adolescent groups. A, Frequencies of predefined categories, subcategories, and personality trait inferences mentioned in the adult and adolescent groups. Both groups provided a similar amount of general descriptions of the other persons in a free recall task after the fMRI experiment. B, Word clouds represent the descriptions of adults and adolescents for the rated persons.

Social skill assessment in the adolescent group

For most adolescents (N = 21), we were able to obtain an additional, independent assessment of social skills. In a previous unrelated study, parents had completed the first (N = 5; Constantino and Gruber, 2005) or second (N = 16; Constantino and Gruber, 2012) edition of the SRS. For 4- to 18-year-olds, the SRS-2 comprises exactly the same item set as the SRS. The SRS measures the presence of impairments in reciprocal social behaviors typically associated with autism spectrum disorder and has been validated as a measure of autistic traits in both typical and autism spectrum disorder samples (Payakachat et al., 2012). All adolescents in this sample scored in the normal, nonautistic range (i.e., their total SRS T-score was <59; 44.5 ± 4.5; range: 37–53). Note that even in this typically developing sample, higher scores indicate lower levels of social traits.

Behavioral data analysis

On the fMRI task, the difference between a participant's rating and a person's actual preference can deviate in the positive or negative direction. Overall accuracy was thus defined as the average of absolute PEs (i.e., the absolute differences between participants' ratings and the feedback they subsequently received), a measure that is independent of the direction of deviation.
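As a minimal sketch of this accuracy measure, with hypothetical rating and feedback vectors on the 10 point scale:

```python
# Accuracy as the mean absolute prediction error (PE).
# The rating and feedback vectors below are hypothetical examples.
def mean_absolute_pe(ratings, feedback):
    """Average of |rating - feedback| across trials."""
    errors = [abs(r - f) for r, f in zip(ratings, feedback)]
    return sum(errors) / len(errors)

ratings = [5, 7, 3, 8]    # participant's predictions
feedback = [6, 4, 3, 10]  # the other person's actual ratings
print(mean_absolute_pe(ratings, feedback))  # → 1.5
```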

We first identified the computational models that best described the behavior of adolescents and adults (see Computational models). Second, we investigated whether the free parameters of the winning model (e.g., learning rates) differed between adolescents and adults. Finally, to further specify individual differences and the developmental trajectory of social learning in adolescence, we performed a hierarchical multiple regression to test linear and quadratic effects of age (age in days converted to age in years, retained to five decimal places) and SRS scores on the free parameters of the winning model. The variance of the behavioral variables of interest, such as PEs and parameter estimates, did not differ between groups. We assessed whether variables were normally distributed using the Kolmogorov–Smirnov test. Nonparametric group comparisons were performed for non-normally distributed variables.

Computational models

RL models have been instrumental in revealing behavioral and neural mechanisms of reward-based learning in animals and humans. Here we applied the RL framework to the social domain. Specifically, we probed the performance of various RL models and nonlearning models to describe developmental differences in learning about another person's preferences.

Model space

Main models.

We tested three main models. Model 1 (no-learning) assumes that participants do not learn about the others' preferences over the course of the experiment. Specifically, Model 1 assumes that participants perform a simple linear transformation of their OPs to predict the preferences of the other persons:

ER = β0 + β1 × OP,

where ER indicates the estimated ratings of the other persons and OP indicates participants' own preferences. As a crosscheck, we estimated this model using the MATLAB function regress, which gave the same results as our own implementation using fminsearch.
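Because Model 1 is a simple linear regression of the others' ratings on participants' OPs, it can be sketched in closed form. The data below are hypothetical, and this single-predictor version is only a stand-in for the MATLAB regress/fminsearch estimation described above:

```python
# Model 1 (no-learning) sketch: estimated rating ER = b0 + b1 * OP,
# fit by ordinary least squares. Data are hypothetical.
def fit_linear(op, other):
    """Closed-form simple linear regression of the other's ratings on OPs."""
    n = len(op)
    mean_x = sum(op) / n
    mean_y = sum(other) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(op, other))
    var = sum((x - mean_x) ** 2 for x in op)
    b1 = cov / var
    b0 = mean_y - b1 * mean_x
    return b0, b1

op = [2, 4, 6, 8]     # own preferences
other = [3, 5, 7, 9]  # other's ratings (here exactly OP + 1)
print(fit_linear(op, other))  # → (1.0, 1.0)
```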

Model 2 (RL ratings) applied a variant of a Rescorla–Wagner RL rule to our task. Participants are assumed to adjust their upcoming rating of the other person's preference, ER_{t+1} (for a given subcategory of items), on the basis of their current estimate (ER_t) and the PE between this current estimate and the current feedback (F_t), which is the other person's actual preference rating. The PE is weighted by the learning rate α, which is a free parameter:

ER_{t+1} = ER_t + α × PE_t,

with

PE_t = F_t − ER_t.

At the first occurrence of an item from a new subcategory, participants cannot infer the estimated rating from past experience. In these instances, we initialized ER_t to the midpoint of the scale (5.5), which can be regarded as an uninformative prior.

Model 3 (combination) combines the logic of the former two models (Fig. 1C). Participants are assumed to update their estimates of the other persons' ratings as in Model 2. At the same time, participants are assumed to use their OPs for the current item to infer those of the other persons as in Model 1. Specifically, Model 3 includes the free weighting parameter γ, which formalizes the assumption that participants use a weighted combination of RL and their OPs to predict the others' preferences:

ER_{t+1} = γ × (ER_t + α × PE_t) + (1 − γ) × OP_{t+1}.

Again, ER_t was initialized to 5.5 for the first item of a new subcategory.
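A one-trial sketch of the update rules in Models 2 and 3 (with γ = 1 the combination model reduces to pure RL; all parameter and rating values are hypothetical):

```python
# One trial of the combination model (Model 3); gamma = 1 gives Model 2.
def combination_update(er, feedback, op_next, alpha, gamma):
    """Weighted combination of a Rescorla-Wagner update and own preference."""
    pe = feedback - er    # prediction error
    rl = er + alpha * pe  # Rescorla-Wagner step (Model 2)
    return gamma * rl + (1 - gamma) * op_next

er = 5.5        # uninformative prior: midpoint of the 10 point scale
feedback = 9.0  # other person's actual rating
op_next = 3.0   # own preference for the upcoming item
print(combination_update(er, feedback, op_next, alpha=0.5, gamma=1.0))  # → 7.25
print(combination_update(er, feedback, op_next, alpha=0.5, gamma=0.5))  # → 5.125
```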

Supplemental models.

In addition to these three main models, we explored additional models, in particular plausible extensions of the combination model (Model 3), which we found to be the best-fitting model among the main models. Model 4 (RL-self-other-diff) assumes that participants use RL to update estimates of the differences (DIFF) between the other persons' ratings and their OPs. On each trial, participants use their current estimate of this difference, DIFF_t, and their OP for the current item to infer the estimated rating of the other person. This contrasts with Models 2 and 3, in which participants are assumed to directly update the estimates of the other persons' ratings (regardless of the difference from their OPs):

ER_t = OP_t + DIFF_t,

with

DIFF_{t+1} = DIFF_t + α × PE_t

and

PE_t = F_t − ER_t.

Similar to Models 2 and 3, DIFF_t was initialized to 0 for the first item of a new subcategory (and additionally for each category).
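A one-trial sketch of Model 4's difference-learning scheme, with hypothetical values:

```python
# Model 4 (RL-self-other-diff) sketch: learn the self-other difference
# DIFF rather than the other's rating directly. Values are hypothetical.
def diff_model_trial(diff, op, feedback, alpha):
    """One trial: predict from OP + DIFF, then update DIFF via the PE."""
    er = op + diff      # predicted rating of the other person
    pe = feedback - er  # prediction error
    return er, diff + alpha * pe

er, diff = diff_model_trial(diff=0.0, op=4.0, feedback=8.0, alpha=0.5)
print(er, diff)  # → 4.0 2.0
```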

Model 5 (RL ratings α-cat) was an extension of Model 2 (RL ratings), with the only difference being that the model included three free parameters to capture different learning rates αactivities, αfashion, and αfood for the three main categories of items.

A similar logic was incorporated in Models 6, 7, and 8. These models extended Model 3 (combination) to account for potential differences between item categories. Model 6 (Combination-α-cat) allowed for different learning rates αactivities, αfashion, and αfood but included a single weighting parameter γ. Model 7 (Combination-γ-cat) allowed for different weighting parameters γactivities, γfashion, and γfood but had a single learning rate α. Model 8 (Combination-α-γ-cat) incorporated both category-specific learning rates and category-specific weighting parameters. Models 9 and 10 extend Models 2 and 3 with a decay parameter, d. This decay parameter accounts for the possibility that participants forget about subcategories for which they did not receive information on the immediately preceding trial. Subcategory information (SI) was decayed toward the initial ER of 5.5 according to the following rule:

ER_{t+1} = ER_t + d × (5.5 − ER_t).

Model 9 is a "decay RL ratings model" with two free parameters, α and d. Model 10 is a "decay combination model" with three free parameters: α, γ, and d.
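The decay step for unrefreshed subcategory estimates can be sketched as below; the exact functional form (exponential relaxation toward the 5.5 prior at rate d) is an assumption consistent with the description above:

```python
# Decay of a subcategory estimate toward the 5.5 prior (Models 9 and 10).
# Applied when a subcategory received no feedback on the preceding trial;
# the relaxation form is an assumed reading of the text.
def decay(er, d):
    """Relax the estimate toward the 5.5 midpoint at rate d."""
    return er + d * (5.5 - er)

print(decay(9.0, 0.5))  # → 7.25
print(decay(5.5, 0.5))  # → 5.5 (the prior is a fixed point)
```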

Model 3 assumes one overall α and one overall γ for all three profiles (i.e., all three runs) and for all categories. Models 11, 12, and 13 extend Model 3 such that the free parameters α and/or γ are allowed to vary by preference profile (i.e., run). In Model 11, the learning rates α are estimated separately for each of the three profiles, that is, the three persons (αperson 1, αperson 2, and αperson 3) for whom participants made preference inferences, with only one weighting parameter γ. Model 12 allowed for separate weighting parameters (γperson 1, γperson 2, and γperson 3) but only one learning rate α. In Model 13, both α and γ were estimated separately for each of the three persons' preference profiles.

Testing initialization of estimated ratings

In Models 2 and 3, we initialized ER_t to the midpoint of the scale (5.5) for the first item of a new subcategory. We additionally tested whether initializing these values of ER_t to participants' OPs resulted in a better fit. For this, we compared two model families with each other (Rigoux et al., 2014). The first family comprised Models 2 and 3 with ER_t initialized to 5.5, whereas the second comprised Models 2 and 3 with ER_t initialized to OP. This model comparison showed that initializing to 5.5 resulted in a better model fit (see Fig. 3B).

Figure 3.

Model comparisons. A, Fixed-effect comparisons of main and supplemental models reveal that the combination model is the best-fitting model in the adult and adolescent groups. That is, the difference in relative log-group Bayes factor (BF) between the combination model and next-best model is >3 for both groups. BF is calculated with respect to the No-learning model (bars for the poorly fitting RL models partly exceed the right-hand scale of the graph). Bounding model parameters between 0 and 1 did not change the results of the model comparison. B, Family random-effect analysis compares the model fit of the combination and RL models when initialized to 5.5 for the first item of a new subcategory versus participants' OPs. The exceedance probability plots show that the 5.5 initialization provides a better fit in both adult and adolescent groups. TEENS, Adolescents; ADU, adults.

Model estimation and comparison

We used linear least-squares estimation to determine best-fitting model parameters. Optimization used a nonlinear Nelder–Mead simplex search algorithm (implemented in the MATLAB function fminsearch) to minimize the sum of squared errors of prediction over all trials for each participant. The maximum number of iterations allowed was set to 10^7. All other tolerances and stopping criteria were kept at default values. Each parameter was initialized at 0.5. All model estimations converged, and estimations converged to the same values when exploring different initializing parameters. We additionally reran the models with constrained parameter ranges and obtained an identical pattern of results. For each model and each participant, we approximated model evidence by calculating the Bayesian Information Criterion (BIC) according to the standard formula for least-squares fits:

BIC = n × ln(SSE/n) + k × ln(n),

where n is the number of trials, k is the number of free parameters in the model, and SSE is the sum of squared errors of prediction. The k term penalizes model complexity. We report both fixed- and random-effects model comparisons. For fixed-effects analyses, we computed log-group Bayes factors by summing BIC values for each tested model across participants and then subtracting the value of the reference model (Model 1). According to the convention used here, smaller log-group Bayes factors indicate more evidence for the respective model versus the reference model. For random-effects analyses, we used the Bayesian Model Selection (BMS) procedure implemented in the MATLAB toolbox SPM12 (http://www.fil.ion.ucl.ac.uk/spm/; spm_BMS) to calculate protected exceedance probabilities, which measure how likely it is that any given model is more frequent than all other models in the population. This procedure has been established for comparing models of functional connectivity in fMRI studies and can equally be applied to comparing models of participants' behavior (Stephan et al., 2010; Rigoux et al., 2014; Vossel et al., 2014; Korn et al., 2016).
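A sketch of the BIC comparison for least-squares fits; the SSE values and trial count below are hypothetical:

```python
# BIC for a least-squares model: n * ln(SSE/n) + k * ln(n); smaller is better.
import math

def bic(sse, n, k):
    """Bayesian Information Criterion from the sum of squared errors."""
    return n * math.log(sse / n) + k * math.log(n)

# Hypothetical comparison on n = 120 trials: a 2-parameter model with
# slightly larger error vs a 3-parameter model. The extra parameter must
# buy enough error reduction to offset its complexity penalty.
print(bic(sse=250.0, n=120, k=2) < bic(sse=245.0, n=120, k=3))  # → True
```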

Simulations of noise in participants' ratings

The underlying assumption of our task and thus of the computational models is that participants performing the task have a similar representation of the scale as the person who completed the preference survey. To test whether model estimates are robust when assuming noisy rating choices, we performed two simulations, which show that the model-derived estimates are robust against random noise.

First, we added five different levels of normally distributed random noise to the model estimates of adults and adolescents. The distributions had a mean of 0 with a varying SD between 1 and 2 in 0.25 increments. For each noise level, we generated 100 different random variables and added these to the choices predicted by the winning combination model (Model 3, see Results). In subsequent model comparisons, we recovered the learning rates from these noisy estimates. Averaged parameter estimates using the noisy model-estimated ratings did not differ from the noise-free parameter estimates (adults' model estimates: t(20) = −0.237, p = 0.815; adolescents' model estimates: t(23) = 0.092, p = 0.928).

Second, we performed the same type of simulations, but this time we added additional noise to participants' actual ratings. We fitted the winning combination model using these noisy data. Averaged parameter estimates using noisy actual ratings also corresponded to the noise-free parameter estimates reported in Results (adults' actual ratings: t(20) = −1.237, p = 0.231; adolescents' actual ratings: t(23) = −0.457, p = 0.652).
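The logic of both robustness checks can be illustrated with a simplified one-parameter analog: Gaussian noise of increasing SD (1 to 2 in 0.25 steps, as above) is added to noise-free simulated ratings, the model is refit at each level, and the recovered learning rates are compared with the noise-free fit. The RL model, trial counts, and repetition count (20 here rather than 100) are illustrative assumptions.

```python
# Hedged sketch of the noise-robustness simulations: refit a simple RL model
# on ratings corrupted with increasing Gaussian noise and compare learning rates.
import numpy as np
from scipy.optimize import minimize_scalar

def fit_alpha(ratings, feedback):
    """Least-squares learning rate for a one-parameter Rescorla-Wagner model."""
    def sse(alpha):
        v, err = 5.5, 0.0
        for r, f in zip(ratings, feedback):
            err += (r - v) ** 2
            v += alpha * (f - v)
        return err
    return minimize_scalar(sse, bounds=(0.0, 1.0), method="bounded").x

rng = np.random.default_rng(1)
feedback = rng.uniform(1, 10, 80)
true_alpha, v = 0.4, 5.5
clean = []
for f in feedback:            # generate noise-free model ratings
    clean.append(v)
    v += true_alpha * (f - v)
clean = np.array(clean)

alpha_clean = fit_alpha(clean, feedback)
noisy_means = []
for sd in np.arange(1.0, 2.25, 0.25):     # five noise levels, as in the text
    fits = [fit_alpha(clean + rng.normal(0, sd, clean.size), feedback)
            for _ in range(20)]           # the paper used 100 repetitions
    noisy_means.append(float(np.mean(fits)))
```

Averaged over repetitions, the recovered learning rates scatter around the noise-free estimate at every noise level, consistent with the robustness reported above.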

fMRI data acquisition

Images were collected at the Yale University Magnetic Resonance Research Center on a Siemens 3T Tim Trio scanner equipped with a 12-channel head coil. Whole-brain T1-weighted anatomical images were acquired using an MPRAGE sequence (TR = 2530 ms; TE = 3.31 ms; flip angle = 7°; FOV = 256 mm; image matrix = 256 mm²; voxel size = 1 mm³; 176 slices). Field maps were acquired with a double-echo gradient-echo field map sequence, using 51 slices covering the whole head (TR = 731 ms; TE = 4.92 and 7.38 ms; flip angle = 50°; FOV = 210 mm; image matrix = 84 mm²; voxel size = 2.5 mm³). Data for the experimental paradigm were acquired in three runs of 285 volumes each (TR = 2000 ms; TE = 25 ms; flip angle = 60°; FOV = 220 mm; image matrix = 64 mm²; voxel size = 3.4 × 3.4 × 4 mm³; 34 slices). The first five volumes of each run were discarded to obtain a steady-state longitudinal magnetization.

fMRI data analysis: preprocessing

fMRI data processing was conducted using FEAT (fMRI Expert Analysis Tool) version 6.00 of FSL. The fsl_prepare_field_map tool was used to correct for geometric distortions caused by susceptibility-induced field inhomogeneities. Further preprocessing steps included motion correction using MCFLIRT (Jenkinson et al., 2002), slice-timing correction, nonbrain removal using BET (Smith, 2004), spatial smoothing using a Gaussian kernel of FWHM 5 mm, and high-pass temporal filtering. Images were registered to the high-resolution structural image and to the MNI template using FLIRT (fMRIB's Linear Registration Tool) (Jenkinson and Smith, 2001; Jenkinson et al., 2002). Because of significant group differences in mean ± SD absolute displacement (adults: 0.26 ± 0.14 mm; adolescents: 0.51 ± 0.42 mm; Mann–Whitney U = 128.5, p = 0.005), group analyses additionally comprised this variable as a covariate of no interest.

fMRI data analysis: statistical model

The GLM for each participant included, for each of the three runs, two regressors for the distinct phases of the task: the rating and feedback phases. Our main question was how brain activity during these phases was modulated, on a trial-by-trial basis, by parameters derived from the winning behavioral model. Evidence that model parameters were reflected in brain activity would provide biological validity for the computational model of participants' behavioral responses. To address this question, we entered only the model-estimated ratings and PEs, and not participants' actual ratings and PEs, as parametric regressors into the GLM. That is, we investigated whether model-derived ratings correlated on a trial-by-trial basis with brain activity in rating phases, and PEs with brain activity in feedback phases. We also included participants' OP ratings and the received feedback as parametric regressors (these two metrics constitute the inputs to the winning model). OP ratings were additionally included as parametric modulators in rating phases. Feedback, displayed as integers, was entered as a control variable to account for variance explained by seeing different numbers on the screen, preventing such variance from being erroneously assigned to the PE regressor. Finally, the six head motion parameters obtained after realignment were entered into the model as additional regressors of no interest.
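Conceptually, a parametric regressor of the kind described above pairs a main-effect regressor for the task phase with a second column carrying the mean-centered trial-by-trial model estimates, both convolved with a hemodynamic response function. The sketch below illustrates this construction; it is not FSL FEAT's actual implementation, and the TR, onsets, HRF shape, and modulator values are made-up assumptions.

```python
# Conceptual sketch: main-effect boxcar + mean-centered parametric modulator,
# each convolved with a double-gamma HRF, as columns of a GLM design matrix.
import numpy as np
from scipy.stats import gamma

TR, n_scans = 2.0, 285                     # run length as in the acquisition above
t = np.arange(0, 30, TR)
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0   # canonical double-gamma HRF

onsets = np.arange(5, 280, 7)              # hypothetical rating-phase onsets (scans)
values = np.random.default_rng(0).uniform(1, 10, onsets.size)  # e.g. model-estimated ratings

box = np.zeros(n_scans)
box[onsets] = 1.0                          # main effect of the rating phase
pmod = np.zeros(n_scans)
pmod[onsets] = values - values.mean()      # mean-center the parametric modulator

main = np.convolve(box, hrf)[:n_scans]
parametric = np.convolve(pmod, hrf)[:n_scans]  # trial-by-trial modulation
X = np.column_stack([main, parametric, np.ones(n_scans)])  # design matrix + intercept
```

Mean-centering the modulator keeps the main-effect column interpretable as the average phase response, while the parametric column captures only trial-by-trial deviations.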

We did not orthogonalize regressors, to avoid allocating the shared variance to only one of the regressors (Mumford et al., 2015). Correlations between parametric regressors for both groups are reported in Table 1.

Table 1. Correlation between parametric regressors in fMRI analysis

The runs were combined in a second-level within-subject analysis, and at the group level we performed mixed-effects analyses on the contrast images using FLAME (fMRIB's Local Analysis of Mixed Effects, Stages 1 and 2, with automatic outlier detection and deweighting) (Beckmann et al., 2003; Woolrich, 2008). We ran two group analyses. One analysis addressed group differences (adults vs adolescents) in the strength of correlations of ratings, PEs, and OPs with brain activity in rating and feedback phases. The second group analysis investigated individual differences across adolescence. We tested whether age and social reciprocal behavior, measured with the SRS, modulated brain activity related to ratings, PEs, and OPs in the adolescent group. As in the behavioral analyses, we tested linear and quadratic effects of age (calculated from age in days and expressed in years as a continuous variable with five decimals) and of SRS scores, including both as linear and quadratic regressors. The age-squared and SRS-squared vectors were orthogonalized to the linear age and SRS vectors, respectively.
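The orthogonalization of the quadratic regressor to the linear one can be sketched as residualizing the squared term against the linear term (plus an intercept), so that variance shared between the two stays with the linear age regressor. The ages below are synthetic and purely illustrative.

```python
# Minimal sketch: orthogonalize a quadratic age regressor to the linear one
# by regressing it out and keeping the residual.
import numpy as np

rng = np.random.default_rng(0)
age = rng.uniform(13, 18, 24)                 # hypothetical adolescent ages
lin = age - age.mean()                        # mean-centered linear age
quad = lin ** 2                               # quadratic age term

A = np.column_stack([np.ones_like(lin), lin]) # intercept + linear age
beta, *_ = np.linalg.lstsq(A, quad, rcond=None)
quad_orth = quad - A @ beta                   # residual: orthogonal to linear age
```

By construction the residual has zero correlation with the linear regressor, so any effect it captures is uniquely quadratic.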

In light of recent results by Eklund et al. (2016) showing that whole-brain cluster correction for multiple comparisons at a p < 0.05 threshold is unacceptably prone to producing false-positive results, new quality standards have emerged demanding more conservative significance levels and more thorough statistical reporting (Kessler et al., 2017; Nichols et al., 2017). In line with these recent guidelines, we report clusters of maximally activated voxels that survived family-wise error correction for multiple comparisons at a statistical threshold of p < 0.001 and z = 2.3.

Results

Behavior and main computational models

In line with our first hypothesis, we found that adolescents made less accurate predictions about the other persons' preferences than adults (i.e., they had higher average PEs throughout the experiment) (Fig. 4). In both groups, PEs diminished over time, and both groups were similarly able to generalize from the persons' preferences for individual items to predefined categories and even to form an impression of the persons' character traits (as revealed by postscan questionnaires; see Materials and Methods; Fig. 2A).

Figure 4.

A, Mean PEs decreased over time in both the adult and adolescent groups. Pearson's correlation coefficients were r = −0.330, p = 0.0376 in the adult group and r = −0.377, p = 1.65 × 10⁻² in the adolescent group. B, Adults had significantly lower mean PEs than adolescents (t(43) = −4.037, p = 2.19 × 10⁻⁴). TEENS, Adolescents; ADU, adults. Asterisk indicates a significance level of p < 0.01.

Crucially, we compared the suitability of a baseline nonlearning regression model (Model 1), a pure RL model (Model 2), and a “combination model” (Model 3) in describing participants' changes in predictions over time (Fig. 5C). In both groups, the winning combination model (Model 3) included two components: RL based on past feedback and participants' OPs for the item at hand. The two model parameters, the learning rate α and the weighting parameter γ, which formalizes the trade-off between RL and participants' OPs, were not significantly correlated within groups (adults: r = −0.06, p = 0.811; adolescents: r = 0.182, p = 0.395) or over all participants (r = 0.131; p = 0.393).

Figure 5.

Model comparison. A, Gamma parameters encoding the trade-off between RL and OP did not significantly differ between groups (Mann–Whitney U = 234, N1 = 21, N2 = 24, p = 0.682). B, Adolescents had lower learning rates (t(43) = −4.22, p = 2.44 × 10⁻³, Bonferroni corrected). Asterisk indicates a significance level of p < 0.01. C, Random-effects model comparison (using BIC) showed that the evidence for the combination model was higher in both groups. The combination model or a variant thereof also emerged as the winner when considering a larger model space (see Materials and Methods). A random-effects group analysis compared adults and adolescents using the same model (both Combination or both Simple-RL) versus different models (one group Combination and the other Simple-RL). The exceedance probabilities provided conclusive evidence for the groups using the same rather than different models. D, Regression analyses revealed a quadratic relationship between participants' ages and learning rates. Dashed confidence bands indicate the 95% CIs for the fitted regression line. The age regressor, calculated from days, is a continuous variable with 5 decimals. In the figure axes, age is rounded to years. TEENS, Adolescents; ADU, adults.

Contrary to our second hypothesis, the groups did not differ in their tendency to base their predictions on RL versus their OPs (as determined by a lack of evidence for a difference in the weighting parameter γ; Fig. 5A) and a lack of group difference in the optimal model initialization (5.5 vs OP; Fig. 3B). Surprisingly, however, adolescents were less flexible than adults when making social predictions as indicated by lower learning rates (i.e., a reduced tendency to update based on immediate feedback; Fig. 5B). Notably, individual differences in learning rates indicated ongoing social development across adolescence: Learning rates decreased from early to mid-adolescence and increased thereafter. This quadratic effect was significant after controlling for the linear effect (Table 2; Fig. 5D).

Table 2. Hierarchical multiple regression testing the linear and nonlinear relationships of learning rates from the winning model with age and social traits in the adolescent group
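The hierarchical regression behind the quadratic age effect can be sketched as fitting learning rates on linear age first and then adding the quadratic term and checking the gain in explained variance. The data below are synthetic, with a U-shape built in around mid-adolescence purely for illustration.

```python
# Sketch of a hierarchical (linear, then linear + quadratic) regression of
# learning rates on age, comparing R-squared between the two models.
import numpy as np

rng = np.random.default_rng(2)
age = rng.uniform(13, 18, 24)                                  # hypothetical ages
lr = 0.1 + 0.02 * (age - 15.5) ** 2 + rng.normal(0, 0.01, 24)  # synthetic U-shape

def r2(X, y):
    """Coefficient of determination for an OLS fit (intercept in X)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

ones = np.ones_like(age)
r2_lin = r2(np.column_stack([ones, age]), lr)            # step 1: linear age only
r2_quad = r2(np.column_stack([ones, age, age ** 2]), lr) # step 2: add quadratic
```

A substantial R² increase from step 1 to step 2, as here, is the signature of a quadratic effect that survives after controlling for the linear trend.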

Extended model comparison

A more comprehensive model comparison of 13 models showed clearly that, in both age groups, variants of the combination models outperformed the other model types considered here (Fig. 3A, blue bars). Specifically, in the adult group, the simplest variant of the combination model, with one free parameter each for the learning rate and the weighting parameter (Model 3), best fitted participants' choices. In the adolescent group, a variant of the combination model with separate weighting parameters for each of the three preference profiles (Model 12) best described participants' choices according to a fixed-effects analysis. The simplest combination model variant (Model 3) performed second-best in adolescents. Adolescents' learning rates estimated by Model 12 did not significantly differ from those estimated by Model 3 (t(23) = 0.224, p = 0.824) and were significantly lower than the learning rates of adults according to Model 12 (t(43) = 4.29, p = 1.86 × 10⁻⁴). A subsequent random-effects group comparison of Model 12 versus the simpler Model 3 did not yield conclusive evidence that adolescents indeed use a different variant of the combination model than the adult group: the probability of the age groups using different variants of the combination model was 0.5, and thus at chance level. For stringent comparability across both age groups, we investigated whether and how model parameters estimated by the simplest combination model (Model 3) are encoded in brain activity of both age groups.

Behavioral control analyses

Our behavioral result that adolescents showed more conservative updating than adults held up against plausible alternative explanations. First, group differences in learning rates could arise if the adult preference profiles were easier to infer than the adolescent profiles. This was not the case: we tested how a nonsocial learning algorithm (a simple RL model) performs when predicting the adult and adolescent profiles (in the presented trial order for each participant). This model adjusts upcoming preference ratings according to the veridical feedback about the other's preference, independent of participants' responses or other factors, such as their OPs. In our simulations, we found that adolescents and adults had an equal opportunity to learn on the task. There were no group differences in learning rates for adult and adolescent profiles (χ²(1, N = 43) = 0.44, p = 0.509), demonstrating that an RL model per se can capture adult and adolescent profiles equally well and that pure RL about adults and adolescents on this task results in similar updating behavior.

Another possible explanation for the observed group differences in learning rates is that adolescents and adults differed in the extent to which they remembered the person's feedback on previous trials. To test this hypothesis, we modified the simple RL model and the combination model to include “forgetting” of the learned estimates (for a similar approach in the nonsocial domain, see Niv et al., 2015). The models accounted for forgetting by introducing a decay parameter. For each trial in which an item did not belong to a given subcategory, the decay parameter pulled the previously learned estimate of that subcategory toward the noninformative midpoint of the rating scale. Put differently, the decay parameter scales forgetting according to the number of trials that passed without feedback about the other person's preference for the given subcategory (see Materials and Methods). As described in the previous section on extended model comparison, for both adolescent and adult participants, the winning combination model outperformed the models that included the decay parameter, which rules out that forgetting feedback contributed to the observed group differences in learning rates.
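The decay mechanism described above amounts to shrinking a subcategory's learned estimate toward the scale midpoint once per intervening no-feedback trial. A minimal sketch, assuming a 1–10 scale (midpoint 5.5) and an illustrative decay value:

```python
# Hedged sketch of the forgetting variant: on each trial without feedback for
# a subcategory, its estimate decays toward the uninformative midpoint.
def decay_step(estimate, decay, midpoint=5.5):
    """One no-feedback trial: shrink the estimate toward the midpoint."""
    return estimate + decay * (midpoint - estimate)

est = 9.0           # a previously learned subcategory estimate
for _ in range(5):  # five intervening trials from other subcategories
    est = decay_step(est, decay=0.2)
```

After five no-feedback trials the estimate has moved most of the way back toward 5.5, so the decay parameter directly scales how quickly feedback is "forgotten" with elapsed trials.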

Group differences in learning rates also could not be attributed to the adolescent profiles or OPs being more rigid, either overall or in one of the three item categories. Preference profiles and OPs did not differ between adult and adolescent groups for any single category or across categories (see Materials and Methods). Furthermore, we did not find evidence that participants with more stable OPs, regardless of age group, performed better on our task. We found no significant correlations between learning rates and overall variance (adults: r = −0.086, p = 0.711; adolescents: r = 0.044, p = 0.839), or between learning rates and the variance of the three item categories separately, either within groups (adults: activities: r = −0.204, p = 0.374; fashion: r = 0.106, p = 0.648; food: r = −0.166, p = 0.471; adolescents: activities: r = 0.012, p = 0.955; fashion: r = 0.006, p = 0.978; food: r = 0.018, p = 0.978) or across groups (activities: r = −0.129, p = 0.400; fashion: r = 0.090, p = 0.558; food: r = −0.133, p = 0.386).

Finally, we tested whether the fact that participants saw feedback from other people could have biased their OP ratings, which were elicited after the scanner task. Previous studies showed that feedback from peers, mostly from experts or large groups, can affect one's own judgment (Meshi et al., 2012; Izuma and Adolphs, 2013). To exclude that the feedback about the other persons' preferences (received during the scanner task) influenced participants' OPs for the respective items, we calculated how much the other persons' preferences differed from participants' self-ratings. Crucially, we compared these self–other discrepancies between pictures for which participants had received feedback (114 of 120 total task items; 6 items could not be matched) with additional similar items for which they had not received feedback (114 of 244 remaining unseen items; for a complete list of matched pairs, see Table 3). We found no significant differences in either group (adolescents: t(113) = −0.82, p = 0.415; adults: t(113) = −1.19, p = 0.237), suggesting that participants did not shift their OP ratings of previously seen items to more closely match the preference ratings of the other person.

Table 3. Pictures of items seen in the preference task and matched unseen picture items

Brain systems for social learning

Our fMRI analyses investigated whether group differences in the extent to which parameters of the winning combination model are encoded in brain activity reflect the observed behavioral differences. We were particularly interested in potential developmental differences, that is, whether model-estimated predictions, PEs, and OPs were encoded to a greater extent in brain activity of adults versus adolescents and vice versa. Table 4 provides a comprehensive overview of performed analyses.

Table 4. Brain activity correlated with parameters of the winning model

Paralleling the behavioral differences in updating, our fMRI data demonstrated a regional shift in encoding PEs from adolescence to adulthood. Activity in the fusiform cortex correlated more strongly with estimated PEs in adolescents compared with adults (Table 4; Fig. 6B). In contrast, mPFC activity represented the actual predictions (i.e., estimated preference ratings) more strongly in adults than in adolescents (Table 4; Fig. 6A). Age and social traits modulated the relationship between brain activity and PEs in the adolescent group. Activity in the bilateral fusiform cortex was less related to PEs in mid-adolescents (Table 5; Fig. 6C), and brain activity in both fusiform cortex and mPFC was more closely linked to PEs for adolescents with less social traits (Table 5; Fig. 6D).

Figure 6.

Group differences in neural encoding of model-estimated ratings and PE on a trial-by-trial basis. A, Estimated ratings according to the winning model correlated more strongly with brain activity in adults compared with adolescents. B, Estimated PEs correlated more strongly with brain activity in adolescents compared with adults. C, Quadratic relationship between the magnitude to which adolescents encoded PEs in brain activity and age. The age regressor, calculated from days, is a continuous variable with 5 decimals. In the figure axes, age is rounded to years. D, The magnitude to which adolescents' brain activity encoded PEs correlated positively with the SRS total score (higher values indicate less social traits). TEENS, Adolescents; ADU, adults.

Table 5. Relationship between brain activity encoding model-based variables and social traits in the adolescent group

Discussion

We devised a novel, ecologically valid paradigm, in which adults and adolescents rated preferences of persons from their respective age groups. They subsequently received veridical feedback about these persons' actual preferences. Computational model comparison revealed that, of the tested models, the combination strategy best approximated participants' preference inferences across age groups. Participants adjusted ratings based on previous feedback and their OP for the item at hand.

Participants were not encouraged to think about their OPs before or during the task. The fact that they did supports the notion of egocentric processing of social information (i.e., humans typically rely on their OPs and experiences to make inferences about others' mental states) (Gallese and Goldman, 1998; Mitchell, 2009). Contrary to our expectations and previous studies (Lapsley and Murphy, 1985; Frankenberger, 2000), we found that adolescents were not more egocentrically inclined. The extent to which participants relied on their OPs to rate those of others did not differ between the two age groups.

Adolescents, however, were overall less accurate (i.e., they had on average higher absolute PEs) and were more conservative reinforcement learners, as indicated by lower learning rates. Interestingly, Davidow et al. (2016) found that adolescents show more conservative updating behavior (i.e., lower learning rates) in a nonsocial RL task. Because of the probabilistic nature of their task, this led to better learning performance of adolescents compared with adults. In our task, lower learning rates of adolescents were associated with less accurate preference predictions.

Most studies have shown that adolescents' choices are suboptimal in both social and nonsocial domains. In nonsocial reward settings, nonoptimal choices typically arise from greater impulsivity and cognitive flexibility (Galvan et al., 2006; Cohen et al., 2010). This is thought to be caused by an asymmetry in the underlying neurodevelopment: subcortical regions that support reward processes, such as the striatum, mature earlier than regions for cognitive control, in particular the mPFC (Casey et al., 2008; Steinberg, 2008). An emerging literature shows that anatomical and functional development of the mPFC also plays an important role for ongoing social development across adolescence. mPFC development, however, seems to affect social decisions in a different way than nonsocial decisions. In line with our results, studies have found that adolescents are less flexible when switching between their own and another person's perspective (Choudhury et al., 2006) and that social interactions are more effortful, which is why adolescents tend to incur performance deficits compared with adults (Mills et al., 2015).

The U-shaped relationship between age and learning rates on our preference task further suggests that conservative updating may be specific to adolescence. Mid-adolescents showed the most conservative updating behavior. This finding cannot be explained by differences in preference rigidity between the preference profiles or OP ratings of the two age groups: on average, preference distributions for both preference profiles and OPs did not differ between groups. Adolescent-specific performance peaks or valleys have been observed in a number of decision making contexts, ranging from risk taking to reversal learning and feedback processing (van der Schaaf et al., 2011; Casey and Caudle, 2013; Crowley et al., 2014; Jones et al., 2014). These extremes likely reflect nonlinear changes in the neurobiological mechanisms underlying task performance (Giedd et al., 1999; Luciana, 2013; Braams et al., 2015). For instance, longitudinal analyses confirmed that quadratic age patterns for nucleus accumbens activity to rewards coincided with the same quadratic pattern for risk taking in adolescents (Braams et al., 2015).

Consistent with the notion that adolescent neurodevelopment supports changes in social learning, we find that the extent to which variables of the winning learning model are encoded in brain activity differs between age groups. The bilateral fusiform cortex, which showed greater PE sensitivity in adolescents compared with adults, has been repeatedly implicated in encoding PEs in nonsocial and social contexts (Garrison et al., 2013; Gu et al., 2016). The observation of larger PE signals in the adolescent group concurs with previous reports that adolescents' performance is more strongly influenced by worse-than-expected feedback compared with better-than-expected feedback (van Duijvenvoorde et al., 2008; Hauser et al., 2015). In our study, negative feedback constitutes feedback that strongly deviates from the initial rating, thereby producing a large PE. That is, participants' goal was to accurately predict the other persons' preferences; therefore, both higher and lower predictions indicated undesired inaccuracy. In line with the literature, we find ongoing nonlinear neural development across adolescence (Jones et al., 2014). Specifically, PE encoding in the fusiform cortex decreased with adolescents' age, whereby mid-adolescents showed less PE encoding than their younger and older peers. This nonlinear relationship between neural encoding of PEs and age paralleled the U-shaped relationship between adolescents' learning rates and age. The adolescent-specific findings suggest that adolescence, in particular mid-adolescence, may be a unique developmental period with respect to social cognition.

Our approach of combining computational modeling and neuroimaging sensitively detected nonlinear, adolescent-specific, neurodevelopment. We acknowledge the relatively small sample size of adolescents and adults, who were recruited from the Yale University area. We also acknowledge the possibility of untested models to account for social learning about other persons' preferences. While the sample sizes and the model selection strategies are typical for most current studies, future studies should replicate and further fine-tune the behavioral and neural models of social learning in larger and more diverse samples.

Importantly, adolescents who showed increased neural responses to PEs were reported to have less social traits. This correlation could indicate that adolescents with more social traits do not need to rely on PEs as much as adolescents with less social traits. Alternatively, the correlation between neural encodings of PEs and social traits could suggest that overly strong PE sensitivity may sometimes hinder, rather than support, social learning.

Compared with adolescents, adults' neural responses in the mPFC were more closely related to their inferences about others' preferences. This perigenual part of the mPFC has been repeatedly implicated in ToM and strategic decision making, in particular in keeping track of another player's mental state (Behrens et al., 2009; Rosenblau et al., 2016). The fact that activity in the mPFC, which is one of the latest-maturing brain regions, correlated with adults' ratings of another person's mental state, but did not correlate significantly with ratings of adolescents, suggests that the mPFC is more attuned to higher-level social inferences in adults compared with adolescents.

In conclusion, we identify a computational model that describes ongoing development in updating inferences about another person. In contrast to nonsocial reward settings (Cohen et al., 2010), adolescents were overall more conservative; that is, they made smaller updates based on immediate social feedback. Instead, they averaged feedback over a longer time horizon leading them to assume overly rigid preference structures. This developmental change was accompanied by a shift in encoding predictions and the errors thereof within medial prefrontal structures. The fact that neural encoding of OPs and PEs scaled with parent-reported social traits in everyday settings provides external validity for our task design and the computational model. Future studies could profitably apply a similar approach to investigate social decision making of adolescents suffering from psychiatric conditions.

Footnotes

  • This work was supported by the Hilibrand Foundation to K.A.P. and G.R., the Carbonell Family to K.A.P., and National Institute of Mental Health Grant R01 MH100028. C.W.K. was supported by the SFB TRR 169. We thank Allison Jack, Daeyeol Lee, and Yael Niv for valuable discussions; Heidi Tsapelas for help with recruitment; Jessica Reed and Abigail Dutton for assistance with data acquisition; Megan Braconnier and Sebiha Abdullahi for help with data analysis.

  • The authors declare no competing financial interests.

  • Correspondence should be addressed to Dr. Gabriela Rosenblau, Autism and Neurodevelopmental Disorders Institute, George Washington University and Children's National Health System, 2115 G Street NW, Washington, DC 20052. grosenblau{at}gwu.edu

References

  1. Beckmann CF, Jenkinson M, Smith SM (2003) General multilevel linear modeling for group analysis in fMRI. Neuroimage 20:1052–1063. doi:10.1016/S1053-8119(03)00435-X pmid:14568475
  2. Behrens TE, Hunt LT, Woolrich MW, Rushworth MF (2008) Associative learning of social value. Nature 456:245–249. doi:10.1038/nature07538 pmid:19005555
  3. Behrens TE, Hunt LT, Rushworth MF (2009) The computation of social behavior. Science 324:1160–1164. doi:10.1126/science.1169694 pmid:19478175
  4. Blakemore C (2007) Straight talk from Colin Blakemore. Nat Med 13:1125. doi:10.1038/nm1007-1125 pmid:17917643
  5. Blakemore SJ (2010) The developing social brain: implications for education. Neuron 65:744–747. doi:10.1016/j.neuron.2010.03.004 pmid:20346751
  6. Blakemore SJ, Mills KL (2014) Is adolescence a sensitive period for sociocultural processing? Annu Rev Psychol 65:187–207. doi:10.1146/annurev-psych-010213-115202 pmid:24016274
  7. Braams BR, van Duijvenvoorde AC, Peper JS, Crone EA (2015) Longitudinal changes in adolescent risk-taking: a comprehensive study of neural responses to rewards, pubertal development, and risk-taking behavior. J Neurosci 35:7226–7238. doi:10.1523/JNEUROSCI.4764-14.2015 pmid:25948271
  8. Brunyé TT, Hayes JF, Mahoney CR, Gardony AL, Taylor HA, Kanarek RB (2013) Get in my belly: food preferences trigger approach and avoidant postural asymmetries. PLoS One 8:e72432. doi:10.1371/journal.pone.0072432 pmid:24023618
  9. Casey B, Caudle K (2013) The teenage brain: self control. Curr Dir Psychol Sci 22:82–87. doi:10.1177/0963721413480170 pmid:25284961
  10. Casey BJ, Jones RM, Hare TA (2008) The adolescent brain. Ann N Y Acad Sci 1124:111–126. doi:10.1196/annals.1440.010 pmid:18400927
  11. Choudhury S, Blakemore SJ, Charman T (2006) Social cognitive development during adolescence. Soc Cogn Affect Neurosci 1:165–174. doi:10.1093/scan/nsl024 pmid:18985103
  12. Cohen JR, Asarnow RF, Sabb FW, Bilder RM, Bookheimer SY, Knowlton BJ, Poldrack RA (2010) A unique adolescent response to reward prediction errors. Nat Neurosci 13:669–671. doi:10.1038/nn.2558 pmid:20473290
  13. Constantino JN, Gruber CP (2005) Social Responsiveness Scale (SRS). Torrance, CA: Western Psychological Services.
  14. Constantino JN, Gruber CP (2012) Social Responsiveness Scale, Ed 2. Torrance, CA: Western Psychological Services.
  15. Crick NR, Dodge KA (1994) A review and reformulation of social information-processing mechanisms in children's social adjustment. Psychol Bull 115:74–101. doi:10.1037/0033-2909.115.1.74
  16. Crone EA, Dahl RE (2012) Understanding adolescence as a period of social-affective engagement and goal flexibility. Nat Rev Neurosci 13:636–650. doi:10.1038/nrn3313 pmid:22903221
  17. Crowley MJ, van Noordt SJ, Wu J, Hommer RE, South M, Fearon RM, Mayes LC (2014) Reward feedback processing in children and adolescents: medial frontal theta oscillations. Brain Cogn 89:79–89. doi:10.1016/j.bandc.2013.11.011 pmid:24360036
  18. Davidow JY, Foerde K, Galván A, Shohamy D (2016) An upside to reward sensitivity: the hippocampus supports enhanced reinforcement learning in adolescence. Neuron 92:93–99. doi:10.1016/j.neuron.2016.08.031 pmid:27710793
  19. Dayan P, Niv Y (2008) Reinforcement learning: the good, the bad and the ugly. Curr Opin Neurobiol 18:185–196. doi:10.1016/j.conb.2008.08.003 pmid:18708140
  20. Eklund A, Nichols TE, Knutsson H (2016) Cluster failure: why fMRI inferences for spatial extent have inflated false-positive rates. Proc Natl Acad Sci U S A 113:7900–7905. doi:10.1073/pnas.1602413113 pmid:27357684
  21. Frankenberger KD (2000) Adolescent egocentrism: a comparison among adolescents and adults. J Adolesc 23:343–354. doi:10.1006/jado.2000.0319 pmid:10837112
  22. Gallese V, Goldman A (1998) Mirror neurons and the simulation theory of mind-reading. Trends Cogn Sci 2:493–501. doi:10.1016/S1364-6613(98)01262-5 pmid:21227300
  23. Galvan A (2010) Adolescent development of the reward system. Front Hum Neurosci 4:6. doi:10.3389/neuro.09.006.2010 pmid:20179786
  24. Galvan A, Hare TA, Parra CE, Penn J, Voss H, Glover G, Casey BJ (2006) Earlier development of the accumbens relative to orbitofrontal cortex might underlie risk-taking behavior in adolescents. J Neurosci 26:6885–6892. doi:10.1523/JNEUROSCI.1062-06.2006 pmid:16793895
    OpenUrlAbstract/FREE Full Text
  25. ↵
    1. Garrison J,
    2. Erdeniz B,
    3. Done J
    (2013) Prediction error in reinforcement learning: a meta-analysis of neuroimaging studies. Neurosci Biobehav Rev 37:1297–1310. doi:10.1016/j.neubiorev.2013.03.023 pmid:23567522
    OpenUrlCrossRefPubMed
  26. ↵
    1. Garvert MM,
    2. Moutoussis M,
    3. Kurth-Nelson Z,
    4. Behrens TE,
    5. Dolan RJ
    (2015) Learning-induced plasticity in medial prefrontal cortex predicts preference malleability. Neuron 85:418–428. doi:10.1016/j.neuron.2014.12.033 pmid:25611512
    OpenUrlCrossRefPubMed
  27. ↵
    1. Giedd JN,
    2. Blumenthal J,
    3. Jeffries NO,
    4. Castellanos FX,
    5. Liu H,
    6. Zijdenbos A,
    7. Paus T,
    8. Evans AC,
    9. Rapoport JL
    (1999) Brain development during childhood and adolescence: a longitudinal MRI study. Nat Neurosci 2:861–863. doi:10.1038/13158 pmid:10491603
    OpenUrlCrossRefPubMed
  28. ↵
    1. Gu Y,
    2. Hu X,
    3. Pan W,
    4. Li Y,
    5. Yang C,
    6. Chen A
    (2016) Neural activities underlying feedback express salience prediction errors for appetitive and aversive stimuli. Neuroimage 6:34032. doi:10.1038/srep34032 pmid:27694920
    OpenUrlCrossRefPubMed
  29. ↵
    1. Hampton AN,
    2. Bossaerts P,
    3. O'Doherty JP
    (2008) Neural correlates of mentalizing-related computations during strategic interactions in humans. Proc Natl Acad Sci U S A 105:6741–6746. doi:10.1073/pnas.0711099105 pmid:18427116
    OpenUrlAbstract/FREE Full Text
  30. ↵
    1. Hartley CA,
    2. Somerville LH
    (2015) The neuroscience of adolescent decision-making. Curr Opin Behav Sci 5:108–115. doi:10.1016/j.cobeha.2015.09.004 pmid:26665151
    OpenUrlCrossRefPubMed
  31. ↵
    1. Hauser TU,
    2. Iannaccone R,
    3. Walitza S,
    4. Brandeis D,
    5. Brem S
    (2015) Cognitive flexibility in adolescence: neural and behavioral mechanisms of reward prediction error processing in adaptive decision making during development. Neuroimage 104:347–354. doi:10.1016/j.neuroimage.2014.09.018 pmid:25234119
    OpenUrlCrossRefPubMed
  32. ↵
    1. Hays RD,
    2. Reviki DA
    (2005) Reliability and validity (including responsiveness). In: Assessing quality of life in clinical trials: methods and practice (Fayers PM, Hays RD, eds). New York, NY: Oxford UP.
  33. ↵
    1. Izuma K,
    2. Adolphs R
    (2013) Social manipulation of preference in the human brain. Neuron 78:563–573. doi:10.1016/j.neuron.2013.03.023 pmid:23664619
    OpenUrlCrossRefPubMed
  34. ↵
    1. Jenkinson M,
    2. Smith S
    (2001) A global optimisation method for robust affine registration of brain images. Med Image Anal 5:143–156. doi:10.1016/S1361-8415(01)00036-6 pmid:11516708
    OpenUrlCrossRefPubMed
  35. ↵
    1. Jenkinson M,
    2. Bannister P,
    3. Brady M,
    4. Smith S
    (2002) Improved optimization for the robust and accurate linear registration and motion correction of brain images. Neuroimage 17:825–841. doi:10.1006/nimg.2002.1132 pmid:12377157
    OpenUrlCrossRefPubMed
  36. ↵
    1. Jones RM,
    2. Somerville LH,
    3. Li J,
    4. Ruberry EJ,
    5. Powers A,
    6. Mehta N,
    7. Dyke J,
    8. Casey BJ
    (2014) Adolescent-specific patterns of behavior and neural activity during social reinforcement learning. Cogn Affect Behav Neurosci 14:683–697. doi:10.3758/s13415-014-0257-z pmid:24550063
    OpenUrlCrossRefPubMed
  37. ↵
    1. Kessler D,
    2. Angstadt M,
    3. Sripada CS
    (2017) Reevaluating “cluster failure” in fMRI using nonparametric control of the false discovery rate. Proc Natl Acad Sci U S A 114:E3372–E3373. doi:10.1073/pnas.1614502114 pmid:28420796
    OpenUrlFREE Full Text
  38. ↵
    1. Korn CW,
    2. Prehn K,
    3. Park SQ,
    4. Walter H,
    5. Heekeren HR
    (2012) Positively biased processing of self-relevant social feedback. J Neurosci 32:16832–16844. doi:10.1523/JNEUROSCI.3016-12.2012 pmid:23175836
    OpenUrlAbstract/FREE Full Text
  39. ↵
    1. Korn CW,
    2. Rosenblau G,
    3. Rodriguez Buritica JM,
    4. Heekeren HR
    (2016) Performance feedback processing is positively biased as predicted by attribution theory. PLoS One 11:e0148581. doi:10.1371/journal.pone.0148581 pmid:26849646
    OpenUrlCrossRefPubMed
  40. ↵
    1. Laible DJ,
    2. Murphy TP,
    3. Augustine M
    (2014) Adolescents' aggressive and prosocial behaviors: links with social information processing, negative emotionality, moral affect, and moral cognition. J Genet Psychol 175:270–286. doi:10.1080/00221325.2014.885878 pmid:25175531
    OpenUrlCrossRefPubMed
  41. ↵
    1. Lapsley DK,
    2. Murphy MN
    (1985) Another look at the theoretical assumptions of adolescent egocentrism. Dev Rev 5:201–217. doi:10.1016/0273-2297(85)90009-7
    OpenUrlCrossRef
  42. ↵
    1. Luciana M
    (2013) Adolescent brain development in normality and psychopathology. Dev Psychopathol 25:1325–1345. doi:10.1017/S0954579413000643 pmid:24342843
    OpenUrlCrossRefPubMed
  43. ↵
    1. Malti T,
    2. Krettenauer T
    (2013) The relation of moral emotion attributions to prosocial and antisocial behavior: a meta-analysis. Child Dev 84:397–412. doi:10.1111/j.1467-8624.2012.01851.x pmid:23005580
    OpenUrlCrossRefPubMed
  44. ↵
    1. Meshi D,
    2. Biele G,
    3. Korn CW,
    4. Heekeren HR
    (2012) How expert advice influences decision making. PLoS One 7:e49748. doi:10.1371/journal.pone.0049748 pmid:23185425
    OpenUrlCrossRefPubMed
  45. ↵
    1. Mills KL,
    2. Dumontheil I,
    3. Speekenbrink M,
    4. Blakemore SJ
    (2015) Multitasking during social interactions in adolescence and early adulthood. R Soc Open Sci 2:150117. doi:10.1098/rsos.150117 pmid:26715991
    OpenUrlAbstract/FREE Full Text
  46. ↵
    1. Mitchell JP
    (2009) Inferences about mental states. Philos Trans R Soc Lond B Biol Sci 364:1309–1316. doi:10.1098/rstb.2008.0318 pmid:19528012
    OpenUrlAbstract/FREE Full Text
  47. ↵
    1. Montague PR,
    2. King-Casas B,
    3. Cohen JD
    (2006) Imaging valuation models in human choice. Annu Rev Neurosci 29:417–448. doi:10.1146/annurev.neuro.29.051605.112903 pmid:16776592
    OpenUrlCrossRefPubMed
  48. ↵
    1. Mrazek PJ,
    2. Haggerty RJ
    (1994) Reducing risks for mental disorders: frontiers for preventive intervention research. Washington, DC: National Academies.
  49. ↵
    1. Mumford JA,
    2. Poline JB,
    3. Poldrack RA
    (2015) Orthogonalization of regressors in fMRI models. PLoS One 10:e0126255. doi:10.1371/journal.pone.0126255 pmid:25919488
    OpenUrlCrossRefPubMed
  50. ↵
    1. Nichols TE,
    2. Das S,
    3. Eickhoff SB,
    4. Evans AC,
    5. Glatard T,
    6. Hanke M,
    7. Kriegeskorte N,
    8. Milham MP,
    9. Poldrack RA,
    10. Poline JB,
    11. Proal E,
    12. Thirion B,
    13. Van Essen DC,
    14. White T,
    15. Yeo BT
    (2017) Best practices in data analysis and sharing in neuroimaging using MRI. Nat Neurosci 20:299–303. doi:10.1038/nn.4500 pmid:28230846
    OpenUrlCrossRefPubMed
  51. ↵
    1. Niv Y,
    2. Daniel R,
    3. Geana A,
    4. Gershman SJ,
    5. Leong YC,
    6. Radulescu A,
    7. Wilson RC
    (2015) Reinforcement learning in multidimensional environments relies on attention mechanisms. J Neurosci 35:8145–8157. doi:10.1523/JNEUROSCI.2978-14.2015 pmid:26019331
    OpenUrlAbstract/FREE Full Text
  52. ↵
    1. Payakachat N,
    2. Tilford JM,
    3. Kovacs E,
    4. Kuhlthau K
    (2012) Autism spectrum disorders: a review of measures for clinical, health services and cost-effectiveness applications. Expert Rev Pharmacoecon Outcomes Res 12:485–503. doi:10.1586/erp.12.29 pmid:22971035
    OpenUrlCrossRefPubMed
  53. ↵
    1. Pfeifer JH,
    2. Blakemore SJ
    (2012) Adolescent social cognitive and affective neuroscience: past, present, and future. Soc Cogn Affect Neurosci 7:1–10. doi:10.1093/scan/nsr099 pmid:22228750
    OpenUrlCrossRefPubMed
  54. ↵
    1. Rigoux L,
    2. Stephan KE,
    3. Friston KJ,
    4. Daunizeau J
    (2014) Bayesian model selection for group studies–revisited. Neuroimage 84:971–985. doi:10.1016/j.neuroimage.2013.08.065 pmid:24018303
    OpenUrlCrossRefPubMed
  55. ↵
    1. Rosenblau G,
    2. Kliemann D,
    3. Lemme B,
    4. Walter H,
    5. Heekeren HR,
    6. Dziobek I
    (2016) The role of the amygdala in naturalistic mentalising in typical development and in autism spectrum disorder. Br J Psychiatry 208:556–564. doi:10.1192/bjp.bp.114.159269 pmid:26585095
    OpenUrlAbstract/FREE Full Text
  56. ↵
    1. Ruff CC,
    2. Fehr E
    (2014) The neurobiology of rewards and values in social decision making. Nat Rev Neurosci 15:549–562. doi:10.1038/nrn3776 pmid:24986556
    OpenUrlCrossRefPubMed
  57. ↵
    1. Shaw DJ,
    2. Grosbras MH,
    3. Leonard G,
    4. Pike GB,
    5. Paus T
    (2012) Development of the action observation network during early adolescence: a longitudinal study. Soc Cogn Affect Neurosci 7:64–80. doi:10.1093/scan/nsq105 pmid:21278194
    OpenUrlCrossRefPubMed
  58. ↵
    1. Smith SM,
    2. Jenkinson M,
    3. Woolrich MW,
    4. Beckmann CF,
    5. Behrens TE,
    6. Johansen-Berg H,
    7. Bannister PR,
    8. De Luca M,
    9. Drobnjak I,
    10. Flitney DE, et al
    . (2004) Advances in functional and structural MR image analysis and implementation as FSL. Neuroimage [Suppl 1] 23:S208–S219.
    OpenUrl
  59. ↵
    1. Somerville LH,
    2. Jones RM,
    3. Ruberry EJ,
    4. Dyke JP,
    5. Glover G,
    6. Casey BJ
    (2013) The medial prefrontal cortex and the emergence of self-conscious emotion in adolescence. Psychol Sci 24:1554–1562. doi:10.1177/0956797613475633 pmid:23804962
    OpenUrlCrossRefPubMed
  60. ↵
    1. Steinberg L
    (2008) A social neuroscience perspective on adolescent risk-taking. Dev Rev 28:78–106. doi:10.1016/j.dr.2007.08.002 pmid:18509515
    OpenUrlCrossRefPubMed
  61. ↵
    1. Stephan KE,
    2. Penny WD,
    3. Moran RJ,
    4. den Ouden HE,
    5. Daunizeau J,
    6. Friston KJ
    (2010) Ten simple rules for dynamic causal modeling. Neuroimage 49:3099–3109. doi:10.1016/j.neuroimage.2009.11.015 pmid:19914382
    OpenUrlCrossRefPubMed
  62. ↵
    1. van der Schaaf ME,
    2. Warmerdam E,
    3. Crone EA,
    4. Cools R
    (2011) Distinct linear and non-linear trajectories of reward and punishment reversal learning during development: relevance for dopamine's role in adolescent decision making. Dev Cogn Neurosci 1:578–590. doi:10.1016/j.dcn.2011.06.007 pmid:22436570
    OpenUrlCrossRefPubMed
  63. ↵
    1. van Duijvenvoorde AC,
    2. Zanolie K,
    3. Rombouts SA,
    4. Raijmakers ME,
    5. Crone EA
    (2008) Evaluating the negative or valuing the positive? Neural mechanisms supporting feedback-based learning across development. J Neurosci 28:9495–9503. doi:10.1523/JNEUROSCI.1485-08.2008 pmid:18799681
    OpenUrlAbstract/FREE Full Text
  64. ↵
    1. Vossel S,
    2. Mathys C,
    3. Daunizeau J,
    4. Bauer M,
    5. Driver J,
    6. Friston KJ,
    7. Stephan KE
    (2014) Spatial attention, precision, and Bayesian inference: a study of saccadic response speed. Cereb Cortex 24:1436–1450. doi:10.1093/cercor/bhs418 pmid:23322402
    OpenUrlCrossRefPubMed
  65. ↵
    1. Woolrich M
    (2008) Robust group analysis using outlier inference. Neuroimage 41:286–301. doi:10.1016/j.neuroimage.2008.02.042 pmid:18407525
    OpenUrlCrossRefPubMed
  66. ↵
    1. Yang DY,
    2. Rosenblau G,
    3. Keifer C,
    4. Pelphrey KA
    (2015) An integrative neural model of social perception, action observation, and theory of mind. Neurosci Biobehav Rev 51:263–275. doi:10.1016/j.neubiorev.2015.01.020 pmid:25660957
    OpenUrlCrossRefPubMed
A Computational Account of Optimizing Social Predictions Reveals That Adolescents Are Conservative Learners in Social Contexts
Gabriela Rosenblau, Christoph W. Korn, Kevin A. Pelphrey
Journal of Neuroscience 24 January 2018, 38 (4) 974-988; DOI: 10.1523/JNEUROSCI.1044-17.2017

Keywords

  • adolescence
  • fMRI
  • mPFC
  • mental state inference
  • preferences
  • reinforcement learning
