Research Articles, Systems/Circuits

Neural Fingerprints Underlying Individual Language Learning Profiles

Gangyi Feng, Jinghua Ou, Zhenzhong Gan, Xiaoyan Jia, Danting Meng, Suiping Wang and Patrick C. M. Wong
Journal of Neuroscience 1 September 2021, 41 (35) 7372-7387; https://doi.org/10.1523/JNEUROSCI.0415-21.2021
Author affiliations

Gangyi Feng,1,2 Jinghua Ou,3 Zhenzhong Gan,1,4 Xiaoyan Jia,4 Danting Meng,4 Suiping Wang,4 and Patrick C. M. Wong1,2

1Department of Linguistics and Modern Languages, Chinese University of Hong Kong, Shatin, N.T., Hong Kong SAR, China
2Brain and Mind Institute, Chinese University of Hong Kong, Shatin, N.T., Hong Kong SAR, China
3Department of Linguistics, University of Chicago, Chicago, Illinois 60637
4Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education; School of Psychology, Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou 510631, China

Abstract

Human language learning differs significantly across individuals in both the learning process and ultimate attainment. Although decades of research exploring the neural substrates of language learning have identified distinct and overlapping neural networks subserving learning of different components, the neural mechanisms that drive the large interindividual differences are still far from being understood. Here we examine to what extent the neural dynamics of multiple brain networks in men and women across sessions of training contribute to explaining individual differences in learning multiple linguistic components (i.e., vocabulary, morphology, and phrase and sentence structures) of an artificial language in a 7 d training and imaging paradigm with functional MRI. Using machine-learning predictive modeling, neural activation patterns across training sessions were highly predictive of individual learning success profiles derived from the four components. We identified four neural learning networks (i.e., the Perisylvian, frontoparietal, salience, and default-mode networks) and examined their dynamic contributions to the learning success prediction. Moreover, the robustness of the predictions systematically changes across networks depending on specific training phases and learning components. We further demonstrate that a subset of network nodes in the inferior frontal, insular, and frontoparietal regions increasingly represent newly acquired language knowledge, while the multivariate connectivity between these representation regions is enhanced during learning for more successful learners. These findings allow us to understand why learners differ and are the first to attribute not only the degree of success but also the patterns of language learning across components to neural fingerprints summarized from multiple neural network dynamics.

SIGNIFICANCE STATEMENT Individual differences in learning a language are widely observed not only within the same component of language but also across components. This study demonstrates that the dynamics of multiple brain networks across four imaging sessions of a 7 d artificial language training paradigm contribute to individual differences in learning-outcome profiles derived from four language components. With machine-learning predictive modeling, we identified four neural learning networks, including the Perisylvian, frontoparietal, salience, and default-mode networks, that contribute to predicting individual learning-outcome profiles, and revealed language-component-general and component-specific prediction patterns across training sessions. These findings provide significant insights into the training-dependent neural dynamics underlying individual differences in learning success across language components.

  • individual differences
  • language learning
  • neural fingerprint
  • neural network dynamics
  • predictive modeling

Introduction

Learning is a dynamic process entailing the interaction of multiple neural systems (Zatorre, 2013). Although neuroscience research in recent decades has begun to shed light on learning-related neural dynamics across different domains (Costa et al., 2004; Kleim et al., 2004; Peach and Wong, 2004; Yin et al., 2009; Ettlinger et al., 2014; Mohebi et al., 2019), it has yet to characterize the neural dynamics underlying tremendous individual differences in learning.

In recent years, one area of research where individual differences in both brain and behavior have been a focus is language and its various components (P. C. Wong et al., 2007; Price, 2010; F. C. Wong et al., 2011; Morgan-Short et al., 2012, 2015; Yang et al., 2015; Deng et al., 2016; Stamps, 2016; Kepinska et al., 2017a; Birdsong, 2018; Kidd et al., 2018; Feng et al., 2019). However, these previous studies have failed to consider neural dynamics, because the assessment was limited to one or two time points (e.g., a pretraining and post-training design), and their focus was on a single language component. Learning different components of language is associated with partially shared and distinct cognitive and neural processes (Skehan, 2016; Saito, 2017; Tagarelli et al., 2019); therefore, each learner seems to have a unique "learning profile" when multiple learning components are considered at the same time. Here we use the term learning profile to refer to each learner's learning attainment pattern across language components. A comprehensive understanding of the neurocognitive architectures or "fingerprints" underlying individual language learning profiles would require assessments of the neural network systems' responses to training over time and the study of the learning process across components of language (Herholz, 2013; Zatorre, 2013; Herholz et al., 2016). The term fingerprint is used here to specifically highlight the intricate neurocognitive details unique to individual learners.

The main goal of this study is to identify the neurocognitive fingerprints underlying individual learning profiles. We examine the extent to which different neural networks contribute to the individual differences in learning across language components and learning phases. Adult learners learned an artificial language that includes sound-symbol associations (i.e., words), and morphologic, phrasal, and sentence word-order rules (Fig. 1A). BOLD data were collected during the process of learning across 4 d (days 1, 2, 3, and 7) of the 7 d training (Fig. 1B).

Our design enables us to identify neural fingerprints of individual learning profiles. We hypothesize that the neural fingerprints of individual learning profiles entail multiple neural networks that only partly overlap with the classic frontotemporal-hippocampal regions for word meanings and grammatical rules (Musso et al., 2003; Opitz and Friederici, 2003, 2004; Breitenstein et al., 2005; Newman-Norlund et al., 2006; Hauser et al., 2012; Tagarelli et al., 2019). This multiple-network hypothesis of individual learning profiles is supported by recent findings showing distributed neural systems engaged in learning and processing linguistic knowledge across language components (Ullman, 2004; Davis and Gaskell, 2009; Chandrasekaran et al., 2014a), which may include the following: (1) the Perisylvian language network (PSN), consisting of core regions in the inferior frontal and posterior temporal areas for lexicon and grammar learning (Tagarelli et al., 2019; Morgan-Short, 2020); (2) the reward-related corticostriatal salience network (SAN), which concerns procedure-based aspects of language acquisition and automatization (Ullman, 2004); (3) the default-mode network (DMN), consisting of the mPFC, posterior cingulate cortex, angular gyrus, and anterior and medial temporal regions (Greicius et al., 2003), which contributes to resource allocation (Spreng, 2012) and memory consolidation (Xue, 2018); and (4) the frontoparietal network (FPN), consisting of the prefrontal cortex and inferior parietal lobule, which is flexibly connected to the above three networks because of its role in executive function, working memory, and attention (Braver and Barch, 2006; Cocchi et al., 2013; Cole et al., 2013). We examine whether the dynamics of these candidate networks across learning phases jointly contribute to the learning profile, and whether different networks come into play at different time points of learning and contribute differently to the learning profile prediction.

Materials and Methods

Participants

We recruited 33 healthy young adults (14 males; ages: 19-27 years, mean = 22.34 years) to participate in the fMRI training experiment. They were all native speakers of Mandarin. All participants were college students from South China Normal University and had a formal learning experience in English as a second language (years of learning: mean = 15.15, SD = 2.00). None of the subjects had previously studied a Romance language or been immersed in a Romance language environment for more than 3 weeks, given that the artificial language was designed to be similar to Romance languages. One participant began the study but dropped out in the second session. None of the participants reported having a history of hearing or neurologic disorders. All participants signed the consent form approved by the ethics review board of the School of Psychology at the South China Normal University and the Joint Chinese University of Hong Kong–New Territories East Cluster Clinical Research Ethics Committee before participating in the experiment.

Artificial language materials

The artificial language, Brocanto2 (Morgan-Short, 2007), has a productive structure that is consistent with natural language; that is, novel sentences can be generated, spoken, and understood within a meaningful context (Morgan-Short et al., 2012). There are 13 lexical items in Brocanto2: four nouns (pleck, neep, blom, vode), two adjectives (triose/o, neime/o), one article (li/u), four verbs (kiln, nim, yab, praz), and two adverbs (noyka, zayma). All Brocanto2 words were recorded in isolation, rather than as part of a phrase or sentence, and were subsequently concatenated to form phrases or sentences with 300 ms gaps between words (for sample stimuli, see Table 1 and Fig. 1A). Each Brocanto2 sentence was exemplified by a move on a board game displayed on a computer screen (Fig. 1A, bottom). The four game tokens were represented by distinct symbols, which correspond to the four nouns in Brocanto2. Each token was presented within a circle or a square background (corresponding to the two adjectives). The tokens can be moved, swapped, captured, and released, with each of these actions corresponding to a verb. The tokens can also move horizontally or vertically, corresponding to the two adverbs. Each noun in Brocanto2 has a formal grammatical gender designation, either masculine or feminine. Both adjectives and articles appear post-nominally and are morphologically marked to agree with the grammatical gender of the noun. Brocanto2 uses a fixed subject-object-verb (SOV) word order. Adverbs, when used, appear at the end of the sentence, immediately following the verb.

Table 1.

Examples of the grammatical and ungrammatical Brocanto2 stimuli for the three grammar learning tasks

The grammar training materials consist of 288 Brocanto2 phrases or sentences, with half of the trials (i.e., 144) grammatical and the other half ungrammatical. The 144 grammatical trials consist of three trial types: 48 noun phrases (NPs), 48 subject-verb (SV) sentences, and 48 SOV-adverb sentences. The ungrammatical trials, each derived from a grammatical trial, contain gender agreement, phrase order, or sentence structure violations (for violation samples, see Table 1). The grammatical violation is always gender agreement for NPs, post-nominal modifier order for SV sentences, and word order for SOV sentences. For the grammar errors in NPs, either the article or the adjective does not agree with the gender of the noun. For the SV sentences, the word order within the subject NP was completely scrambled to generate violation variants, while the SV sentence order itself is never violated. To generate grammar errors in the SOV sentences, the words were scrambled between the verb, object noun, and adverb, while the subject noun always occurred as the first word in the sentence. It is important to point out that, in the learning phase, the auditorily presented sentences (although half contained grammatical violations) always matched the game moves shown on the screen, whereas in the generalization tests, mismatches between the game moves and the meaning of the sentences were used to test subjects' ability to apply the grammar they had learned to novel sentences (for details, see below). The complete artificial language training paradigm consists of vocabulary exposure and recall tests, grammar learning, and two generalization tests (one testing the three grammatical rules and the other the semantics of the sentences).

Experimental design and statistical analysis

Overall training procedure

Participants were scheduled to take part in seven experimental language training sessions over 7 consecutive days (for the training and imaging schema, see Fig. 1B). During Session 1, participants first provided informed consent, filled out background questionnaires, and completed a cognitive battery, including tests of IQ, working memory, and declarative and procedural learning abilities. After completing the cognitive test battery, participants commenced the first session of the artificial language training. Each language training session consisted of a passive vocabulary exposure session, three grammar learning sessions, and a vocabulary test. On the last training day (i.e., Session 7), participants completed generalization tests outside the scanner after completing the artificial language training. Each element of the training protocol is described in detail below.

Vocabulary and grammar training procedures

At the beginning of each language training session (i.e., day), participants first completed a vocabulary exposure session (i.e., the Vb learning task). Each Brocanto2 lexical item was presented auditorily, accompanied by a visual symbol that represented its meaning (Fig. 1B). Each item was randomly presented 4 times during the exposure phase, lasting for 7 min. A vocabulary test was administered after the completion of language training outside the scanner in each session. Participants were asked to state out loud the lexical item that corresponded to the visual symbol shown on the computer screen. Each symbol representing a lexical item was presented twice.

Following vocabulary exposure, participants began the grammar training tasks (Fig. 1B, bottom right). The grammar training was administered in the format of a grammaticality judgment task (GJT) with trial-by-trial corrective feedback. Learning was assumed to occur implicitly, and no explicit grammar rules or explanations regarding any aspect of the language were provided. Instead, participants were exposed to auditorily presented phrases and sentences of the language. As each phrase or sentence was presented, participants also viewed the corresponding game token or move. They were then asked to judge the acceptability of the phrases or sentences (i.e., the GJT). Participants were instructed to make a judgment based on their immediate intuitive impression (i.e., guessing based on "gut feeling").

For each grammar learning task (e.g., SOV), the trials were divided into two experimental blocks (i.e., scanning runs), with grammatical and their matched ungrammatical phrases or sentences occurring in different blocks. Thus, the grammar training phase contained six experimental blocks, with two blocks of NPs, two blocks of SV sentences, and two blocks of SOV sentences for each day. The participants always started with the NP blocks, followed by the SV sentence blocks, and lastly the SOV sentence blocks. Each block consisted of 48 phrases or sentences, of which half (i.e., 24) were grammatical and the other half ungrammatical. The presentation order of the stimuli within each block was randomized across participants. For each trial in the GJT, a fixation cross was first presented for 100 ms, followed by a 100 ms blank. Sentences or phrases were then presented auditorily, and the corresponding game tokens or moves were shown on the screen simultaneously. The stimulus presentation time was different for each trial type: NP for 2400 ms, SV sentence for 6300 ms, and SOV sentence for 7600 ms. A prompt asking for a grammaticality judgment (“?”) appeared for 2000 ms after the final word of each sentence, during which participants responded. After a response was made, participants received corrective feedback on whether their response was correct or incorrect. To better estimate the brain responses related to stimulus and feedback presentation separately, a random jittered interval (0-4000 ms) was added between each response and feedback presentation. The feedback was shown on the screen for 1000 ms. After the feedback screen, a blank screen that jittered variously from 2000 to 6000 ms (i.e., intertrial interval) was shown before participants moved on to the next trial.

Generalization tests

On the last training day (i.e., Session 7), participants were tested on their ability to apply the grammar they had learned to novel Brocanto2 sentences in two generalization tests: a GJT and a picture-sentence mapping task. For the GJT, participants judged the acceptability of 144 unseen Brocanto2 phrases or sentences (i.e., stimuli not used in the training phase), consisting of 72 grammatical and 72 ungrammatical trials. As in the training phase, the ungrammatical phrases or sentences were derived from their corresponding grammatical phrases or sentences, containing violations of gender agreement in NPs and of word order in SV and SOV sentences. Each ungrammatical sentence contained only one type of grammatical violation. In this test, the game tokens and moves always matched the meaning of the sentences; therefore, participants only needed to detect grammatical anomalies. The procedure of the test was identical to that of the GJT used in the fMRI training phase. For the picture-sentence matching task, participants had to judge whether the Brocanto2 sentences that they heard correctly described the game tokens and moves displayed on the screen. Subjects were instructed to focus on the correspondence between sentences and game tokens/moves. The test comprised 144 novel Brocanto2 sentences (used neither in the training phase nor in the generalization GJT), of which 72 correctly described the game moves and the other 72 did not. The 72 incorrect trials were each derived from a correct trial and can be classified into three types: (1) the sentence contained a grammatical violation, but the game moves matched the meaning of the sentence; (2) the sentence was grammatically correct, but the game moves did not match the meaning of the sentence (e.g., the word vode was presented but the visual symbol for pleck was shown on the screen); and (3) the sentence was ungrammatical, and the game moves did not match the sentence meaning. The procedure was identical to that of the GJT. All the experimental learning tasks and generalization tests were created and presented with E-Prime 2.0 (Psychology Software Tools).

Cognitive assessments

Before the language training, we administered a battery of standardized cognitive tests, including an IQ assessment (Test of Nonverbal Intelligence, Fourth Edition), working memory tests (Digit Span Backward and Reading Span tests), and declarative and procedural learning ability tests, namely the vocabulary learning subtest of the LLAMA Language Aptitude Test, the Continuous Visual Memory Task (CVMT), and the Weather Prediction Task (WPT). A description of each of the five tests is provided below.

Digit Span Backward

This test assesses verbal working memory (Wechsler, 1997) and involves the processing and storage of digits. Participants were asked to repeat the digits that they heard in reverse order. The test consisted of seven blocks (two trials per block) with increasing digits (from 2-8 digits). If a participant could correctly repeat one trial or both trials for a block, she/he could move on to the next block. If a participant failed both trials of a block, the test was discontinued. An individual's digit span was defined as the longest series of digits that she/he could repeat (even only once).

Reading Span

This test measures working memory capacity and involves the processing and storage of sentences and words. A Chinese version of the auditory reading span task was used, which was modeled after the original version (Daneman and Carpenter, 1980). In this task, participants had to judge whether the sentence statement was correct and to remember the last word of each sentence for later recall. When a sentence was presented on the screen, participants judged the sentence by pressing the “correct” or “wrong” key on the keyboard. Once a judgment was made, the next sentence was presented. After all sentences in a trial had been presented, the participant had to recall the last word of each sentence in order. Participants' responses were recorded by the experimenter. There were three trials in each block and five blocks in total. The number of sentences for one trial increased from two to three, and so on to six, across the five blocks. The test was discontinued if a participant failed to recall the last words for two trials within a block. A participant's reading span was defined as the largest number of last words that she/he could recall for a single trial.

LLAMA Language Aptitude Test

The LLAMA test (Meara, 2005) was developed based on the standardized Modern Language Aptitude Test (MLAT) (Carroll and Sapon, 1959), which incorporates four separate elements: vocabulary learning, phonetic memory, sound-symbol correspondence, and grammatical inferencing. The subtest of vocabulary learning was used in this study as a measure of verbal declarative learning ability. Subjects were asked to learn 20 word-object association pairs that consisted of pseudo-Kurdish words and their corresponding objects within 2 min in a computer interface. All 20 objects were displayed simultaneously on the screen and an object's name was shown by clicking on the object. Subjects were permitted to click on the objects as many times as they wished within the 2-minute learning phase. After the learning phase, each of the 20 objects' names was presented in turn, and for each name, the subject had to click on the corresponding object on the screen. Five points were scored for each object correctly identified, with a maximum score of 100.

CVMT

The CVMT was used to measure nonverbal declarative memory (Trahan and Larrabee, 1988). Participants viewed a series of complex, abstract designs and indicated whether each design was novel (“new”) or had appeared previously (“old”). The “old” items consisted of seven target designs, presented 7 times interspersed among 63 “new” distractor items. All items were presented in a randomized order, which was constant for all participants. Participants' responses were used to calculate a CVMT d′ score, with a higher score indicating better declarative learning ability.

WPT

The WPT is an implicit, probability-based task where participants predict the weather (“sunshine” or “rain”) based on the patterns of four different “tarot cards” presented on a computer screen (Foerde et al., 2006). Each combination of cards represents a different probability for “sunshine” or “rain.” For example, a screen showing a card of squares, a card of circles, and a card of pentagons represents a 75% chance of rain. A total of 320 trials were divided into eight blocks. Neither the sunshine nor rain stimulus occurred more than 4 times in a row. After a response was given, the correct response was shown on the screen. The weather prediction accuracy on the final block was used for analyses.

Learning progress estimation

The learning progress of each subject in each learning task was estimated by three metrics: (1) initial learning performance (IP), (2) learning rate (LR), and (3) learning outcome (LO). IP is defined as the learning performance (i.e., the word recall test accuracy for Vb and the GJT accuracies for the three grammar learning tasks) on the first day of training. IP reflects the initial gain of training. To estimate learning gains between days 2 and 7, we calculated LR for each subject by fitting each learning curve with a quadratic function (Equation 1):

y = ax^2 + bx + c    (1)

Three parameters were estimated: a determines the curvature of the parabola, b indicates the slope, and c is the y intercept. Because both a and b relate to the learning trajectory pattern, we combined the two parameters to determine LR: LR = |a| × (1 + |b|). Thus, higher LRs indicate a faster and larger increase in learning accuracy between days 2 and 7 of the training. Finally, LO is defined as the generalization test performances for the three grammatical rules (i.e., gender agreement in NPs, post-nominal modifier order in SV sentences, and SOV order in SOV sentences) and the day 7 vocabulary test score for the Vb learning task. IP, LR, and LO are complementary measures in describing the pattern of learning trajectories.
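To make these definitions concrete, the following minimal Python sketch computes IP and LR from one learner's daily scores on a single task. The function name and the assumption that only days 2-7 enter the quadratic fit are ours; the original analysis was carried out with the authors' own scripts.

```python
import numpy as np

def learning_metrics(daily_accuracy):
    """Estimate IP and LR for one learner on one task.

    daily_accuracy: sequence of 7 accuracy scores (day 1 through day 7).
    Hypothetical helper; the exact day range entered into the fit is an
    assumption based on the text (days 2-7).
    """
    acc = np.asarray(daily_accuracy, dtype=float)
    ip = acc[0]                                   # initial performance = day 1 score
    days = np.arange(2, 8)                        # days 2-7
    a, b, c = np.polyfit(days, acc[1:], deg=2)    # Equation 1: y = a*x^2 + b*x + c
    lr = abs(a) * (1 + abs(b))                    # learning rate as defined in the text
    return ip, lr
```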

Imaging acquisition

MRI data were acquired using a Siemens 3T Tim Trio MRI system with a 12-channel head coil in the Brain Imaging Center at South China Normal University. The functional images were recorded by a T2*-weighted gradient EPI pulse sequence (TR = 2000 ms, TE = 30 ms, flip angle = 90°, 37 slices, FOV = 224 × 224 mm2, in-plane resolution = 3.5 × 3.5 mm2, slice thickness = 3.5 mm with 0.7 mm gap, acceleration factor = 2). T1-weighted high-resolution structural images were acquired using an MPRAGE sequence (176 slices, TR = 1900 ms, TE = 2.53 ms, flip angle = 9°, voxel size = 1 × 1 × 1 mm3). Imaging data were collected during the language training on days 1, 2, 3, and 7 for each participant (for the imaging schema, see Fig. 1B). Resting-state fMRI and diffusion tensor imaging were also collected for every imaging session.

Univariate activation analysis

All functional images were preprocessed using SPM12 (Wellcome Department of Imaging Neuroscience; www.fil.ion.ucl.ac.uk/spm/) following a pipeline described in previous studies (Feng et al., 2021b, 2015). The functional images were corrected for head movement. The high-resolution anatomic images were registered to the mean functional image and further normalized into the MNI space using the segmentation-normalization procedure. The realigned functional images were spatially smoothed using a Gaussian kernel (FWHM = 4 mm) and were entered into a GLM for univariate activation analysis. Specifically, for the subject-level analysis, a GLM with a design matrix including two regressors of interest (i.e., stimulus and feedback presentations) was constructed for each grammar learning task and imaging session. For the Vb learning task, only the stimulus regressor was included because no feedback was provided. The regressors corresponding to the onsets and durations of the trials from each task were convolved with the canonical HRF. Low-frequency drifts were removed by a temporal high-pass filter (cutoff at 128 s). The six head-movement parameters and the session mean were added into the models as nuisance regressors. The gray-matter image generated from the segmentation procedure was converted into a binary inclusive mask to define voxels of interest for each participant. For the group-level analysis, the one-sample t test was used, each statistical brain map was initially thresholded at voxel-wise p = 0.001, and all reported brain regions were corrected at the cluster-level p = 0.05 using the family-wise error rate approach as implemented in the SPM package.
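As an illustration of the subject-level model, the sketch below approximates the stimulus-plus-feedback GLM for a single grammar-learning run using nilearn rather than SPM12; the function and variable names are hypothetical, and only the parameters stated above (SPM HRF, 128 s high-pass cutoff, 4 mm smoothing, two regressors of interest plus motion confounds) are carried over.

```python
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

def run_level_glm(bold_img, stim_onsets, stim_durations, fb_onsets, fb_durations, motion_df):
    """Approximate stimulus/feedback GLM for one grammar-learning run.

    bold_img: preprocessed 4D NIfTI image; motion_df: DataFrame with the six
    head-movement parameters. nilearn stands in for the SPM12 pipeline here.
    """
    events = pd.DataFrame({
        "onset": list(stim_onsets) + list(fb_onsets),
        "duration": list(stim_durations) + list(fb_durations),
        "trial_type": ["stimulus"] * len(stim_onsets) + ["feedback"] * len(fb_onsets),
    })
    glm = FirstLevelModel(t_r=2.0, hrf_model="spm", high_pass=1.0 / 128,
                          smoothing_fwhm=4, noise_model="ar1")
    glm = glm.fit(bold_img, events=events, confounds=motion_df)
    # Stimulus-vs-baseline statistical map, later used as a predictive feature
    return glm.compute_contrast("stimulus", stat_type="t", output_type="stat")
```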

Predictive modeling analysis

To determine whether the brain activations related to stimulus presentation during learning are significantly predictive of LO profiles, we used the multioutput Least Squares Support Vector Regression (LS-SVR) as the prediction algorithm and a 10-fold cross-validation (CV) procedure to train and validate prediction models. The brain activations (i.e., t statistics of stimulus vs baseline) obtained from each learning task and imaging session were used as predictive features for training and testing prediction models. Brain activation maps from all subjects were combined into an S × F matrix, where S is the number of subjects and F is the number of features (i.e., collapsing voxels across tasks and training sessions). Each value in the matrix represents the level of activation of a voxel in a specific learning task and training session. We used a nested 10-fold CV procedure for feature selection and fusion, model construction, and validation (for a graphical illustration of the procedure, see Fig. 2). This CV procedure avoids obtaining overfit models with many noisy features and ensures that the trained models can be tested with unseen data (i.e., model generalization ability) (Feng et al., 2018b). The nested CV procedure consisted of two levels of nesting (i.e., inner and outer) for feature selection, fusion, and model validation. At the inner level, we used a Pearson's correlation-based feature selection procedure to remove irrelevant (uninformative) features based on each training set (i.e., 90% of the subjects) with a cutoff threshold of p = 0.01, where features showing significant correlations with LOs in at least one grammar task were selected. Therefore, the predictive powers of the models reflect how well those selected voxels performed in predicting learners' LO profiles. Different feature selection thresholds (i.e., p = 0.01 and 10% of total features) were tested to assess the consistency and stability of the models' predictive powers. To reduce the dimensionality of the data and fuse the surviving features across the four tasks and sessions, we conducted a principal component analysis with the selected features and further selected the outcome-correlated principal components (p < 0.05) for model training. It is important to note that the feature selection and fusion procedures were conducted only with the training sets, which were independent of the outer-level model testing. In other words, 90% (i.e., ninefold) of the data were used for model training while the held-out 10% were used for testing, repeated 10 times (i.e., 10-fold CV). This CV procedure ensures accurate estimation of the model prediction. The LS-SVR algorithm with default parameters (i.e., C = 1, γ = 1/number of features) was used to assess the multivariate predictive power of those neural features. We used functions from the MATLAB package LIBSVM (Chang and Lin, 2011) in combination with in-house scripts to conduct the predictive modeling analyses. We examined the predictive power by calculating Pearson's correlation between the predicted and observed scores (i.e., r[observed, predicted]). The statistical significance of the predictions was evaluated using a nonparametric permutation procedure.
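A compact Python sketch of this cross-validated pipeline is shown below, using scikit-learn's SVR as a stand-in for the MATLAB LS-SVR implementation described above; the additional step of selecting outcome-correlated principal components is omitted for brevity, and all function and variable names are assumptions.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.decomposition import PCA
from sklearn.model_selection import KFold
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

def cv_predict_profiles(X, Y, n_folds=10, p_sel=0.01):
    """10-fold CV prediction of learning-outcome profiles from neural features.

    X: subjects x features (voxel-wise t values pooled across tasks/sessions).
    Y: subjects x 4 learning outcomes (Vb, NP, SV, SOV).
    """
    preds = np.zeros_like(Y, dtype=float)
    for train, test in KFold(n_splits=n_folds, shuffle=True, random_state=0).split(X):
        # Inner step 1: keep features correlated with at least one outcome (p < p_sel),
        # using the training subjects only.
        keep = [j for j in range(X.shape[1])
                if min(pearsonr(X[train, j], Y[train, k])[1] for k in range(Y.shape[1])) < p_sel]
        # Inner step 2: fuse the selected features with PCA fit on the training set only.
        pca = PCA(n_components=min(len(train) - 1, len(keep))).fit(X[np.ix_(train, keep)])
        Z_train = pca.transform(X[np.ix_(train, keep)])
        Z_test = pca.transform(X[np.ix_(test, keep)])
        # Multioutput support vector regression (linear kernel, C = 1).
        model = MultiOutputRegressor(SVR(kernel="linear", C=1.0)).fit(Z_train, Y[train])
        preds[test] = model.predict(Z_test)
    # Predictive score: correlation between observed and predicted outcomes per task.
    return np.array([pearsonr(Y[:, k], preds[:, k])[0] for k in range(Y.shape[1])])
```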

To test whether the predictive power of each model occurred by chance, we used a nonparametric permutation procedure to generate a null distribution of the predictive scores by fully shuffling the features (i.e., predictors) and LOs across learners for each CV. Each feature and LO were permuted across participants independently to generate a fully randomized data matrix. The 10-fold CV was conducted based on the randomized dataset. The data randomization and CV procedures were repeated 10,000 times, and the 95th percentile points of each distribution were used as a critical value for a one-tailed nonparametric test against the null hypothesis with p = 0.05. To further test the stability of the prediction, we used a bootstrapping procedure by dividing all the learners into 10 folds randomly and repeating the 10-fold CV procedure with 10,000 iterations. Each CV prediction would be slightly different because the composition of the training and testing subjects was different for each iteration.
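A sketch of this null-distribution procedure, reusing the cross-validation function from the previous sketch, might look as follows (the names are again hypothetical):

```python
import numpy as np

def permutation_null(X, Y, cv_fn, n_perm=10000, seed=0):
    """Build a null distribution of predictive scores by independently shuffling
    every feature column and every outcome column across learners, then rerunning
    the full cross-validation pipeline (cv_fn, e.g., cv_predict_profiles)."""
    rng = np.random.default_rng(seed)
    null_scores = []
    for _ in range(n_perm):
        Xp = np.column_stack([rng.permutation(col) for col in X.T])
        Yp = np.column_stack([rng.permutation(col) for col in Y.T])
        null_scores.append(cv_fn(Xp, Yp))
    # The 95th percentile of each column serves as the one-tailed critical value.
    return np.percentile(np.array(null_scores), 95, axis=0)
```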

We conducted a power analysis to estimate the sample size required for the predictive modeling. Our power analysis was based on effect sizes reported by previous studies that examined the correlation between individual differences in language learning and neural responses. Eight studies that reported Pearson's correlation coefficients (r values) between learning and neural activation (task- and/or resting-state fMRI responses) were included (Musso et al., 2003; Finn et al., 2013; Morgan-Short et al., 2015; Deng et al., 2016; Weber et al., 2016; Barbeau et al., 2017; Kepinska et al., 2017b; Nevat et al., 2017). These r values ranged from 0.40 to 0.66 (mean = 0.54, SD = 0.08), with sample sizes between 8 and 40 (mean = 18.89, SD = 9). The mean r value was used in our power calculation, which resulted in a sample size of 28 at an α level of 0.05. Because there is no standard way of conducting a power analysis for multivariate predictive models such as the ones constructed here, and because correlational approaches often overestimate the true effect size of prediction, we conservatively assumed a power of 95% and arbitrarily increased our sample by 15% in our final estimation.

Multivoxel pattern classification (MVPC) analysis for grammaticality decoding

We used MVPC to examine the extent to which the neural representations of the newly learned grammar rules emerged during training. Both searchlight- and ROI-based MVPC were conducted. To perform MVPC, we first estimated the single-trial brain activities with the least-squares-single (LSS) GLM approach (Mumford et al., 2012). The LSS approach was designed to model single-trial brain activities for each target event while controlling for the variance of other covariant events in the same block (i.e., fMRI run). Specifically, for each trial, a design matrix was constructed with a regressor of interest targeting the stimulus presentation. A regressor of noninterest consisting of all other events (including the feedback of the target trial and both the feedback and stimulus presentations of other trials), six head-movement regressors, and a session-mean regressor were also included for each block individually. The LSS GLM analyses were performed on the functional images following realignment but without normalization or smoothing. Therefore, 288 subject-level GLM models (i.e., 96 trials per grammar task) were constructed and estimated for each subject and imaging session. This was a computationally intensive analysis and required a week to complete for all participants on a 16-core Intel Xeon processor. The t statistic brain images were calculated by contrasting the target regressor with baseline and were further used for MVPC. The t statistic was used because it reflects the effect size weighted by the error variance and is therefore less affected by highly variable single-trial estimates than the β estimates (Misaki et al., 2010).

The searchlight algorithm (Kriegeskorte et al., 2006), implemented in the CoSMoMVPA toolbox (Oosterhof et al., 2016), was used to identify brain areas whose local activation patterns can be used to classify trials based on their grammaticality. At each voxel, stimulus-induced activation patterns (t values) within a spherical searchlight (3-voxel-radius sphere, which contains ∼90 voxels on average) were extracted for all items for the NP, SV, and SOV blocks separately. Different spherical sizes (e.g., a 4-voxel-radius sphere) were also used to ensure that the classification accuracies did not significantly differ according to the size chosen. Therefore, in each spherical searchlight, a V × I × B matrix was generated, where V refers to the number of voxels, I refers to the number of stimulus items, and B refers to the number of blocks (e.g., 90 × 48 × 2). Finally, this matrix was entered into a linear support vector machine (SVM) classifier implemented in the LIBSVM toolbox (Chang and Lin, 2011) for model training and testing with a leave-one-block-out CV procedure, in which the SVM classifier was trained with data from one block and tested with data from the other block, repeated twice. For the searchlight-based MVPC, the mean classification accuracy was calculated and mapped back to the voxel at the center of each sphere. We conducted the same MVPC procedure across all voxels of interest in the brain and generated classification accuracy maps for each imaging session and subject. For the group-level analysis, the classification accuracy maps were first normalized to the MNI space using the parameters estimated from the segmentation-normalization procedure and then fed into a one-sample t test (against chance accuracy). In addition to the searchlight MVPC, we conducted an ROI-based MVPC analysis by calculating the mean MVPC accuracy across voxels within each ROI for each grammar task, training session, and subject. Linear mixed-effects regression analysis was used to evaluate four fixed effects (i.e., the main effects of learning task, training day, and outcome, and the day × outcome interaction).
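The ROI-based variant of this classification can be sketched as follows in Python, with scikit-learn's linear SVC in place of the MATLAB LIBSVM/CoSMoMVPA implementation; single-trial pattern estimation and the searchlight loop are assumed to have been done already.

```python
import numpy as np
from sklearn.svm import SVC

def roi_grammaticality_decoding(patterns, labels, blocks):
    """Leave-one-block-out decoding of grammaticality within one ROI.

    patterns: trials x voxels matrix of single-trial t values.
    labels:   grammaticality of each trial (0 = ungrammatical, 1 = grammatical).
    blocks:   run index of each trial; train on one block, test on the other.
    """
    accuracies = []
    for held_out in np.unique(blocks):
        train, test = blocks != held_out, blocks == held_out
        clf = SVC(kernel="linear").fit(patterns[train], labels[train])
        accuracies.append(clf.score(patterns[test], labels[test]))
    return float(np.mean(accuracies))   # chance level = 0.5
```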

Interregional representational similarity (RS) analysis and LO prediction

To further examine the interregional interactions between each pair of predictive regions and to what extent such interactions contribute to individual differences in learning success, we calculated the interregional RSs and used them as predictive features to predict individual LOs. RS is an index reflecting the similarity between two regions according to their neural representational dissimilarity matrices (nRDMs). Higher similarity in nRDMs indicates greater similarity in terms of multivoxel representation of the training items, which also reflects the degree of information sharing between regions. To calculate RS, we first computed the nRDM for each target region based on its multivoxel activation patterns using a 1 minus Pearson's correlation approach. Therefore, nRDMs (i.e., 96 × 96 matrices; 96 trials per training session for each grammar task) of the ROIs were generated for each imaging session, grammar learning task, and participant. To control for the effects of trial duration, number of words, presentation block, grammaticality, and type of grammar violation derived from the stimulus sets, we used partial Pearson's correlation to calculate the interregional RSs based on each pair of nRDMs while controlling for the variances of those factors. These interregional RSs were then entered into the 10-fold cross-task CV predictive modeling to assess the overall predictive powers of the RSs and identify the significant predictive edges between ROIs.
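The computation can be sketched as follows; here the partial correlation is implemented by residualizing both neural RDMs against nuisance model RDMs, which are assumed to be precomputed from the stimulus properties listed above, and the function names are hypothetical.

```python
import numpy as np

def nrdm(patterns):
    """Neural RDM: 1 minus the Pearson correlation between trial activation patterns."""
    return 1.0 - np.corrcoef(patterns)                 # trials x trials

def lower_triangle(mat):
    i, j = np.tril_indices_from(mat, k=-1)
    return mat[i, j]

def interregional_rs(rdm_a, rdm_b, nuisance_rdms):
    """Partial correlation between two ROIs' RDMs, controlling for stimulus factors
    (trial duration, word count, block, grammaticality, violation type)."""
    x, y = lower_triangle(rdm_a), lower_triangle(rdm_b)
    Z = np.column_stack([lower_triangle(m) for m in nuisance_rdms] + [np.ones_like(x)])
    # Residualize both RDM vectors against the nuisance model, then correlate residuals.
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])
```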

Results

Behavioral learning performances

Four language components of Brocanto2 were learned by a group of adult participants across 7 d (for the sample stimuli and experimental procedure, see Materials and Methods; Fig. 1). Learning performances on the four learning tasks were assessed by the offline vocabulary recall test (for the vocabulary [Vb] learning task) and the online GJTs (for the NP, SV, and SOV grammatical rule learning tasks), respectively. Behavioral performances on the vocabulary recall test and GJT significantly increased over training sessions (for individual learning curves, see Fig. 3A). For the vocabulary test score, a significant main effect of training day (F(1,191) = 366, p < 2.20 × 10−16, ηp2 = 0.66, 95% CI = [0.58, 0.71]) was revealed using linear mixed-effects regression (LMER). For the GJT accuracy (ACC) and reaction time (RT), we constructed LMER models with training day (1-7) and grammar learning task (i.e., NP, SV, and SOV tasks) as two fixed factors, and subject as a random factor. For ACC, we observed a significant main effect of training day (F(1,191) = 328.21, p < 2.2 × 10−16, ηp2 = 0.63, 95% CI = [0.55, 0.69]) and a significant main effect of learning task (F(2,267) = 11.30, p = 1.95 × 10−5, ηp2 = 0.08, 95% CI = [0.02, 0.14]). No significant day × task interaction effect was found (F(2,382) = 1.29, p = 0.28, ηp2 = 0.007, 95% CI = [0.00, 0.03]). Planned comparisons were conducted between the grammar tasks, with p values corrected using the Bonferroni approach. We found that performance in the SOV task was significantly better than that in the NP (t(62) = 6.59, p < 0.0001, Cohen's d = 1.07) and SV (t(62) = 6.05, p < 0.0001, d = 0.98) tasks, whereas the NP and SV tasks did not significantly differ (t(62) = 0.55, p = 0.85, d = 0.08) in overall GJT scores. For RT, learners responded increasingly faster over training sessions (main effect of training day: F(1,191) = 75.75, p = 1.49 × 10−15, ηp2 = 0.28, 95% CI = [0.18, 0.38]). We also found a significant main effect of task (F(2,297) = 35.81, p = 1.17 × 10−14, ηp2 = 0.19, 95% CI = [0.12, 0.27]). No significant day × task interaction effect (F(2,382) = 1.44, p = 0.24, ηp2 = 0.008, 95% CI = [0.00, 0.03]) was observed. Planned comparisons showed that RTs in the NP task were significantly longer than those in the SV (t(62) = 6.10, p < 0.0001, d = 0.91) and SOV (t(62) = 11.53, p < 0.0001, d = 1.72) tasks, whereas RTs in the SV task were significantly longer than those in the SOV task (t(62) = 5.43, p < 0.0001, d = 0.81). These results indicate that group-level learning performances differed between learning tasks.
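The reported mixed-effects analyses correspond roughly to a model of the following form. This is a minimal sketch using statsmodels with random intercepts per subject; the data-frame layout is assumed, and the original analysis presumably used an lme4-style LMER whose F tests statsmodels does not reproduce exactly.

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_accuracy_model(df: pd.DataFrame):
    """df: long-format table with one row per subject x day x task and columns
    'subject', 'day' (1-7), 'task' ('NP', 'SV', 'SOV'), and 'acc'."""
    # Fixed effects: training day, task, and their interaction; random intercept per subject.
    return smf.mixedlm("acc ~ day * C(task)", data=df, groups=df["subject"]).fit()
```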

Figure 1.

Artificial language training procedure, experimental design, and fMRI schema. A, Language elements, sample stimuli, and grammar rules. Bottom, Sample game token moves exemplifying the meaning of a Brocanto2 sentence. In this example, the spoken sentence was presented together with the visual symbol moves, meaning "The square pleck is captured vertically by the round vode." B, The 7 d training and imaging schema and the fMRI experiment procedure for the vocabulary and grammar learning tasks. Learners were passively exposed to the vocabulary stimuli (BI). A feedback-based training paradigm was used for the three grammar learning tasks (BII). The same fMRI experiment procedure was applied in the four imaging sessions.

Figure 2.

Predictive modeling for LO profile prediction with a CV procedure. The 10-fold CV procedure was used to construct and validate prediction models. Initial feature selection was applied with Pearson's correlation to remove noninformative neural features for each imaging session and task, respectively. All the selected features from different days and tasks in the training set (90%) were fused using principal component analysis (PCA) to reduce the data dimensionality. Multioutput linear SVR was used to model the relationship between the PCs and LO profiles in the training set. The held-out 10% of unseen testing data were used to evaluate the predictive performance. This process was repeated 10 times so that each testing fold was used to evaluate the trained models. A final predictive score was calculated using Pearson's correlation between the predicted and observed outcomes. This 10-fold CV procedure was repeated 10,000 times with a bootstrapping approach to evaluate the stability of the predictive power. A permutation procedure was applied to generate a null (chance) distribution to assess the statistical significance of the predictions.

Figure 3.

Behavioral learning performance and individual variability for each learning task. A, Day-by-day behavioral learning performances for the Vb, NP, SV, and SOV tasks. Individual learning curves are shown for each task over training days. G, Generalization score. The mean learning curve is shown in red for each task. B, The intersubject variability in LOs for each task. The SD of the outcomes was calculated for each learning task separately based on a bootstrapping approach (randomly selecting 50% of the subjects) with 10,000 iterations. Error bar indicates SD. **p < 0.001, nonparametric test.

LOs were defined as the word recall accuracy on day 7 for the Vb task and the generalization GJT scores for the three grammar learning tasks, respectively. LOs were close to ceiling for Vb for most of the learners. For the grammar LOs, a significant main effect of learning task was found (F(2,62) = 21.19, p = 9.70 × 10−8, ηp2 = 0.41, 95% CI = [0.21, 0.55]). Planned comparisons between tasks showed that LOs in the NP task (mean ACC = 65.1%) were significantly worse than those in the SV (mean ACC = 77.3%; t(62) = 5.49, p < 0.0001, d = 1.37) and SOV (mean ACC = 77.9%; t(62) = 5.78, p < 0.0001, d = 1.45) learning tasks. LOs in the SOV task were not significantly better than those in the SV task (t(62) = 0.29, p = 0.95, d = 0.07). Moreover, significantly larger interindividual variability in LOs was observed for the grammar learning tasks than for the Vb task (p values < 0.001, bootstrapping nonparametric test; see Fig. 3B).

Individual learning profiles and day-by-day learning-related brain activations

We characterized individual learning profiles by simultaneously considering the LOs across the four language components (Fig. 4A). There were large interindividual differences in the LO profiles (Fig. 4A, right; for the learning profile of each learner, see also Extended Data Fig. 4-1). To examine how individual differences in learning trajectory were associated with individual differences in LO, we estimated two indices reflecting learning trajectory (IP and LR), which summarized test scores across the 7 training days. IP and LR are two complementary measures for describing individual learning trajectories. IP represents the learning gains on the first day of training, whereas LR was estimated by fitting a quadratic equation to each learning curve (Fig. 4B), reflecting learning gains between days 2 and 7. This quadratic learning-curve modeling outperformed simple linear and power-function fits in goodness of fit across the four tasks (p values < 0.01). LRs in the Vb task were significantly higher than those in the grammar learning tasks (main effect of task: F(3,93) = 33.70, p = 7.84 × 10−15, ηp2 = 0.52, 95% CI = [0.38, 0.62]). LRs of the three grammar tasks did not significantly differ from each other (Bonferroni-corrected p values > 0.1).

Figure 4.

Individual learning profiles, learning trajectory modeling, and brain activations across training days and learning tasks. A, Spider plots reveal individual learning profiles derived from the outcomes of the four learning tasks. Left panels, Three representative learners' profiles. Black represents an overall successful learner. Blue represents a learner with good vocabulary but poor grammar acquisition. Red represents a learner with good word-order grammar but poor vocabulary and gender-agreement acquisition. Right panels, LO profiles for all learners (for each learner's profile, see Extended Data Figure 4-1). B, LR modeling and IP. LRs were estimated using quadratic curve fitting for individual learning curves across 7 training days. Higher LRs reflect faster and larger gains in performance between days 2 and 7, whereas higher IPs reflect larger gains in performance on day 1. C, Pearson's correlations between individual differences in learning trajectory measures (i.e., IP and LR) and LOs within and across learning tasks. *Uncorrected p < 0.01. D, Univariate brain activations during stimulus presentation for each task across the four imaging sessions. Unthresholded whole-brain maps are displayed for visualization purposes. Warm red represents greater activation during stimulus presentation than baseline. Cool blue represents the reverse contrast.

Figure 4-1

The LO profile for each of the learners. Each dimension of the spider plot represents the LOs of a task based on rank. The LOs were converted from raw accuracy to rank to ensure they are comparable across learning tasks (i.e., Vb, NP, SV, and SOV).

The measures of learning trajectory (i.e., IP and LR) were closely related to LOs both within and across learning tasks (Fig. 4C). Higher IPs in the Vb task were moderately associated with better LOs across grammar learning tasks, indicating that better initial vocabulary gains may facilitate grammar acquisition. However, higher LRs in the Vb task were associated with poorer LOs in the grammar learning tasks, especially for the NP task. Within and across the three grammar tasks, higher LRs were associated with better LOs, suggesting that faster learners during training were often more successful in learning grammatical rules. To further demonstrate the relationships between the learning trajectory patterns summarized from the four tasks and the LO profiles, we calculated the dissimilarities (i.e., Euclidean distances) between each pair of learners (i.e., interindividual differences) based on each learner's learning scores (i.e., IP, LR, and outcome, respectively) in the four learning tasks. We then generated a 32 × 32 interlearner dissimilarity matrix for each of the learning indices. We found that the interlearner differences in LO profile were significantly associated with interlearner differences in IP (r = 0.189, p = 2.36 × 10−5) and LR (r = 0.321, p = 2.29 × 10−13). These behavioral correlation results demonstrate a close relationship between learning trajectory patterns and LO profiles.
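A minimal sketch of this dissimilarity-based comparison is given below; the plain Pearson correlation between pairwise-distance vectors follows the text, the function name is hypothetical, and in practice the significance of such matrix correlations is often evaluated with a permutation (Mantel-type) test rather than the parametric p value.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

def profile_distance_correlation(outcome_scores, trajectory_scores):
    """Correlate the interlearner dissimilarity structures of two score matrices.

    outcome_scores, trajectory_scores: learners x 4 task scores
    (e.g., the LO profile vs the IP or LR profile).
    """
    d_outcome = pdist(outcome_scores, metric="euclidean")     # all learner pairs
    d_trajectory = pdist(trajectory_scores, metric="euclidean")
    return pearsonr(d_outcome, d_trajectory)                  # (r, parametric p)
```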

Brain activation patterns during online learning of the four components across the four imaging sessions of the 7 training days are illustrated in Figure 4D. Distributed frontal, parietal, temporal, and occipital visual regions related to language, auditory, and visual processes were activated similarly across learning tasks and imaging sessions. By visual inspection, greater activation of these regions was observed for the three grammar learning tasks than for the Vb learning task. The activation patterns were also consistent across imaging sessions, with a minor decrease in activation following training. In addition, these stimulus-related activations were distinct from the feedback-processing-related activations for the three grammar learning tasks (for the feedback-related activation maps, see Fig. 5).

Figure 5.

Feedback-processing-related activation maps in each grammar task and training day. Online feedback was provided for the grammar learning tasks only. The feedback-related activation patterns were distinct from the language-stimulus-related activations shown in Figure 4D. Unthresholded whole-brain maps are displayed for visualization purposes.

Neural dynamics of multiple networks predict individual learning profiles

To identify the neurocognitive fingerprints underlying individual learning profiles, we used the 10-fold CV procedure with the multioutput LS-SVR algorithm (for the predictive modeling procedure, see Fig. 2) to construct and evaluate prediction models with cognitive and neural measures (i.e., predictive features). The multioutput SVR can be used to model the relationships between multivariate predictive features (e.g., activation patterns across days and tasks) and multiple LOs derived from our four learning tasks simultaneously.

We used the non-neural (i.e., cognitive and memory) measures and the neural measures (i.e., activation patterns) to build separate prediction models of LO profiles. We then evaluated the model performances and identified the significant neural predictive features (i.e., voxels in a specific training day) contributing to the model prediction. Bootstrapping and permutation procedures were used to determine the stability and statistical significance of the models (see Materials and Methods). We found that neural activation patterns derived from the four learning tasks and across the four imaging sessions were highly predictive of LO profiles overall (Fig. 6A, red distribution: median r[observed, predicted] = 0.352, p = 0.0001, permutation test). By contrast, neither cognitive nor memory measures were predictive of learning profiles (Fig. 6A, gray distributions: p values > 0.1, permutation test). The prediction model with neural measures outperformed the models with cognitive and memory measures in profile prediction (p values < 0.001). For the LO prediction of each learning task, we found that the neural activation patterns were significantly predictive of LOs in the NP (median r[observed, predicted] = 0.444, p = 0.003; permutation test), SV (median r[observed, predicted] = 0.611, p = 0.0001), and SOV (median r[observed, predicted] = 0.563, p = 0.0002) tasks but not in the Vb task (median r[observed, predicted] = −0.209, p = 0.846) (Fig. 6B). The nonsignificant prediction for Vb was mainly due to the near-ceiling outcomes and small interindividual variability (see Fig. 3). Moreover, to further identify the predictive contribution of the neural patterns in each imaging session for each task, we conducted outcome predictive modeling with neural activation patterns derived from each imaging session and learning task separately. Activation patterns derived from different imaging sessions showed varying degrees of predictive power across the four learning tasks (Fig. 6C). No significant outcome prediction was found across imaging sessions for the Vb task. For the NP task, the neural patterns were significantly predictive of outcomes only on the second day of training, whereas significant predictions were found on days 1, 2, and 7 for the SV task and on days 1, 3, and 7 for the SOV task. These findings not only demonstrate the general LO profile prediction but also reveal the fluctuations in outcome prediction across learning tasks and training phases.

Figure 6.

Learning profile prediction performance and outcome prediction for each learning task and training day. A, Learning profile predictions with neural and non-neural cognitive and memory measures using the 10-fold CV with bootstrapping and permutation procedures (for the predictive modeling procedure, see Fig. 2). Neural measures were significantly better than chance and non-neural measures in predicting learning profiles. Perm, Permutation-based chance prediction distribution. B, LO prediction performances were estimated for each learning task separately and compared with their corresponding permutation-based chance distributions to derive statistical significance. C, Outcome prediction performances for each learning task and imaging session. Error bar indicates SD. *p < 0.05; **p < 0.01; nonparametric permutation test. Bootstrapping, Bootstrapping-based prediction distributions; Perm, permutation-based chance prediction distributions.

To determine which brain regions on which training days significantly contributed to the LO profile prediction, we estimated the prediction contribution of each voxel in each imaging session by testing the statistical significance of its voxel-selection rate against the selection-rate distributions derived from the permutation-based predictions (nonparametric permutation test). Across the four imaging sessions, we identified distributed brain regions that significantly contributed to learning-profile predictions, including lateral prefrontal, middle temporal, inferior parietal, insular, and subcortical striatal regions (Fig. 7A). These regions can be classified into two broad categories based on the relationships between their responses and outcomes: positive and negative predictive regions (Fig. 7A, left and right). Greater activation in the positive predictive regions was associated with better outcomes (Fig. 7A, left), whereas less activation, or greater deactivation, in the negative predictive regions was associated with better outcomes (Fig. 7A, right). We further categorized these regions into four brain networks based on previous neuroimaging studies of language learning and resting-state connectivity (Yeo et al., 2011; Kepinska et al., 2017a) as well as their correlation patterns, as follows: (1) PSN, including the bilateral inferior frontal gyrus (IFG) and left middle temporal gyrus (LMTG); (2) FPN, including the bilateral middle frontal gyrus (MFG), bilateral inferior parietal lobule (IPL), and bilateral inferior temporal cortices (ITC); (3) SAN, including the bilateral insula (Ins), left middle cingulate cortex (MCC), bilateral ventral striatum (Vstr; including the bilateral caudate nuclei and left putamen), thalamus (Tha), supplementary motor area (SMA), and cerebellum; and (4) DMN, including the posterior cingulate cortex (PCC), mPFC, left angular gyrus (LAG), left anterior temporal gyrus (LATL), and left hippocampus (LHip).
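One way to formalize the voxel-selection-rate test is sketched below: the rate at which each voxel is retained as a predictive feature across true-model folds is compared with the rates obtained under permuted outcomes. The array shapes and boolean selection masks are assumptions for illustration, not the study's actual data structures.

```python
# Sketch of voxel-wise significance based on selection rates. Shapes are
# hypothetical: selection masks would come from the feature-selection step
# of the true and permutation-based prediction models.
import numpy as np

def voxel_selection_p(selected_true, selected_perm):
    """
    selected_true : (n_folds, n_voxels) bool, voxel selected in each true-model fold
    selected_perm : (n_perm, n_folds, n_voxels) bool, same under permuted outcomes
    Returns one-tailed p-values for each voxel's selection rate.
    """
    rate_true = selected_true.mean(axis=0)          # per-voxel selection rate
    rate_perm = selected_perm.mean(axis=1)          # (n_perm, n_voxels) null rates
    exceed = (rate_perm >= rate_true[None, :]).sum(axis=0)
    return (exceed + 1) / (rate_perm.shape[0] + 1)
```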

Figure 7.

Brain networks that contributed substantially to the learning profile prediction. A, Conjunctive brain maps summarizing all regions that significantly contributed to the profile prediction across imaging sessions. These regions were identified by the voxel-wise permutation test at a threshold of p = 0.001 and were categorized into four networks based on their response patterns and previous literature. B, Prediction contribution maps for each imaging session. Response patterns of these brain networks contributed to the profile prediction differently, but systematically, across training days. Pos, Positive predictive features; Neg, negative predictive features; R, right hemisphere. Voxel-level p = 0.001, cluster-level FWE-corrected p < 0.05.

The four predictive brain networks showed distinct prediction contributions across training sessions. Voxel-wise predictive contributions were projected back onto each training day separately (Fig. 7B). The contributions of the PSN regions were prominent at the early stage of training (i.e., days 1 and 2), whereas those of the SAN regions were more salient around the halfway point (i.e., days 2 and 3). The contributions of the FPN regions were sustained across the training sessions but were slightly more prominent in the first 3 d, whereas those of the DMN were most prominent at the end of the training.

These dynamic patterns in predictive contributions were further validated by the grammar-general outcome prediction procedure (for a graphical schema, see Fig. 8A; a simplified sketch follows below). To quantify the time-dependent contributions of these networks to outcome prediction across the grammar learning tasks, we trained predictive models with data from two of the grammar tasks (e.g., NP and SV) and 90% of the learners, and validated the trained models on the held-out task (e.g., SOV) with the unseen 10% of learners. This grammar-task-general 10-fold CV procedure enabled us to test the generalization of the predictive models across both learners and grammar learning tasks. We found that different brain networks showed distinct but systematic time-dependent predictive contributions (Fig. 8B): the contribution of the PSN decreased over training sessions, with the most prominent effects observed at the early stage of training (i.e., the first 3 d); sustained contributions of the FPN were identified across all training days; and an inverted-U pattern was found for the SAN, with the most prominent contributions in the middle of training (i.e., day 3). The contribution of the DMN increased over training sessions, with the most prominent effects observed at the end of training (i.e., day 7). Figure 8C illustrates in greater detail the prediction fluctuations for each network node across the four imaging sessions. Together, these results suggest that these language-learning-related neural networks change dynamically and systematically during learning in ways that contribute to learning success.
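The sketch below illustrates this cross-task, cross-learner validation logic under simplifying assumptions: a single-output SVR per held-out task, dictionaries keyed by task name, and a plain Pearson correlation as the performance metric.

```python
# Sketch of the grammar-task-general cross-validation: train on two grammar
# tasks with ~90% of learners, test on the held-out task with the remaining
# learners, rotating over held-out tasks. Data containers are placeholders.
import numpy as np
from scipy.stats import pearsonr
from sklearn.svm import SVR
from sklearn.model_selection import KFold

def grammar_general_cv(X_by_task, y_by_task, n_splits=10, seed=0):
    """
    X_by_task : dict task -> (n_learners, n_features) activation features
    y_by_task : dict task -> (n_learners,) learning outcomes
    Returns r(observed, predicted) for each held-out grammar task.
    """
    tasks = list(X_by_task)
    n_learners = next(iter(y_by_task.values())).shape[0]
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    results = {}
    for held_out in tasks:
        train_tasks = [t for t in tasks if t != held_out]
        obs, pred = [], []
        for train_idx, test_idx in kf.split(np.arange(n_learners)):
            X_train = np.vstack([X_by_task[t][train_idx] for t in train_tasks])
            y_train = np.concatenate([y_by_task[t][train_idx] for t in train_tasks])
            model = SVR(kernel="linear", C=1.0).fit(X_train, y_train)
            obs.append(y_by_task[held_out][test_idx])                    # unseen learners,
            pred.append(model.predict(X_by_task[held_out][test_idx]))    # unseen task
        results[held_out] = pearsonr(np.concatenate(obs), np.concatenate(pred))[0]
    return results
```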

Figure 8.

Grammar-general outcome prediction for each network and network node across training days. A, Grammar-general outcome prediction procedure. SVR models were trained with two of the grammar tasks (e.g., NP and SV) and validated with data from the held-out task (e.g., SOV), repeated three times (see Materials and Methods). This cross-task CV procedure tests whether the predictive models generalize across tasks. B, Day-by-day grammar-general predictive performances summarized for the four brain networks. Bootstrapping, Bootstrapping-based prediction distributions; Perm, permutation-based chance distributions. Error bar indicates quantile. *Permutation-based FDR-corrected q < 0.05. **FDR-corrected q < 0.01. C, Regional predictive powers for each network node and each of the 4 training days. The four colors in the circular graphs represent the four brain networks. Each region is labeled within the circular graphs. The height of each bar represents the region's predictive power (r[obs, pred]), and its color represents the FDR-corrected q value. D2, day 2; D3, day 3; D7, day 7.

We also identified the prediction patterns of the brain networks for each grammar learning task across imaging sessions (Fig. 9). Distinct contributions to outcome prediction were found across networks, especially for the PSN, FPN, and SAN, as well as across the three tasks and four imaging sessions (Fig. 9A). For example, for the PSN, similarly decreasing prediction patterns were found in the SV and SOV tasks, whereas a flat contribution across training days was observed for the NP task. The predictive powers of individual regions for each task are shown in Figure 9B. Distinct contribution patterns across network nodes and training days were found among the three grammar learning tasks, suggesting that the four neural networks dynamically reconfigured in response to learning different grammatical rules over time.

Figure 9.

LO prediction performance for each imaging session, grammar learning task, and brain network. A, The prediction distribution for each grammar task and imaging session. Perm, Permutation-based prediction distributions. Error bars indicate 95% percentile intervals. *Predictive powers significantly different across tasks (i.e., main effect of learning task) at the threshold of p = 0.05 (permutation test). B, Regional predictive powers for each task and imaging session. D1, day 1; D2, day 2; D3, day 3; D7, day 7. The FDR approach was used for multiple-comparison correction (q < 0.05).

To examine whether the activation patterns derived from these networks also carry learning-task- or grammar-specific information, we conducted a task decoding (classification) analysis (i.e., classifying the three grammar tasks) based on the network response patterns of the 23 ROIs selected from the outcome prediction procedure. Task decoding accuracies were all significantly better than chance (33.3%) across training days (day 1 median accuracy = 61.5%, p < 0.001, permutation test with 10,000 iterations; day 2 median accuracy = 58.3%, p < 0.001; day 3 median accuracy = 47.9%, p = 0.0067; day 7 median accuracy = 62.5%, p < 0.001). These results indicate that the network responses not only underlie individual differences in LO profiles but also reflect language-component- or task-specific learning dynamics in addition to general language learning patterns.
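A minimal sketch of this three-way decoding, assuming a linear SVM on ROI-level response patterns with stratified cross-validation and a label-permutation test against the 33.3% chance level; the fold count and permutation count are arbitrary choices for illustration.

```python
# Sketch of three-way task decoding (NP vs SV vs SOV) from ROI response
# patterns, with a label-permutation test. Inputs are placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

def task_decoding(roi_patterns, task_labels, n_perm=1000, seed=0):
    """
    roi_patterns : (n_samples, n_rois) responses across the selected ROIs
    task_labels  : (n_samples,) labels in {'NP', 'SV', 'SOV'}
    """
    rng = np.random.default_rng(seed)
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    clf = SVC(kernel="linear")
    acc = cross_val_score(clf, roi_patterns, task_labels, cv=cv).mean()
    null = np.array([
        cross_val_score(clf, roi_patterns, rng.permutation(task_labels), cv=cv).mean()
        for _ in range(n_perm)
    ])
    p = (np.sum(null >= acc) + 1) / (n_perm + 1)  # one-tailed p vs chance (33.3%)
    return acc, p
```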

Neural representations of grammatical knowledge emerged during training

We conducted a multivariate grammaticality decoding analysis to examine the extent to which the local activation patterns of these network nodes represented newly acquired grammar knowledge during learning and contributed to individual learning success. Single-trial brain responses were estimated using the least-squares single approach (Mumford et al., 2012; Feng et al., 2018a, 2021b). Grammaticality decoding (i.e., grammatical vs ungrammatical trials) was conducted using a support vector machine classifier (SVM-C) with a leave-one-block-out CV procedure (Feng et al., 2018a). ROI-based decoding was performed separately for each grammar task, imaging session, and learner. The 23 ROIs were derived from the profile predictive modeling. LMER was used to model the ROI-based decoding accuracies with three fixed effects: training day (i.e., days 1, 2, 3, and 7), LO, and the day × outcome interaction. The LIFG from the PSN, the bilateral MFG and IPL from the FPN, and the bilateral Ins and SMA from the SAN showed significant main effects of training day, with grammaticality decoding accuracies increasing over training sessions, suggesting that the multivoxel patterns of these regions increasingly encoded newly acquired grammar knowledge (Fig. 10). The four FPN regions also showed significant day × outcome interactions, suggesting that the time-dependent changes in the robustness of neural representations of grammaticality differed across learners with different LO profiles. For visualization purposes, we split the learners into two groups (i.e., successful and less successful learners) based on the median outcome score (N = 16 per group) and displayed the decoding accuracies across the four imaging sessions (Fig. 10, right, line graphs). More successful learners displayed increasingly robust neural representations of grammaticality (i.e., better classification of grammatical vs ungrammatical trials).
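The sketch below illustrates the two stages under simplifying assumptions: a linear SVM with leave-one-block-out CV for within-ROI grammaticality decoding, and a linear mixed-effects model (via statsmodels) relating the resulting accuracies to training day and outcome; the column names in the hypothetical data frame are placeholders.

```python
# Sketch of within-ROI grammaticality decoding (grammatical vs ungrammatical
# single-trial patterns) with leave-one-block-out CV, followed by a linear
# mixed-effects model on the pooled accuracies. Inputs are placeholders.
import pandas as pd
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
import statsmodels.formula.api as smf

def grammaticality_decoding(trial_patterns, labels, blocks):
    """
    trial_patterns : (n_trials, n_voxels) single-trial beta patterns in one ROI
    labels         : (n_trials,) 1 = grammatical, 0 = ungrammatical
    blocks         : (n_trials,) block/run index for leave-one-block-out CV
    """
    clf = SVC(kernel="linear")
    return cross_val_score(clf, trial_patterns, labels,
                           groups=blocks, cv=LeaveOneGroupOut()).mean()

def fit_day_by_outcome_lmer(df: pd.DataFrame):
    """df columns (hypothetical): accuracy, day, outcome, learner."""
    # Fixed effects of day, outcome, and their interaction; random intercept per learner.
    return smf.mixedlm("accuracy ~ day * outcome", df, groups=df["learner"]).fit()
```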

Figure 10.

Neural decoding of grammaticality across training days for the ROIs derived from the predictive modeling. The linear mixed-effects regression model was used to evaluate three fixed effects (i.e., main effects of day, outcome, and day × outcome interaction). D, main effect of day; I, day × outcome interaction. *q < 0.05; **q < 0.01; FDR correction. Error bar indicates SEM.

Because these frontoparietal grammaticality-representation regions have previously been associated with decision-making processes, we further examined whether they also showed increasing representations of response time following training. Trials were split into two classes (i.e., fast and slow trials) based on the median RT in each imaging session and learning task. We did not find any ROI that showed a significant main effect of day, nor a day × outcome interaction, in RT-decoding accuracy (Fig. 11). However, the decoding accuracies in those fronto-temporo-parietal regions were significantly better than chance (i.e., 50%) across sessions, which suggests that these regions also encode response speed, although these representations did not change as a function of training or learning success. Together, the network nodes in the PSN and FPN that showed strong predictive power at the early stage of training also showed significantly increasing grammaticality representations over the training sessions. These findings suggest that neural responses to training stimuli at the early stage of training may be associated with the later neural encoding of the acquired grammar knowledge.

Figure 11.

Neural decoding of response time across the four imaging sessions for the ROIs selected from the outputs of predictive modeling. Trials were split into two classes (i.e., fast and slow trials) based on the median RT in each imaging session and learning task. No ROI showed a significant main effect of day or a day × outcome interaction. This control analysis demonstrates that the regions showing emerging representations of grammaticality (see Fig. 10) did not show emerging representations of decision time following training. Error bar indicates SEM.

RSs between the grammar-representation regions contribute to grammar learning success

To examine whether interregional coupling in activation patterns underlies individual differences in learning success, we calculated the interregional RS between each pair of predictive ROIs for each learner, imaging session, and grammar learning task as an informational connectivity measure (for the RS analysis procedure and an average RS matrix, see Fig. 12A). On average, higher RSs were found between the LIFG and the other PSN regions (Fig. 12A), suggesting similar representational structures and close functional coupling between these regions. We used these RSs as predictive features to predict LOs with 10-fold CV and 10,000-iteration permutation and bootstrapping procedures. Both positively and negatively correlated RSs contributed significantly to predicting individual LOs across tasks (Fig. 12B). The FPN regions that showed significant increases in grammar representations were widely connected to other networks (i.e., higher RSs), and the RSs between the FPN regions and the other network nodes contributed significantly to predicting LOs (Fig. 12C). RSs between the SAN regions (i.e., bilateral insula and SMA) and other network nodes contributed most to the LO prediction. The RSs of these connections were positively associated with LOs, indicating that a more similar representational structure between these regions relates to better LOs. By contrast, the RSs between DMN core regions (especially the mPFC, PCC, LAG, and LHip) and the other three networks were negatively predictive of LOs, suggesting that less similar neural representations between these regions are associated with better outcomes. These findings demonstrate that not only regional multivoxel patterns but also interregional coupling, or similarity, in neural representations subserves individual differences in learning success.
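A minimal sketch of the RS computation, assuming item-wise spatial patterns per ROI: each ROI's neural representational dissimilarity matrix (nRDM) is built as 1 minus the Pearson correlation between item patterns, and the RS between two ROIs is the correlation of their nRDM upper triangles; the data containers are placeholders.

```python
# Sketch of the interregional representational similarity (RS) computation.
# roi_patterns maps each ROI name to an (n_items, n_voxels) array of
# item-wise spatial activation patterns (placeholder inputs).
import numpy as np

def neural_rdm(patterns):
    """(n_items, n_voxels) -> (n_items, n_items) dissimilarity, 1 - Pearson r."""
    return 1.0 - np.corrcoef(patterns)

def interregional_rs(rdm_a, rdm_b):
    """Pearson correlation between the upper triangles of two nRDMs."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]

def rs_matrix(roi_patterns):
    """Build the ROI x ROI RS matrix used as predictive features."""
    rois = list(roi_patterns)
    rdms = {r: neural_rdm(roi_patterns[r]) for r in rois}
    rs = np.eye(len(rois))
    for i, a in enumerate(rois):
        for j, b in enumerate(rois[i + 1:], start=i + 1):
            rs[i, j] = rs[j, i] = interregional_rs(rdms[a], rdms[b])
    return rs
```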

Figure 12.

Interregional RSs contribute to LO prediction across the three grammar learning tasks. A, Interregional RS analysis procedure. Local spatial activation patterns were extracted from the ROIs (e.g., LIFG and RIFG) for all stimulus items. For each ROI, dissimilarity (i.e., 1 − Pearson's correlation coefficient) was calculated between the spatial activation patterns of each pair of stimulus items to construct an nRDM. The RS (i.e., Pearson's correlation) between every pair of nRDMs was then calculated to create an interregional RS matrix for each learner. These RSs were used as predictive features for the LO predictions. The RS matrix displayed in the right panel was summarized across the three grammar tasks, the four imaging sessions, and all learners. B, Interregional RSs were significantly predictive of LOs for each of the three grammar learning tasks. Color-scale violin plots represent bootstrapping-based prediction distributions. Grayscale violin plots represent permutation-based prediction distributions. *p < 0.01, permutation test. C, Edges whose interregional RSs between the grammar-representation regions were positively associated with grammar LOs across tasks (i.e., positive predictive edges). D, Negative predictive edges, showing that decreased RSs between the DMN core regions (e.g., mPFC, PCC, and AG) and the other network nodes were associated with more successful grammar learning. Colored lines within the circular graphs indicate significantly predictive RSs at a threshold of p < 0.001 (permutation test). Gray lines indicate RSs at a threshold of p < 0.01. Gray bars outside the color rings represent the number of significant predictive edges connecting to a network node.

Discussion

By combining a multiday training protocol with fMRI and leveraging predictive modeling and multivariate pattern analyses, we demonstrate spatiotemporal neural network dynamics underlying a phenomenon that is ubiquitous yet far from understood in language learning: extensive individual differences in LOs within and across components of language. Neural dynamics derived from four imaging sessions across 7 training days served as fingerprints predicting individual LO profiles summarized across language components. Four neural networks (PSN, FPN, SAN, and DMN) were identified, and their responses dynamically contributed to LO predictions across training tasks and sessions. The prediction contributions fluctuated over training sessions in network-specific ways: (1) an early, decreasing contribution of the PSN; (2) a sustained contribution of the FPN; (3) a robust contribution of the SAN emerging around the halfway point of training (i.e., an inverted-U pattern); and (4) a late, increasing involvement of the DMN. A subset of network nodes in the frontoparietal regions, insula, and supplementary motor area represented newly acquired linguistic knowledge with increasing accuracy, and the robustness of these representations markedly differentiated successful from less successful learners. These representational regions were interconnected during training through shared local multivoxel representational patterns, and the degree of interregional representational similarity was highly predictive of learning success. These findings reveal the neural sources of individual differences in language learning by attributing not only the degree of success in learning a single language component but also the profile of LOs across components to time-dependent neural dynamics.

The dynamic contributions of multiple networks to explaining individual differences in language learning profiles emphasize the close relationship between time-dependent neural dynamics and learning success across aspects of language. The neural prediction model outperformed models based on cognitive and memory measures. The cognitive and memory measures may reflect only static characteristics of the learners and thus may not capture essential information about individual differences in LOs that are tightly linked to the learning process and its neural dynamics. We further showed that the four networks play different roles in predicting individual learning profiles across training phases. The PSN, consisting of the bilateral IFG (BA44), posterior middle temporal gyrus, and the functional and anatomic connections between these regions, has previously been associated with language comprehension, production, and complex linguistic processes (Saur et al., 2008; Fedorenko et al., 2011; Friederici, 2011; Price, 2012; Feng et al., 2015, 2016). These perisylvian regions, especially the left IFG, play an important role in learning rule-based linguistic structures (i.e., grammar acquisition) (Tagarelli et al., 2019). Moving beyond these previous findings, we demonstrate that the dynamic responses of the PSN significantly predicted individual differences in learning success across components of grammar learning. The prediction models were generalizable across grammar tasks, which demonstrates that increased involvement of the PSN regions is associated with more successful grammar learning in general. The contributions of the PSN were most prominent in the first 3 d of training, and its predictive power gradually decreased as training progressed. At this early learning stage, most learners were still novices (mean GJT accuracy across grammar tasks on day 3 was 62%). Increased PSN activation in response to training at this early stage was associated with enhanced LOs and may constitute an early neuromarker of later learning attainment. These findings also suggest that the initial engagement of the PSN in processing and learning to distinguish grammatical from ungrammatical stimuli is critical to grammar acquisition.

The FPN contributed significantly to the profile prediction across training sessions, more prominently in the first 3 d of training. An active training paradigm, as used in this study, likely requires attention and executive resources for item memorization and rule abstraction. FPN regions are dynamically and flexibly engaged in demanding active tasks, for example, when participants categorize confusable speech categories (Feng et al., 2018a), undertake difficult linguistic and nonlinguistic tasks (Cocchi et al., 2013; Waskom et al., 2014), effortfully process a non-native language (Abutalebi and Green, 2008; Feng et al., 2015), or switch between tasks (Cole et al., 2013) and languages (Abutalebi et al., 2007). FPN regions are also highly connected to PSN regions when participants engage in language tasks (Cole et al., 2013; Feng et al., 2015). Our findings demonstrate that early involvement of FPN and PSN regions during online grammar learning (rather than offline processes) in an active judgment task contributes to successful learning. In the memory literature, engagement of the frontoparietal regions during the encoding phase is predictive of subsequent success in item recognition and recall (Xue, 2018). Thus, increased responses of both FPN and PSN regions during training may play critical roles in online learning and contribute to future learning attainment.

The SAN showed an inverted-U pattern of prediction across training sessions, in which its predictive power peaked around the middle of training and decreased at the end. Most of its predictive regions lie in the corticostriatal pathway, including the bilateral insula, left putamen, right caudate, and left middle cingulate cortex. These regions have been related to reward-related learning, including syntactic acquisition (Ullman, 2001, 2004, 2016) and non-native speech learning (Chandrasekaran et al., 2014b; Feng et al., 2019, 2021a). In our paradigm, learners rely on trial-by-trial feedback to abstract grammar rules from items and update their internal representations of those rules. Individual differences in updating neural representations with feedback may therefore be closely associated with individual differences in future learning success. Increased corticostriatal responses may facilitate the formation of item-to-rule representations in the cortex for efficient and automatized grammaticality judgments. In contrast, decreased DMN responses were associated with enhanced outcomes. The DMN has previously been linked to stimulus monitoring and cognitive resource allocation (Chadick and Gazzaley, 2011; Spreng, 2012), and successful DMN suppression (i.e., less activation) has been associated with better task performance (Daselaar et al., 2004; Anticevic et al., 2012). Thus, DMN activation may not relate directly to the online learning process per se but rather to post-training language processes. More efficient DMN suppression at the late stage of learning, as observed among more successful learners, may reflect the inhibition of internal self-referential processing, which in turn facilitates the processing of acquired linguistic knowledge.

Multivoxel representations of grammaticality emerged during training and were associated with individual differences in learning success, especially in the left IFG of the PSN, the frontoparietal FPN regions, and the bilateral insula and SMA of the SAN. Previous studies showed that IFG activity increases over the course of learning in response to dependency violations of linguistic structures, with higher-proficiency learners showing greater activation (Tettamanti et al., 2002; Sakai et al., 2004), which suggests that the IFG may support the detection of ungrammaticality (Opitz and Friederici, 2007; Hauser et al., 2012). Moreover, the robustness of the grammaticality representations in the frontoparietal regions was increasingly associated with the degree of learning success, more prominently at the late stage. These FPN regions were also involved at the early learning stage and contributed to learning success. We speculate that the level of frontoparietal activation at the early training stage may reflect the degree of effective learning, whereas the multivoxel representations of grammaticality that emerged at the late stage may reflect the neural encoding of acquired grammar knowledge. These two neural measures may be closely related: increased neural activation in response to the learning items may result in better neural encoding of those items and may therefore be associated with better LOs. Future studies should systematically test this possibility.

Interregional communication between these representational regions also contributed to individual learning success. We calculated the interregional RSs between each pair of regions based on their multivoxel patterns; the RS represents the degree of interregional information sharing or communication. The grammar-representation regions were highly interconnected during learning, and their RSs were predictive of LOs, which suggests that not only regional responses but also interregional RSs subserve individual differences in learning success. The robustness of RSs among the PSN, FPN, and SAN nodes was associated with learning success, whereas increased RSs between the representation regions and DMN regions related to decreased outcomes. Previous studies have demonstrated that functional coupling among language-learning-related regions relates to the acquisition of different aspects of an artificial or foreign language (Kepinska et al., 2017a, 2018; Feng et al., 2019; Qi et al., 2019). Our findings build on these results by demonstrating that increased RSs between PSN, FPN, and SAN regions during learning are associated with future learning success.

The neural responses of the four networks, especially the PSN, FPN, and SAN, and their reconfiguration during training subserve individual differences in learning profiles. This study represents a pioneering exploration of the neural fingerprints underlying individual learning profiles across components of language and training phases. Future research should involve larger samples and a priori hypotheses concerning the precise contribution patterns of these networks and how they evolve together to predict language learning profiles holistically.

Footnotes

  • P.C.M.W. is a founder of a company in Hong Kong supported by a Hong Kong SAR government startup scheme for universities. The remaining authors declare no competing financial interests.

  • This work was supported by grants from the Research Grants Council of Hong Kong 14619518 to G.F. and 34000118 to P.C.M.W.; and the Direct Grant for Research from the Chinese University of Hong Kong 4051137 to G.F.

  • Correspondence should be addressed to Gangyi Feng at g.feng@cuhk.edu.hk or Patrick C. M. Wong at p.wong@cuhk.edu.hk

SfN exclusive license.

References

  1. Abutalebi J, Brambati SM, Annoni JM, Moro A, Cappa SF, Perani D (2007) The neural cost of the auditory perception of language switches: an event-related functional magnetic resonance imaging study in bilinguals. J Neurosci 27:13762–13769.
  2. Abutalebi J, Green DW (2008) Control mechanisms in bilingual language production: neural evidence from language switching studies. Lang Cogn Process 23:557–582.
  3. Anticevic A, Cole MW, Murray JD, Corlett PR, Wang XJ, Krystal JH (2012) The role of default network deactivation in cognition and disease. Trends Cogn Sci 16:584–592.
  4. Barbeau EB, Chai XJ, Chen JK, Soles J, Berken J, Baum S, Watkins KE, Klein D (2017) The role of the left inferior parietal lobule in second language learning: an intensive language training fMRI study. Neuropsychologia 98:169–176.
  5. Birdsong D (2018) Plasticity, variability and age in second language acquisition and bilingualism. Front Psychol 9:81.
  6. Braver TS, Barch DM (2006) Extracting core components of cognitive control. Trends Cogn Sci 10:529–532.
  7. Breitenstein C, Jansen A, Deppe M, Foerster AF, Sommer J, Wolbers T, Knecht S (2005) Hippocampus activity differentiates good from poor learners of a novel lexicon. Neuroimage 25:958–968.
  8. Carroll JB, Sapon SM (1959) Modern language aptitude test. San Antonio, TX: Psychological Corporation.
  9. Chadick JZ, Gazzaley A (2011) Differential coupling of visual cortex with default or frontal-parietal network based on goals. Nat Neurosci 14:830–832.
  10. Chandrasekaran B, Yi HG, Maddox WT (2014a) Dual-learning systems during speech category learning. Psychon Bull Rev 21:488–495.
  11. Chandrasekaran B, Koslov SR, Maddox WT (2014b) Toward a dual-learning systems model of speech category learning. Front Psychol 5:825.
  12. Chang CC, Lin CJ (2011) LIBSVM: a library for support vector machines. ACM Trans Intell Syst Technol 2:1–27.
  13. Cocchi L, Zalesky A, Fornito A, Mattingley JB (2013) Dynamic cooperation and competition between brain systems during cognitive control. Trends Cogn Sci 17:493–501.
  14. Cole MW, Reynolds JR, Power JD, Repovs G, Anticevic A, Braver TS (2013) Multi-task connectivity reveals flexible hubs for adaptive task control. Nat Neurosci 16:1348–1355.
  15. Costa RM, Cohen D, Nicolelis MA (2004) Differential corticostriatal plasticity during fast and slow motor skill learning in mice. Curr Biol 14:1124–1134.
  16. Daneman M, Carpenter PA (1980) Individual differences in working memory and reading. J Verbal Learning Verbal Behav 19:450–466.
  17. Daselaar SM, Prince SE, Cabeza R (2004) When less means more: deactivations during encoding that predict subsequent memory. Neuroimage 23:921–927.
  18. Davis MH, Gaskell MG (2009) A complementary systems account of word learning: neural and behavioural evidence. Philos Trans R Soc Lond B Biol Sci 364:3773–3800.
  19. Deng Z, Chandrasekaran B, Wang S, Wong PC (2016) Resting-state low-frequency fluctuations reflect individual differences in spoken language learning. Cortex 76:63–78.
  20. Ettlinger M, Bradlow AR, Wong PCM (2014) Variability in the learning of complex morphophonology. Appl Psycholinguist 35:807–831.
  21. Fedorenko E, Behr MK, Kanwisher N (2011) Functional specificity for high-level linguistic processing in the human brain. Proc Natl Acad Sci USA 108:16428–16433.
  22. Feng G, Chen HC, Zhu Z, He Y, Wang S (2015) Dynamic brain architectures in local brain activity and functional network efficiency associate with efficient reading in bilinguals. Neuroimage 119:103–118.
  23. Feng G, Chen Q, Zhu Z, Wang S (2016) Separate brain circuits support integrative and semantic priming in the human language system. Cereb Cortex 26:3169–3182.
  24. Feng G, Gan Z, Wang S, Wong PC, Chandrasekaran B (2018a) Task-general and acoustic-invariant neural representation of speech categories in the human brain. Cereb Cortex 28:3241–3254.
  25. Feng G, Ingvalson EM, Grieco-Calub TM, Roberts MY, Ryan ME, Birmingham P, Burrowes D, Young NM, Wong PC (2018b) Neural preservation underlies speech improvement from auditory deprivation in young cochlear implant recipients. Proc Natl Acad Sci USA 115:E1022–E1031.
  26. Feng G, Yi HG, Chandrasekaran B (2019) The role of the human auditory corticostriatal network in speech learning. Cereb Cortex 29:4077–4089.
  27. Feng G, Li Y, Hsu SM, Wong PC, Chou TL, Chandrasekaran B (2021a) Emerging native-similar neural representations underlie non-native speech category learning success. Neurobiol Lang 2:280–307.
  28. Feng G, Gan Z, Llanos F, Meng D, Wang S, Wong PC, Chandrasekaran B (2021b) A distributed dynamic brain network mediates linguistic tone representation and categorization. Neuroimage 224:117410.
  29. Finn AS, Hudson Kam CL, Ettlinger M, Vytlacil J, D'Esposito M (2013) Learning language with the wrong neural scaffolding: the cost of neural commitment to sounds. Front Syst Neurosci 7:85.
  30. Foerde K, Knowlton BJ, Poldrack RA (2006) Modulation of competing memory systems by distraction. Proc Natl Acad Sci USA 103:11778–11783.
  31. Friederici AD (2011) The brain basis of language processing: from structure to function. Physiol Rev 91:1357–1392.
  32. Greicius MD, Krasnow B, Reiss AL, Menon V (2003) Functional connectivity in the resting brain: a network analysis of the default mode hypothesis. Proc Natl Acad Sci USA 100:253–258.
  33. Hauser MF, Hofmann J, Opitz B (2012) Rule and similarity in grammar: their interplay and individual differences in the brain. Neuroimage 60:2019–2026.
  34. Herholz SC (2013) Individual predisposition for learning and neuroplasticity. J Neurosci 33:15321–15323.
  35. Herholz SC, Coffey EB, Pantev C, Zatorre RJ (2016) Dissociation of neural networks for predisposition and for training-related plasticity in auditory-motor learning. Cereb Cortex 26:3125–3134.
  36. Kepinska O, de Rover M, Caspers J, Schiller NO (2017a) Whole-brain functional connectivity during acquisition of novel grammar: distinct functional networks depend on language learning abilities. Behav Brain Res 320:333–346.
  37. Kepinska O, de Rover M, Caspers J, Schiller NO (2017b) On neural correlates of individual differences in novel grammar learning: an fMRI study. Neuropsychologia 98:156–168.
  38. Kepinska O, de Rover M, Caspers J, Schiller NO (2018) Connectivity of the hippocampus and Broca's area during acquisition of a novel grammar. Neuroimage 165:1–10.
  39. Kidd E, Donnelly S, Christiansen MH (2018) Individual differences in language acquisition and processing. Trends Cogn Sci 22:154–169.
  40. Kleim JA, Hogg TM, VandenBerg PM, Cooper NR, Bruneau R, Remple M (2004) Cortical synaptogenesis and motor map reorganization occur during late, but not early, phase of motor skill learning. J Neurosci 24:628–633.
  41. Kriegeskorte N, Goebel R, Bandettini P (2006) Information-based functional brain mapping. Proc Natl Acad Sci USA 103:3863–3868.
  42. Meara P (2005) LLAMA language aptitude tests: the manual. Swansea: Lognostics.
  43. Misaki M, Kim Y, Bandettini PA, Kriegeskorte N (2010) Comparison of multivariate classifiers and response normalizations for pattern-information fMRI. Neuroimage 53:103–118.
  44. Mohebi A, Pettibone JR, Hamid AA, Wong JM, Vinson LT, Patriarchi T, Tian L, Kennedy RT, Berke JD (2019) Dissociable dopamine dynamics for learning and motivation. Nature 570:65–70.
  45. Morgan-Short K (2007) A neurolinguistic investigation of late-learned second language knowledge: the effects of explicit and implicit conditions. Washington, DC: Georgetown University.
  46. Morgan-Short K (2020) Insights into the neural mechanisms of becoming bilingual: a brief synthesis of second language research with artificial linguistic systems. Bilingualism 23:87–91.
  47. Morgan-Short K, Steinhauer K, Sanz C, Ullman MT (2012) Explicit and implicit second language training differentially affect the achievement of native-like brain activation patterns. J Cogn Neurosci 24:933–947.
  48. Morgan-Short K, Deng ZZ, Brill-Schuetz KA, Faretta-Stutenberg M, Wong PC, Wong FC (2015) A view of the neural representation of second language syntax through artificial language learning under implicit contexts of exposure. Stud Second Lang Acquis 37:383–419.
  49. Mumford JA, Turner BO, Ashby FG, Poldrack RA (2012) Deconvolving BOLD activation in event-related designs for multivoxel pattern classification analyses. Neuroimage 59:2636–2643.
  50. Musso M, Moro A, Glauche V, Rijntjes M, Reichenbach J, Buchel C, Weiller C (2003) Broca's area and the language instinct. Nat Neurosci 6:774–781.
  51. Nevat M, Ullman MT, Eviatar Z, Bitan T (2017) The neural bases of the learning and generalization of morphological inflection. Neuropsychologia 98:139–155.
  52. Newman-Norlund RD, Frey SH, Petitto LA, Grafton ST (2006) Anatomical substrates of visual and auditory miniature second-language learning. J Cogn Neurosci 18:1984–1997.
  53. Oosterhof NN, Connolly AC, Haxby JV (2016) CoSMoMVPA: multi-modal multivariate pattern analysis of neuroimaging data in Matlab/GNU Octave. Front Neuroinform 10:27.
  54. Opitz B, Friederici AD (2003) Interactions of the hippocampal system and the prefrontal cortex in learning language-like rules. Neuroimage 19:1730–1737.
  55. Opitz B, Friederici AD (2004) Brain correlates of language learning: the neuronal dissociation of rule-based versus similarity-based learning. J Neurosci 24:8436–8440.
  56. Opitz B, Friederici AD (2007) Neural basis of processing sequential and hierarchical syntactic structures. Hum Brain Mapp 28:585–592.
  57. Peach RK, Wong PC (2004) Integrating the message level into treatment for agrammatism using story retelling. Aphasiology 18:429–444.
  58. Price CJ (2010) The anatomy of language: a review of 100 fMRI studies published in 2009. Ann NY Acad Sci 1191:62–88.
  59. Price CJ (2012) A review and synthesis of the first 20 years of PET and fMRI studies of heard speech, spoken language and reading. Neuroimage 62:816–847.
  60. Qi Z, Han M, Wang Y, de Los Angeles C, Liu Q, Garel K, Chen ES, Whitfield-Gabrieli S, Gabrieli JD, Perrachione TK (2019) Speech processing and plasticity in the right hemisphere predict variation in adult foreign language learning. Neuroimage 192:76–87.
  61. Saito K (2017) Effects of sound, vocabulary, and grammar learning aptitude on adult second language speech attainment in foreign language classrooms. Lang Learn 67:665–693.
  62. Sakai KL, Miura K, Narafu N, Muraishi Y (2004) Correlated functional changes of the prefrontal cortex in twins induced by classroom education of second language. Cereb Cortex 14:1233–1239.
  63. Saur D, Kreher BW, Schnell S, Kummerer D, Kellmeyer P, Vry MS, Umarova R, Musso M, Glauche V, Abel S, Huber W, Rijntjes M, Hennig J, Weiller C (2008) Ventral and dorsal pathways for language. Proc Natl Acad Sci USA 105:18035–18040.
  64. Skehan P (2016) Foreign language aptitude, acquisitional sequences, and psycholinguistic processes. In: Cognitive individual differences in second language processing and acquisition (Granena G, Jackson DO, Yilmaz Y, eds), pp 17–40. John Benjamins.
  65. Spreng RN (2012) The fallacy of a "task-negative" network. Front Psychol 3:145.
  66. Stamps JA (2016) Individual differences in behavioural plasticities. Biol Rev Camb Philos Soc 91:534–567.
  67. Tagarelli KM, Shattuck KF, Turkeltaub PE, Ullman MT (2019) Language learning in the adult brain: a neuroanatomical meta-analysis of lexical and grammatical learning. Neuroimage 193:178–200.
  68. Tettamanti M, Alkadhi H, Moro A, Perani D, Kollias S, Weniger D (2002) Neural correlates for the acquisition of natural language syntax. Neuroimage 17:700–709.
  69. Trahan DE, Larrabee GJ (1988) Continuous Visual Memory Test: professional manual. Odessa, FL: Psychological Assessment Resources.
  70. Ullman MT (2001) A neurocognitive perspective on language: the declarative/procedural model. Nat Rev Neurosci 2:717–726.
  71. Ullman MT (2004) Contributions of memory circuits to language: the declarative/procedural model. Cognition 92:231–270.
  72. Ullman MT (2016) The declarative/procedural model: a neurobiological model of language learning, knowledge, and use. In: Neurobiology of language (Hickok G, Small SL, eds), pp 953–968. San Diego: Academic Press.
  73. Waskom ML, Kumaran D, Gordon AM, Rissman J, Wagner AD (2014) Frontoparietal representations of task context support the flexible control of goal-directed cognition. J Neurosci 34:10743–10755.
  74. Weber K, Christiansen MH, Petersson KM, Indefrey P, Hagoort P (2016) fMRI syntactic and lexical repetition effects reveal the initial stages of learning a new language. J Neurosci 36:6872–6880.
  75. Wechsler D (1997) Wechsler Memory Scale (WMS-III). San Antonio: Psychological Corporation.
  76. Wong FC, Chandrasekaran B, Garibaldi K, Wong PC (2011) White matter anisotropy in the ventral language pathway predicts sound-to-word learning success. J Neurosci 31:8780–8785.
  77. Wong PC, Perrachione TK, Parrish TB (2007) Neural characteristics of successful and less successful speech and word learning in adults. Hum Brain Mapp 28:995–1006.
  78. Xue G (2018) The neural representations underlying human episodic memory. Trends Cogn Sci 22:544–561.
  79. Yang J, Gates KM, Molenaar P, Li P (2015) Neural changes underlying successful second language word learning: an fMRI study. J Neurolinguistics 33:29–49.
  80. Yeo BT, Krienen FM, Sepulcre J, Sabuncu MR, Lashkari D, Hollinshead M, Roffman JL, Smoller JW, Zollei L, Polimeni JR, Fischl B, Liu H, Buckner RL (2011) The organization of the human cerebral cortex estimated by intrinsic functional connectivity. J Neurophysiol 106:1125–1165.
  81. Yin HH, Mulcare SP, Hilario MR, Clouse E, Holloway T, Davis MI, Hansson AC, Lovinger DM, Costa RM (2009) Dynamic reorganization of striatal circuits during the acquisition and consolidation of a skill. Nat Neurosci 12:333–341.
  82. Zatorre RJ (2013) Predispositions and plasticity in music and speech learning: neural correlates and implications. Science 342:585–589.
Keywords

  • individual differences
  • language learning
  • neural fingerprint
  • neural network dynamics
  • predictive modeling
