Journal of Neuroscience

Research Articles, Behavioral/Cognitive

Emergence of Categorical Representations in Parietal and Ventromedial Prefrontal Cortex across Extended Training

Zhiya Liu, Yitao Zhang, Chudan Wen, Jingzhao Yuan, Jingxian Zhang and Carol A. Seger
Journal of Neuroscience 26 February 2025, 45 (9) e1315242024; https://doi.org/10.1523/JNEUROSCI.1315-24.2024
Author Affiliations

Zhiya Liu, Yitao Zhang, Chudan Wen, Jingzhao Yuan, Jingxian Zhang, and Carol A. Seger:
1. Center for Studies of Psychological Application, South China Normal University, Guangzhou 510631, China
2. Guangdong Provincial Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou 510631, China
3. Key Laboratory of Brain, Cognition, and Education Sciences of the Ministry of Education, South China Normal University, Guangzhou 510631, China
4. School of Psychology, South China Normal University, Guangzhou 510631, China

Carol A. Seger (additional affiliation):
5. Molecular, Cellular and Integrative Neurosciences Program, Department of Psychology, Colorado State University, Fort Collins, Colorado 80523

Abstract

How do the neural representations underlying category learning change as skill develops? We examined perceptual category learning using a prototype learning task known to recruit a corticostriatal system including the posterior striatum, motor cortex, visual cortex, and intraparietal sulcus (IPS). Male and female human participants practiced categorizing stimuli as category members or nonmembers (A vs not-A) across 3 d, with fMRI data collected at the beginning and end. Univariate analyses found that corticostriatal activity in regions associated with habitual instrumental learning was recruited across both sessions, but activity in regions associated with goal-directed instrumental learning decreased from Day 1 to Day 3. Multivoxel pattern analysis (MVPA) indicated that after training, the trained category could be more easily decoded from the IPS when compared with a novel category. Representational similarity analysis (RSA) showed development of category representations in the IPS and motor cortex. In addition, RSA revealed evidence for category-related representations including prototype representation in the ventromedial prefrontal cortex which may reflect parallel development of schematic memory for the category structure. Overall, the results converge to show how performance of category decisions and representations of the category structure emerge after extensive training across the corticostriatal system underlying perceptual category learning.

  • automaticity
  • category learning
  • caudate
  • habit
  • putamen
  • skill

Significance Statement

We compared activity during initial category learning with activity after an extended training session and used multivariate methods to characterize representational changes. We found that representations changed in the intraparietal sulcus (IPS) and ventromedial prefrontal cortex (VMPFC). The IPS became sensitive to category membership and distinguished between the trained category and a novel category. The VMPFC showed sensitivity to the prototype as well as to other category-related features. In addition, the motor cortex coded for category membership decisions and the associated motor responses. Overall, our results go beyond previous research that established which brain regions are recruited during the initial phases of perceptual category learning, characterizing how category representations emerge as participants become highly skilled.

Introduction

Category learning is a fundamental ability that allows organisms to learn about their environment in order to behave appropriately (Seger and Miller, 2010). Although learning categories is a lifelong process and much of our real-world categorization is based on extensive experience (e.g., categorizing an animal as a “cat” or “dog”), most laboratory studies have only examined acquisition of novel categories over relatively short periods of time and have not characterized how category representations emerge with extended training.

Perceptual category learning recruits a corticostriatal system including areas of the posterior dorsal striatum (body and tail of the caudate and posterior putamen), intraparietal sulcus (IPS), visual cortical regions, and premotor and motor cortex. This system learns to map perceptual representations of stimuli to category labels and associated responses via dopamine-mediated reinforcement learning (Ashby et al., 1998; Seger, 2008; Cantwell et al., 2015; Ashby and Rosedahl, 2017). The corticostriatal system supporting perceptual category learning overlaps with the system recruited in instrumental learning (Seger, 2018). Instrumental learning is typically divided into two types: goal-directed learning and habit learning (O’Doherty et al., 2017; Balleine and Dezfouli, 2019). Behavioral research has found that people often use a hypothesis testing strategy reliant on goal-directed cognitive functions during early learning but then shift to a procedural strategy as training continues (Ashby and Maddox, 2011). This is reflected in neuroimaging studies by early recruitment of neural areas associated with goal-directed instrumental learning (frontal cortex and head of the caudate) that decreases as learning progresses (Seger and Cincotta, 2005; Seger et al., 2010). Waldschmidt and Ashby (2011) examined performance on a perceptual category learning task (the information integration task) for 20 sessions totaling 10,000 trials. They found that activity in the posterior striatum (putamen) and motor cortical regions continued across days, but that by the final session only cortical activity remained.

Multivariate and model-based univariate analyses have begun to characterize how knowledge is represented in the perceptual category learning network. Regions of the lateral parietal cortex centering on the IPS have been shown to be sensitive to both the decision threshold (category decision boundary) and the similarity of the current stimulus to the trained structure of the category (Mack et al., 2013; Seger et al., 2015; Braunlich et al., 2017; Bowman et al., 2020; Blank and Bayer, 2022). These results are broadly consistent with this region representing both the internal category structure and how that structure maps onto category membership and responses. The motor cortex represents category information beyond simple response-related activity, including the amount of information supporting a particular category-response mapping (Gluth et al., 2012; Wheeler et al., 2014; Braunlich and Seger, 2016).

In addition, category representations have been reported within declarative memory systems including the hippocampus and ventromedial prefrontal cortex (VMPFC). This system learns relations between features that together form a knowledge structure called a schema (Gilboa and Marlatte, 2017). Schematic knowledge in the VMPFC has been shown to reflect the prototype structure in a discrete feature family resemblance prototype learning task (Bowman and Zeithamova, 2018; Bowman et al., 2020).

In the present study, participants learned to categorize stimuli as category members or nonmembers across 3 d of training, with fMRI scans on the first and last days. We predicted that univariate analyses would show recruitment of habit-related corticostriatal regions (posterior basal ganglia, IPS, and motor cortex) across both days, but that activity in goal-directed areas would decrease, consistent with the shift from goal-directed to habitual control in previous research. More importantly, we predicted that multivoxel pattern analysis (MVPA) would show an increased ability on Day 3 to decode categorization decisions within the trained category and to decode differences between the trained category and a new category. Finally, we predicted that representational similarity analysis (RSA) would show emergence of category representations on Day 3 that were not present on Day 1.

Materials and Methods

Participants

Twenty-six participants were recruited from the student population at South China Normal University. One participant was excluded due to excessive head motion (exceeding the inclusion criterion of no more than 2 mm displacement along any of the x, y, or z axes and no more than 2° rotation in any of the three canonical planes), resulting in 25 participants in the data analysis (11 males, 14 females; average age, 21.8 ± 2.16 years). All participants gave informed consent before the experiment and were paid for their participation afterward. All were right-handed and met the criteria for MR scanning (e.g., no metal implants and no history of claustrophobia). Participants had normal or corrected-to-normal vision and reported no impairment in color perception. This study was approved by the SCNU Institutional Review Board.

Stimuli

The polygon version (Homa et al., 1981; Smith et al., 2005) of the classic dot pattern prototype task (Posner and Keele, 1968) was used in this study. Prototypes were formed from nine points, or dots, pseudorandomly assigned to locations within a 23 × 23 grid. As illustrated in Figure 1A, to increase visual salience, the nine dots were connected with lines, and the resultant shape was then filled with a solid blue color (Braunlich et al., 2017).

Figure 1.

A, Sample stimuli from the categories learned in the experiment. Each category was formed by first generating a prototype image and then generating individual exemplars of the category by moving the locations of the vertexes. Seven different levels of distortion were used (measured as bits per dot), with randomly generated images serving as the most extreme level of distortion (see labels at the bottom of the figure). For each category, stimuli were considered to be category members if they had a distortion level of 12 or less; those with distortion of 18 or greater were designated to be outside the category (e.g., not-A). This division between-category members and nonmembers is referred to as the decision bound and is depicted as the dotted red line. Unique category prototypes and stimuli were generated for each participant. B, Outline of the overall study procedure across the three consecutive days of training. C, Individual trial events and timing for the different tasks completed on different days of training. During the scanning sessions (Days 1 and 3, middle panels), participants alternated between blocks of Category A stimuli and the alternative category (B on Day 1, C on Day 3) and were cued at the beginning of each block as to which category was being learned. Stimuli appeared on the screen for 3 s during which participants indicated their response using the button boxes held in each hand and then received feedback as to whether they were correct or not. Between each trial was a jittered interval ranging from 3 to 6 s. During the training session (Day 2, left panel), the jittered intertrial interval was removed, and the stimulus presentation and feedback presentation times were adjusted slightly. Following the Day 1 and Day 3 training sessions in the MRI scanner, participants completed two tests without feedback, a single task test and a dual task test. These are illustrated in the right panel.

After defining the category prototypes, stimuli were formed as distortions of the prototype by changing the average Euclidean distance from the prototype at each of the nine points. The degree of distortion was controlled by setting the range of average Euclidean distances for the movements according to a well-established procedure (Posner and Keele, 1968), which allowed us to create a large number of unique exemplars. Euclidean distance reflects the size of the cloud of possible new dot locations surrounding the initial location; these values can also be expressed as an average measure of bits per dot. Distortion levels were implemented by perturbing the locations of the dots after first identifying 12 “rings” surrounding each dot. Each ring comprised the cells surrounding the previous ring; the dot itself comprised a single cell, the adjacent ring comprised 8 cells, and the outermost ring comprised 88 cells. Although a dot had equal probability of moving to any cell within a given ring, the probability of a dot moving to a ring decreased with distance from its original position. Using this framework, the uncertainty of the dot positions of a particular stimulus, s, can be defined according to its entropy, H, as follows:

H(s) = −Σ_{k=1}^{K} P_k · log₂(P_k),    (1)

where K is the number of cells within which a point could be located and P_k is the probability that a point is within a particular cell, k. For dot prototype stimuli, the psychological distance, d_ψ, between stimuli has been shown to follow a logarithmic function of the average Euclidean distance moved by each dot (Posner and Keele, 1968):

d_ψ = log(1 + d(prototype, exemplar)),    (2)

where d(prototype, exemplar) is the average Euclidean distance moved per dot between an exemplar and the prototype.
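The two equations above can be sketched in Python. The ring probabilities and sizes passed to the entropy function are illustrative placeholders, not the exact values used in the original stimulus-generation procedure:

```python
import numpy as np

def stimulus_entropy(ring_probs, ring_sizes):
    """Entropy H(s) of a dot's possible positions (Eq. 1).

    ring_probs: probability mass assigned to each ring (sums to 1).
    ring_sizes: number of grid cells in each ring.
    Within a ring, every cell is equally likely, so each cell's
    probability is the ring's mass divided by its cell count.
    """
    h = 0.0
    for p_ring, n_cells in zip(ring_probs, ring_sizes):
        if p_ring == 0:
            continue
        p_cell = p_ring / n_cells
        h -= n_cells * p_cell * np.log2(p_cell)   # sum over the ring's cells
    return h

def psychological_distance(prototype, exemplar):
    """Psychological distance d_psi (Eq. 2).

    prototype, exemplar: (9, 2) arrays of dot coordinates; d is the
    average Euclidean distance moved per dot.
    """
    d = np.linalg.norm(prototype - exemplar, axis=1).mean()
    return np.log(1.0 + d)
```

For example, a dot spread uniformly over a single 8-cell ring has entropy log₂(8) = 3 bits, and an undistorted exemplar has zero psychological distance from its prototype.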

Seven different levels of distortion were used: 3, 6, 9, 12, 18, 21, and 24 bits per dot. In addition, random exemplars were generated without regard to the template to serve as the most extreme level of distortion; in these stimuli, the nine points were located randomly in the 23 × 23 grid. The decision boundary separating category members (e.g., A) and nonmembers (e.g., not-A) was set between distortion levels 12 and 18. We generated three prototypes for each participant to serve as Categories A, B, and C. Separate category prototypes and exemplars were generated randomly for each participant, and stimulus sets were not repeated across participants. Participants learned A and B on the first day during the first fMRI scan and continued to train A out of the scanner on the second day until they were proficient. On Day 3 during the second fMRI scan, they continued to categorize stimuli from Category A and in addition learned a new category, C.

Procedure

The experiment was completed over 3 consecutive days, illustrated in Figure 1B. On the first day, participants learned Categories A and B in the MRI scanner. Each scanning session included four runs of the category learning task. Within each run, participants completed four blocks total, two for Category A and two for Category B. The order of A and B blocks was randomized within each run. Each block included 3 trials at each of the eight distortion levels, totaling 24 trials per block. On each trial, participants categorized a single individual stimulus as being a member or nonmember of the specified category and received feedback as to whether they were correct or not (Fig. 1C). During fMRI scanning, responses were made using multibutton response boxes held in each hand. For Category A, participants pressed the left index finger button to indicate that the stimulus belonged to the category (A) and the right index finger button to indicate that the stimulus did not belong to the category (not-A). For Category B, participants pressed the left thumb button to indicate that the stimulus belonged to the category and the right thumb button to indicate that the stimulus did not belong to the category (not-B).

After the scanning, two categorization tests without feedback were carried out for Category A stimuli: a single task test and a dual task test. These tests followed the same procedure as the learning tasks, except that no feedback was given. Participants completed one block (24 trials; three stimuli at each of the eight distortion levels) of each of the two no-feedback tasks, the single task first and the dual task second. The primary rationale for the dual task was to test whether participants were using a similarity-based strategy and could perform the task without significant load on executive functions. Previous research has found that similarity-based tasks like the dot pattern categorization task show little dual task interference (Waldron and Ashby, 2001; Zeithamova and Maddox, 2007). In addition, dual task methods have been used with rule-based tasks as a criterion for considering performance automatic: in rule-based tasks, performance is typically impaired by dual tasks early in training but not after automaticity is achieved (Ashby et al., 2010). We therefore additionally predicted that even if, contrary to our predictions, there was dual task interference early in training (Day 1), this interference would be reduced or eliminated after extensive training (Day 3). The dual task was the numerical Stroop task (Hélie et al., 2010). On each trial, participants were shown two digits of different magnitudes and different physical sizes (e.g., a large 4 and a small 8) on opposite sides of the screen before categorizing the stimulus; after categorizing, they answered a probe question about the magnitude or size of the digits. For the magnitude question, cued by the word “magnitude,” they pressed the button corresponding to the side on which the digit with the larger magnitude had been presented. For the size question, cued by the word “size,” they pressed the button corresponding to the side on which the physically larger digit had been presented. This task requires participants to maintain the physical size and magnitude of the digits in working memory while performing the categorization task, thus imposing a cognitive load. The timing of each trial was as follows: the two digits were presented for 2 s; then the category stimulus for 2 s, during which participants made their category response; then a blank screen for 2 s; then the “size” or “magnitude” cue for 2 s, during which participants made their number judgment response; and finally feedback on the number judgment (correct/incorrect) for 2 s.

The second day consisted of behavioral training for Category A. Participants completed 30 blocks of training with 24 trials in each block, totaling 720 trials. The 24 trials in each block were equally distributed across the eight distortion levels, so that there were three stimuli from each level within the block. Stimuli within blocks were randomly ordered. Participants responded to the stimuli using the computer keyboard rather than handheld response boxes using the same fingers as in the scanner session (left index finger on the F key to indicate A; right index finger on the J key to indicate not-A). Participants could choose to take a short break between blocks if desired or continue to the next block immediately.

The third day was an fMRI session. The procedure of the experiment was the same as Day 1, except that a new category, C, was substituted for Category B. Participants continued to categorize Category A. After the scan, participants completed the same behavioral tests outside the scanner as after the first day of scanning, consisting of the single task test and the dual task test.

MRI acquisition

Images were obtained with a 3.0 Tesla MRI scanner (Siemens Prisma) at the Brain Imaging Center at South China Normal University. The scanner was equipped with a 24-channel head coil. Structural images were collected using a T1-weighted magnetization-prepared rapid gradient echo sequence [256 × 256 matrix; field of view (FOV), 256 mm; 208 1-mm-thick slices]. Functional images were reconstructed from 42 axial oblique slices obtained using a T2*-weighted two-dimensional echoplanar sequence (repetition time, 1,500 ms; echo time, 30 ms; flip angle, 90°; FOV, 192 mm; 64 × 64 matrix; 3-mm-thick slices).

Image preprocessing

Images were preprocessed using BrainVoyager QX 2.6 (www.brainvoyager.com). The functional data were first preprocessed through a pipeline consisting of three-dimensional motion correction, slice scan time correction, and temporal data smoothing with a high-pass filter of three cycles in the time course and linear trend removal. The head motion parameters for the 25 participants included in the analyses were within 1 mm shift and 1° of rotation in the standard coordinates, substantially lower than the threshold for exclusion set before the study of >2 mm shift or 2° of rotation. Each participant's high-resolution anatomical image was normalized to the Talairach brain template. BrainVoyager performs normalization in two steps: an initial rigid body translation into the anterior commissure–posterior commissure plane followed by an elastic deformation into the standard space performed on 20 individual subvolumes. The resulting set of transformations was applied to the participant's functional image volumes to form volume time course representations to be used in subsequent statistical analyses. Finally, the functional data were spatially smoothed with a Gaussian kernel, full width at half maximum of 6.0 mm.

Univariate analyses

BrainVoyager QX 2.6 was used for contrasts between conditions and for parametric analyses. For each condition, a model of the hemodynamic response was formed by convolving a prototypical hemodynamic response function with the time course of each trial within the condition. Each trial was modeled as a short epoch beginning at the onset of stimulus presentation and extending for two TRs (3 s). Each distortion level in each category was modeled separately, so that each category had eight regressors (3, 6, 9, 12, 18, 21, 24, and random; refer to Fig. 1A). Contrasts were then defined by indicating which regressors were included in the conditions being compared. When categories were compared, all eight regressors were included: for example, the contrast A > C was defined as (all eight A regressors) > (all eight C regressors). We did not explicitly model a separate baseline condition, because explicit modeling of the baseline can lead to overparameterization and resultant inaccurate parameter estimates (Pernet, 2014). Instead, we used an implicit baseline, which can be thought of as including all time points in the scan not explicitly modeled in any condition. Contrasts were compared using the general linear model (GLM) with separate participant predictors and participants treated as random effects.
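As a sketch of this modeling step: each trial is a 2-TR (3 s) boxcar convolved with a canonical hemodynamic response. The double-gamma parameters below are common SPM-style defaults used for illustration, not BrainVoyager's exact function:

```python
import numpy as np
from scipy.stats import gamma

TR = 1.5  # repetition time in seconds

def canonical_hrf(tr, duration=32.0):
    """Double-gamma HRF sampled at the TR (illustrative SPM-style parameters)."""
    t = np.arange(0.0, duration, tr)
    h = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0   # early peak minus late undershoot
    return h / h.max()

def trial_regressor(onsets_s, n_scans, tr=TR, epoch_trs=2):
    """Predicted BOLD time course for one condition: a 2-TR boxcar at each
    trial onset, convolved with the canonical HRF and truncated to the run."""
    box = np.zeros(n_scans)
    for onset in onsets_s:
        i = int(round(onset / tr))
        box[i:i + epoch_trs] = 1.0
    return np.convolve(box, canonical_hrf(tr))[:n_scans]
```

One such regressor would be built per distortion level per category, and the eight regressors for a category summed into the condition's contrast weights.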

To threshold the univariate analyses, we used two approaches. Our standard approach was cluster-size thresholding, implemented using the cluster-level simulator plugin in BrainVoyager (Forman et al., 1995; Goebel et al., 2006). For comparisons with the implicit baseline, this threshold resulted in very large areas of activity that were difficult to visualize; for those contrasts, we used a strict Bonferroni correction in order to better convey the primary areas of activation.

MVPA analyses

We performed multivoxel pattern analysis (MVPA) to identify areas from which it was possible to decode what type of trial was being performed. Specifically, we used MVPA to answer questions that are not well-suited for univariate analysis. First, what neural regions develop information across training that can be used to distinguish between the trained category (A) and the untrained category (C)? Second, what neural regions could be used to distinguish between in-category decisions (A) and out-of-category decisions (not-A)?

MVPA was performed using SPM12 and the PyMVPA toolkit (www.pymvpa.org). We first preprocessed the data using SPM12 in order to extract head motion variables for use in the analysis. Preprocessing consisted of (1) slice scan time correction using the middle slice as reference; (2) head motion correction, aligning each image with the first image and producing six head motion parameters; (3) coregistration of the functional images with the structural image; (4) segmentation of the structural image; and (5) normalization of the structural image to the MNI template. We did not spatially smooth these data. We then estimated an independent GLM for each trial using the least squares separate approach, with the individual trial as the variable of interest and the six head motion parameters as predictors of no interest; this approach has been shown to improve the performance and reliability of multivariate analysis (Mumford et al., 2012). The resulting t-statistic maps were used as input for the multivariate analyses.

We used the searchlight method with a radius of 9 mm combined with a linear support vector machine algorithm (Kriegeskorte et al., 2006). We used a leave-one-run-out cross-validation method performed four times, training the classifier with the data of three runs each time, and testing the accuracy of the classifier with the data of the remaining run. For the group analysis, we performed paired sample t tests comparing the resulting Day 1 and Day 3 brain maps to identify regions with significantly different decoding accuracy across days, thresholded at p < 0.05.
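The cross-validation scheme for a single searchlight can be sketched with scikit-learn standing in for PyMVPA. The data here are synthetic per-trial patterns; the run/trial counts and the injected signal are illustrative assumptions:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)

# Synthetic per-trial t-maps for one searchlight: 4 runs x 24 trials, 257 voxels
n_runs, n_trials, n_vox = 4, 24, 257
X = rng.normal(size=(n_runs * n_trials, n_vox))
y = np.tile(np.repeat([0, 1], n_trials // 2), n_runs)   # e.g., A vs. not-A labels
X[y == 1, :10] += 1.0        # inject weak signal into a few voxels
runs = np.repeat(np.arange(n_runs), n_trials)

# Leave-one-run-out: train the linear SVM on three runs, test on the held-out run
accuracies = []
for train, test in LeaveOneGroupOut().split(X, y, groups=runs):
    clf = LinearSVC(C=1.0, max_iter=5000).fit(X[train], y[train])
    accuracies.append(clf.score(X[test], y[test]))
mean_accuracy = float(np.mean(accuracies))
```

In a full searchlight analysis this loop runs once per sphere center, producing a whole-brain map of decoding accuracies per participant, which then enters the group-level paired t tests.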

We performed two different MVPA analyses. One was performed on Category A trials, searching for voxels from which it was possible to decode whether the stimulus on that trial was a category member (A) or a nonmember (not-A). We included all trials regardless of whether the participant was correct or not. We performed this analysis on Day 1 and Day 3 separately and then compared the results across days. A second analysis compared the trained category, A, with the comparison Categories B (additional category learned on Day 1) and C (additional category learned on Day 3). For these analyses, we included only within-category trials (A, B, and C) and excluded out-of-category trials (not-A, not-B, and not-C). We looked for voxels from which it was possible to decode whether the stimulus on that trial was from Category A or B on Day 1 and A or C on Day 3 and finally directly compared the two analyses to find voxels from which A versus C could be decoded on Day 3 (skilled performance vs novel category) but A versus B could not be decoded on Day 1 (two equally novel categories).

RSA

RSA was performed using BrainVoyager 22.4 via the following steps. First, data were preprocessed and regressors were defined as described above in the image preprocessing and univariate analysis sections. We then selected regions of interest (ROIs) for the RSA analysis; the choice and definition of ROIs are discussed in more detail below in the region of interest selection section. Next, we performed first- and second-level RSA analyses, each described in more detail below. Briefly, the first-level RSA analysis was a data-driven analysis in which we calculated, for each ROI, the dissimilarity matrix of the neural activity within the ROI across the eight stimulus distortion levels for the Category A stimuli. In the second-level RSA analysis, we compared these observed representational dissimilarity matrixes (RDMs) with theory-generated RDMs reflecting different possible representations of category-relevant information.

We used the first-level RSA estimation procedure in BrainVoyager to calculate a dissimilarity matrix for each ROI across the eight stimulus types. We calculated the dissimilarity matrixes separately for Day 1 and Day 3 and for each Category A, B, and C in order to be able to see if representations were present early in training or if they emerged later in the learning process. There were eight different distortion levels of the stimuli, which were arranged in order in the dissimilarity matrixes from Level 1 (random) to Level 8 (prototype). The category boundary was between Levels 4 and 5: all stimuli in Levels 1–4 were category nonmembers, and those in Levels 5–8 were members.
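A minimal sketch of the first-level computation, using correlation distance; this is one common dissimilarity measure, and the measure used in BrainVoyager's estimation procedure may differ:

```python
import numpy as np

def observed_rdm(patterns):
    """First-level RDM for one ROI.

    patterns: (8, n_voxels) array of mean activity patterns, rows ordered
    from Level 1 (random) to Level 8 (prototype).
    Returns an 8 x 8 matrix of correlation distances (1 - Pearson r).
    """
    return 1.0 - np.corrcoef(patterns)
```

The resulting matrix is symmetric with a zero diagonal; one such matrix would be computed per ROI, per day, and per category.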

The goal of the second-level RSA analysis was to compare the observed patterns in the RDMs with hypothesized potential category-relevant information. We first identified predicted RDMs that would be expected if the area coded for specific aspects of category representation, illustrated in Figure 7. The category member versus nonmember RDM was intended to identify regions that coded for the category membership (A or not-A) of the stimuli. In this RDM, all stimuli in the category have low dissimilarity with each other, all stimuli outside the category have low dissimilarity with each other, and stimuli in the category have high dissimilarity with stimuli outside the category. The perceptual similarity RDM was intended to represent perceptual similarity across the eight levels of distortion. Adjacent stimulus pairs, such as Entropy 6 with Entropy 9, have low dissimilarity, whereas pairs with greater entropy differences, such as Entropy 6 with Entropy 24, have high dissimilarity. The prototype and random RDMs were intended to represent unique coding of the prototype and random stimuli, respectively. If a single stimulus type is represented differently from the other stimuli, then we would expect high dissimilarity when it is compared with all other types of stimuli, and low dissimilarity when all other types of stimuli are compared with each other.
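The four theory-generated RDMs can be written down directly. In this sketch, the 0–1 coding and the ordering (Level 1 = random through Level 8 = prototype) are illustrative assumptions; the exact numeric scaling used in the published matrixes is not specified here.

```python
import numpy as np

levels = np.arange(8)   # index 0..7 -> Level 1 (random) ... Level 8 (prototype)
member = levels >= 4    # Levels 5-8 are category members

# category member vs nonmember: high dissimilarity only across the boundary
rdm_category = (member[:, None] != member[None, :]).astype(float)

# perceptual similarity: dissimilarity grows with distance between levels
rdm_perceptual = np.abs(levels[:, None] - levels[None, :]) / 7.0

# prototype: the prototype (Level 8) is dissimilar from all other stimuli
rdm_prototype = np.zeros((8, 8))
rdm_prototype[7, :] = rdm_prototype[:, 7] = 1.0
rdm_prototype[7, 7] = 0.0

# random: the random stimuli (Level 1) are dissimilar from all other stimuli
rdm_random = np.zeros((8, 8))
rdm_random[0, :] = rdm_random[:, 0] = 1.0
rdm_random[0, 0] = 0.0
```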

Region of interest selection

We selected ROIs for use in the RSA analysis based on regions identified in past research as representing category-related information. A visualization of each ROI and the coordinates of its center voxel are included in Figures 5 and 6. ROIs were defined by specifying the center coordinate and forming a sphere of radius 4 for cortical regions (a total of 257 voxels per ROI) or a sphere of radius 3 for subcortical regions (a total of 113 voxels).
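As a check on the stated ROI sizes, the voxel count of a discrete sphere can be computed directly; a radius of 4 voxels with an inclusive boundary yields exactly 257 voxels. (The radius-3 count depends on the exact inclusion rule the software applies, so only the radius-4 case is verified in this sketch.)

```python
import itertools

def sphere_voxel_count(radius):
    """Count integer voxel offsets (x, y, z) with x^2 + y^2 + z^2 <= radius^2."""
    r = int(radius)
    return sum(1
               for x, y, z in itertools.product(range(-r, r + 1), repeat=3)
               if x * x + y * y + z * z <= r * r)

print(sphere_voxel_count(4))  # 257, matching the stated cortical ROI size
```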

We began with areas in the intraparietal sulcus (IPS) and ventromedial prefrontal cortex (VMPFC) in which multivariate and/or model-based univariate analyses in previous studies have identified category representations. Previous studies using model-based fMRI have found that the IPS is sensitive to category membership across a variety of measures including distance from the prototype and similarity to previously studied category members (Mack et al., 2013; Seger et al., 2015; Bowman et al., 2020; Blank and Bayer, 2022; Frank et al., 2023). Research with macaques has shown category representations in LIP, the monkey homolog of human IPS (Swaminathan et al., 2013; Freedman and Assad, 2016). Because IPS is a large structure and previous research has sometimes found different sensitivity of anterior and posterior IPS to categorization (Braunlich et al., 2017), we chose two regions within the IPS: one more anterolateral region and one more posteromedial. We chose to examine VMPFC based on previous research showing that activity in this area correlated with measures of prototype strength in tasks using discrete feature stimuli (Bowman and Zeithamova, 2018; Bowman et al., 2020). In light of research that suggests functional heterogeneity within the VMPFC (Clithero and Rangel, 2014; Jackson et al., 2020), we defined two ROIs extending bilaterally across the midline, one relatively inferior, and the other relatively superior and anterior.

We chose the precentral gyrus in the hand region based on previous research finding activity in this region for skilled categorization (Braunlich and Seger, 2016). We determined the location of this ROI by using the term-based meta-analysis available in Neurosynth (www.neurosynth.org), searching for the term “hand” and choosing the central voxel of the resulting region. Finally, we defined ROIs in the dorsal striatum including the head of the caudate (part of the goal-directed system) and tail of the caudate (part of the habit system). These were defined anatomically with reference to the division between the head of the caudate and body/tail of the caudate we have used in previous research (Seger and Cincotta, 2005).

Results

Behavioral results

To examine learning curves across the two scanning sessions, we first examined accuracy and reaction time for categorization of stimuli from Category A, illustrated in Figure 2A.

Figure 2.

A, Behavioral results for the trained Category A during the two fMRI scanning sessions on Day 1 and Day 3. Left, Accuracy across blocks. Right, Reaction time across blocks. Error bars indicate standard error. B, Accuracy and reaction time as a function of distance from the prototype for Category A on Day 1 and Day 3. Note that the decision bound between A and not-A, indicated by the vertical black line, falls between distances 12 and 18, so that distances 3, 6, 9, and 12 were considered correct if categorized as “A” and distances 18, 21, and 24 and random were considered correct if categorized as not-A. C, Mean accuracy across blocks during out-of-scanner training on Day 2. Each of the 30 blocks consisted of 24 trials, for a total of 720 trials. Error bars indicate standard error. D, Postscan categorization test results. Only Category A was tested, and no feedback was given. Left panel, Accuracy in the single and dual task conditions across Days 1 and 3. Right panel, Reaction time in the single and dual task conditions across Days 1 and 3. *p < 0.05. Error bars indicate standard error.

In order to examine learning across training, a 2 (Day 1 vs Day 3) × 8 (block) repeated-measures ANOVA was performed for the dependent variable of accuracy. There was a main effect of day, F(1,24) = 90.10, p < 0.001, η2 = 0.346: average accuracy increased from 67.6% on the first day to 84.7% on the third day. The main effect of block was significant, F(7,168) = 6.08, p < 0.001, η2 = 0.038, as was the interaction between training day and block, F(7,168) = 3.53, p < 0.05, η2 = 0.031. Post hoc simple effects tests showed that accuracy differed significantly across blocks on the first day but not on the third day. A 2 (Day 1 vs Day 3) × 8 (block) repeated-measures ANOVA for the dependent measure of reaction time was then performed. The main effect of training day was significant, F(1,24) = 41.08, p < 0.001, η2 = 0.187, with faster reaction times overall on Day 3 (M = 948 ms) than Day 1 (M = 1,158 ms). The main effect of block was significant, F(7,168) = 8.29, p < 0.001, η2 = 0.046, as was the interaction between training day and block, F(7,168) = 13.99, p < 0.001, η2 = 0.077. Post hoc simple effects tests showed that reaction time differed significantly across blocks on the first day but not on the third day.

We then examined how participants developed sensitivity to the category structure by examining accuracy and reaction time as a function of distance from the prototype (Fig. 2B). Similarity-based category learning studies typically report that accuracy and reaction time are best for the prototype stimulus and show lower performance as distance from the prototype increases (Posner and Keele, 1968; Bowman and Zeithamova, 2018). In addition, category learning studies find that accuracy and reaction time are best for stimuli far from the decision bound and worst for stimuli near the decision bound, consistent with the greater difficulty of making precise categorical determinations for near-bound stimuli (Seger et al., 2015; Braunlich et al., 2017). In our study, distance from the prototype and distance from the decision bound are confounded for within-category stimuli, but nevertheless, it is clear that accuracy and RT were best for the prototype and worst for distortion level 12 (near the decision bound). For not-A stimuli, accuracy and reaction time were worse closer to the decision bound and best for the random stimuli. A comparison of the Day 1 and Day 3 curves reveals that sensitivity to distance had already emerged on Day 1 but became more pronounced on Day 3. These observations were supported by statistical analyses. A 2 (day) × 8 (distance from the prototype) repeated-measures ANOVA for the dependent variable of accuracy showed that the main effect of day was significant, F(1,24) = 80.359, p < 0.001, η2 = 0.770; the main effect of distance from the prototype was significant, F(7,168) = 47.699, p < 0.001, η2 = 0.665; and there was a significant interaction between day and distance from the prototype, F(7,168) = 5.246, p < 0.001, η2 = 0.179. A 2 (day) × 8 (distance from the prototype) repeated-measures ANOVA for the dependent variable of response time showed that the main effect of day was significant, F(1,24) = 57.981, p < 0.001, η2 = 0.707; the main effect of distance from the prototype was significant, F(7,168) = 29.548, p < 0.001, η2 = 0.552; and there was a significant interaction between day and distance from the prototype, F(7,168) = 7.583, p < 0.001, η2 = 0.240.

We examined performance on Day 2 (the out-of-scanner behavioral training session) and plotted the learning curve in Figure 2C. As can be seen in the Figure, on Day 2, participants initially performed at 73% accuracy on Block 1, similar to the accuracy rate on the final block in Day 1, and increased to 86% on the final block, similar to the accuracy rate for the first block on Day 3.

Finally, we compared the average accuracy of Category A with the comparison categories: B on Day 1 and C on Day 3. Overall accuracy across all blocks and runs did not differ significantly on Day 1, t(24) = 0.833, p = 0.413, Cohen's d = 0.167 (A, M = 0.68 ± 0.047; B, M = 0.71 ± 0.095), but was significantly higher for A than C on Day 3, t(24) = 3.956, p < 0.001, Cohen's d = 0.791 (A, M = 0.85 ± 0.036; C, M = 0.81 ± 0.041).
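These effect sizes follow directly from the t statistics under the standard paired-samples conversion d = t/√n with n = 25 participants; a quick check:

```python
import math

def cohens_d_paired(t, n):
    """Cohen's d for a paired-samples t test: d = t / sqrt(n)."""
    return t / math.sqrt(n)

print(round(cohens_d_paired(0.833, 25), 3))  # 0.167 (Day 1, A vs B)
print(round(cohens_d_paired(3.956, 25), 3))  # 0.791 (Day 3, A vs C)
```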

Postscan tests

The purpose of the postscan category tests was to use a dual task methodology in order to determine whether participants had achieved automaticity in their performance. We used the numerical Stroop task, which has been successfully used for this purpose in previous research (Helie et al., 2010). We first examined the accuracy of the dual task to ensure that participants paid sufficient attention to the dual task during categorization. Across both days, accuracy on this task was high (Day 1 M = 93.8%, Day 3 M = 95.7%) with no significant difference in accuracy between Days 1 and 3 on the dual task, t(24) = 0.204, p = 0.84. Reaction time decreased from Day 1 (M = 819 ms) to Day 3 (M = 716 ms) on the dual task, indicating that participants became more efficient at performing the dual task while maintaining accuracy.

We performed a 2 (task type: single vs dual) × 2 (day: Day 1 vs Day 3) repeated-measures ANOVA for the dependent measure of accuracy on the category learning task (Fig. 2D). The main effect of the presence of the dual task was not significant, F(1,24) = 0.506, p = 0.483, and there was no significant interaction between day and task, F(1,24) = 0.044, p = 0.835. Only the main effect of day was significant, F(1,24) = 18.68, p < 0.001, η2 = 0.227, with higher accuracy on Day 3 than Day 1. In the reaction time ANOVA, the interaction between day and task type was not significant, F(1,24) = 0.305, p = 0.586. The main effect of day was significant, F(1,24) = 11.83, p < 0.05, η2 = 0.114, as was the main effect of task type, F(1,24) = 20.496, p < 0.001, η2 = 0.047. Across both accuracy and reaction time measures, it is clear that the dual task had little effect. Participants were able to continue to perform the category learning task accurately, even after only the first session of training (Day 1). Overall, participants did respond more slowly in the category learning task under dual task conditions, but this difference did not interact with day. In conclusion, the results are consistent with our prediction that performance on this task would not show dual task interference, consistent with other perceptual similarity-based category learning (Waldron and Ashby, 2001; Zeithamova and Maddox, 2007).

fMRI univariate analyses comparing conditions across days

Our goals and rationale for univariate analyses were twofold. Most importantly, we wanted to compare activity between training and novel categories (A vs C) and across Days 1 and 3 to identify how activity overall changed as a function of extensive training. In addition, we wanted to perform a “manipulation check” to verify that overall our task recruited neural regions similar to those reported in previous research in this area. We started with the simple “manipulation check” analysis by comparing categorization trials with an implicit baseline to verify that we recruited the predicted brain areas in the corticostriatal system and interacting regions. As shown in Figure 3 (top panel) and Tables 1 and 2, on both Days 1 and 3, we found activity in the dorsal striatum including the head of the caudate, tail of the caudate, and putamen, along with the intraparietal sulcus. In addition, there was broad activity in the visual cortex and activity in the motor cortex consistent with the motor demands of the task. During the first day, areas of the lateral frontal cortex and salience network (inferior frontal, anterior insula, anterior cingulate) were also recruited, consistent with involvement with executive functions and goal-directed learning during early category learning.

Figure 3.

Whole-brain univariate analyses. Top panel, first row, Learning-related activity during Day 1 (Categories A and B compared with implicit baseline). Top panel, second row, Activity related to skilled categorization on Day 3 (Category A only compared with implicit baseline). Bottom panel, top row, Areas in which activity on Day 3 differed between the highly trained Category A and the novel Category C. Bottom panel, middle and bottom rows, Comparison of Day 1 and Day 3 for the highly trained Category A; the middle row includes all runs for both days, whereas the bottom row compares the first two runs on Day 1 and the last two runs on Day 3. Contrasts with implicit baseline (top panel) used a Bonferroni’s correction for multiple comparisons; contrasts between conditions (bottom panel) used a cluster threshold. See corresponding Tables 1–4 for details about the multiple-comparisons corrections used. Z, height of the horizontal slices in Talairach coordinates; R, right hemisphere; L, left hemisphere.

Table 1.

Regions recruited on Day 1

Table 2.

Regions recruited on Day 3, Category A only

We then performed the more theoretically important analyses that examined how univariate activity changed as a result of extensive training. We compared activity for the highly trained Category A with the novel Category C (Fig. 3, third row from the top; Table 3) and found greater activity for the well-learned Category A in small regions of the motor, frontal, and parietal cortex. We also compared skilled categorization for A on Day 3 with Day 1, both overall across both scans (Fig. 3, fourth row from the top; Table 4, top panel) and in a contrast focusing on just the first two runs of Day 1 (during which most learning occurred as measured by accuracy), with the last two runs of Day 3 (Fig. 3, bottom row; Table 4, bottom panel). These analyses revealed decreases in activity in the head of the caudate, lateral frontal cortex, posterior IPS/precuneus, and thalamus as categorization became more skilled. These areas are associated with goal-directed instrumental learning, and the decrease in activity is consistent with a shift during training from goal-directed to habitual learning systems. A few regions became more active as categorization became more skilled, including the somatomotor cortex, frontopolar cortex, the anterior superior temporal gyrus, and bilateral anterior parahippocampal gyrus.

Table 3.

Day 3 differences in activity for Category A versus Category C

Table 4.

Differences in activity for Category A across days

MVPA analysis: effects of training day on category membership representation

Our first MVPA analysis was intended to identify areas from which the category decisions (A or not-A) could be decoded. We used a support vector machine combined with a whole-brain searchlight to locate voxel neighborhoods representing relevant information. We decoded the information related to category membership for trials in which the stimulus was a member of Category A (Fig. 1; this included distances 3, 6, 9, and 12) versus not-A stimuli (distances 18, 21, and 24 and random) on the first day and third day, respectively, and compared the results of the two searchlights with a paired sample t test.

As shown in Figure 4, top panel, and Table 5, during the first scan the category decision could be decoded from a large cortical region extending across the somatomotor and premotor cortexes. Motor recruitment may reflect the different motor demands of the category decision, in which “yes” and “no” were indicated by different hands. In addition, the category decision could be decoded from the early visual cortex. It should be noted that stimuli in the category (A) were more similar to each other than those outside (not-A), which may have driven the visual cortex differences. On Day 3 after extensive training, A versus not-A could be decoded from a similar network. To identify the difference between Day 1 and Day 3, we compared the two and found that the differences were largely in the somatomotor cortex, including the hand region of the precentral gyrus. This indicates that as skill developed neural representations for the categorization decision became more distinct within the somatomotor cortex.

Figure 4.

Multivoxel pattern analysis decoding results. Top, Analyses decoding trials with category members (A) from trials with category nonmembers (not-A). Bottom, Analyses decoding trials with the trained category (A) from trials on the untrained categories on Day 1 (B) and Day 3 (C). The bottom right panel shows the direct comparison between A versus C decoding and A versus B decoding. Color bars indicate t-values.

Table 5.

MVPA decoding of A versus not-A categorical decision

MVPA analysis 2: posttraining representation of the trained category versus the novel category

In the second MVPA analysis, we focused on comparing the trained Category A with the comparison categories, B and C (Fig. 4, bottom panel; Table 6). We limited this analysis to only within-category stimuli (i.e., stimuli at distortion levels 3, 6, 9, 12). On the first day of training, A could be decoded from B only in regions of the occipital cortex. This may reflect bottom-up representational differences in the basic features found in A and B, e.g., differences in common angles, line lengths, and shapes. After training, on Day 3, Category A could be decoded from C across a larger network extending into the parietal cortex including the precuneus and IPS. A direct comparison of the two MVPAs found greater decoding in the IPS on Day 3 when decoding A from C than Day 1 when decoding A from B.

Table 6.

MVPA decoding of category, Day 1 and Day 3

Representational similarity analyses

The goal of the RSA analysis was to determine whether predefined ROIs chosen on the basis of recruitment in previous research represent different aspects of category structure. Details about how ROIs were chosen and defined are given in the Materials and Methods section, and each ROI is illustrated along the left side of Figures 5 and 6. We performed the ROI analysis in two steps. In the first step, we calculated the representational dissimilarity matrix (RDM) within each ROI in a data-driven manner, separately for each day and each category. These RDMs are illustrated along the right side of Figures 5 and 6. Visualizing the RDMs allows us to get an overview of the pattern of similarity and dissimilarity within each ROI.

Figure 5.

Representational similarity analysis results for cortical ROIs. Each ROI is visualized on the left side of the figure. Observed dissimilarity matrixes from the first-level RSA analysis are shown on the right side of the figure for each ROI. The bar graph in the middle indicates the results of the second-level RSA which correlated the observed dissimilarity matrix with each of the four theory-generated RDMs illustrated in Figure 7. Talairach coordinates are indicated for the central voxel of each ROI; ROIs were defined as spheres encompassing 257 voxels for cortical ROIs.

Figure 6.

Representational similarity analysis results for basal ganglia ROIs. Each ROI is visualized on the left side of the figure. Observed dissimilarity matrixes from the first-level RSA analysis are shown on the right side of the figure for each ROI. The bar graph in the middle indicates the results of the second-level RSA which correlated the observed dissimilarity matrix with each of the four theory-generated RDMs illustrated in Figure 7. Talairach coordinates are indicated for the central voxel of each ROI; ROIs were defined as spheres encompassing 113 voxels for caudate ROIs.

In order to better assess what information was represented in each ROI, we performed a second-level RSA analysis in which we compared how well the observed RDMs matched theory-generated RDMs representing four different possible types of representation. These theory-generated RDMs are shown in Figure 7 and included a categorical representation based on whether the stimuli were categorized as members or nonmembers (i.e., A or not-A), a perceptual similarity representation in which similarity was scaled according to the relative similarity between pairs of stimuli, a prototype representation positing a unique representation of the prototype, and a random model positing that random stimuli are represented differently than all other stimuli. We performed Pearson’s correlations between the observed and theory-generated RDMs for each category and day, which are shown in the bar graphs in the middle columns of Figures 5 and 6. It should be noted that these theory-generated RDMs were intended to reflect different possible category-related patterns of activity. However, the resulting RDMs are not orthogonal, and as a result, some observed RDMs can correlate strongly with more than one theory-generated RDM. For example, in both the category model and the perceptual similarity model, dissimilarity is high in the upper right and lower left quadrants, and therefore an RDM with a similar pattern may correlate strongly with each. Furthermore, an observed RDM in which more than one category-related pattern of activity is present may not correlate strongly with either of the individual theory-generated RDMs if those two RDMs have substantially different dissimilarity values across regions of the matrix. For example, the prototype and category theory-generated RDMs have very different patterns in the lower right quadrant, and therefore an observed RDM in which the prototype stimuli are treated as dissimilar from all other stimuli, but the other within-category stimuli are treated as similar, will only partly correlate with each of the theory-based RDMs.
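The non-orthogonality caveat can be made concrete. Under an illustrative 0–1 coding of the matrixes (an assumption; the published matrixes may be scaled differently), the category and perceptual similarity model RDMs correlate at about 0.67 even though they encode distinct hypotheses:

```python
import numpy as np

levels = np.arange(8)   # Level 1 (random) ... Level 8 (prototype)
member = levels >= 4    # Levels 5-8 are category members

# illustrative 0-1 codings of two of the theory-generated RDMs
rdm_category = (member[:, None] != member[None, :]).astype(float)
rdm_perceptual = np.abs(levels[:, None] - levels[None, :]) / 7.0

# second-level RSA compares only the unique off-diagonal entries
iu = np.triu_indices(8, k=1)
r = np.corrcoef(rdm_category[iu], rdm_perceptual[iu])[0, 1]
print(round(r, 3))  # 0.667: the two model RDMs are far from orthogonal
```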

Figure 7.

Representational dissimilarity matrixes (RDMs) illustrating theory-generated patterns characterizing different types of category-relevant information. These matrixes were used in the second-level analyses which calculated the correlation between each and the observed RDM for each ROI. In these matrixes, the stimulus types are indicated along the top and left side, and each square is color coded (see color bar in the center) indicating how dissimilar the stimulus pair would be if activity in the ROI reflected the proposed representation. For the category member versus nonmember matrix, category members have low dissimilarity to other category members (e.g., 12, 9, 6, and prototype), and category nonmembers have low dissimilarity to other nonmembers (random, 24, 21, 18). For pairs in which one member is a category member and one a nonmember, this model predicts high dissimilarity. For the perceptual similarity representation matrix, similarity for each pair is determined by their relative distance in perceptual space, such that adjacent pairs (e.g., 24 and 21 or 9 and 12) have low dissimilarity and pairs farther apart (e.g., 6 and 24) have high dissimilarity. In the random versus distortions matrix, randomly generated stimuli are represented differently from all the other stimuli (which were generated as distortions of the prototype); in this matrix, dissimilarity is high between random stimuli and all others but low for other pairwise comparisons. Finally, the prototype versus other stimuli representation reflects a unique representation of the prototype such that dissimilarity is high between the prototype and all other stimuli. R, random stimuli; P, prototype stimuli (i.e., distortion level 3).

The first pattern apparent when examining Figures 5 and 6 is that the observed RDMs show very little difference in similarity between stimulus pairs on Day 1, both for Category A and the comparison Category B, in all regions except the motor cortex. However, many regions show emergence of similarity differences for Category A on Day 3.

Figure 5 illustrates the results from the cortical ROIs (note that no apparent differences were seen in the left and right posterior IPS, so this region was not included in the figure in order to save space). Within the left anterior IPS, we found representations for Category A that emerged on Day 3 and were largely absent on Day 1 and largely absent in the comparison Categories B and C (though Category C showed some indication of differential representation of both the random and prototype stimuli). The overall pattern we observed in the RDMs was largely categorical. This observation is supported by the second-level analysis, which found high correlations in the left ROI for Category A on Day 3 with both the category membership theory-generated RDM and the perceptual similarity theory-generated RDM. In the right ROI, the pattern was different: Activity correlated with both the random and perceptual similarity theory-generated RDMs.

We examined two bilateral VMPFC ROIs, one in a more inferior location and the other in a more superior location. In the superior ROI, the pattern shown in the RDMs indicated emergence of differences in stimulus processing largely limited to Category A on Day 3. The second-level analysis found a correlation with the prototype theory-generated RDM on both Days 1 and 3 for Category A. In the inferior ROI, we found a large degree of heterogeneity within the RDM, as evidenced by the large number of red squares indicating dissimilar patterns of activity for many stimulus pairs. However, these patterns correlated only modestly with the theory-generated RDMs. A visual inspection of the RDM indicates a potential combination of a unique representation for random stimuli, a unique representation for prototype stimuli, and a categorical pattern for the remaining stimuli. This ROI also showed variability in dissimilarity across stimulus pairs for Category C, particularly for the prototype, which may reflect early acquisition of information about Category C, and/or processing related to differentiating between the two categories.

Finally, we examined right and left precentral gyrus ROIs. Both these ROIs showed a strongly categorical pattern that was present on Day 1 for both Category A and B and became more distinct on Day 3 for Category A. The second-level analysis found a high correlation between the RDM and both the category membership and perceptual similarity theory-generated RDMs. These patterns are consistent with the motor function of the hand region of the precentral gyrus: category responses for A stimuli and not-A stimuli were indicated by responses made by different hands.

We also examined two regions of the basal ganglia, the head of the caudate, and the tail of the caudate, shown in Figure 6. In the head of the caudate, both the left and right ROIs showed high correlations with the perceptual similarity, prototype, and random theory-generated RDMs on Day 3 that were lower or not present on Day 1. In the tail of the caudate, the patterns were similar, especially for the right ROI.

Discussion

Our study revealed category representation changes in neural systems underlying perceptual categorization between an initial learning phase and a skilled performance phase. MVPA analyses indicated that decodable differences emerged between the trained category (A) and a novel category (C) in the intraparietal sulcus. Convergent evidence from RSA found that representations emerged in both the anterior IPS and VMPFC. Representations in the left anterior IPS had a largely categorical structure reflecting the decision bound between the categories, with low dissimilarity within the groups of in-category stimuli and out-of-category stimuli and high dissimilarity between these groups of stimuli. In contrast, representations in the VMPFC were more reflective of other category-related information, including prototype representations.

Variable category representations across neural regions

Humans are capable of utilizing multiple categorization strategies and learning multiple types of category structure (Hélie et al., 2016; Ashby and Valentin, 2018), and fMRI evidence indicates that multiple category structures can be represented simultaneously in the brain (Davis et al., 2012; Bowman et al., 2020) during training. Our task design lent itself to examining several different category-related representations. Most importantly, our use of an A/not-A task (one in which participants need to implement a decision bound that divides the continuously varying stimuli into two or more discrete groups; Aizenstein et al., 2000; Zeithamova et al., 2008; Ashby and Valentin, 2018) allowed us to identify categorical representations in the brain that reflected the trained decision bound and associated discrete motor responses for A and not-A stimuli. In addition, we were able to identify neural regions that represented perceptual similarity, represented the prototype as a unique member of the category, and represented random stimuli (in contrast to stimuli formed as distortions of the prototype).

We found sensitivity to category representation in the anterior IPS, as evidenced by both the RSA analysis and the MVPA analysis in which the skilled category could be decoded from the novel category on Day 3 in this area. A number of previous studies have found representations of categorical information in LIP neurons in macaques (Swaminathan et al., 2013; Freedman and Assad, 2016). Neuroimaging studies also found representations in IPS for hierarchical rule structure (Frank et al., 2023), within-category similarity (Seger et al., 2015), and between-category decision bounds (Braunlich et al., 2017). One approach has been to use model-based fMRI to compare prototype models (in which stimuli are categorized on the basis of distance to the prototype) and exemplar models (in which stimuli are categorized on the basis of distance to previously studied individual items). Results have been inconsistent: Blank and Bayer (2022) studied early learning using a task similar to the one used in the present study and found that IPS activity was better fit by prototype than exemplar models, but studies using family resemblance tasks in which participants learn to distinguish between two categories formed by manipulating discrete features found that activity in parietal regions including IPS was better fit by an exemplar model (Mack et al., 2013; Bowman and Zeithamova, 2018). A resolution to this difference will require further research; however, both models support the view that IPS category representations can support transfer to new stimuli through perceptual similarity metrics.

Previous research examining category representations in VMPFC has produced conflicting results: studies using family resemblance tasks with discrete features found prototype representations (Bowman and Zeithamova, 2018; Bowman et al., 2020), but another study, using a task similar to ours, found no evidence for prototype representations (Blank and Bayer, 2022). Our RSA analysis showed sensitivity to the category prototype in a superior VMPFC ROI. The pattern within an inferior VMPFC ROI was harder to characterize but appeared to represent aspects of the prototype, the random stimuli, and the category structure. The inferior VMPFC ROI overlaps areas of activation reported in other studies isolating a variety of task features that collectively may rely on or contribute to schema formation. These include category prototype representations (Bowman et al., 2020), abstraction of complex rules (Cortese et al., 2021), accumulation of information for decision-making (Theves et al., 2021), representation of values associated with feature combinations (Pelletier et al., 2021), and compression of task representations after practice (Mack et al., 2020). Functional connectivity studies have found that the inferior VMPFC is more strongly connected with regions of the lateral temporal cortex that code for long-term conceptual knowledge, consistent with the use of schemas for acquisition of novel concepts, whereas the more superior and anterior region is more strongly connected with the default mode network (Jackson et al., 2020).
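For readers unfamiliar with RSA, its core computation can be sketched on simulated data (six hypothetical conditions and 50 "voxels"; real analyses operate on estimated response patterns and typically use rank rather than Pearson correlation): build a representational dissimilarity matrix (RDM) from condition-wise patterns and correlate its off-diagonal entries with a model RDM encoding the predicted category structure.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated condition-wise voxel patterns: six conditions x 50 voxels.
# Conditions 0-2 share a "category A" signal; 3-5 share a "category B" signal.
sig_a, sig_b = rng.normal(size=50), rng.normal(size=50)
patterns = np.array(
    [sig_a + rng.normal(scale=0.5, size=50) for _ in range(3)]
    + [sig_b + rng.normal(scale=0.5, size=50) for _ in range(3)]
)

def rdm(p):
    """Representational dissimilarity matrix: 1 - Pearson correlation."""
    return 1.0 - np.corrcoef(p)

# Model RDM: within-category pairs predicted similar (0),
# between-category pairs predicted dissimilar (1).
labels = np.array([0, 0, 0, 1, 1, 1])
model_rdm = (labels[:, None] != labels[None, :]).astype(float)

# RSA statistic: correlation of data and model RDMs over the upper triangle.
iu = np.triu_indices(len(labels), k=1)
rsa_r = np.corrcoef(rdm(patterns)[iu], model_rdm[iu])[0, 1]
```

Competing model RDMs (e.g., one coding category membership, another coding distance to the prototype) can be correlated with the same data RDM to ask which structure a region's patterns track.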

We found representation of the category decision bound and response in the motor cortex. Category decisions could be decoded from the motor cortex using MVPA to a greater degree on Day 3 than on Day 1, and the RSA analysis likewise showed increasingly categorical representations on Day 3. This activity likely reflected motor effector control in part, as it was lateralized in a manner consistent with category membership decisions being indicated via button presses with different hands. However, other research has found that the motor cortex goes beyond simple response execution in categorization. For example, motor cortex activity is related to the accumulation of category evidence over time (Wheeler et al., 2014; Braunlich and Seger, 2016). Recent research has found that nearby regions within the motor cortex represent effector-specific and cognitive factors, which may allow for their integration during motor control (Gordon et al., 2023).
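The MVPA decoding referred to here can be illustrated with a minimal leave-one-out classifier on simulated trial patterns (the trial counts, voxel counts, signal strength, and nearest-centroid decision rule are illustrative stand-ins, not the classifier used in the study):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated single-trial voxel patterns: 40 trials x 80 voxels, with a weak
# category-specific signal (+/- 0.5 * effect) added to Gaussian noise.
n_trials, n_voxels = 40, 80
labels = np.repeat([0, 1], n_trials // 2)
effect = rng.normal(size=n_voxels)
X = rng.normal(size=(n_trials, n_voxels)) + 0.5 * effect * (2 * labels - 1)[:, None]

def loo_decode(X, y):
    """Leave-one-out MVPA: classify each held-out trial by which class
    centroid (mean pattern over the remaining trials) lies closer."""
    hits = 0
    for i in range(len(y)):
        keep = np.arange(len(y)) != i
        c0 = X[keep & (y == 0)].mean(axis=0)
        c1 = X[keep & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
        hits += pred == y[i]
    return hits / len(y)

decoding_accuracy = loo_decode(X, labels)  # above the 0.5 chance level here
```

Comparing such cross-validated accuracies between sessions (Day 1 vs. Day 3) is the logic behind the claim that category information in a region increased with training.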

Shifts in corticostriatal recruitment across training

Early learning recruited goal-directed corticostriatal systems, including the frontal cortex and the head of the caudate, which decreased in activity with training and which on Day 3 were more active for the novel category than for the trained category. In contrast, the posterior striatum, including the tail of the caudate and the putamen, was recruited across both days, along with the visual cortex, motor cortex, and IPS. This shift from executive to somatomotor corticostriatal networks is present not only in category learning tasks but also across other instrumental and motor learning tasks (Floyer-Lea and Matthews, 2004; Lehéricy et al., 2005; Hikosaka et al., 2019; Choi et al., 2020). In category learning, this neural system shift is associated with a behavioral shift away from explicit hypothesis-testing strategies toward procedural strategies (Ashby et al., 1998; Ashby and Maddox, 2011).

Skill, habit, and automaticity

An important question is how to characterize the skill level of our participants. We chose to compare across 2 d, with 720 trials of training on the second day. This was sufficient for the participants to reach asymptotic accuracy and reaction time, which are standard measures of skill (Seger and Spiering, 2011). The related concepts of "habit" (from instrumental learning) and "automaticity" (from cognitive psychology) propose different criteria for deciding when learning has qualitatively changed from initial learning to a more skilled level. Behavioral criteria for habits have proved difficult to establish (Balleine and Dezfouli, 2019). The most typical criterion is insensitivity to outcome devaluation (Yin and Knowlton, 2006), which is not always effective in identifying habits in human participants (Pool et al., 2022) and could not be easily incorporated into our task. For automaticity, multiple criteria have been proposed (Ashby et al., 2010), the most common being that an automatic process can be performed independent of executive functions and working memory. We utilized a dual-task manipulation and found that accuracy was not impaired by performing the dual task, even at the end of the first day of practice, after <200 trials. This is consistent with findings that implicit procedural learning tasks are less affected by dual tasks than explicit learning in the early phases of learning (Zeithamova and Maddox, 2007).

Other studies have examined greater amounts of training. Waldschmidt and Ashby (2011) trained participants for 20 d, with >10,000 trials of practice. They found changes in neural recruitment even in the late phases: the striatum was recruited through Day 10 but was no longer recruited on Day 20, consistent with theories that representations shift to the cortex after extensive training (Ashby et al., 2010; Helie et al., 2015).

In this study, the sessions were completed on 3 consecutive days, allowing for a full night of sleep and associated memory consolidation between each session. Consolidation-related effects have been identified in both the VMPFC (Nieuwenhuis and Takashima, 2011; Gilboa and Moscovitch, 2021) and corticostriatal system (Rusu and Pennartz, 2020).

Conclusion

How do the brain systems we use for categorization change during training? We compared brain activity during initial learning with that after extensive practice and found that patterns in the intraparietal sulcus and VMPFC changed to reflect different aspects of the category structure and categorization performance. These results extend our knowledge of how neural systems change during training and allow us to identify the types of categorical information that each brain region is sensitive to.

Footnotes

  • This work was supported by the MOE Project of the Key Research Institute of Humanities and Social Sciences in Universities (22JJD190006), the Guangdong Basic and Applied Basic Research Foundation (2023A1515012355), and Striving for the First-Class, Improving Weak Links and Highlighting Features (SIH) Key Discipline for Psychology in South China Normal University.

  • The authors declare no competing financial interests.

  • Correspondence should be addressed to Carol A. Seger at carol.seger@colostate.edu.

SfN exclusive license.

References

  1. Aizenstein HJ, MacDonald AW, Stenger VA, Nebes RD, Larson JK, Ursu S, Carter CS (2000) Complementary category learning systems identified using event-related functional MRI. J Cogn Neurosci 12:977–987. https://doi.org/10.1162/08989290051137512
  2. Ashby FG, Alfonso-Reese LA, Turken AU, Waldron EM (1998) A neuropsychological theory of multiple systems in category learning. Psychol Rev 105:442–481. https://doi.org/10.1037/0033-295X.105.3.442
  3. Ashby FG, Maddox WT (2011) Human category learning 2.0. Ann N Y Acad Sci 1224:147–161. https://doi.org/10.1111/j.1749-6632.2010.05874.x
  4. Ashby FG, Rosedahl L (2017) A neural interpretation of exemplar theory. Psychol Rev 124:472–482. https://doi.org/10.1037/rev0000064
  5. Ashby FG, Turner BO, Horvitz JC (2010) Cortical and basal ganglia contributions to habit learning and automaticity. Trends Cogn Sci 14:208–215. https://doi.org/10.1016/j.tics.2010.02.001
  6. Ashby FG, Valentin VV (2018) The categorization experiment: experimental design and data analysis. In: Stevens Handbook of Experimental Psychology and Cognitive Neuroscience, Fourth Edition, Volume Five: Methodology, pp 307–347. New York: Wiley.
  7. Balleine BW, Dezfouli A (2019) Hierarchical action control: adaptive collaboration between actions and habits. Front Psychol 10:2735. https://doi.org/10.3389/fpsyg.2019.02735
  8. Blank H, Bayer J (2022) Functional imaging analyses reveal prototype and exemplar representations in a perceptual single-category task. Commun Biol 5:896. https://doi.org/10.1038/s42003-022-03858-z
  9. Bowman CR, Iwashita T, Zeithamova D (2020) Tracking prototype and exemplar representations in the brain across learning. Elife 9:e59360. https://doi.org/10.7554/eLife.59360
  10. Bowman CR, Zeithamova D (2018) Abstract memory representations in the ventromedial prefrontal cortex and hippocampus support concept generalization. J Neurosci 38:2811–2817. https://doi.org/10.1523/JNEUROSCI.2811-17.2018
  11. Braunlich K, Liu Z, Seger CA (2017) Occipitotemporal category representations are sensitive to abstract category boundaries defined by generalization demands. J Neurosci 37:7631–7642. https://doi.org/10.1523/JNEUROSCI.3825-16.2017
  12. Braunlich K, Seger CA (2016) Categorical evidence, confidence, and urgency during probabilistic categorization. Neuroimage 125:941–952. https://doi.org/10.1016/j.neuroimage.2015.11.011
  13. Cantwell G, Crossley MJ, Ashby FG (2015) Multiple stages of learning in perceptual categorization: evidence and neurocomputational theory. Psychon Bull Rev 22:1598–1613. https://doi.org/10.3758/s13423-015-0827-2
  14. Choi Y, Shin EY, Kim S (2020) Spatiotemporal dissociation of fMRI activity in the caudate nucleus underlies human de novo motor skill learning. Proc Natl Acad Sci U S A 117:23886–23897. https://doi.org/10.1073/pnas.2003963117
  15. Clithero JA, Rangel A (2014) Informatic parcellation of the network involved in the computation of subjective value. Soc Cogn Affect Neurosci 9:1289–1302. https://doi.org/10.1093/scan/nst106
  16. Cortese A, Yamamoto A, Hashemzadeh M, Sepulveda P, Kawato M, Martino BD (2021) Value signals guide abstraction during learning. Elife 10:e68943. https://doi.org/10.7554/eLife.68943
  17. Davis T, Love BC, Preston AR (2012) Learning the exception to the rule: model-based fMRI reveals specialized representations for surprising category members. Cereb Cortex 22:260–273. https://doi.org/10.1093/cercor/bhr036
  18. Floyer-Lea A, Matthews PM (2004) Changing brain networks for visuomotor control with increased movement automaticity. J Neurophysiol 92:2405–2412. https://doi.org/10.1152/jn.01092.2003
  19. Forman SD, Cohen JD, Fitzgerald M, Eddy WF, Mintun MA, Noll DC (1995) Improved assessment of significant activation in functional magnetic resonance imaging (fMRI): use of a cluster-size threshold. Magn Reson Med 33:636–647. https://doi.org/10.1002/mrm.1910330508
  20. Frank SM, Maechler MR, Fogelson SV, Tse PU (2023) Hierarchical categorization learning is associated with representational changes in the dorsal striatum and posterior frontal and parietal cortex. Hum Brain Mapp 44:3897–3912. https://doi.org/10.1002/hbm.26323
  21. Freedman DJ, Assad JA (2016) Neuronal mechanisms of visual categorization: an abstract view on decision making. Annu Rev Neurosci 39:129–147. https://doi.org/10.1146/annurev-neuro-071714-033919
  22. Gilboa A, Marlatte H (2017) Neurobiology of schemas and schema-mediated memory. Trends Cogn Sci 21:618–631. https://doi.org/10.1016/j.tics.2017.04.013
  23. Gilboa A, Moscovitch M (2021) No consolidation without representation: correspondence between neural and psychological representations in recent and remote memory. Neuron 109:2239–2255. https://doi.org/10.1016/j.neuron.2021.04.025
  24. Gluth S, Rieskamp J, Büchel C (2012) Deciding when to decide: time-variant sequential sampling models explain the emergence of value-based decisions in the human brain. J Neurosci 32:10686–10698. https://doi.org/10.1523/JNEUROSCI.0727-12.2012
  25. Goebel R, Esposito F, Formisano E (2006) Analysis of functional image analysis contest (FIAC) data with BrainVoyager QX: from single-subject to cortically aligned group general linear model analysis and self-organizing group independent component analysis. Hum Brain Mapp 27:392–401. https://doi.org/10.1002/hbm.20249
  26. Gordon EM, et al. (2023) A somato-cognitive action network alternates with effector regions in motor cortex. Nature 617:351–359. https://doi.org/10.1038/s41586-023-05964-2
  27. Helie S, Roeder JL, Ashby FG (2010) Evidence for cortical automaticity in rule-based categorization. J Neurosci 30:14225–14234. https://doi.org/10.1523/JNEUROSCI.2393-10.2010
  28. Helie S, Roeder JL, Vucovich L, Rünger D, Ashby FG (2015) A neurocomputational model of automatic sequence production. J Cogn Neurosci 27:1412–1426. https://doi.org/10.1162/jocn_a_00794
  29. Hélie S, Turner BO, Crossley MJ, Ell SW, Ashby FG (2016) Trial-by-trial identification of categorization strategy using iterative decision-bound modeling. Behav Res Methods 49:1146–1162. https://doi.org/10.3758/s13428-016-0774-5
  30. Hélie S, Waldschmidt JG, Ashby FG (2010) Automaticity in rule-based and information-integration categorization. Atten Percept Psychophys 72:1013–1031. https://doi.org/10.3758/APP.72.4.1013
  31. Hikosaka O, Yasuda M, Nakamura K, Isoda M, Kim HF, Terao Y, Amita H, Maeda K (2019) Multiple neuronal circuits for variable object–action choices based on short- and long-term memories. Proc Natl Acad Sci U S A 116:26313–26320. https://doi.org/10.1073/pnas.1902283116
  32. Homa D, Sterling S, Trepel L (1981) Limitations of exemplar-based generalization and the abstraction of categorical information. J Exp Psychol Hum Learn Mem 7:418. https://doi.org/10.1037/0278-7393.7.6.418
  33. Jackson RL, Bajada CJ, Lambon Ralph MA, Cloutman LL (2020) The graded change in connectivity across the ventromedial prefrontal cortex reveals distinct subregions. Cereb Cortex 30:165–180. https://doi.org/10.1093/cercor/bhz079
  34. Kriegeskorte N, Goebel R, Bandettini P (2006) Information-based functional brain mapping. Proc Natl Acad Sci U S A 103:3863–3868. https://doi.org/10.1073/pnas.0600244103
  35. Lehéricy S, Benali H, Van de Moortele P-F, Pélégrini-Issac M, Waechter T, Ugurbil K, Doyon J (2005) Distinct basal ganglia territories are engaged in early and advanced motor sequence learning. Proc Natl Acad Sci U S A 102:12566–12571. https://doi.org/10.1073/pnas.0502762102
  36. Mack ML, Preston AR, Love BC (2013) Decoding the brain’s algorithm for categorization from its neural implementation. Curr Biol 23:2023–2027. https://doi.org/10.1016/j.cub.2013.08.035
  37. Mack ML, Preston AR, Love BC (2020) Ventromedial prefrontal cortex compression during concept learning. Nat Commun 11:46. https://doi.org/10.1038/s41467-019-13930-8
  38. Mumford JA, Turner BO, Ashby FG, Poldrack RA (2012) Deconvolving BOLD activation in event-related designs for multivoxel pattern classification analyses. Neuroimage 59:2636–2643. https://doi.org/10.1016/j.neuroimage.2011.08.076
  39. Nieuwenhuis ILC, Takashima A (2011) The role of the ventromedial prefrontal cortex in memory consolidation. Behav Brain Res 218:325–334. https://doi.org/10.1016/j.bbr.2010.12.009
  40. O’Doherty JP, Cockburn J, Pauli WM (2017) Learning, reward, and decision making. Annu Rev Psychol 68:73–100. https://doi.org/10.1146/annurev-psych-010416-044216
  41. Pelletier G, Aridan N, Fellows LK, Schonberg T (2021) A preferential role for ventromedial prefrontal cortex in assessing “the value of the whole” in multiattribute object evaluation. J Neurosci 41:5056–5068. https://doi.org/10.1523/JNEUROSCI.0241-21.2021
  42. Pernet CR (2014) Misconceptions in the use of the general linear model applied to functional MRI: a tutorial for junior neuro-imagers. Front Neurosci 8:1. https://doi.org/10.3389/fnins.2014.00001
  43. Pool ER, et al. (2022) Determining the effects of training duration on the behavioral expression of habitual control in humans: a multilaboratory investigation. Learn Mem 29:16–28. https://doi.org/10.1101/lm.053413.121
  44. Posner MI, Keele SW (1968) On the genesis of abstract ideas. J Exp Psychol 77:353–363. https://doi.org/10.1037/h0025953
  45. Rusu SI, Pennartz CMA (2020) Learning, memory and consolidation mechanisms for behavioral control in hierarchically organized cortico-basal ganglia systems. Hippocampus 30:73–98. https://doi.org/10.1002/hipo.23167
  46. Seger CA (2008) How do the basal ganglia contribute to categorization? Their roles in generalization, response selection, and learning via feedback. Neurosci Biobehav Rev 32:265–278. https://doi.org/10.1016/j.neubiorev.2007.07.010
  47. Seger CA (2018) Corticostriatal foundations of habits. Curr Opin Behav Sci 20:153–160. https://doi.org/10.1016/j.cobeha.2018.01.006
  48. Seger CA, Braunlich K, Wehe HS, Liu Z (2015) Generalization in category learning: the roles of representational and decisional uncertainty. J Neurosci 35:8802–8812. https://doi.org/10.1523/JNEUROSCI.0654-15.2015
  49. Seger CA, Cincotta CM (2005) The roles of the caudate nucleus in human classification learning. J Neurosci 25:2941–2951. https://doi.org/10.1523/JNEUROSCI.3401-04.2005
  50. Seger CA, Miller EK (2010) Category learning in the brain. Annu Rev Neurosci 33:203–219. https://doi.org/10.1146/annurev.neuro.051508.135546
  51. Seger CA, Peterson EJ, Cincotta CM, Lopez-Paniagua D, Anderson CW (2010) Dissociating the contributions of independent corticostriatal systems to visual categorization learning through the use of reinforcement learning modeling and Granger causality modeling. Neuroimage 50:644–656. https://doi.org/10.1016/j.neuroimage.2009.11.083
  52. Seger CA, Spiering BJ (2011) A critical review of habit learning and the basal ganglia. Front Syst Neurosci 5:66. https://doi.org/10.3389/fnsys.2011.00066
  53. Smith JD, Redford JS, Gent LC, Washburn DA (2005) Visual search and the collapse of categorization. J Exp Psychol Gen 134:443–460. https://doi.org/10.1037/0096-3445.134.4.443
  54. Swaminathan SK, Masse NY, Freedman DJ (2013) A comparison of lateral and medial intraparietal areas during a visual categorization task. J Neurosci 33:13157–13170. https://doi.org/10.1523/JNEUROSCI.5723-12.2013
  55. Theves S, Neville DA, Fernández G, Doeller CF (2021) Learning and representation of hierarchical concepts in hippocampus and prefrontal cortex. J Neurosci 41:7675–7686. https://doi.org/10.1523/JNEUROSCI.0657-21.2021
  56. Waldron EM, Ashby FG (2001) The effects of concurrent task interference on category learning: evidence for multiple category learning systems. Psychon Bull Rev 8:168–176. https://doi.org/10.3758/BF03196154
  57. Waldschmidt JG, Ashby FG (2011) Cortical and striatal contributions to automaticity in information-integration categorization. Neuroimage 56:1791–1802. https://doi.org/10.1016/j.neuroimage.2011.02.011
  58. Wheeler ME, Woo SG, Ansel T, Tremel JJ, Collier AL, Velanova K, Ploran EJ, Yang T (2014) The strength of gradually accruing probabilistic evidence modulates brain activity during a categorical decision. J Cogn Neurosci 27:705–719. https://doi.org/10.1162/jocn_a_00739
  59. Yin HH, Knowlton BJ (2006) The role of the basal ganglia in habit formation. Nat Rev Neurosci 7:464–476. https://doi.org/10.1038/nrn1919
  60. Zeithamova D, Maddox WT (2007) The role of visuospatial and verbal working memory in perceptual category learning. Mem Cognit 35:1380–1398. https://doi.org/10.3758/BF03193609
  61. Zeithamova D, Maddox WT, Schnyer DM (2008) Dissociable prototype learning systems: evidence from brain imaging and behavior. J Neurosci 28:13194–13201. https://doi.org/10.1523/JNEUROSCI.2915-08.2008
Keywords

  • automaticity
  • category learning
  • caudate
  • habit
  • putamen
  • skill
