Research Articles, Behavioral/Cognitive

Perception and Memory Reinstatement Engage Overlapping Face-Selective Regions within Human Ventral Temporal Cortex

Yvonne Y. Chen, Aruni Areti, Daniel Yoshor and Brett L. Foster
Journal of Neuroscience 29 May 2024, 44 (22) e2180232024; https://doi.org/10.1523/JNEUROSCI.2180-23.2024
Author affiliations:
1. Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (Y.Y. Chen, D. Yoshor, B.L. Foster)
2. Rice University, Houston, Texas 77030 (A. Areti)

Abstract

Humans have the remarkable ability to vividly retrieve sensory details of past events. According to the theory of sensory reinstatement, during remembering, brain regions specialized for processing specific sensory stimuli are reactivated to support content-specific retrieval. Recently, several studies have emphasized transformations in the spatial organization of these reinstated activity patterns. Specifically, studies of scene stimuli suggest a clear anterior shift in the location of retrieval activations compared with the activity observed during perception. However, it is not clear that such transformations occur universally, with inconsistent evidence for other important stimulus categories, particularly faces. One challenge in addressing this question is the careful delineation of face-selective cortices, which are interdigitated with other selective regions, in configurations that spatially differ across individuals. Therefore, we conducted a multisession neuroimaging study to first carefully map individual participants’ (nine males and seven females) face-selective regions within ventral temporal cortex (VTC), followed by a second session to examine the activity patterns within these regions during face memory encoding and retrieval. While face-selective regions were expectedly engaged during face perception at encoding, memory retrieval engagement exhibited a more selective and constricted reinstatement pattern within these regions, but did not show any consistent direction of spatial transformation (e.g., anteriorization). We also report on unique human intracranial recordings from VTC under the same experimental conditions. These findings highlight the importance of considering the complex configuration of category-selective cortex in elucidating principles shaping the neural transformations that occur from perception to memory.

  • episodic memory
  • face processing
  • fMRI
  • iEEG
  • reinstatement

Significance Statement

Sensory reinstatement suggests that brain regions involved in the initial sensory processing of a stimulus are reactivated to support successful memory retrieval. However, recent findings have suggested that reinstated cortical activations occur anterior to perceptually driven activity, particularly for scene stimuli. It remains unclear if this anteriorization occurs for other stimuli, such as faces. To address this question, we conducted a multisession functional magnetic resonance imaging study to identify face-selective regions in ventral temporal cortex and examined activity within these regions during face memory reinstatement. Results showed that retrieval activity closely aligned with the perceptual neural substrate, overlapping individual-specific face-selective regions without a consistent spatial shift. This underscores the importance of considering individual functional organization when investigating the neural substrates of perception–memory transformations.

Introduction

During episodic memory retrieval, prior experiences can be consciously remembered to support behavior. A leading theory for this process posits that accessing sensory details of past events occurs via the reinstatement of prior activity patterns in sensory cortex (Tulving and Thomson, 1973; Danker and Anderson, 2010). This sensory reinstatement of prior neural patterns is thought to result from hippocampally mediated cortical reactivation (McClelland et al., 1995; Bosch et al., 2014; Xue, 2018; Staresina and Wimber, 2019). A substantial literature has reported how encoding activity patterns in sensory cortex are reinstated during retrieval in a stimulus-specific fashion, whereby the degree of similarity between retrieval and encoding predicts greater memory strength and accuracy (Polyn et al., 2005; Ritchey et al., 2013; Gordon et al., 2014; Kuhl and Chun, 2014; St-Laurent et al., 2015; Favila et al., 2018; Wang et al., 2022). However, human memory retrieval is not a perfect recapitulation of prior perceptual experiences but rather a transformed and reconstructed form of prior events (Bartlett and Burt, 1932; Roediger, 2001; Squire, 2004; Hemmer and Steyvers, 2009). Therefore, in more recent years, a growing literature has focused on identifying what principles and mechanisms support these transformations between perception and memory (Favila et al., 2020; Xue, 2022).

Growing evidence suggests one potential transformation is the topographical anterior shift in the locus of retrieval activations relative to the perceptual/encoding substrate. This “anterior shift” has precedents in the literature across different stimulus domains (Beauchamp et al., 1999; Chao and Martin, 1999; Simmons et al., 2007; Hsu et al., 2011; Rugg and Thompson-Schill, 2013); however, most recently supporting evidence has come from studies of place/scene perception and memory (Fairhall et al., 2014; Baldassano et al., 2016; Silson et al., 2016, 2019; Bainbridge et al., 2021; Steel et al., 2021; Srokova et al., 2022). At present, it is unclear if this mnemonic anteriorization is also consistently observed for face stimuli and reflects a common feature of reinstatement (Steel et al., 2021; Srokova et al., 2022). As reinstatement is proposed to recapitulate encoding patterns, the identification and quantification of any transformation require careful delineation of the perceptually driven cortical substrate of each individual.

Historically, response selectivity for face stimuli was associated with a localized aspect of the fusiform gyrus [fusiform face area (FFA); Kanwisher et al., 1997; Kanwisher and Yovel, 2006; Kanwisher, 2010, 2017]. Subsequent work in the human and nonhuman primate has shown there to be a more complex network of face-selective regions (patches) throughout the ventral stream (Tsao et al., 2006, 2008; Pinsk et al., 2009; Weiner and Grill-Spector, 2010, 2012, 2013; Chang and Tsao, 2017; Hesse and Tsao, 2020; Chen et al., 2023). Importantly, this progress includes evidence of the classic FFA comprising two distinct face-selective clusters in the fusiform gyrus, mFus (medial fusiform, FFA2) and pFus (posterior fusiform, FFA1; Pinsk et al., 2009; Weiner and Grill-Spector, 2010, 2012, 2013; Chen et al., 2023), often observed together with the inferior occipital gyrus (IOG or “occipital face area”; Gauthier et al., 2000; Pitcher et al., 2011; Rossion et al., 2018). These face-selective regions are bordered by selective regions for human body and limb stimuli (without faces), as well as other object categories (Grill-Spector and Weiner, 2014). Most recently, a large-scale examination has shown systematic variation in the number and configuration of face-selective regions, suggesting three putative “types” of face-selective organization (Chen et al., 2023). Together, these findings suggest that identification of face-selective cortex, as the putative substrate for later reinstatement of face stimuli, requires careful dissociation from related visual categories and with respect to individual variations. Therefore, we performed a multisession functional magnetic resonance imaging (fMRI) experiment to first isolate face-selective cortex within each individual and then examine activity patterns within these selective regions during the study and cued retrieval of face stimuli (Fig. 1a,b).

Figure 1.

Experimental tasks and region of interest. a, Visual object localizer task for identifying face-selective regions. During each trial, eight images from a given visual category were presented as a mini block at the rate of 500 ms per image for a total of 4,000 ms. Participants had to provide a button press to indicate any one-back stimulus repetition. Example stimuli (bottom) from each of the 10 visual categories are shown. b, Memory task experimental procedure. During the study phase, word–image pairs were presented for 5,000 ms. During the delay period, between the encoding and retrieval phases, participants were asked to make an odd or even judgment on presented single-digit numbers. During the test phase, cue words (old/studied and new/unstudied) were presented for 5,000 ms followed by a memory strength judgment. Participants were required to encode word–face associations and to retrieve the associated face image from previously studied word cues. c, Example right hemisphere anatomy of the human VTC (inflated), highlighting key sulci and gyri.

During successful retrieval of face stimuli, we observed consistent overlap within face-selective regions but with a reduced magnitude and spatial extent of response. Importantly, we did not find any evidence for topographic transformation, including anteriorization, of retrieval activity patterns. Finally, in consideration of the neurophysiological basis of our observed fMRI responses, we report on unique human intracranial electroencephalogram (iEEG) data during the same experimental conditions. Together, these findings support the view that, for face category stimuli, memory reinstatement recapitulates individual-specific perceptual substrates, but does not appear to reflect any consistent directed topographic transformation.

Materials and Methods

fMRI participants

Sixteen healthy participants (seven females, two self-reported left-handed, mean age 27.9 years, ranging from 18 to 39 years) with normal or corrected-to-normal vision completed the experiment. Informed consent was obtained prior to participation. The experimental protocol was approved by the Committee for the Protection of Human Subjects at Baylor College of Medicine [Institutional Review Board (IRB) Protocol Number H-38398]. All participants were monetarily compensated for their time.

fMRI experimental design

To ensure robust neural responses and limit the fatigue of participants, this study comprised two experimental sessions. Session 1 was a 60 min scanning session of two high-resolution anatomical structural scans and three runs of a visual object localizer task for robust identification of visual category selectivity in the ventral temporal cortex (VTC). Session 2 took place on average 39 h (1.6 d, ranging from 22 to 90 h) after Session 1 and was a 90 min scanning session of one high-resolution anatomical structural scan and three runs of a paired-associate memory task.

Participants were scanned at Baylor College of Medicine's Core for Advanced MRI. Stimuli were presented on an MR-compatible 32″ LCD display screen (BOLDscreen, Cambridge Research Systems) placed behind the bore of the scanner. Behavioral responses were collected using a fiber-optic button response pad (Current Designs). Stimuli were presented and synchronized with the MR data acquisition using MATLAB (MathWorks) and the Psychophysics Toolbox (Brainard, 1997; Pelli, 1997).

Session 1: visual object localizer

During the visual object localizer task, participants were presented with grayscale images from 10 visual categories: faces, bodies, cars, corridors, instruments, limbs, houses, letters, numbers, and phase-scrambled noise. Visual stimuli came from a publicly available corpus that has been successfully used as a visual category localizer in fMRI studies (Stigliani et al., 2015). The stimuli were statistically matched for contrast, luminance, and spatial frequency. Following prior work, stimuli were presented in a mini block design. During each mini block, participants viewed eight images from the same visual category at a rate of 500 ms per image, for a total of 4,000 ms (Fig. 1a). A total of 60 mini blocks (six mini blocks per visual category) were presented in a random order. Across the entire experiment, 20% of the mini blocks (12 mini blocks) contained a target trial (repeat images), where participants were asked to respond via button press when an identical stimulus was repeated back-to-back (one-back task, Fig. 1a). Participants performed three runs of the visual object localizer task (∼6 min per run) to allow robust identification of visual category selectivity in the VTC.
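The mini block structure and one-back target logic can be sketched as follows (an illustrative Python sketch with hypothetical helper names; the actual experiment was implemented in MATLAB with the Psychophysics Toolbox):

```python
import random

def make_miniblock(images, n=8, repeat=False, rng=None):
    """Draw n images from one visual category for a mini block;
    if repeat, duplicate one image back-to-back as a one-back target."""
    rng = rng or random.Random()
    block = rng.sample(images, n)          # without replacement: no accidental repeats
    if repeat:
        i = rng.randrange(1, n)            # position of the repeated image
        block[i] = block[i - 1]            # back-to-back repetition
    return block

def one_back_targets(block):
    """Indices where a one-back button press is expected."""
    return [i for i in range(1, len(block)) if block[i] == block[i - 1]]
```

At 500 ms per image, an 8-image block spans the 4,000 ms described above; 12 of the 60 blocks would be generated with `repeat=True`.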

Session 2: paired-associate memory task

Visual object localizer and memory tasks were scanned in separate sessions to help reduce participant fatigue from long scanning time without compromising data quality. Localizer and memory tasks were conducted on average 1.6 d apart. There were three phases to the memory task: study, delay, and test. During the study phase, participants were presented with 30 word–picture pairs. For each trial, single words (e.g., “ROCK,” “GLASS”) were displayed above color photographs of well known people (e.g., “Barack Obama” or “Taylor Swift”) for 5 s per pair with a jittered interstimulus interval between 4 and 6 s (Fig. 1b). Participants were instructed to associate each arbitrary word–face pair. After the study phase, during the delay phase, participants were presented with 14 trials of a single-digit number (e.g., “6” or “5”) and were asked to judge whether the number was odd or even via button press. Numbers were presented for 1,500 ms with 500 ms interstimulus interval. During the test phase, participants were presented with 60 cue words (30 old/studied and 30 new/unstudied, not associated with a face image). Single cue words were presented for 5 s, during which participants were instructed to retrieve and bring to mind the associated face picture as vividly as possible. Once the cue was removed, participants were asked to rate their memory via a button press on a four-point scale: (1) “No memory”; (2) “Yes, memory for word only”; (3) “Yes, memory for word and picture weak”; (4) “Yes, memory for word and picture strong.” The question/response screen was presented for 5 s with a jittered interstimulus interval between 4 and 6 s (Fig. 1b). Participants performed three runs of the memory task (∼21 min per run) to allow robust identification of the memory study and test activity in the VTC.

Word stimuli were selected from the Medical Research Council Psycholinguistic Database (http://www.psy.uwa.edu.au/MRCDataBase/uwa_mrc.htm), constrained by word length (4–8 letters), number of syllables (1–3), concreteness rating (600–700), and imageability rating (600–700). Face stimuli were color images of famous individuals, with equal numbers of women and men, obtained from Google Image Search. Images were rescaled to 450 × 450 pixels at a resolution of 97.987 pixels/inch.

fMRI methods

Imaging acquisition

Imaging data were acquired using a 3 T Siemens Trio MRI scanner (Siemens) equipped with a 32-channel head coil. A total of three high-resolution T1-weighted anatomical scans (TR, 2,600 ms; matrix, 256 × 256; 176 slices; voxel size, 1.0 × 1.0 × 1.0 mm3; FOV, 256 mm; TE, 3.02 ms; flip angle, 8°) were collected. fMRI scans covered the entire brain using a continuous multislice echo planar sequence. Scanning parameters were identical across functional scans (visual object localizer and memory task): TR, 2,000 ms; TE, 30 ms; flip angle, 72°; in-plane resolution, 2 × 2 mm; 69 axial slices of 2 mm thickness; multiband acceleration factor, 3; and voxel size, 2.0 × 2.0 × 2.0 mm3. A total of 153 volumes were acquired for each run of the visual object localizer and 638 volumes for each run of the memory task.

fMRI analysis

To better understand the precise nature of memory reinstatement in VTC, we implemented a surface-based fMRI analysis at the individual participant level. Preprocessing was performed using fMRIPrep 1.4.0 (Esteban et al., 2019), which is based on Nipype 1.2.0 (Gorgolewski et al., 2011). This pipeline uses a combination of well known software packages to implement the most suitable tools for each stage of preprocessing. The preprocessed task fMRI data were entered into a general linear model (GLM) to estimate the fMRI activity at each vertex/voxel in each run using the FreeSurfer Functional Analysis Stream (FsFast; Burock and Dale, 2000). Linear contrasts were computed to estimate the effects of interest. Fixed-effect analyses were conducted to estimate the average effects across runs within each participant.

Functional data analysis

Using the output from fMRIPrep, a surface-based vertex–wise GLM was used to analyze the preprocessed time series. The GLM included regressors of no experimental interest generated from fMRIPrep preprocessing, including motion estimates. Within-subject first–level analysis was carried out using FsFast. For the visual object localizer, the GLM contained 10 regressors-of-interest corresponding to the 10 visual categories: faces, bodies, cars, corridors, instruments, limbs, houses, letters, numbers, and scrambled. For the memory experiment, the GLM contained five regressors-of-interest: stimulus presentation (study phase), fixation (study phase), hits (test phase), correct rejections (test phase), and retrieval errors (test phase, misses and false alarms).
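The vertex-wise GLM and linear contrast estimation can be illustrated with a minimal ordinary least squares sketch on simulated data (hypothetical data and function name; the actual analysis used FsFast, which also convolves regressors with a hemodynamic response function, omitted here for brevity):

```python
import numpy as np

def glm_contrast(Y, X, c):
    """OLS betas and contrast t-statistics for time series Y (T x V vertices),
    design matrix X (T x P regressors), and contrast vector c (P,)."""
    beta, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)      # P x V
    resid = Y - X @ beta
    dof = X.shape[0] - np.linalg.matrix_rank(X)
    sigma2 = (resid ** 2).sum(axis=0) / dof                # residual variance per vertex
    cvc = c @ np.linalg.pinv(X.T @ X) @ c                  # variance factor of c'beta
    t = (c @ beta) / np.sqrt(sigma2 * cvc)
    return beta, t
```

For the localizer, c would place 1 on the face regressor and negative weights on the other nine categories; for the memory task, a contrast such as hits > correct rejections is formed the same way.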

Manual definition of visual-selective regions

Face-selective regions on the VTC surface were manually identified for each hemisphere and each participant. First, contiguous sets of cortical surface vertices that passed a common threshold (t > 3) of the face-selective map (face > all other visual categories) formed face-selective clusters. Then, these clusters were labeled as mFus, pFus, and IOG based on anatomical landmarks and criteria outlined in prior work (Weiner and Grill-Spector, 2010, 2012, 2013; Chen et al., 2023). Similarly, for control analyses, the parahippocampal place area (PPA) and visual word form area (VWFA) were manually identified for each hemisphere and each participant based on the same cluster-forming threshold (t > 3), applied to the place-selective maps (place > all other visual categories) for PPA and the word-selective maps (word > all other visual categories) for VWFA. The place-selective and word-selective clusters were labeled as PPA and VWFA, respectively, based on anatomical landmarks outlined in prior work (Epstein et al., 1999; Kanwisher and Yovel, 2006).
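The cluster-forming step — contiguous suprathreshold vertices on the cortical surface — can be sketched as a connected-components search over mesh edges (an illustrative Python sketch; FreeSurfer surface tools were used in practice, and anatomical labeling of the resulting clusters remained manual):

```python
from collections import defaultdict, deque

def suprathreshold_clusters(t_vals, edges, thresh=3.0):
    """Contiguous sets of surface vertices with t > thresh.
    t_vals: dict vertex -> t statistic; edges: iterable of (u, v) mesh edges."""
    keep = {v for v, t in t_vals.items() if t > thresh}
    adj = defaultdict(set)
    for u, v in edges:                      # keep only edges between surviving vertices
        if u in keep and v in keep:
            adj[u].add(v)
            adj[v].add(u)
    clusters, seen = [], set()
    for start in keep:                      # breadth-first search per component
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        seen.add(start)
        while queue:
            v = queue.popleft()
            comp.add(v)
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        clusters.append(comp)
    return clusters
```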

Activity-weighted center-of-mass analysis

To quantify possible shifts in the location of the neural response from perception to the study and test phases of the memory task, we calculated the weighted center of mass for each face-selective region on each surface, where the contribution of each vertex to the center of mass was weighted by its activity map (t statistic of the contrast), as outlined in Steel et al. (2021). In simple cases, the center of mass of an object is located at its geometric center, such as the middle of a ball. For face-selective regions, however, the distribution of neural activity becomes crucial: parts of the region with stronger activity pull the center of mass toward them more than parts with weaker activity. Consequently, the activity-weighted center of mass reflects where neural activity is concentrated and is sensitive to changes in the distribution of activity levels. Activity-weighted centers of mass were calculated within the boundaries of face-selective ROIs (mFus, pFus, and IOG) for each participant and hemisphere. These calculations were weighted by the t statistic of the following contrasts: “face > all other visual categories” for face selectivity, “stimuli presentation > fixation” for study-phase activity, and “hits > correct rejections” for test-phase activity. Importantly, no thresholding of t values was applied to these calculations. This ensured that every vertex within a given ROI, along with its associated activity value, contributed to the activity weighting. Shifts in the center of mass were measured in millimeters along the y-dimension (anterior–posterior) and the x-dimension (medial–lateral). This method allows us to investigate whether there is a shift in the anatomical location of activation maps during the test phase compared with the localizer and study phases. However, it cannot fully distinguish between different scenarios of spatial transformation (Fig. 4a).
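The activity-weighted center of mass described above reduces to a weighted mean of vertex coordinates (a minimal numpy sketch, assuming surface coordinates where +y is anterior, as in RAS conventions; the actual analysis followed Steel et al., 2021):

```python
import numpy as np

def weighted_center_of_mass(coords, weights):
    """Activity-weighted center of mass of an ROI.
    coords: (V, 3) vertex coordinates (x = medial-lateral, y = anterior-posterior);
    weights: (V,) unthresholded t statistics (no thresholding, as in the text)."""
    coords = np.asarray(coords, float)
    w = np.asarray(weights, float)
    return (w[:, None] * coords).sum(axis=0) / w.sum()

def anterior_shift(coords, w_perception, w_retrieval):
    """y-coordinate difference of the two weighted centers (mm);
    positive = retrieval center lies anterior to the perceptual one."""
    return (weighted_center_of_mass(coords, w_retrieval)[1]
            - weighted_center_of_mass(coords, w_perception)[1])
```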

ECoG methods

Patient

Intracranial recordings (electrocorticography, ECoG) were obtained from one male patient (20 years of age) undergoing invasive monitoring for the potential surgical treatment of refractory epilepsy at Baylor St. Luke's Medical Center. He provided written and verbal voluntary consent to participate in the experiments reported here. All experimental protocols were approved by the IRB at Baylor College of Medicine (IRB Protocol Number H-18112). We excluded electrodes at epileptic foci, and no experiments were recorded in the presence of interictal epileptic discharges.

ECoG experimental design

Taking a similar approach to our fMRI study above, the patient performed the visual object localizer and word–face memory tasks (Fig. 5a,b). Details of these tasks are reported above and briefly summarized below.

Experiment 1: visual object localizer

In the perception task, the patient was presented with grayscale images from eight visual categories (faces, houses, bodies, limbs, cars, words, numbers, and phase-scrambled noise) in a random order (Fig. 5a). On each trial, stimuli were shown for 1,000 ms, with a random interstimulus interval (ISI) between 1,000 and 1,500 ms. During the task, the patient was required to press a button whenever he detected a specific stimulus being repeated back-to-back (one-back task). A total of 15 different stimuli were presented for each category, with 10 random images being repeated (serving as targets), leading to a total of 160 trials. On average, the task was 7 min in duration. The patient performed two runs of this experiment.

Experiment 2: memory task (word–picture paired associates)

In the memory task, the patient performed a paired-associate paradigm with a study and test phase. During the study phase, the patient was presented with 15 word–face pairs. Each pair was presented for 5,000 ms, with a self-paced ISI. The patient pressed a button to advance to the next word–picture pair. After a short delay, the test phase began, with the patient being presented with only cue words displayed above the box frame, but no picture. Cue words were drawn from a list of 15 old (from the study phase) and 15 new words (Fig. 5b). On each trial of the test phase, the patient was required to retrieve the picture associated with the cue. Cue words were presented for 5,000 ms, followed by a response period during which the patient was asked to rate the memory strength of the cue and its associate. On average, the task was 10 min in duration. The patient performed three runs of this experiment.

Electrophysiological recording

Intracranial EEG data were acquired at a sample rate of 2 kHz and a bandpass of 0.3–500 Hz (fourth-order Butterworth filter) using a Blackrock Cerebus system (Blackrock Microsystems). Initial recordings were referenced to an inverted subdural electrode away from the pathological zones. During recordings, stimulus presentation was tracked using a photodiode sensor (attached to the stimulus monitor) synchronously recorded at 30 kHz. All additional data processing was performed offline.

Electrode localization and selection

To identify electrodes located within the VTC, we coregistered a postoperative CT scan to a preoperative T1 anatomical MRI scan, using FSL and AFNI (Cox, 1996; Dale et al., 1999). The volume location of each electrode was identified by clear hyperintensities on the aligned CT using AFNI and visualized using iELVis software functions in MATLAB (v2016a, MathWorks; Groppe et al., 2017).

Preprocessing and spectral decomposition

All signal processing was performed using custom scripts in MATLAB (v2020b, MathWorks). First, raw EEG signals were inspected for line noise, recording artifacts or interictal epileptic spikes. Electrodes with clear epileptic or artifactual activity were excluded from further analysis. Second, signals were notch-filtered (60 Hz and harmonics) and average-rereferenced. Finally, rereferenced signals were downsampled to 1 kHz and spectrally decomposed using Morlet wavelets, with center frequencies spaced linearly from 2 to 200 Hz in 1 Hz steps (seven cycles).
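The wavelet decomposition step can be sketched in a few lines (an illustrative Python/numpy translation with a hypothetical function name; the actual processing used custom MATLAB scripts, and the amplitude normalization here is a simplification):

```python
import numpy as np

def morlet_power(signal, fs, freqs, n_cycles=7):
    """Time-frequency amplitude via complex Morlet wavelets, with the number
    of cycles fixed across frequencies (seven cycles, as in the text)."""
    out = np.empty((len(freqs), len(signal)))
    for i, f in enumerate(freqs):
        sigma_t = n_cycles / (2 * np.pi * f)                 # Gaussian width in s
        t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wavelet /= np.abs(wavelet).sum()                     # crude amplitude norm
        out[i] = np.abs(np.convolve(signal, wavelet, mode="same"))
    return out
```

In the actual analysis, center frequencies were spaced linearly from 2 to 200 Hz in 1 Hz steps on the notch-filtered, average-rereferenced, 1 kHz signals.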

Statistical analysis

fMRI and iEEG data were quantified using parametric and nonparametric statistical methods, as appropriate for the underlying data distributions. Mixed-effects models, with subject as a random effect, were employed to account for individual variability and population sampling effects. Confidence intervals for Spearman’s rank correlation were estimated from 1,000,000 bootstrap replicates and adjusted using the bias-corrected and accelerated (BCa) method. Statistical analyses were carried out using R statistical software (R Development Core Team, 2010).
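The bootstrap confidence interval for Spearman's rank correlation can be sketched as follows (a simplified percentile version on far fewer replicates; the study used 1,000,000 replicates with BCa correction in R, and the tie-free rank computation here is a further simplification):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation (assumes no ties, for brevity):
    Pearson correlation of the ranks."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return np.corrcoef(rx, ry)[0, 1]

def bootstrap_ci(x, y, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for Spearman's rho over paired resamples."""
    rng = np.random.default_rng(seed)
    n = len(x)
    reps = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)          # resample pairs with replacement
        reps[b] = spearman_rho(x[idx], y[idx])
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])
```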

Data and code availability

The datasets and custom code supporting the current study will be deposited on the National Institute of Mental Health Data Archive and released on project completion. Data and code are also available from the lead contact upon reasonable request.

Results

Task performance

Participants underwent two sessions of fMRI scanning. In the first session (visual object localizer task), participants viewed images from 10 visual categories and were instructed to press a button when the same image was presented back-to-back (one-back task, as shown in Fig. 1a). Participants performed well on the task, with a mean accuracy of 93.95% (SD, ±10.41).

In the second session (paired-associate memory task), participants were presented with word–face pairs during the study phase. Later at the test phase, they were given only cue words and asked to retrieve the associated faces. The cue words consisted of words that were paired with faces during encoding (referred to as “old” cues) and words that were not (referred to as “new” cues). Participants were instructed to make one of four responses to each cue word: (1) “no,” if they did not remember the cue word from the encoding phase; (2) “yes (word only),” if they remembered the cue word but not the associated face; (3) “yes (word and picture, weak),” if they remembered the cue word and had some details about the paired face; and (4) “yes (word and picture, strong),” if they remembered the cue word and retrieved vivid details about the paired face (Fig. 1b). Memory accuracy was assessed based on participants' responses. Hits referred to correctly identifying the old cue words by responding “yes” (responses 2–4), while correct rejections referred to correctly identifying new cue words by responding “no.” A corrected memory performance score (d′) was also calculated. Overall, participants exhibited high memory performance, with a hit rate of 85.2% (SD, ± 14.6), a correct rejection rate of 90.1% (SD, ± 12.7), and a mean d′ value of 2.8 (SD, ± 0.87).
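The corrected memory score d′ is the difference of the z-transformed hit and false alarm rates (a minimal sketch; the clamping of extreme rates shown here is one common convention and an assumption, as the paper does not state its exact correction):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate, eps=0.01):
    """Sensitivity d' = z(hits) - z(false alarms); rates are clamped away
    from 0/1 so the inverse normal CDF stays finite (assumed convention)."""
    z = NormalDist().inv_cdf
    h = min(max(hit_rate, eps), 1 - eps)
    f = min(max(fa_rate, eps), 1 - eps)
    return z(h) - z(f)
```

For example, applying this to the group-mean rates above (hit rate 0.852, false alarm rate 1 − 0.901 = 0.099) gives a d′ near 2.3; the reported mean of 2.8 is the average of per-participant d′ values, which need not match the d′ of the averaged rates.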

Face-selective regions in VTC

Growing evidence has shown that face selectivity within the VTC is not limited to a single focal region but encompasses several distinct face-selective regions or “patches” in the human and nonhuman primate (Tsao et al., 2006, 2008; Pinsk et al., 2009; Weiner and Grill-Spector, 2010, 2012, 2013; Chang and Tsao, 2017; Hesse and Tsao, 2020; Chen et al., 2023). Indeed, a recent large-scale study involving 1,000 participants revealed systematic differences in the configuration of face-selective regions within human VTC, such that individual activation maps did not resemble the averaged group map (Chen et al., 2023). Specifically, this study identified three types of face-selectivity configurations: (1) a “separate” type with a sizable cortical gap between mFus and pFus; (2) a “continuous” type with little to no cortical gap between these regions; and (3) a “single” type containing only mFus or pFus as defined by anatomical landmarks.

In order to examine the nature of functional overlap between perception and memory, we followed the common approach of leveraging visual category stimuli, specifically face stimuli, given their behavioral relevance to episodic memory and well known functional organization in VTC. To address this question at an individual level, we first conducted a separate visual object localizer scan (Session 1) utilizing established visual category stimuli (Stigliani et al., 2015) to precisely identify face-selective cortices and their spatial configuration for each participant. In doing so, we obtained an individual-specific template of “face-selective” regions for subsequent memory activity comparisons.

First, we identified face-selective regions within each participant, specifically the three face-selective regions mFus, pFus, and IOG. To identify face-selective cortex, we used well established methods (Weiner and Grill-Spector, 2010, 2013; Stigliani et al., 2015). Specifically, we used a GLM to identify regions where responses to face stimuli were greater than to the nine other visual categories shown (body, car, corridor, instrument, limb, house, letter, number, and phase-scrambled noise; Fig. 1a). Face-selective clusters were identified based on a common t statistic threshold across participants (t > 3, vertex level). We reliably identified the three face-selective regions (mFus, pFus, and IOG) in the majority of participants in both left and right hemispheres (Fig. 2e). Consistent with a large literature, we observed face-selective regions in VTC to be interdigitated with body/limb and other object categories, with variable spatial configurations (Fig. 2a). After identifying face-selective regions, we further considered their topographic organization in light of recent large-scale studies suggesting distinct “types” of face-selective cortex configurations across individuals. Within each subject, face-selective functional clusters were labeled based on their anatomical location and spatial extent, and each subject was then assigned one of the face-selectivity configuration types (separate, continuous, or single) established by Chen et al. (2023). Figure 2, a and b, shows the organization of these three face-selective regions on the inflated cortical surfaces of three example participants. Overall, we observed three types of face-selectivity configurations: the separate type (62.5%, Fig. 2a top), the continuous type (21.88%, Fig. 2a middle), and the single type (15.62%, Fig. 2a bottom). Within our sample (n = 16), the proportions of spatial configuration types closely matched those previously reported in a large cohort (Fig. 2d; Chen et al., 2023).

Figure 2.

Face selectivity and study activity in VTC. a, Example participants' functional delineation of visual selectivity in VTC (eight visual categories, face, limb, body, place, word, number, car, and instrument; t > 3, vertex level). b, Same example participants showing face selectivity in VTC (GLM contrast map, face > all other visual categories). Three types of face-selective–region spatial configurations were observed: separate type (top) with discrete mFus, pFus, and IOG face-selective regions; continuous type (middle) with a continuous mFus and pFus region and IOG; and single type (bottom) with only mFus or pFus region and IOG. A common threshold (t > 3, vertex level) was used to define all regions across all participants. Color maps illustrate the t statistics of activation for the given contrast. c, Study phase activities (GLM contrast map, stimulus presentation > fixation) for the same separate, continuous, and single-type example participants in a. Black outlines in c denote the boundaries of face-selective regions from a. d, Percentage of participants with each type of spatial configuration of face-selective regions for left and right hemispheres. e, Percentage of participants with face-selective regions identified in left and right hemispheres. f, Group-averaged proportions of vertices in each face-selective region that were active during localizer only (green) or both localizer and study phases of the memory task (orange). Each black dot denotes the proportion of vertices active during both localizer and memory encoding for a given participant.

Having identified face-selective regions in each participant, we next examined the extent to which this neural substrate was engaged during the presentation of faces at the study phase of a separate memory task (Session 2). Participants were presented with word–face pairs (Fig. 1b), with the face images expected to elicit responses in the identified face-selective regions. Comparing the localizer with the study phase allowed us to confirm that the face-selective regions identified by the localizer were indeed active while participants learned the word–face pairs. Using a GLM contrast of stimulus presentation > fixation during the study phase of the memory task, we identified activation maps revealing the brain areas involved in encoding and processing the word–face pairs. Figure 2c displays the associated study activity maps for the three example configuration types, with the previously identified putative face-selective regions outlined in black. These activation maps were generated using a common t statistic threshold across participants (t > 3, vertex level). In all three example participants, large portions of VTC showed strong activity (yellow–red colormap).

To further quantify the spatial overlap of responses, we examined the degree of activity overlap, within each face-selective region, between the presentation of faces in the visual object localizer and the study phase of the memory task. Using face-selective regions as anatomical masks, we identified vertices as active if their study activation contrast (stimulus presentation > fixation) t statistic was greater than 3. On average, a substantial proportion of vertices in each face-selective region (∼75%) were active during the study phase (Fig. 2f). This result confirmed that the identified face-selective regions, within each subject, served as a robust means of identifying the functional substrate engaged during the study of face stimuli.
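The per-region quantification above amounts to counting suprathreshold study-phase vertices inside each face-selective mask; a minimal numpy sketch (ROI and t values are hypothetical):

```python
import numpy as np

def active_proportion(roi_mask, study_t, t_thresh=3.0):
    """Proportion of vertices within a face-selective ROI whose study-phase
    contrast (stimulus presentation > fixation) exceeds the t > 3 threshold."""
    roi_mask = np.asarray(roi_mask, dtype=bool)
    active = (np.asarray(study_t) > t_thresh) & roi_mask
    return active.sum() / roi_mask.sum()

# Hypothetical 6-vertex ROI; 4 vertices are active during the study phase
roi = np.ones(6, dtype=bool)
study_t = np.array([4.0, 3.5, 2.0, 5.1, 3.2, 1.0])
print(active_proportion(roi, study_t))
```

The same function, applied with the test-phase contrast and its t > 2 threshold, yields the retrieval-overlap proportions reported in the next section.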

In summary, we successfully mapped the unique pattern of face-selective regions in each participant's VTC via a separate, controlled visual object localizer task. Our data revealed systematic differences between individuals in the number and configuration of face-selective regions, consistent with recent population analyses of VTC selectivity mapping (Chen et al., 2023). As expected, activity during the study phase of the word–face memory paradigm engaged the independently identified face-selective cortices. This large functional overlap sets the stage for our next questions regarding the organization of activity within these cortical regions during the successful retrieval of face stimuli.

Memory reactivation in face-selective regions

To assess evidence of cortical reinstatement, we tested whether regions in VTC engaged during memory retrieval overlapped with the face-perception regions identified above. We analyzed responses during the test phase of the memory task using the same approach outlined above. During the test phase, participants were presented with cue words alone, without associated face images, and were asked to vividly retrieve the faces previously paired with the old cue words during the study phase (Fig. 1b). As external visual input from face stimuli was absent during the test phase, neural responses in the identified face-selective regions were likely driven by internal processes, such as memory reinstatement, rather than external visual processing. Using a GLM contrast of successful retrieval of old (hits) > new (correct rejections), we identified test-phase activation maps revealing the brain areas involved in the successful retrieval of cue-associated faces. Figure 3a illustrates the memory task test-phase activity maps for the same three example configuration types as in Figure 2, with putative face-selective regions outlined in black on the inflated cortical surface. These activation maps were generated using a common threshold across participants (t > 2, vertex level). Given the absence of external face stimulus input during the test phase, we expected reduced VTC activity compared with perception (localizer and study phase); however, we still anticipated that face-selective regions would be active to support memory retrieval of faces. Visually, VTC engagement during the test phase appeared less extensive than study-phase activity (Fig. 2).

Figure 3.

Memory retrieval activity within VTC face-selective regions. a, Example participants showing test-phase activity of the memory task in the VTC (center, GLM contrast map, hits > correct rejections) for the separate- (top), continuous- (middle), and single-type (bottom) example participants in Figure 2a. Black outlines denote the boundaries of face-selective regions identified in Figure 2a. Color maps illustrate the t statistic of activation of the contrast. Zoomed-in maps of each face-selective region illustrate the spatial distribution of vertices within the region that were active for (1) face-selective localizer only (green), (2) localizer and study phase (orange), (3) localizer and test phase (magenta), and (4) localizer, study, and test (purple) in the left and right hemisphere. b, Proportions of vertices within each face-selective region [mFus(M), pFus(P), and IOG(I)] that were active for the same example participants during the same contrasts in a. c, Mean t values are shown across three face-selective regions (mFus, pFus, and IOG) and two control regions (PPA and VWFA) during localizer (face > other visual categories, green), study (stimulus presentation > intertrial fixation, orange), and test (hits > correct rejections, magenta). T values were averaged between left and right hemispheres and across participants, with error bars indicating SEM. d, Proportions of active vertices are shown for all participants across three face-selective regions in both left and right hemispheres. e, Group-averaged proportions of active vertices in each face-selective region.

To quantify the spatial overlap of responses, we followed a similar approach to our study-phase analysis. First, using face-selective regions as anatomical masks, we identified vertices as active during successful memory retrieval if their test activation contrast t statistic was greater than 2. Figure 3b shows the proportions of active vertices under different overlap scenarios: localizer only, localizer and study, localizer and test, or localizer, study, and test, for the three example configuration types and face-selective regions. Additionally, the spatial distribution of active vertices within regions for the different overlap scenarios is shown in Figure 3a. On average, ∼25% of vertices within face-selective ROIs were active during the localizer, the study phase, and the test phase of the memory task (Fig. 3e). This result indicates that portions of these face-selective regions are reactivated, with the test-phase activity conforming to individual face-selective neural substrates, supporting the notion of cortical reinstatement. Moreover, only a small number of vertices were active during the test phase but not during the study phase (<5% across participants), indicating a high degree of study–test overlap. It is worth noting that the localizer–study–test overlap varied considerably across regions and participants (Fig. 3d). Given these differences, we examined the relationship between the extent of study–test activity overlap and memory performance (d′). Specifically, we identified vertices that were active during the study (t > 3) and test (t > 2) within a given ROI. We then calculated the proportions of study–test overlap vertices and correlated these proportions with d′. Consistent with prior findings, we observed that greater overlap proportions were significantly correlated with better memory performance (Spearman’s rank correlation, ρ = 0.76; p < 0.001; n = 16; 95% CI, 0.52–0.94).
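The overlap–performance analysis rests on Spearman's rank correlation, which is simply the Pearson correlation of the ranked values. A minimal sketch (without tie handling, which a library routine such as scipy.stats.spearmanr provides; the per-participant numbers below are hypothetical, not the study's data):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks.
    No tie handling -- adequate for continuous overlap proportions."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))

# Hypothetical per-participant values for illustration only
overlap = np.array([0.10, 0.15, 0.22, 0.30, 0.41])  # study–test overlap proportion
d_prime = np.array([0.8, 1.1, 1.0, 1.9, 2.4])       # memory sensitivity (d')
print(spearman_rho(overlap, d_prime))
```

Because only ranks enter the computation, the statistic is robust to monotone transformations of either variable, which is why it is preferred here over Pearson's r for small samples with possibly nonlinear brain–behavior relationships.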

Next, we compared the activation magnitude within each face-selective region during the localizer, study, and test phases of the memory task to two control brain regions, a common approach for assessing the degree of category selectivity (i.e., face vs place). We expected that successful retrieval of face stimuli would elicit a response in face-selective regions but not in place-selective regions. Alternatively, if the responses were similar between face- and place-selective regions, this would suggest that memory retrieval reactivation of cortical regions was not content-specific, but rather a more general top–down modulation of sensory areas (Kastner et al., 1999; Righart et al., 2010). Therefore, we localized the PPA in each participant using an approach similar to that used for identifying face-selective regions. Briefly, we used a GLM contrast to identify regions where responses to corridor/house stimuli were greater than those to the eight other visual categories shown (face, body, car, instrument, limb, letter, number, and phase-scrambled noise). Place-selective clusters were identified using a common t statistic threshold across participants (t > 3, vertex level). From the place-selective clusters, PPA was identified based on its anatomical location (Epstein et al., 1999) in every participant (n = 16). As with PPA, we expected that viewing and processing face stimuli during the localizer would not elicit a response in word-selective regions, our second control region. Unlike PPA, however, we expected word-selective regions to be activated during the study and test, given the presence of word stimuli as word–image associates and cue words. Therefore, we localized the VWFA in each participant using a method similar to that employed for identifying face-selective regions and PPA.
First, we isolated word-selective clusters in VTC via a GLM contrast in which responses to word stimuli were greater than those to all other visual categories shown (face, body, car, instrument, limb, corridor, house, number, and phase-scrambled noise; cluster-forming threshold, t > 3). Next, we identified VWFA (in one or both hemispheres) from word-selective clusters based on established anatomical criteria (Rauschecker et al., 2012; Yeatman et al., 2013). The average t statistic values for each face-selective region, PPA, and VWFA during the localizer (face presentation), study, and test phases were computed and are depicted in Figure 3c. To compare response magnitudes, we used a group linear mixed-effects analysis with task (localizer, study, test), ROI (mFus, pFus, IOG, PPA, VWFA), and hemisphere (left, right) as fixed effects and subject as a random effect. It is worth noting that three distinct contrasts were used to examine task-related activation magnitude: face > other visual categories during the localizer, stimulus presentation > fixation during the study phase, and hits > correct rejections during the test phase. This analysis revealed significant main effects of task and ROI, but no significant effect of hemisphere (Satterthwaite approximations used for significance of model coefficients). While response magnitude appeared equivalent across the three face-selective regions during the localizer, it seemed to differ during the study and test phases. To further examine these possible differences, we conducted additional ROI pairwise comparisons for each task phase using Tukey's range test, with p values adjusted for comparing a family of three estimates.
As expected, we found no significant differences among regions during the localizer (IOG–mFus, t(102.9) = 1.142; p = 0.665; IOG–pFus, t(104.5) = 0.172; p = 0.998; mFus–pFus, t(102.4) = −0.966; p = 0.774) nor during the test phase (IOG–mFus, t(70.2) = −0.168; p = 0.985; IOG–pFus, t(71.1) = 1.074; p = 0.533; mFus–pFus, t(69.9) = 1.265; p = 0.420). However, significant differences among regions were found during the study (IOG–mFus, t(69.5) = 5.902; p < 0.0001; IOG–pFus, t(69.8) = 2.619; p = 0.029; mFus–pFus, t(69.4) = −3.171; p = 0.006). This difference in activation among face-selective regions was likely due to processing of stimulus features unique to the study phase, such as color images of famous faces. More importantly, activation magnitude in control region PPA was substantially reduced compared with that in face-selective regions during all three task phases (mFus–PPA, t(335) = 13.820; p < 0.0001; pFus–PPA, t(337) = 14.891; p < 0.0001; IOG–PPA, t(337) = 17.163; p < 0.0001). In addition, as expected, activation magnitudes were smaller in control region VWFA than in face-selective regions during the localizer (mFus–VWFA, t(132) = 22.545; p < 0.0001; pFus–VWFA, t(133) = 22.623; p < 0.0001; IOG–VWFA, t(130) = 22.979; p < 0.0001), indicating no response to faces, similar to PPA. During the study, VWFA activation was comparable with mFus but significantly lower than pFus and IOG (mFus–VWFA, t(129) = 1.167; p = 0.770; pFus–VWFA, t(129) = 4.207; p = 0.001; IOG–VWFA, t(129) = 6.981; p < 0.0001), and during the test, VWFA did not differ from any face-selective region (mFus–VWFA, t(130) = 0.962; p = 0.872; pFus–VWFA, t(131) = −0.456; p = 0.991; IOG–VWFA, t(129) = 0.765; p = 0.940). Moreover, activation in VWFA was greater than in PPA during the study (PPA–VWFA, t(129) = −6.997; p < 0.0001) and test (PPA–VWFA, t(131) = −3.171; p = 0.016), which was expected given the presence of word stimuli during both these phases.
As noted above, face-selective region reinstatement activity is predicted to closely recapitulate patterns observed during perception, albeit at a reduced magnitude relative to sensory-driven responses. Compared with PPA, the face-selective regions were more activated during face presentation in the localizer and the study phase of the memory task, supporting their role in processing face stimuli. More importantly, the face-selective regions were also more activated than PPA during the test phase, suggesting face-specific retrieval. Furthermore, we examined the relationship between activation magnitude (in both face-selective and control regions) and memory performance (d′), separately for the study and test phases. Only the magnitude of activation during the test phase in face-selective regions exhibited a significant correlation with memory performance (face-selective ROIs: study, Spearman’s rank correlation ρ = 0.28, p = 0.28; test, ρ = 0.65, p = 0.008; PPA: study, ρ = 0.29, p = 0.27; test, ρ = 0.19, p = 0.48; VWFA: study, ρ = 0.28, p = 0.29; test, ρ = 0.24, p = 0.37). Therefore, both the study–test response overlap reported earlier and the magnitude of test-phase activation in face-selective regions were significantly correlated with memory performance. These findings provide further support that the observed responses within the identified face-selective cortex during the test reflect encoding-specific reinstatement in support of memory behavior.

In summary, using individually mapped face-selective regions in each participant's VTC, we found that face retrieval activity engaged an overlapping subset of cortical vertices. The engagement of these face-selective regions was reduced during the memory test phase compared with that during the localizer and the study phase but was still more activated when compared with a place-selective control region. Overall, our study adds to the growing body of evidence supporting the cortical reinstatement theory of memory retrieval and indicates that memory reinstatement is an information-specific process where the precise brain regions involved in processing specific stimuli during perception/encoding are reactivated during successful memory retrieval.

Memory transformation in face-selective regions

Our results showed that the cortical activation overlap between face presentation in the localizer and the study phase was substantial, with ∼75% of face-selective vertices active in both. However, the overlap between face perception (localizer and study phase) and face retrieval (test phase) was more modest, with ∼25% of face-selective vertices active during retrieval. This may indicate that only a subset of face-selective cortex was reactivated because memory retrieval involves selective reactivation of the specific neural networks associated with the initial sensory–perceptual experience (Bone et al., 2020). One alternative explanation for this modest retrieval activity and overlap is the “spatial transformation” hypothesis, which posits that the cortical location of neural activity during memory retrieval systematically differs from that of perception. This idea is supported by evidence from studies of place-/scene-processing networks that have observed a systematic “anterior shift” in cortical regions involved in scene retrieval compared with those involved in scene perception or memory encoding (Fairhall and Ishai, 2007; Rugg and Thompson-Schill, 2013; Baldassano et al., 2016; Silson et al., 2016; Bainbridge et al., 2021; Steel et al., 2021). To test whether a similar anterior shift occurs in individual face-selective regions during memory retrieval, we compared the locations of neural responses during perception with those during memory encoding and retrieval.

To quantify any activity shift within VTC, we employed the activity-weighted center-of-mass approach of Steel et al. (2021), combined with individually identified face-selective regions. It is crucial to consider the activity magnitude of all vertices and the geometry of ROIs, rather than single-vertex estimates of maximal response, which can be noisy in single-subject fMRI ROIs. The activity-weighted center-of-mass calculation addresses this by taking into account both the magnitude of activity and the spatial distribution of all vertices. Specifically, we first extracted activity maps using the identified face-selective regions as masks during the localizer, study, and test phases of the memory task. Next, we calculated the center of mass weighted by the respective activation levels; every vertex within the mask contributed to the calculation, with no t value thresholding applied. This approach allows detection of a center-of-mass shift under most spatial transformation scenarios (Fig. 4a shows schematics of possible overlap scenarios between perception and retrieval activities) while limiting the confounding influence of surrounding nonface-selective regions. Although detection of a location shift is possible, further differentiation of the specific type of spatial transformation may require additional methods. The zoomed-in maps in Figure 4b display the activity-weighted center of mass for each face-selective region in an example participant. The centers of mass for the localizer, study phase, and test phase appeared very close in their anatomical locations. Therefore, to better quantify and visualize any location shift, we represented these centers of mass as coordinates on a polar plot, with the x-axis representing the medial–lateral direction and the y-axis representing the anterior–posterior direction.
In this polar plot, unity (0,0) was defined by the location of the center of mass weighted by face perception activity during the localizer for all three face-selective regions (Fig. 4c). For the example participant, the centers of mass weighted by study activity (orange) and test activity (magenta) appeared shifted away from the center of mass weighted by face perception localizer activity. However, the observed changes were relatively small, all within 2 mm. As a point of reference, anterior shifts during scene retrieval have been reported to range from 10 to 30 mm relative to encoding substrates (Silson et al., 2019; Steel et al., 2021; Srokova et al., 2022). Most importantly, there was no clear systematic shift in any direction.
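The activity-weighted center of mass reduces to a weighted average of vertex coordinates. A minimal sketch with hypothetical coordinates and weights (the handling of negative t values is an assumption here, as the authors' rectification scheme is not specified):

```python
import numpy as np

def weighted_center_of_mass(coords, activity):
    """Center of mass of ROI vertices weighted by unthresholded activity.
    coords: (n, 2) array, columns = (medial–lateral, anterior–posterior);
    activity: (n,) t values. Negative weights are clipped to zero -- an
    assumption, since handling of negative t values is not stated."""
    w = np.clip(np.asarray(activity, dtype=float), 0.0, None)
    coords = np.asarray(coords, dtype=float)
    return (coords * w[:, None]).sum(axis=0) / w.sum()

# Three hypothetical vertices; the most active vertex pulls the center toward it
coords = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])
activity = np.array([1.0, 1.0, 2.0])
print(weighted_center_of_mass(coords, activity))
```

Computing this separately for localizer-, study-, and test-weighted maps within the same ROI, and differencing the resulting coordinates, yields the shift vectors plotted in the polar plots of Figure 4, c and d.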

Figure 4.

Activity-weighted center locations of face selectivity, memory encoding, and retrieval activity within VTC face-selective regions. a, A schematic depicting the center-of-mass approach for quantifying spatial shifts in activity across five different activation scenarios. Three potential spatial shifts (left of the dashed line: transformed, restricted, and fragmented) may be detected, whereas two (right of the dashed line: expanded and transformed outside of ROI) may not be detected. b, An example participant showing activity in the VTC within the boundaries of face-selective regions identified in Figure 2a during the object localizer (face > all other visual categories), study (stimulus presentation > fixation), and test (hits > correct rejections), along with other visual categories (limb, body, car, place, word, number, and instrument). Color maps illustrate the t statistic of activation of the contrast. Zoomed-in maps of each face-selective region illustrate the activity-weighted center of mass calculated without applying any t value thresholding to the activity level. c, An example participant summary polar plot showing the topography of memory encoding and retrieval activity relative to face selectivity. The activity-weighted center of mass for face selectivity (green) in the three face-selective regions is at unity (0,0). The centers of mass for memory encoding (orange) and retrieval (magenta) responses are plotted in relation to face selectivity for mFus (circle), pFus (square), and IOG (triangle). The x-axis represents the medial–lateral direction, and the y-axis represents the anterior–posterior direction. d, Topography of memory encoding and retrieval activity relative to face selectivity is shown for all participants across three face-selective regions. The centers of mass for memory encoding and retrieval activity are plotted in relation to the center of mass for face selectivity (unity, same as in b).
Zoomed-in maps of each plot illustrate the averaged vector distance and direction of shift in the center of mass for memory encoding (orange) and memory retrieval (magenta) activity.

A similar pattern emerged when we examined the activity-weighted centers of mass for all participants. Figure 4d provides summary polar plots of the center-of-mass positions in the three face-selective regions (similar to the example participant in Fig. 4c). Overall, most shifts during memory retrieval were concentrated within a 2 mm distance from the center of mass for face perception and displayed no particular directional bias. More precisely, when we calculated the average shift in distance and direction (see Fig. 4d zoomed-in maps), we found that the average distance shift for study and test activity from the center of face perception localizer activity was <0.5 mm. To formally compare the change in the center of mass, we first extracted the center-of-mass coordinates from each face-selective region weighted by activity from the localizer, study, and test phases of the memory task. The coordinates comprise anterior–posterior and medial–lateral axis values. Using a linear mixed-effects model, we examined the change in the anterior–posterior coordinate values, with task (localizer, study, test) and ROI (mFus, pFus, IOG) as fixed effects and subject as a random effect, and found no significant main effect of task (p = 0.998). Specifically, using Tukey's range test, with p values adjusted for comparing a family of three estimates, there was no significant shift in the anterior–posterior direction (localizer–study, t(239) = 0.274; p = 0.959; localizer–test, t(239) = 0.163; p = 0.985; and study–test, t(239) = −0.111; p = 0.993). Similarly, there was no significant shift in the medial–lateral direction (localizer–study, t(239) = 0.021; p = 0.999; localizer–test, t(239) = 0.050; p = 0.999; and study–test, t(239) = 0.029; p = 0.999).
Next, we examined the rank distribution of activity magnitudes within each face-selective ROI to determine whether vertices with higher activity during the localizer/study phase also exhibited higher activity during the test phase, complementing the activity-weighted center-of-mass analysis. We anticipated consistency between localizer and study, since both involved viewing face stimuli. We also expected consistency between study and test, reflecting cortical reinstatement. Lastly, we anticipated relatively low consistency between localizer and test, as the localizer stimuli differed from those presented at study or reinstated at test. Spearman's rank correlation was used to assess the similarity of vertex activity among task phases. As predicted, we observed higher mean similarity for localizer versus study (ρ = 0.311) and study versus test (ρ = 0.196) than for localizer versus test (ρ = 0.095). Moreover, these correlation differences were significant, with both localizer–study and study–test correlations greater than localizer–test (localizer-vs-study – localizer-vs-test, t(84) = 4.780; p < 0.0001; study-vs-test – localizer-vs-test, t(84) = 2.085; p = 0.040). These results align with the earlier findings, in which higher degrees of spatial overlap were likewise observed between localizer and study and between study and test. Therefore, activity within face-selective regions is similar between study and test, both in terms of activity magnitude and spatial overlap.

In summary, our results demonstrate that the anatomical locations of face-selective regions during face stimuli retrieval did not significantly differ from those observed during face perception. This lack of significant spatial shift between perception and memory retrieval suggests that spatial transformation, involving large-scale shifts in cortical activations, is not a prominent feature during the retrieval of face stimuli. Instead, the brain regions that are activated during the perception of faces are also involved in the retrieval of face memories, albeit with reduced activity levels.

Electrophysiological recordings from face-selective regions during perception, memory encoding, and retrieval

In order to better understand the neurophysiological basis of our fMRI findings and to examine the temporal dynamics of sensory reinstatement, we conducted a human intracranial ECoG experiment. Specifically, we had the opportunity to perform ECoG recordings from a rare patient with refractory epilepsy who underwent invasive monitoring with electrodes placed directly on VTC, uniquely covering a large portion of the left fusiform gyrus (Fig. 5c). To align ECoG data with our fMRI findings presented above, the patient performed identical visual object localizer and word–face memory tasks (Fig. 5a,b).

Figure 5.

ECoG recordings from the VTC during face perception, memory encoding, and retrieval. a, Visual object localizer task for identifying face-selective regions. Grayscale images from eight visual categories were presented for 1,000 ms with a random ISI of 1–1.5 s. The patient had to provide a button press to indicate any one-back stimulus repetition (same experimental procedure and stimuli as fMRI, Fig. 1). Example stimuli from each of the eight visual categories are shown. Average spectrograms from one example electrode (indicated in b) are shown for each visual category; color maps reflect percentage changes in amplitude relative to the prestimulus period (the black line indicates stimulus presentation onset). The mean amplitude response of BBG activity (70–150 Hz) during stimulus presentation (0–1,000 ms) is shown for all eight visual categories (bottom left). The white dashed–lined box in the spectrogram denotes the frequency range and time window used to extract BBG (70–150 Hz) activity for response calculations. Error bars indicate SEM. These plots show that this electrode location displays strong face selectivity. The color of each electrode denotes the mean BBG response contrast, defined by the BBG response to face stimuli minus the response to all other visual categories combined (face − all). Warmer colors indicate stronger face selectivity. b, Memory experimental procedure and mean spectrogram from two example face-selective electrodes. During the study phase, word–image pairs were presented for 5,000 ms with a self-paced ISI. During the test phase, cue words (old/studied and new/unstudied words) were presented for 5,000 ms followed by a memory strength judgment. The patient was required to encode word–face associations and to retrieve the associated face image from previously studied word cues. Average spectrograms are shown for the study-phase and test-phase trials.
Test-phase trials were further separated based on cue type and behavior (hits and correct rejections). Color maps reflect percentage changes in amplitude relative to the prestimulus period (the black line indicates stimulus presentation onset). The mean amplitude responses of BBG during stimulus presentation (0–2,000 ms) are shown for study, test, hits, and correct rejections (for Electrodes 1 and 2, bottom left and right). The white dashed–lined box in the spectrogram denotes the frequency range and time window used in the BBG activity response calculation. Error bars indicate SEM. Middle plots show electrode locations that were active during test (bottom middle), specifically during the successful retrieval of old word–face pairs (hits). The color of each electrode denotes the mean BBG response contrast defined by the BBG response of hits − correct rejections. Warmer colors indicate successful memory retrieval. c, Anatomical locations of ventral temporal electrodes in one patient, with the example electrodes outlined in red. Major ventral temporal anatomical landmarks are colored: fusiform gyrus (yellow), collateral sulcus (green), midfusiform sulcus (orange), and occipitotemporal sulcus (blue). d, Average mean amplitude time course for BBG ranges (shading reflects SEM) for perception (face and other visual categories) and memory (encoding, hits, and correct rejections) for Electrodes 1 and 2.

First, using time–frequency analysis, we determined visual selectivity by examining the spectral response to each presented visual category (Fig. 5a, top) and then calculated the average broadband gamma (BBG) response (70–150 Hz; Bartoli et al., 2019). Face stimuli elicited higher BBG responses than all other categories, indicating face selectivity (Fig. 5a, bottom left). BBG analysis across VTC electrodes revealed two face-selective sites, likely recording from mFus (example electrode, outlined in white) and pFus. Because the electrodes between these face-selective sites lacked face-selective responses, this patient was classified as having a "separate" type configuration.
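The BBG response and selectivity contrast described above can be sketched as follows. This is a minimal illustration, not the authors' code; it assumes spectrograms are already expressed as percentage amplitude change relative to the prestimulus baseline, and all function and variable names are hypothetical:

```python
import numpy as np

def bbg_response(spectrogram, freqs, times, band=(70, 150), window=(0.0, 1.0)):
    """Mean amplitude (% change vs the prestimulus baseline) in the
    broadband gamma band during the stimulus window.

    spectrogram : (n_freqs, n_times) array of % amplitude change
    freqs, times : 1-D axes (Hz, s) matching the spectrogram
    """
    f_mask = (freqs >= band[0]) & (freqs <= band[1])
    t_mask = (times >= window[0]) & (times <= window[1])
    # Average over the frequency-by-time submatrix inside the band/window
    return spectrogram[np.ix_(f_mask, t_mask)].mean()

def face_selectivity(responses):
    """Face-selectivity contrast: the face response minus the mean
    response to all other visual categories (face - all)."""
    others = [v for cat, v in responses.items() if cat != "face"]
    return responses["face"] - np.mean(others)
```

An electrode would then be treated as face selective when this contrast is reliably positive across trials.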

Next, we examined the extent to which these face-selective regions were electrophysiologically engaged during the study and test phases of the memory task. As in our fMRI study, the patient performed a memory task involving word–face pairs and later retrieved the associated faces when given word cues (Fig. 5b). During the study phase, we expected that face images would elicit strong BBG responses within the identified face-selective sites. Indeed, face-selective electrodes showed clear responses during the study (Fig. 5b, top left), confirming activity overlap between the localizer and the study phase, consistent with our fMRI finding. During the test phase, the patient was presented with cue words without associated face images. Notably, increased BBG activity was observed in response to cue words, although with a smaller response magnitude than during the study phase. Strikingly, when the responses were separated by hits (old) and correct rejections (new), BBG responses were substantially larger for hits (Fig. 5b, middle left). To examine the spatial transformation hypothesis, we investigated whether the location of neural activity during memory retrieval shifted compared with perception. Notably, aside from Electrode 1, no other electrodes exhibited a robust memory retrieval effect, including the other face-selective Electrode 2 (Fig. 5b, bottom middle). While both electrodes were face selective during the localizer and showed clear activity during the study phase (Fig. 5b, top), during the test, Electrode 1 showed a much larger BBG response for hits than for correct rejections, whereas the responses in Electrode 2 were smaller overall and similar in magnitude between hits and correct rejections (Fig. 5b, right). Moreover, these electrodes also differed in their temporal response profiles, with Electrode 2 displaying an earlier onset and a relatively transient BBG response compared with Electrode 1 (Fig. 5d).
Lastly, intervening electrodes anterior to each face-selective electrode did not show any response during the test. These electrophysiological data suggest sensory reinstatement occurs without any spatial shift in activity, consistent with our fMRI findings.
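A hits-minus-correct-rejections contrast of the kind reported here could, for instance, be evaluated with a simple label-permutation test on per-trial BBG responses. The sketch below is one common approach, not necessarily the statistics the authors used, and its names and parameters are hypothetical:

```python
import numpy as np

def condition_contrast(hit_resp, cr_resp, n_perm=2000, rng=None):
    """Hits - correct rejections contrast of mean BBG responses, with a
    two-sided label-permutation p-value (an assumed, generic procedure)."""
    rng = np.random.default_rng(rng)
    obs = hit_resp.mean() - cr_resp.mean()
    pooled = np.concatenate([hit_resp, cr_resp])
    n_hit = len(hit_resp)
    null = np.empty(n_perm)
    for i in range(n_perm):
        # Shuffle condition labels and recompute the contrast
        perm = rng.permutation(pooled)
        null[i] = perm[:n_hit].mean() - perm[n_hit:].mean()
    p = (np.abs(null) >= abs(obs)).mean()
    return obs, p
```

Electrodes whose contrast survives such a test would be the ones colored warm in the retrieval map.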

Discussion

In this study, we aimed to investigate the organization, and potential transformation, of activity patterns within the human VTC during perception and retrieval of face stimuli. To do so, we conducted a multisession fMRI study to examine how identified individual-specific face–selective regions were engaged across these conditions. Our results showed that individuals displayed common types of variability in the size and configuration of face-selective regions, consistent with recent large-scale findings. During the test phase of the memory task, there was a reduction in activation levels of face-selective regions compared with the processing of face stimuli. Importantly, we did not find evidence of a consistent topographical shift in the anatomical location of face-selective regions during memory retrieval compared with that during perception, indicating the absence of spatial transformation. Our results highlight the importance of assessing category-specific functional organization at an individual level when studying memory reinstatement and examining other forms of memory transformations.

Using controlled face stimuli and a separate visual object localizer experiment, we identified three distinct face-selective regions (mFus, pFus, and IOG) for each participant. These regions were categorized into three configuration types: “separate,” “continuous,” and “single.” Despite a more limited sample size, our regional delineation was consistent with previous studies on the functional organization of VTC (Pinsk et al., 2009; Weiner and Grill-Spector, 2010, 2012, 2013; Julian et al., 2012; McGugin et al., 2016). Historically, individual variability has received insufficient consideration in studies exploring retrieval responses to face stimuli. For example, some studies have focused on activity within a single FFA region, while others included activity from a broader area, such as the entire fusiform gyrus or VTC (Kuskowski and Pardo, 1999; Haxby et al., 2001; O’Toole et al., 2005; Prince et al., 2009; Cichy et al., 2011). As we show, the functional organization of VTC is complex, with multiple visual category-selective regions, including regions for processing faces, limbs, and bodies within the fusiform gyrus alone (Weiner and Grill-Spector, 2010). Previous studies have frequently defined face-selective regions based on contrasts of face (or people) versus place stimuli during encoding or have used a priori anatomical masks from brain atlases, which may overestimate or misrepresent the VTC regions specialized for face stimuli (Schwarz et al., 2019). In contrast, our focus on identifying individual-specific response patterns with stringent selectivity criteria (i.e., face vs multiple other visual categories) allowed a more precise examination of reinstatement properties by considering the individual-specific perceptual substrates of each participant. We found that ∼25% of face-selective vertices were reactivated during retrieval, indicating partial reactivation within each ROI.
The notion of partial reactivation is in line with the concept of hippocampal “pattern completion” processes, where neural representations do not need to be perfectly reactivated for successful retrieval (Rugg et al., 2008; Horner et al., 2015; Danker et al., 2016). This difference in neural representation may be necessary to help distinguish between a current perceptual experience and the active recall of a prior perceptual experience.

Memory retrieval activity is not a perfect recapitulation of perception and encoding. A growing literature has focused on understanding the transformations of neural representation between perception and memory retrieval (Favila et al., 2020). Topographic spatial transformation has been chiefly supported by research on place-/scene-processing networks, which has shown an anterior shift in cortical regions during scene retrieval compared with perception (Fairhall and Ishai, 2007; Baldassano et al., 2016; Silson et al., 2016, 2019; Steel et al., 2021; Srokova et al., 2022). However, in our investigation, when carefully considering individual-specific face-selective regions, we did not observe a similar anterior shift during memory retrieval. There could be several reasons for this discrepancy. One possibility is our experimental design: our study examined only one visual category (i.e., faces), whereas studies investigating spatial transformation during memory retrieval often employ, and contrast, two visual categories (e.g., face vs scene; Steel et al., 2021; Srokova et al., 2022), which may impact the ability to detect spatial shifts in retrieval activity. Another possibility is that previously reported shifts are a product of group-level analysis. However, Steel et al. (2021) conducted individual-level analyses and still observed an anterior shift in PPA activity during memory retrieval compared with perception. Interestingly, in the same study, Steel et al. (2021) examined both scene- and face-related memory transformation and did not find an anterior shift for face stimuli in FFA. It is possible that the delineation of a single FFA might contribute to the absence of the shift. In our study, we carefully assessed the locations of perception and retrieval activity with face-selective regions identified for each individual but still did not find a consistent spatial transformation. Therefore, the anterior shift may be specific to the scene-processing network.
Steel et al. (2021) suggested that PPA's neural representations exhibit an anterior–posterior gradient, shifting from concrete/perceptual during perception and encoding to more abstract/conceptual during memory retrieval. Although these studies provide some theoretical basis for the anteriorization of scene-related memory, it remains unclear why this gradient does not apply to other visual categories and how it can accommodate the complex interdigitated organization of category selectivity within VTC. For example, anteriorization would require striking plasticity, with perceptual face-selective regions encroaching on neighboring nonface-selective regions (body/limb or object) during retrieval. This issue may be less applicable to the more uniform and singular organization of place-selective regions, such as the PPA, which may influence the ability to detect spatial shifts in activity. While we did not detect a systematic shift in the center of mass between perception/study and test phases, indicating no spatial transformation, the pattern of activity magnitude in face-selective regions varied between the study and test. Specifically, activation levels differed significantly among regions during the study but were similar during the test. This may indicate that rather than topographic differences distinguishing perception and memory, transformations may reflect differences in the relative engagement of the same specialized network.

To complement our fMRI results, we conducted the same experiments on a patient with VTC ECoG recordings, providing high temporal resolution and anatomical precision. The ECoG findings supported our fMRI results, showing activity in the same face-selective regions during encoding as during visual perception. During face retrieval, specific face-selective regions were reactivated, indicating focal memory reinstatement. Importantly, we did not observe a clear spatial shift in memory retrieval activity, reinforcing the idea of localized cortical reinstatement rather than a spatial transformation. While many studies have focused on spatial memory transformations, our ECoG results revealed a temporal difference in neural activity during the test compared with that during the study. This shift in temporal dynamics likely reflects distinct processes involved in memory reinstatement compared with those involved in sensory-driven perception. Specifically, the delayed response during retrieval aligns with the notion that the hippocampus initiates pattern completion processes, prompting targeted sensory cortices to reinstate neural representations for memory retrieval. Typically, this sensory reinstatement occurs ∼500 ms after cue onset, contrasting with the rapid perceptual processing occurring within 100 ms after cue onset (Staresina and Wimber, 2019).
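The onset-latency difference noted above (reinstatement ∼500 ms vs perception ∼100 ms after cue onset) can be quantified with a threshold-crossing rule on a BBG time course. The sketch below uses one common convention (baseline mean + 2 SD, sustained for several consecutive samples); this is an assumed method, not necessarily the authors':

```python
import numpy as np

def response_onset(timecourse, times, baseline_end=0.0, n_sd=2.0, min_consec=5):
    """Return the first time the signal exceeds baseline mean + n_sd*SD
    for min_consec consecutive samples; NaN if no onset is found."""
    base = timecourse[times < baseline_end]
    thresh = base.mean() + n_sd * base.std()
    above = timecourse > thresh
    count = 0
    for i, a in enumerate(above):
        if times[i] < baseline_end:
            continue  # skip the prestimulus baseline
        count = count + 1 if a else 0
        if count >= min_consec:
            # Onset = first sample of the sustained run
            return times[i - min_consec + 1]
    return float("nan")
```

Applied to the hit-trial and study-trial BBG time courses, such a measure would make the delayed retrieval onset explicit.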

Our individualized approach allowed us to examine how retrieval-driven reinstatement of face stimuli conforms to face-selective cortices within individuals. However, there are some limitations to our study. First, the face stimuli used in the visual object localizer and memory task differed in perceptual features (grayscale images vs colored images) and familiarity (unfamiliar faces vs famous faces). Differences in the face stimuli used for perception and memory tasks likely result in different neural representations, which account for the observed differences in activation overlap and response magnitude between these two tasks. Additionally, our fMRI sample size was relatively small, and the electrophysiology recording was limited to a single-patient case study. Although a small sample size may limit the generalizability of findings to the larger population, it is worth noting that we observed similar proportions of face-selective region configuration types as those reported in the large-scale study of face-selectivity organization (Chen et al., 2023). While our data are not sufficiently powered to quantitatively replicate the observations of Chen et al. (2023), it is important to take into account this systematic topographic configuration of face selectivity within individuals when investigating functions of VTC and studying selectivity-related memory processes. This smaller sample size was due to a focus on carefully identifying face-selective regions within individuals via an additional scanning session and applying selectivity standards more common to contemporary studies of vision. This localizer approach represents a trade-off compared with the methods commonly used in memory studies, where selectivity is often defined through simple, dichotomous comparisons (e.g., face vs place) at encoding, potentially limiting accurate identification of functional visual selectivity (Rossion et al., 2012). 
Incorporating a sufficient number of visual categories and associated trial numbers into a paired-associate paradigm would have resulted in an impractically large fMRI memory experiment. Our primary objective was to examine activity patterns within appropriately defined face-selective cortex during successful memory retrieval. We achieved this by employing a separate localizer task for identifying face-selective regions and using a targeted memory paradigm with face stimuli. While our approach limits direct contrast between face retrieval and another visual category, our method specifically targeted the face-selective cortex with face stimuli and likely reflected face-related processing. Indeed, when comparing face-selective regions with the place-/scene-selective PPA as a control region, we observed significantly higher activity in the face-selective regions during retrieval. Lastly, our face-selective ROI-based, activity-weighted center-of-mass approach aimed to detect changes in center-of-mass location from perception/study to test while minimizing the influence of surrounding nonface-selective regions. However, this approach may not allow us to fully distinguish between different scenarios of spatial transformation; in some scenarios, a change may not be detected (see schematics in Fig. 4a). Future work can build on these limitations to incorporate more rigorous definitions of functional selectivity across perception-to-memory transformations.
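For reference, an activity-weighted center of mass of the kind mentioned above amounts to a weighted average of vertex coordinates within a face-selective ROI. A minimal sketch follows; the clipping of negative activations is an assumed convention, not necessarily the authors' exact procedure:

```python
import numpy as np

def activity_weighted_com(coords, activity):
    """Activity-weighted center of mass of an ROI.

    coords   : (n_vertices, 3) vertex coordinates (e.g., x, y, z in mm)
    activity : (n_vertices,) activation values; negatives are clipped to 0
               so deactivation does not pull the centroid (assumed rule;
               at least one positive value is required)
    """
    w = np.clip(activity, 0.0, None)
    return (coords * w[:, None]).sum(axis=0) / w.sum()
```

Comparing the centroid computed from study-phase versus test-phase activations along the anterior–posterior axis would then quantify any retrieval-related shift.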

In summary, our study used an individualized approach, identifying participant-specific responses to investigate memory-driven reinstatement and to test for consistent topographic transformations in activity patterns. Our findings revealed that retrieval-driven cortical reinstatement closely aligns with individual-specific encoding substrates. Importantly, we did not observe consistent spatial shifts in face-selective regions between perception and memory, in contrast to findings from studies of scene-selective regions. Electrophysiological recordings from the human VTC further supported our findings while highlighting important changes in the timing of neural activity onset between perception and memory. Our results underscore the importance of assessing functional organization at an individual level in future efforts to elucidate the principles of perception–memory transformations in the human brain.

Footnotes

  • This work was supported by NIH Grants R01MH116914 to B.L.F. and R01EY023336 to D.Y. We thank Dr. Kevin Weiner for advising on the characterization of face-selectivity data.

  • The authors declare no competing financial interests.

  • Correspondence should be addressed to Brett L. Foster at brett.foster{at}pennmedicine.upenn.edu or Yvonne Y. Chen at yvonne.chen{at}pennmedicine.upenn.edu.

SfN exclusive license.

References

  1. Bainbridge WA, Hall EH, Baker CI (2021) Distinct representational structure and localization for visual encoding and recall during visual imagery. Cereb Cortex 31:1898–1913. https://doi.org/10.1093/cercor/bhaa329 pmid:33285563
  2. Baldassano C, Esteva A, Fei-Fei L, Beck DM (2016) Two distinct scene-processing networks connecting vision and memory. eNeuro 3:ENEURO.0178-16.2016. https://doi.org/10.1523/ENEURO.0178-16.2016 pmid:27822493
  3. Bartlett FC, Burt C (1932) Remembering: a study in experimental and social psychology. Br J Educ Psychol 3:187–192. https://doi.org/10.1111/j.2044-8279.1933.tb02913.x
  4. Bartoli E, Bosking W, Chen Y, Li Y, Sheth SA, Beauchamp MS, Yoshor D, Foster BL (2019) Functionally distinct gamma range activity revealed by stimulus tuning in human visual cortex. Curr Biol 29:3345–3358.e7. https://doi.org/10.1016/j.cub.2019.08.004 pmid:31588003
  5. Beauchamp MS, Haxby JV, Jennings JE, DeYoe EA (1999) An fMRI version of the Farnsworth–Munsell 100-Hue test reveals multiple color-selective areas in human ventral occipitotemporal cortex. Cereb Cortex 9:257–263. https://doi.org/10.1093/cercor/9.3.257
  6. Bone MB, Ahmad F, Buchsbaum BR (2020) Feature-specific neural reactivation during episodic memory. Nat Commun 11:1945. https://doi.org/10.1038/s41467-020-15763-2 pmid:32327642
  7. Bosch SE, Jehee JFM, Fernandez G, Doeller CF (2014) Reinstatement of associative memories in early visual cortex is signaled by the hippocampus. J Neurosci 34:7493–7500. https://doi.org/10.1523/JNEUROSCI.0805-14.2014 pmid:24872554
  8. Brainard DH (1997) The psychophysics toolbox. Spat Vis 10:433–436. https://doi.org/10.1163/156856897X00357
  9. Burock MA, Dale AM (2000) Estimation and detection of event-related fMRI signals with temporally correlated noise: a statistically efficient and unbiased approach. Hum Brain Mapp 11:249–260. https://doi.org/10.1002/1097-0193(200012)11:4<249::AID-HBM20>3.0.CO;2-5
  10. Chang L, Tsao DY (2017) The code for facial identity in the primate brain. Cell 169:1013–1028.e14. https://doi.org/10.1016/j.cell.2017.05.011 pmid:28575666
  11. Chao LL, Martin A (1999) Cortical regions associated with perceiving, naming, and knowing about colors. J Cogn Neurosci 11:25–35. https://doi.org/10.1162/089892999563229
  12. Chen X, Liu X, Parker BJ, Zhen Z, Weiner KS (2023) Functionally and structurally distinct fusiform face area(s) in over 1000 participants. NeuroImage 265:119765. https://doi.org/10.1016/j.neuroimage.2022.119765 pmid:36427753
  13. Cichy RM, Chen Y, Haynes J-D (2011) Encoding the identity and location of objects in human LOC. NeuroImage 54:2297–2307. https://doi.org/10.1016/j.neuroimage.2010.09.044
  14. Cox RW (1996) AFNI: software for analysis and visualization of functional magnetic resonance neuroimages. Comput Biomed Res 29:162–173. https://doi.org/10.1006/cbmr.1996.0014
  15. Dale AM, Fischl B, Sereno MI (1999) Cortical surface-based analysis. NeuroImage 9:179–194. https://doi.org/10.1006/nimg.1998.0395
  16. Danker JF, Anderson JR (2010) The ghosts of brain states past: remembering reactivates the brain regions engaged during encoding. Psychol Bull 136:87–102. https://doi.org/10.1037/a0017937 pmid:20063927
  17. Danker JF, Tompary A, Davachi L (2016) Trial-by-trial hippocampal encoding activation predicts the fidelity of cortical reinstatement during subsequent retrieval. Cereb Cortex 27:3515–3524. https://doi.org/10.1093/cercor/bhw146 pmid:27288317
  18. Epstein R, Harris A, Stanley D, Kanwisher N (1999) The parahippocampal place area: recognition, navigation, or encoding? Neuron 23:115–125. https://doi.org/10.1016/S0896-6273(00)80758-8
  19. Esteban O, et al. (2019) fMRIPrep: a robust preprocessing pipeline for functional MRI. Nat Methods 16:111–116. https://doi.org/10.1038/s41592-018-0235-4 pmid:30532080
  20. Fairhall SL, Anzellotti S, Ubaldi S, Caramazza A (2014) Person- and place-selective neural substrates for entity-specific semantic access. Cereb Cortex 24:1687–1696. https://doi.org/10.1093/cercor/bht039
  21. Fairhall SL, Ishai A (2007) Effective connectivity within the distributed cortical network for face perception. Cereb Cortex 17:2400–2406. https://doi.org/10.1093/cercor/bhl148
  22. Favila SE, Lee H, Kuhl BA (2020) Transforming the concept of memory reactivation. Trends Neurosci 43:939–950. https://doi.org/10.1016/j.tins.2020.09.006 pmid:33041061
  23. Favila SE, Samide R, Sweigart SC, Kuhl BA (2018) Parietal representations of stimulus features are amplified during memory retrieval and flexibly aligned with top-down goals. J Neurosci 38:7809–7821. https://doi.org/10.1523/JNEUROSCI.0564-18.2018 pmid:30054390
  24. Gauthier I, Skudlarski P, Gore JC, Anderson AW (2000) Expertise for cars and birds recruits brain areas involved in face recognition. Nat Neurosci 3:191–197. https://doi.org/10.1038/72140
  25. Gordon AM, Rissman J, Kiani R, Wagner AD (2014) Cortical reinstatement mediates the relationship between content-specific encoding activity and subsequent recollection decisions. Cereb Cortex 24:3350–3364. https://doi.org/10.1093/cercor/bht194 pmid:23921785
  26. Gorgolewski K, Burns CD, Madison C, Clark D, Halchenko YO, Waskom ML, Ghosh SS (2011) Nipype: a flexible, lightweight and extensible neuroimaging data processing framework in python. Front Neuroinformatics 5:13. https://doi.org/10.3389/fninf.2011.00013 pmid:21897815
  27. Grill-Spector K, Weiner KS (2014) The functional architecture of the ventral temporal cortex and its role in categorization. Nat Rev Neurosci 15:536–548. https://doi.org/10.1038/nrn3747 pmid:24962370
  28. Groppe DM, Bickel S, Dykstra AR, Wang X, Mégevand P, Mercier MR, Lado FA, Mehta AD, Honey CJ (2017) iELVis: an open source MATLAB toolbox for localizing and visualizing human intracranial electrode data. J Neurosci Methods 281:40–48. https://doi.org/10.1016/j.jneumeth.2017.01.022
  29. Haxby JV, Gobbini MI, Furey ML, Ishai A, Schouten JL, Pietrini P (2001) Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science 293:2425–2430. https://doi.org/10.1126/science.1063736
  30. Hemmer P, Steyvers M (2009) A Bayesian account of reconstructive memory. Top Cogn Sci 1:189–202. https://doi.org/10.1111/j.1756-8765.2008.01010.x
  31. Hesse JK, Tsao DY (2020) The macaque face patch system: a turtle’s underbelly for the brain. Nat Rev Neurosci 21:695–716. https://doi.org/10.1038/s41583-020-00393-w
  32. Horner AJ, Bisby JA, Bush D, Lin W-J, Burgess N (2015) Evidence for holistic episodic recollection via hippocampal pattern completion. Nat Commun 6:7462. https://doi.org/10.1038/ncomms8462 pmid:26136141
  33. Hsu NS, Kraemer DJM, Oliver RT, Schlichting ML, Thompson-Schill SL (2011) Color, context, and cognitive style: variations in color knowledge retrieval as a function of task and subject variables. J Cogn Neurosci 23:2544–2557. https://doi.org/10.1162/jocn.2011.21619
  34. Julian JB, Fedorenko E, Webster J, Kanwisher N (2012) An algorithmic method for functionally defining regions of interest in the ventral visual pathway. NeuroImage 60:2357–2364. https://doi.org/10.1016/j.neuroimage.2012.02.055
  35. Kanwisher N (2010) Functional specificity in the human brain: a window into the functional architecture of the mind. Proc Natl Acad Sci U S A 107:11163–11170. https://doi.org/10.1073/pnas.1005062107 pmid:20484679
  36. Kanwisher N (2017) The quest for the FFA and where it led. J Neurosci 37:1056–1061. https://doi.org/10.1523/JNEUROSCI.1706-16.2016 pmid:28148806
  37. Kanwisher N, McDermott J, Chun MM (1997) The fusiform face area: a module in human extrastriate cortex specialized for face perception. J Neurosci 17:4302–4311. https://doi.org/10.1523/JNEUROSCI.17-11-04302.1997 pmid:9151747
  38. Kanwisher N, Yovel G (2006) The fusiform face area: a cortical region specialized for the perception of faces. Philos Trans R Soc B Biol Sci 361:2109–2128. https://doi.org/10.1098/rstb.2006.1934 pmid:17118927
  39. Kastner S, Pinsk MA, De Weerd P, Desimone R, Ungerleider LG (1999) Increased activity in human visual cortex during directed attention in the absence of visual stimulation. Neuron 22:751–761. https://doi.org/10.1016/S0896-6273(00)80734-5
  40. Kuhl BA, Chun MM (2014) Successful remembering elicits event-specific activity patterns in lateral parietal cortex. J Neurosci 34:8051–8060. https://doi.org/10.1523/JNEUROSCI.4328-13.2014 pmid:24899726
  41. Kuskowski MA, Pardo JV (1999) The role of the fusiform gyrus in successful encoding of face stimuli. NeuroImage 9:599–610. https://doi.org/10.1006/nimg.1999.0442
  42. McClelland JL, McNaughton BL, O’Reilly RC (1995) Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory. Psychol Rev 102:419–457. https://doi.org/10.1037/0033-295X.102.3.419
  43. McGugin RW, Van Gulick AE, Gauthier I (2016) Cortical thickness in fusiform face area predicts face and object recognition performance. J Cogn Neurosci 28:282–294. https://doi.org/10.1162/jocn_a_00891 pmid:26439272
  44. O’Toole AJ, Jiang F, Abdi H, Haxby JV (2005) Partially distributed representations of objects and faces in ventral temporal cortex. J Cogn Neurosci 17:580–590. https://doi.org/10.1162/0898929053467550
  45. Pelli DG (1997) The VideoToolbox software for visual psychophysics: transforming numbers into movies. Spat Vis 10:437–442. https://doi.org/10.1163/156856897X00366
  46. Pinsk MA, Arcaro M, Weiner KS, Kalkus JF, Inati SJ, Gross CG, Kastner S (2009) Neural representations of faces and body parts in macaque and human cortex: a comparative fMRI study. J Neurophysiol 101:2581–2600. https://doi.org/10.1152/jn.91198.2008 pmid:19225169
  47. Pitcher D, Walsh V, Duchaine B (2011) The role of the occipital face area in the cortical face perception network. Exp Brain Res 209:481–493. https://doi.org/10.1007/s00221-011-2579-1
  48. Polyn SM, Natu VS, Cohen JD, Norman KA (2005) Category-specific cortical activity precedes retrieval during memory search. Science 310:1963–1966. https://doi.org/10.1126/science.1117645
  49. Prince SE, Dennis NA, Cabeza R (2009) Encoding and retrieving faces and places: distinguishing process- and stimulus-specific differences in brain activity. Neuropsychologia 47:2282–2289. https://doi.org/10.1016/j.neuropsychologia.2009.01.021 pmid:19524092
  50. Rauschecker AM, Bowen RF, Parvizi J, Wandell BA (2012) Position sensitivity in the visual word form area. Proc Natl Acad Sci U S A 109:E1568–77. https://doi.org/10.1073/pnas.1121304109 pmid:22570498
  51. R Development Core Team (2010) R: a language and environment for statistical computing. http://www.R-project.org/
  52. Righart R, Andersson F, Schwartz S, Mayer E, Vuilleumier P (2010) Top-down activation of fusiform cortex without seeing faces in prosopagnosia. Cereb Cortex 20:1878–1890. https://doi.org/10.1093/cercor/bhp254
  53. Ritchey M, Wing EA, LaBar KS, Cabeza R (2013) Neural similarity between encoding and retrieval is related to memory via hippocampal interactions. Cereb Cortex 23:2818–2828. https://doi.org/10.1093/cercor/bhs258 pmid:22967731
  54. Roediger HL (2001) Reconstructive memory, psychology of. In: International encyclopedia of the social & behavioral sciences (Smelser NJ, Baltes PB, eds), pp 12844–12849. Oxford: Pergamon.
  55. Rossion B, Hanseeuw B, Dricot L (2012) Defining face perception areas in the human brain: a large-scale factorial fMRI face localizer analysis. Brain Cogn 79:138–157. https://doi.org/10.1016/j.bandc.2012.01.001
  56. Rossion B, Jacques C, Jonas J (2018) Mapping face categorization in the human ventral occipitotemporal cortex with direct neural intracranial recordings. Ann N Y Acad Sci 1426:5–24. https://doi.org/10.1111/nyas.13596
  57. Rugg MD, Johnson JD, Park H, Uncapher MR (2008) Encoding-retrieval overlap in human episodic memory: a functional neuroimaging perspective. Prog Brain Res 169:339–352. https://doi.org/10.1016/S0079-6123(07)00021-0
  58. Rugg MD, Thompson-Schill SL (2013) Moving forward with fMRI data. Perspect Psychol Sci 8:84–87. https://doi.org/10.1177/1745691612469030 pmid:25541584
  59. Schwarz L, Kreifelts B, Wildgruber D, Erb M, Scheffler K, Ethofer T (2019) Properties of face localizer activations and their application in functional magnetic resonance imaging (fMRI) fingerprinting. PLoS ONE 14:e0214997. https://doi.org/10.1371/journal.pone.0214997 pmid:31013276
  60. Silson EH, Gilmore AW, Kalinowski SE, Steel A, Kidder A, Martin A, Baker CI (2019) A posterior–anterior distinction between scene perception and scene construction in human medial parietal cortex. J Neurosci 39:705–717. https://doi.org/10.1523/JNEUROSCI.1219-18.2018 pmid:30504281
  61. Silson EH, Steel AD, Baker CI (2016) Scene-selectivity and retinotopy in medial parietal cortex. Front Hum Neurosci 10:412. https://doi.org/10.3389/fnhum.2016.00412 pmid:27588001
  62. Simmons WK, Ramjee V, Beauchamp MS, McRae K, Martin A, Barsalou LW (2007) A common neural substrate for perceiving and knowing about color. Neuropsychologia 45:2802–2810. https://doi.org/10.1016/j.neuropsychologia.2007.05.002 pmid:17575989
  63. Squire LR (2004) Memory systems of the brain: a brief history and current perspective. Neurobiol Learn Mem 82:171–177. https://doi.org/10.1016/j.nlm.2004.06.005
  64. Srokova S, Hill PF, Rugg MD (2022) The retrieval-related anterior shift is moderated by age and correlates with memory performance. J Neurosci 42:1765–1776. https://doi.org/10.1523/JNEUROSCI.1763-21.2021 pmid:35017225
  65. St-Laurent M, Abdi H, Buchsbaum BR (2015) Distributed patterns of reactivation predict vividness of recollection. J Cogn Neurosci 27:2000–2018. https://doi.org/10.1162/jocn_a_00839
  66. Staresina BP, Wimber M (2019) A neural chronometry of memory recall. Trends Cogn Sci 23:1071–1085. https://doi.org/10.1016/j.tics.2019.09.011
  67. Steel A, Billings MM, Silson EH, Robertson CE (2021) A network linking scene perception and spatial memory systems in posterior cerebral cortex. Nat Commun 12:2632. https://doi.org/10.1038/s41467-021-22848-z pmid:33976141
  68. Stigliani A, Weiner KS, Grill-Spector K (2015) Temporal processing capacity in high-level visual cortex is domain specific. J Neurosci 35:12412–12424. https://doi.org/10.1523/JNEUROSCI.4822-14.2015 pmid:26354910
    OpenUrlAbstract/FREE Full Text
  69. ↵
    1. Tsao DY,
    2. Freiwald WA,
    3. Tootell RBH,
    4. Livingstone MS
    (2006) A cortical region consisting entirely of face-selective cells. Science 311:670–674. https://doi.org/10.1126/science.1119983 pmid:16456083
    OpenUrlAbstract/FREE Full Text
  70. ↵
    1. Tsao DY,
    2. Moeller S,
    3. Freiwald WA
    (2008) Comparing face patch systems in macaques and humans. Proc Natl Acad Sci U S A 105:19514–19519. https://doi.org/10.1073/pnas.0809662105 pmid:19033466
    OpenUrlAbstract/FREE Full Text
  71. ↵
    1. Tulving E,
    2. Thomson DM
    (1973) Encoding specificity and retrieval processes in episodic memory. Psychol Rev 80:352–373. https://doi.org/10.1037/h0020071
    OpenUrlCrossRef
  72. ↵
    1. Wang H, et al.
    (2022) Motor network reorganization after motor imagery training in stroke patients with moderate to severe upper limb impairment. CNS Neurosci Ther 29:619–632. https://doi.org/10.1111/cns.14065 pmid:36575865
    OpenUrlPubMed
  73. ↵
    1. Weiner KS,
    2. Grill-Spector K
    (2010) Sparsely-distributed organization of face and limb activations in human ventral temporal cortex. NeuroImage 52:1559–1573. https://doi.org/10.1016/j.neuroimage.2010.04.262 pmid:20457261
    OpenUrlCrossRefPubMed
  74. ↵
    1. Weiner KS,
    2. Grill-Spector K
    (2012) The improbable simplicity of the fusiform face area. Trends Cogn Sci 16:251–254. https://doi.org/10.1016/j.tics.2012.03.003
    OpenUrlCrossRefPubMed
  75. ↵
    1. Weiner KS,
    2. Grill-Spector K
    (2013) Neural representations of faces and limbs neighbor in human high-level visual cortex: evidence for a new organization principle. Psychol Res 77:74–97. https://doi.org/10.1007/s00426-011-0392-x pmid:22139022
    OpenUrlCrossRefPubMed
  76. ↵
    1. Xue G
    (2018) The neural representations underlying human episodic memory. Trends Cogn Sci 22:544–561. https://doi.org/10.1016/j.tics.2018.03.004
    OpenUrlCrossRefPubMed
  77. ↵
    1. Xue G
    (2022) From remembering to reconstruction: the transformative neural representation of episodic memory. Prog Neurobiol 219:102351. https://doi.org/10.1016/j.pneurobio.2022.102351
    OpenUrlCrossRef
  78. ↵
    1. Yeatman JD,
    2. Rauschecker AM,
    3. Wandell BA
    (2013) Anatomy of the visual word form area: adjacent cortical circuits and long-range white matter connections. Brain Lang 125:146–155. https://doi.org/10.1016/j.bandl.2012.04.010 pmid:22632810
    OpenUrlCrossRefPubMed
Perception and Memory Reinstatement Engage Overlapping Face-Selective Regions within Human Ventral Temporal Cortex
Yvonne Y. Chen, Aruni Areti, Daniel Yoshor, Brett L. Foster
Journal of Neuroscience, 29 May 2024, 44(22):e2180232024. https://doi.org/10.1523/JNEUROSCI.2180-23.2024
Keywords

  • episodic memory
  • face processing
  • fMRI
  • iEEG
  • reinstatement

Copyright © 2025 by the Society for Neuroscience.
JNeurosci Online ISSN: 1529-2401