Research Articles, Behavioral/Cognitive

Effects of Aging on Successful Object Encoding: Enhanced Semantic Representations Compensate for Impaired Visual Representations

Loris Naspi (1), Charlotte Stensholt (1), Anna E. Karlsson (1), Zachary A. Monge (2) and Roberto Cabeza (1,2)
Journal of Neuroscience 1 November 2023, 43 (44) 7337-7350; https://doi.org/10.1523/JNEUROSCI.2265-22.2023
(1) Department of Psychology, Humboldt University of Berlin, Berlin 10117, Germany
(2) Center for Cognitive Neuroscience, Duke University, Durham, North Carolina 27708

Abstract

Although episodic memory and visual processing decline substantially with healthy aging, semantic knowledge is generally spared. There is evidence that older adults' spared semantic knowledge can support episodic memory. Here, we used functional magnetic resonance imaging (fMRI) combined with representational similarity analyses (RSAs) to examine how novel visual and preexisting semantic representations at encoding predict subjective memory vividness at retrieval. Eighteen young and seventeen older adults (female and male participants) encoded images of objects during fMRI scanning and recalled these images while rating the vividness of their memories. After scanning, participants discriminated between studied images and similar lures. RSAs based on a deep convolutional neural network and on normative concept feature data were used to link patterns of neural activity during encoding to visual and semantic representations. Relative to young adults, the specificity of activation patterns for visual features was reduced in older adults, consistent with dedifferentiation. However, the specificity of activation patterns for semantic features was enhanced in older adults, consistent with hyperdifferentiation. Despite dedifferentiation, visual representations in early visual cortex (EVC) predicted high memory vividness in both age groups. In contrast, semantic representations in lingual gyrus (LG) and fusiform gyrus (FG) were associated with high memory vividness only in the older adults. Intriguingly, the data suggest that older adults with lower specificity of visual representations in combination with higher specificity of semantic representations tended to rate their memories as more vivid. Our findings suggest that memory vividness in aging relies more on semantic representations in anterior regions, potentially compensating for age-related dedifferentiation of visual information in posterior regions.

SIGNIFICANCE STATEMENT Normal aging is associated with impaired memory for events, while semantic knowledge might even improve. We investigated the effects of aging on the specificity of visual and semantic information in the brain when viewing common objects, and how this information enables subsequent memory vividness for these objects. Using functional magnetic resonance imaging (fMRI) combined with modeling of the stimuli, we found that visual information was represented with less specificity in older than in young adults while still supporting memory vividness. In contrast, semantic information supported memory vividness only in older adults, and especially in those individuals who had the lowest specificity of visual information. These findings provide evidence for a spared semantic memory system that is increasingly recruited to compensate for degraded visual representations in older age.

  • aging
  • compensation
  • episodic memory
  • fMRI
  • memory encoding
  • representational similarity analysis

Introduction

In general, healthy older adults show substantial decline in episodic memory (memory for context-specific events) but little or no deficit in semantic memory (general knowledge of the world; Hoyer and Verhaeghen, 2006). Perhaps for this reason, age-related episodic deficits are reduced, or even eliminated, when to-be-remembered episodic information is meaningful and fits well with prior semantic knowledge (Umanath and Marsh, 2014). In contrast, older adults have considerable difficulty remembering meaningless sensory stimuli (Monge and Madden, 2016). These results suggest that aging impairs the encoding of arbitrary visual representations but not access to preexisting semantic representations. Although behavioral results are generally consistent with this hypothesis (Castel, 2005; Castel et al., 2013), the underlying neural mechanisms are still unclear.

Visual object processing in the occipito-temporal cortex (OTC) is impaired in older adults, who display reduced specificity of activation patterns for different visual stimuli in this region (Carp et al., 2011; Koen et al., 2019, 2020; Trelle et al., 2019). This deficit, known as dedifferentiation (Koen and Rugg, 2019), correlates with poor cognitive performance in older adults, particularly episodic memory (Park et al., 2010; Du et al., 2016). However, our group recently found that although early visual cortex (EVC) displayed age-related dedifferentiation, the ventral anterior temporal lobe (vATL) showed an age-related increase in the specificity of activation patterns for semantic representations (Deng et al., 2021). This effect, which we called age-related hyperdifferentiation, was indirectly linked to memory because vATL also showed greater similarity between encoding and retrieval representations [encoding-retrieval similarity (ERS)] in older than young adults. The current study goes beyond Deng et al. (2021) by directly examining whether the quality of visual and semantic object representations at encoding differentially predicts subjective memory vividness in young and older adults.

Episodic retrieval is accompanied by a feeling of vividness. Although the association with accuracy varies across individuals (Clark and Maguire, 2020) and age groups (Folville et al., 2020), vividness ratings have been used extensively in functional magnetic resonance imaging (fMRI) studies of memory representations. There is evidence that both visual (Cooper et al., 2019; Bone et al., 2020) and semantic information (Cooper and Ritchey, 2022) influence memory vividness. In the present study, we relate the specificity of visual and semantic neural representations during encoding to the subjective experience of memory vividness in young and older adults.

Participants encoded pictures of everyday objects paired with their names. During retrieval, they were provided with the names of the studied objects and were required to recall the associated picture and rate the vividness of their memory. These subjective ratings were validated with a forced-choice recognition test outside the scanner. We used representational similarity analysis (RSA) to measure the quality of visual and semantic object representations at encoding, which was then used to predict memory vividness. In addition to the two regions of interest (ROIs) investigated by Deng et al. (2021), EVC and vATL, we also examined the lingual gyrus (LG), the fusiform gyrus (FG), and the lateral occipital cortex (LOC), regions known to encode new visual representations, store preexisting semantic knowledge, and predict object recognition (Davis et al., 2021; Naspi et al., 2021). We tested two predictions. First, based on the ERS results of Deng et al. (2021), we expected that visual representations in posterior OTC regions would predict memory vividness more strongly in young than in older adults, whereas semantic representations in anterior OTC areas would predict memory vividness more strongly in older than in young adults. Given that Deng et al. (2021) investigated scenes whereas the current study examines object stimuli, the specific OTC regions that would show these effects were more difficult to predict. Second, assuming semantic representations can play a compensatory role in older adults (Cabeza et al., 2018), we predicted that, in this group, the quality of semantic representations would not only be negatively correlated with the quality of visual representations (i.e., individuals with reduced specificity for visual representations compensate with increased specificity for semantic representations) but also be associated with later memory vividness.

Materials and Methods

Participants

A total of 45 participants took part in the present study, consisting of 23 young adults aged 18–23 years and 22 older adults aged 65–80 years. Participants self-reported being right-handed, native English speakers, having no significant health problems, and not taking medications known to affect cognitive function or cerebral blood flow. The Beck Depression Inventory (BDI; Beck et al., 1961) was used to screen for depression, with an exclusion criterion of >10. The older adults were screened for dementia using the Montreal Cognitive Assessment (MoCA; Nasreddine et al., 2005), with an exclusion criterion of <26. Following the exclusion of participants based on the screening measures (see Table 1), technical errors, imaging artifacts, and lack of behavioral responses, a total of 18 young adults (M = 20.94 years, SD = 3.24; 8 females, 10 males) and 17 older adults (M = 70.5 years, SD = 3.38; 8 females, 9 males) were included in the present study. The study was conducted at Duke University, and all experimental procedures were approved by the Institutional Review Board at Duke University. Informed consent was obtained from all participants before testing, and they were given monetary compensation for their time following study completion.

Table 1. Demographic and cognitive test data

Stimuli

The stimuli used in the present study consisted of 240 basic-level concepts chosen from a total of 995 concepts used in an online object norming experiment (Hovhannisyan et al., 2021). These were members of eight superordinate categories (Birds, Clothing, Fruits, Mammals, Outdoor items, Vehicles, Vegetables, Tools); 120 were living and 120 were nonliving things. Two images for each basic-level concept were identified using the image search engines Google Images, Bing Images, and Flickr. One of the two images was encoded, and the other was used as a distractor in a two-alternative forced-choice recognition test. The allocation of the two images to each condition (i.e., studied item or similar lure) in the two-alternative forced-choice recognition test was counterbalanced across participants. For a manipulation whose results are not reported here, half of the encoded objects were presented in their original resolution and half slightly blurred. All encoded objects were subsequently tested in a free recall task, using the object name as a cue, and then in the two-alternative forced-choice recognition test. Each study and test list was presented in a unique, random trial order.

Procedure

As illustrated in Figure 1, the experiment consisted of two scanning sessions ∼24 h apart, one for encoding and one for retrieval. Stimuli were presented using E-Prime 3.0 software (Psychology Software Tools). In the scanner, stimuli were viewed on a back-projection screen via a mirror attached to the head coil. Earplugs were used to reduce scanner noise, and head motion was minimized using foam pads. During an incidental encoding phase, participants were shown the 240 object images, each with its accompanying object label. The object was centered on a white background with the object label below. The scanned encoding phase consisted of eight runs, with 30 trials in each run. Each object image and label were presented once. Participants made either a clear-blurry judgment or a living-nonliving judgment, but these conditions were collapsed in the current study to increase statistical power. Each object was presented on the screen for 4 s, and participants made responses via a keypad. Each encoding trial was followed by an 8-s active baseline interval, in which participants were presented with a number from one to four and asked to press the corresponding keypad button. At retrieval, the runs were identical in order and duration to those at encoding, except that only the object labels (e.g., table) from encoding were shown for 4 s. During this time, participants recalled the image of the corresponding encoded object in as much detail as possible and rated the vividness of their memory (1 = least amount of detail, 4 = highly detailed memory). This was again followed by an 8-s active baseline interval in which participants performed the same task they completed at encoding. After exiting the scanner, participants performed the two-alternative forced-choice task. In each trial, participants saw a studied picture (e.g., a table) and a similar exemplar of the same object (e.g., a different table), and they had to choose the encoded image and rate their confidence (1 = low confidence, 4 = high confidence). This task was self-paced.

Figure 1. Experimental paradigm. A, During the encoding phase on day 1, participants viewed 240 object images while making a clear-blurry or a living-nonliving judgment. B, During the retrieval phase on day 2, participants were presented with the object label and recalled the image in as much detail as possible, rating its vividness. C, In the postscan two-alternative forced-choice task, participants had to discriminate between the studied image and a different exemplar of the same concept.

fMRI acquisition

The data were collected using a General Electric 3T Premier UHP MRI scanner and a 48-channel head coil. The scanning session started with a localizer scan, followed by a high-resolution anatomic image (162 axial slices parallel to the AC-PC plane, voxel dimension of 1.0 × 1.0 × 1.0 mm3). This was followed by a resting-state scan. Whole-brain functional scans were collected using an echoplanar imaging (EPI) sequence with a multiband factor of three and an acceleration factor of two (repetition time = 2 s, echo time = 30 ms, field of view = 256 mm2, oblique slices with voxel dimension of 2.0 × 2.0 × 2.0 mm3). The stimuli were projected onto a mirror at the back of the scanner bore, and a four-button fiber-optic response box was used to record the participants' in-scan responses. To minimize scanner noise, participants wore earplugs; foam pads were used to minimize head motion, and MRI-compatible lenses were used to correct vision if needed. Diffusion-weighted images were collected at the end of the scan but, like the resting-state scan, are not reported in the present study.

Image preprocessing

Data processing was performed using fMRIPrep (20.2.3; Esteban et al., 2019). First, each T1-weighted volume was corrected for intensity nonuniformity (INU) with N4BiasFieldCorrection (Tustison et al., 2010), distributed with ANTs 2.3.3 (Avants et al., 2008). The T1w reference was then skull-stripped with a Nipype implementation of the antsBrainExtraction.sh workflow (from ANTs), using OASIS30ANTs as a target template. Brain tissue segmentation of CSF, white matter (WM), and gray matter (GM) was performed on the brain-extracted T1w using fast (FSL 5.0.9; Zhang et al., 2001). A T1w-reference map was computed after registration of two T1w images (after INU correction) using mri_robust_template (FreeSurfer 6.0.1; Reuter et al., 2010). Volume-based spatial normalization to one standard space (MNI152NLin2009cAsym) was performed through nonlinear registration with antsRegistration (ANTs 2.3.3), using brain-extracted versions of both the T1w reference and the T1w template. For each of the eight BOLD runs per subject, the following preprocessing was performed. First, a reference volume and its skull-stripped version were generated using a custom methodology of fMRIPrep. A B0-nonuniformity map (or fieldmap) was estimated based on two (or more) EPI references with opposing phase-encoding directions with 3dQwarp (Cox and Hyde, 1997). Based on the estimated susceptibility distortion, a corrected EPI reference was calculated for a more accurate co-registration with the anatomic reference. The BOLD reference was then co-registered to the T1w reference using flirt (FSL 5.0.9; Jenkinson and Smith, 2001) with the boundary-based registration (Greve and Fischl, 2009) cost function. Co-registration was configured with nine degrees of freedom to account for distortions remaining in the BOLD reference. Head-motion parameters with respect to the BOLD reference (transformation matrices and six corresponding rotation and translation parameters) were estimated before any spatiotemporal filtering using mcflirt (FSL 5.0.9; Jenkinson et al., 2002). BOLD runs were slice-time corrected using 3dTshift from AFNI 20160207 (Cox and Hyde, 1997). The BOLD time series (including slice-timing correction when applied) were resampled onto their original, native space by applying a single, composite transform to correct for head motion and susceptibility distortions. These resampled BOLD time series are referred to as unsmoothed preprocessed BOLD in original space. For RSA, we used these time series to preserve the finer-grained structure of activity.

ROIs

The ROIs used in the current study are shown in Figure 2. These regions were selected because they significantly predicted subsequent memory in two recent studies with young adults (Davis et al., 2021; Naspi et al., 2021) and showed age-related differences in memory reactivation (Deng et al., 2021). We defined five ROIs, including areas spanning the posterior and anterior ventral stream, which have been implicated in visual and semantic feature-based object recognition processes (Clarke and Tyler, 2014, 2015; Lambon Ralph et al., 2017; Devereux et al., 2018). Except where explicitly stated, ROIs were bilateral and defined in MNI space using the Harvard-Oxford structural atlas: (1) an EVC ROI (BA17/18), defined using the Julich probabilistic cytoarchitectonic maps (Amunts et al., 2000) from the SPM Anatomy toolbox (Eickhoff et al., 2005); (2) the LG; (3) the FG; (4) the LOC; and (5) a vATL ROI that included voxels with >30% probability of being in the anterior division of the inferior temporal gyrus (ITG) and >30% probability of being in the anterior division of the FG. After preprocessing, the ROIs in MNI space were converted to each subject's functional native space using the inverse affine transformation matrix output by fMRIPrep (20.2.3; Esteban et al., 2019). The ROIs in Figure 2 are mapped on a pial representation of cortex using the Connectome Workbench (https://www.humanconnectome.org/software/connectome-workbench).

Figure 2. Binary regions of interest (ROIs) overlaid on a pial cortical surface based on the normalized structural image averaged over participants. Colored ROIs represent regions known to be engaged in episodic encoding and in visual or semantic cognition. EVC (BA17/18) = early visual cortex; LG = lingual gyrus; FG = fusiform gyrus; LOC = lateral occipital cortex; vATL = ventral anterior temporal lobe.

Statistical analysis

Behavioral analysis

To assess whether differences in reaction time (RT) at encoding were predicted by whether items were subsequently rated with high versus low vividness at retrieval, we used a linear mixed effect model with vividness (ratings of 3 and 4 coded as high, ratings of 1 and 2 coded as low) and age group (young adults, older adults) as predictors and RT as the outcome variable. At test, we first analyzed the in-scan subjective memory vividness and then each participant's performance in the postscan memory recognition test by measuring the high-vividness rate and the correct-response rate (i.e., the number of high-vividness or correct trials over the total number of trials). Then, to confirm the validity of the in-scan memory vividness ratings as a memory measure, we used a generalized linear mixed effect model to investigate age-related differences in postscan memory accuracy (0, 1) predicted by the vividness rating during retrieval.
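To make the structure of the RT model concrete, the following is a minimal sketch in Python with statsmodels (the mixed models reported below were actually fit in R with lme4 and emmeans); the data frame, its column names (rt, vividness, group, subject), and the synthetic values are hypothetical stand-ins for the real trial-level data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the trial-level data (hypothetical column names).
rng = np.random.default_rng(0)
n_sub, n_trials = 35, 240
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_sub), n_trials),
    "vividness": rng.choice(["low", "high"], n_sub * n_trials),
    "group": np.repeat(rng.choice(["young", "older"], n_sub), n_trials),
})
df["rt"] = 900 + 300 * (df["group"] == "older") + rng.normal(0, 150, len(df))

# Linear mixed effect model: RT ~ vividness x age group,
# with a random intercept per participant.
rt_model = smf.mixedlm("rt ~ vividness * group", data=df, groups=df["subject"]).fit()
print(rt_model.summary())
```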

Multivariate fMRI analysis

Overview

Our study had three goals. In the first analysis, we aimed to replicate, and extend to older adults, the key findings regarding visual and semantic object processing regardless of subsequent memory vividness and/or performance (Clarke and Tyler, 2014; Lambon Ralph et al., 2017; Devereux et al., 2018). More specifically, we tested whether the ventral stream showed a gradient of informational specificity, from image-based visual properties in EVC and OTC to semantic properties in the same OTC regions, but also more anteriorly in vATL. We also investigated whether older adults showed reduced specificity of activation patterns for visual properties, reflecting dedifferentiation, as well as increased specificity of activation patterns for semantic properties, reflecting hyperdifferentiation. The analysis involved four steps (Fig. 3), which are standard in RSA studies (Kriegeskorte et al., 2008). (1) The visual and semantic properties of all the stimuli were used to create two different representational dissimilarity matrices (RDMs). In an RDM, the rows and columns correspond to the stimuli (240 in the current study) and the cells contain values of the predicted dissimilarity between pairs of stimuli. (2) For each region of interest (ROI), a brain RDM was computed from the activation patterns evoked by the stimuli. This matrix has the same structure as the model RDM (stimuli in rows and columns), but its cells contain dissimilarities not in stimulus properties, as in the model RDM, but in the fMRI activation patterns evoked by the stimuli. (3a) After vectorizing the lower triangles of both the model RDMs and the brain RDMs, we obtained, for each subject, a global model-brain fit reflecting the quality of the visual and semantic representations. (4a) We then tested the significance of the global model-brain fit at the group level using a one-sample Wilcoxon signed-rank test, and age-related differences using a two-sample Wilcoxon rank-sum test.

Figure 3. Four steps of the method employed. (1) Model representational dissimilarity matrices (RDMs) are generated for each visual and semantic representation type investigated. (2) Brain RDMs are created for each region of interest. This matrix tracks the dissimilarity between the fMRI activation patterns across all voxels in the ROI for each pair of stimuli, yielding a matrix of dissimilarity values of the same dimensions as the model RDMs. (3a) In the first analysis, each model RDM is correlated with the brain RDM as a whole, yielding a global model-brain fit for the region. (4a) Significance of the global model-brain fit is tested using Wilcoxon tests. (3b) In the second analysis, for each brain region, each row of the model RDM is correlated with the corresponding row of the brain RDM, yielding a stimulus-brain fit (IRAF) measure for the single item. (4b) The IRAF is used as an independent variable in a generalized linear mixed effect model to identify regions where the IRAF of each RDM predicted memory vividness in young and older adults.

In the second analysis, we investigated whether the specificity of those visual and semantic representations also contributed to subsequent memory vividness. Although recent studies suggest that these dimensions are key factors for recognition memory (Davis et al., 2021; Naspi et al., 2021), little is known about their contribution to memory vividness in young and older adults (but see Deng et al., 2021). For this analysis, we started from the model and brain RDMs created earlier (steps 1 and 2) but followed a slightly different procedure (Fig. 3; see also Davis et al., 2021). We first computed the correlation between the dissimilarity of each object to the rest of the objects in terms of stimulus properties (each row of the model RDM) and the dissimilarity of the same object to the rest of the objects in terms of activation patterns (the corresponding row of the brain RDM), and identified brain regions that demonstrated a significant correlation across all items and subjects. Davis et al. (2021) termed the strength of this second-order correlation the item-wise RDM-activity fit (IRAF). The IRAF in a brain region is therefore an index of the sensitivity of that region to that particular kind of visual or semantic representation. Note that such an item-wise approach differs from the typical method of assessing such second-order correlations between brain and model RDMs (Kriegeskorte and Kievit, 2013), which typically relates the entire item × item matrix at once. Thus, the item-wise approach adopted here is important for linking visual and semantic representations at encoding to subsequent memory vividness for specific objects. Finally, to test whether these representations predicted memory vividness, we ran a generalized linear mixed effect model and post hoc differences of estimated marginal means (Fig. 3, step 4b).

Figure 4. Multidimensional scaling plots, model RDMs for visual and semantic similarities, and model correlations. Pairwise dissimilarities were calculated to create RDMs. A, Visual dissimilarity codes for a combination of orientation and edges (e.g., round objects toward the top-right, horizontal shapes on the left, vertical shapes on the bottom-right). B, Semantic dissimilarity codes for finer-grained distinctions based on the features of each concept (e.g., fruit and vegetables on the top-left, nonliving things on the middle-right, and many categories of animal on the bottom-left). C, Visual dissimilarity model including all the items. From the top-left corner, items are sorted by category (Birds, Clothing, Fruits, Mammals, Outdoor items, Vehicles, Vegetables, Tools). D, Semantic dissimilarity model including all the items. From the top-left corner, items are sorted by category (Birds, Clothing, Fruits, Mammals, Outdoor items, Vehicles, Vegetables, Tools). E, Spearman's correlation of the visual and semantic RDMs.

In the third analysis, we investigated our hypothesis that regions engaged in semantic processing at encoding in the older group can play a compensatory role (Cabeza et al., 2018). We first tested whether the quality of semantic representations would be negatively correlated with the quality of visual representations (i.e., individuals with reduced specificity for visual representations compensate with increased specificity for semantic representations); then, we investigated whether the negative interaction between visual and semantic representations would also be positively associated with later memory vividness.

For all the analyses, we also supplemented null-hypothesis significance testing with Bayes factor (BF) analyses. We used the BayesFactor package (version 0.9.12-4.4; Rouder et al., 2009) for Wilcoxon tests and Pearson correlations, and an updated function for generalized linear mixed effect models (https://n400peanuts.github.io/languagelearninglab/usefulFunctions-explained.html#BayesFactor_updated; see also Dienes, 2014). Bayes factors were used to provide evidence for the null hypothesis where this was important for the interpretation of findings. Although any categorization of levels of Bayesian evidence is necessarily arbitrary, for clarity we adopted the labels used by Lee and Wagenmakers (2013): BF < 3 indicates "anecdotal" evidence, 3 < BF < 10 "moderate," 10 < BF < 30 "strong," 30 < BF < 100 "very strong," and BF > 100 "extreme" evidence. Both the Bayesian t tests and the GLMMs used uninformative Cauchy priors with M = 0 and width = 0.71. The implementation of the steps of the three analyses is outlined in depth in the following sections.
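For illustration, the following is a minimal Python sketch of the computation underlying such a Bayes factor for a t statistic: the JZS Bayes factor with a zero-centered Cauchy prior of scale r ≈ 0.71, following Rouder et al. (2009). This is a generic stand-in for intuition, not the authors' code, which used the R BayesFactor package:

```python
import numpy as np
from scipy import integrate

def jzs_bf10(t, n, r=0.707):
    """JZS Bayes factor BF10 for a one-sample t statistic with n observations,
    Cauchy prior on effect size with location 0 and scale r (Rouder et al., 2009)."""
    v = n - 1                                      # degrees of freedom
    h0 = (1 + t**2 / v) ** (-(v + 1) / 2)          # likelihood under the null
    def integrand(g):                               # marginal likelihood under H1,
        k = 1 + n * g * r**2                        # integrating over the mixing
        return (k ** -0.5                           # parameter g of the Cauchy prior
                * (1 + t**2 / (k * v)) ** (-(v + 1) / 2)
                * (2 * np.pi) ** -0.5 * g ** -1.5 * np.exp(-1 / (2 * g)))
    h1, _ = integrate.quad(integrand, 0, np.inf)
    return h1 / h0

print(jzs_bf10(2.5, 18))   # BF10 between 3 and 10 counts as "moderate" above
```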

Model RDMs

We first created two theoretical RDMs using low-level visual and object-specific semantic feature measures. Figure 4 illustrates the multidimensional scale plots for the visual and semantic relations expressed by these models (A, B), the corresponding model RDMs (C, D), and the correlation between the RDMs (E).

(1) The visual model was derived using a pretrained version of VGG16 from the Visual Geometry Group (Simonyan and Zisserman, 2015), a widely used deep convolutional neural network (DNN). VGG16 consists of 16 weight layers, 13 of which are convolutional and 3 of which are fully connected. Evidence suggests that DNNs can simulate the processing that occurs in the ventral visual pathway, where earlier layers detect lower-level visual features and later layers detect higher-level visual features (Zeiler and Fergus, 2014). In the present study we extracted features from the second convolutional layer (layer 2), as a previous study found this layer to approximate low-level visual features such as orientation and edges better than layer 1 (Bone et al., 2020). To extract activation values from layer 2 we used a Python toolbox (Muttenthaler and Hebart, 2021). The visual model was created by feeding the object images through the DNN, extracting the activation values from layer 2, and correlating the values between the stimuli, resulting in a 240 × 240 similarity matrix. This was then turned into a dissimilarity matrix by computing the pairwise dissimilarity values as 1 – Pearson's correlation, resulting in the final visual model.

(2) Construction of the semantic feature RDM followed Clarke and Tyler (2014) but used updated property norms (Hovhannisyan et al., 2021). We first computed pairwise feature similarity between concepts from a semantic feature matrix in which each concept is represented by a binary vector indicating whether a given feature is associated with the concept or not. Pairwise dissimilarity between concepts was computed as 1 – S, where S is the cosine of the angle between feature vectors. This RDM captures both categorical similarity between objects (as objects from similar categories have similar features) and within-category object individuation (as each object is composed of a unique set of features). We excluded taxonomic features, as they reflect category-level, but not concept-specific, information (Taylor et al., 2012). The concept-specific features are a set of shared and more distinctive features that allow the processing and identification of the individual concept. Analyses were implemented using custom code in MATLAB R2021a (The MathWorks) and Python 3.8.
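The following sketch illustrates how both model RDMs can be assembled in Python. It is a hedged approximation: the stimulus file location and the binary feature matrix are hypothetical stand-ins, and torchvision's pretrained VGG16 (with features[:3] ending at the second convolutional layer, conv1_2) is used in place of the extraction toolbox cited above:

```python
import glob
import numpy as np
import torch
from PIL import Image
from scipy.spatial.distance import pdist, squareform
from torchvision import models, transforms

# (1) Visual model: 1 - Pearson's r between layer-2 activations.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()
prep = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
image_paths = sorted(glob.glob("stimuli/*.jpg"))   # hypothetical stimulus files
acts = []
with torch.no_grad():
    for path in image_paths:
        x = prep(Image.open(path).convert("RGB")).unsqueeze(0)
        # Forward through conv1_1 -> ReLU -> conv1_2 only ("layer 2" above).
        acts.append(vgg.features[:3](x).flatten().numpy())
visual_rdm = 1 - np.corrcoef(np.stack(acts))       # 240 x 240 dissimilarities

# (2) Semantic model: 1 - cosine similarity between binary feature vectors.
# Random stand-in for the 240-concept x n-feature norm matrix
# (taxonomic features already removed).
feats = np.random.default_rng(0).integers(0, 2, (240, 2000)).astype(float)
semantic_rdm = squareform(pdist(feats, metric="cosine"))   # exactly 1 - S
```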

RSA first-level GLM

To perform the representational similarity analyses, a β estimate image for each trial was created. This was conducted in SPM12 using a first-level general linear model (GLM) and a least-squares-all (LSA) method (Mumford et al., 2012). A design matrix was created for each participant that included one regressor for each trial, computed by convolving the 4-s duration stimulus function with a canonical hemodynamic response function. For each run, six motion regressors were included, consisting of the three translations and three rotations estimated during spatial realignment, as well as a session constant for each of the eight runs. The model was fit to native-space preprocessed functional images using variational Bayes estimation with an AR(3) autocorrelation model (Penny et al., 2005). A high-pass filter cutoff of 128 s was applied, and the data were scaled to a grand mean of 100 across all voxels and scans within sessions. A whole-brain mask was created by including voxels that had at least a 0.1 probability of being in gray or white matter, as indicated by the tissue segmentation of the participant's T1 scan.
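As a rough illustration of the LSA design (one HRF-convolved regressor per trial), here is a sketch using nilearn in place of SPM12; the run length, onsets, and trial labels are hypothetical:

```python
import numpy as np
import pandas as pd
from nilearn.glm.first_level import make_first_level_design_matrix

tr, n_scans = 2.0, 180                            # hypothetical run length
frame_times = np.arange(n_scans) * tr
events = pd.DataFrame({
    "onset": np.arange(30) * 12.0,                # hypothetical trial onsets (s)
    "duration": 4.0,                               # 4-s stimulus duration
    "trial_type": [f"trial_{i:02d}" for i in range(30)],  # unique label = LSA
})
design = make_first_level_design_matrix(
    frame_times, events,
    hrf_model="spm",                               # SPM canonical HRF
    drift_model="cosine", high_pass=1.0 / 128,     # 128-s high-pass cutoff
)
# `design` has one column per trial (plus drift terms and a constant);
# fitting this GLM yields one beta image per trial for the RSA.
```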

Brain RDMs

In addition to the model RDMs describing feature dissimilarity, we also created brain RDMs, or activity pattern matrices, which represent the dissimilarity in the voxel activation patterns across all stimuli. Thus, the activity pattern matrices have the same structure as the model RDMs, with stimuli as rows and columns. However, whereas each cell of a model RDM contains a measure of dissimilarity in stimulus properties, each cell of an activity pattern matrix contains a measure of dissimilarity in activation patterns across stimuli. As noted above, the activation patterns were extracted for the five ROIs and correlated using Pearson's r. As with the model RDMs, each matrix was then transformed into a brain RDM by calculating 1 – Pearson's correlation.
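Concretely, for one ROI this amounts to the following sketch, where the single-trial β matrix is a synthetic stand-in for the real data:

```python
import numpy as np

# Stand-in for the 240 x n_voxels matrix of single-trial beta estimates
# within one ROI mask (hypothetical values).
betas = np.random.default_rng(0).normal(size=(240, 500))

# Brain RDM: 1 - Pearson's r between the voxel patterns of each stimulus pair.
brain_rdm = 1 - np.corrcoef(betas)       # 240 x 240
```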

Model-brain fit for the object processing analysis regardless of memory vividness

For the first analysis, for each subject and each ROI, the brain RDM was compared with the model RDM using Spearman's rank correlation, and the resulting correlation values were Fisher-transformed. Then, for each group separately, we tested for significant positive similarities between model RDMs and brain RDMs using a one-sample Wilcoxon signed-rank test. We then tested age-related differences in visual and semantic object processing using a two-sample Wilcoxon rank-sum test (Fig. 3, steps 3a and 4a). These tests provide valid inference and treat the variation across subjects as a random effect, thus supporting inference to the population (Nili et al., 2014). We applied a false discovery rate correction across the five ROIs.
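A minimal sketch of this step (3a/4a), with per-subject fits replaced by synthetic stand-in values:

```python
import numpy as np
from scipy.stats import spearmanr, wilcoxon, ranksums

def model_brain_fit(model_rdm, brain_rdm):
    # Step 3a: vectorize the lower triangles (excluding the diagonal),
    # Spearman-correlate, and Fisher z-transform.
    tri = np.tril_indices_from(model_rdm, k=-1)
    rho, _ = spearmanr(model_rdm[tri], brain_rdm[tri])
    return np.arctanh(rho)

# Step 4a with synthetic stand-in fits (one value per subject, one ROI/model):
rng = np.random.default_rng(0)
fits_young = rng.normal(0.05, 0.03, 18)
fits_older = rng.normal(0.03, 0.03, 17)
print(wilcoxon(fits_young, alternative="greater"))   # fit > 0 within a group
print(ranksums(fits_young, fits_older))              # age-group difference
```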

Model-brain fit for the subsequent memory vividness analysis

For the subsequent memory vividness analysis, each row of the model RDM was correlated with the corresponding row of the brain RDM to obtain an IRAF measure for each item in each region. Spearman's rank correlation values were Fisher-transformed and mapped back to each region of interest. We then used the IRAF as an independent variable in a generalized linear mixed effect model to predict subsequent memory vividness (Fig. 3, steps 3b and 4b). The IRAF from the visual and semantic RDMs was used as a continuous predictor in a single model. Thus, the model comprised the IRAF from the visual and semantic models in each of the five ROIs for both young and older adults (ROIs × IRAF × group × model). The binary outcome variable was memory vividness indexed as 0 (least number of details recalled, rated as 1 or 2 for vividness) or 1 (maximum number of details recalled, rated as 3 or 4 for vividness). Thus, we measured the predictive effect of each model term by examining the t statistics for the fixed effect based on β estimates for each of the five ROIs (EVC, LG, FG, LOC, vATL) in addition to the groups of interest (young adults, older adults) and the type of model (visual model, semantic model). Subjects and stimuli were both entered as random effects. Since the results of the analysis are constrained by the chosen reference level, we ran post hoc tests to assess the interaction of our continuous predictor (i.e., IRAF) with each factor (i.e., ROI, group, and model) on memory vividness by calculating the estimated marginal means. Thus, we derived an estimate for each group in each ROI, and a pairwise comparison of estimated marginal means between groups. We applied a false discovery rate correction across the five ROIs. These analyses were performed using the lme4 and emmeans packages in RStudio version 1.3.1093.
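The item-wise step can be sketched as follows (a hedged reconstruction of the IRAF computation described above, not the authors' code); the resulting per-item values are what would enter the mixed model as the IRAF predictor:

```python
import numpy as np
from scipy.stats import spearmanr

def iraf(model_rdm, brain_rdm):
    """Item-wise RDM-activity fit: row-by-row Spearman correlation between
    model and brain RDMs, excluding the diagonal self-comparison cell."""
    n = model_rdm.shape[0]
    vals = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        rho, _ = spearmanr(model_rdm[i, keep], brain_rdm[i, keep])
        vals[i] = np.arctanh(rho)     # Fisher z, one value per item
    return vals
```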

Testing the compensatory role of regions supporting semantic cognition in older adults

After the analysis of the visual and semantic representations involved in object processing and subsequent memory vividness, we tested our hypothesis that regions observed to support semantic cognition may play a compensatory role in older adults. This hypothesis generates two predictions: (1) a negative correlation between the specificity of semantic representations and the specificity of visual representations (i.e., individuals with less specific visual representations compensate with more specific semantic representations); and (2) a corresponding negative interaction between the specificity of visual and semantic representations that also predicts memory vividness. To test these predictions, for each group separately, we first computed across-subjects Pearson's correlations between the model-brain fit (i.e., Spearman correlation coefficient) of regions that revealed age-related dedifferentiation of visual representations and the model-brain fit of regions that revealed age-related hyperdifferentiation of semantic representations. Then, within brain regions showing a negative correlation, we ran a generalized linear mixed effect model that comprised the IRAF from the visual model (predictor 1), the IRAF from the semantic model (predictor 2), and their interaction (visual IRAF × semantic IRAF). The binary outcome variable was memory vividness indexed as 0 (lowest level of vividness, rated as 1 or 2) or 1 (highest level of vividness, rated as 3 or 4). Subjects and stimuli were both entered as random effects. We were interested in the predictive effect of the interaction between visual and semantic representations, examined via the t statistics for the fixed effect based on the β estimates.
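The first prediction reduces to a simple across-subject correlation; a sketch with synthetic stand-in values (one model-brain fit per subject and region, hypothetical variable names):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
# Hypothetical per-subject model-brain fits for 17 older adults.
visual_fit_evc = rng.normal(0.04, 0.02, 17)
semantic_fit_fg = 0.05 - 0.8 * visual_fit_evc + rng.normal(0, 0.01, 17)

r, p = pearsonr(visual_fit_evc, semantic_fit_fg)
print(f"r = {r:.2f}, p = {p:.3f}")   # a negative r is consistent with compensation
```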

Results

Memory vividness and memory performance

The linear mixed model at encoding did not reveal any difference in RT between old items that were later remembered with high relative to low vividness in the young adults (β = 26.17, SE = 16.43, t = 1.59, p = 0.111). Similarly, there was no difference in RT between old items that were later remembered with high relative to low vividness in the older adults (β = −9.35, SE = 17.50, t = −0.53, p = 0.593). However, older adults showed slower RTs than young adults for items that were later recalled with both low vividness (β = 322.91, SE = 103.62, t = 3.12, p = 0.002) and high vividness (β = 287.00, SE = 104.00, t = 2.77, p = 0.006). We then investigated the memory vividness of the two age groups. During the scanned retrieval session, no age differences were found in the proportion of memories subjectively reported to be recalled with high (young adults: M = 42.18, older adults: M = 43.33, t(33) = −0.20, p = 0.846) or low vividness (young adults: M = 52.80, older adults: M = 54.24, t(33) = −0.25, p = 0.805). In the postscan recognition test, older adults also showed levels of correct responses comparable to young adults (young adults: M = 72.11, older adults: M = 71.00, t = 0.38, p = 0.707), for both high (young adults: M = 51.48, older adults: M = 53.38, t = −0.35, p = 0.748) and low (young adults: M = 48.52, older adults: M = 46.72, t = 0.35, p = 0.748) confidence, suggesting no age-related memory decline. Suggesting some correspondence between our subjective measure and objective memory, the generalized linear mixed effect model (Fig. 5) showed that postscan memory accuracy was positively related to the vividness rating at retrieval, as indicated by a significant effect of memory vividness in both young (β = 0.15, SE = 0.04, z = 3.81, p < 0.001) and older adults (β = 0.21, SE = 0.04, z = 4.65, p < 0.001). However, there was no age-related difference in postscan memory accuracy as a function of vividness ratings (β = 0.06, SE = 0.06, z = 0.98, p = 0.328).

Figure 5. The line plot shows, for each in-scan vividness rating value, the corresponding postscan memory accuracy (hit rate). Shaded areas around the lines indicate confidence intervals (CIs) across subjects. Purple and green asterisks indicate p values for significant estimated marginal means of linear trends for each group. *p < 0.05, **p < 0.01, ***p < 0.001.

Visual and semantic object processing regardless of subsequent memory vividness

We first identified brain regions that coded for visual and semantic object representations during perception, independent of subsequent vividness ratings. As in the studies by Clarke and Tyler (2014) and Naspi et al. (2021), this was done by including all trials in the RSA of model-brain fit regardless of their subsequent memory status. As illustrated in Figure 6, a one-sample Wilcoxon signed-rank test revealed that in young adults, patterns of activity in all five ROIs showed a significant model-brain fit with the visual model. These included EVC (T = 171, p < 0.001, r = 0.878), LG (T = 169, p < 0.001, r = 0.857), FG (T = 169, p < 0.001, r = 0.857), LOC (T = 165, p < 0.001, r = 0.816), and vATL (T = 162, p < 0.001, r = 0.785). Older adults displayed a significant relationship with the visual model in EVC (T = 153, p < 0.001, r = 0.878), LG (T = 152, p < 0.001, r = 0.867), FG (T = 153, p < 0.001, r = 0.878), and LOC (T = 142, p < 0.001, r = 0.752), but not in vATL (T = 66, p = 0.694, r = 0.121, with moderate Bayesian evidence for the null, BF = 0.13). As predicted, the two-sample Wilcoxon rank-sum test also revealed age-related differences in the model-brain fit for the visual model. More specifically, older adults showed decreased quality of visual representations in EVC (T = 238, p = 0.015, r = 0.474), LG (T = 211, p = 0.048, r = 0.324), and vATL (T = 227, p = 0.018, r = 0.413), whereas patterns of activity in FG and LOC were of similar quality to those in young adults (respectively, T = 199, p = 0.084, r = 0.257, with anecdotal Bayesian evidence for the null, BF = 0.98; and T = 178, p = 0.212, r = 0.139, with anecdotal Bayesian evidence for the null, BF = 0.81).

Figure 6. Visual and semantic representations in regions of interest (ROIs) during perception. Plots represent the specificity of visual and semantic representations at the group level within patterns of activity along the ventral stream. Error bars indicate SEM across subjects. Black asterisks above the bars indicate p values for tests of whether each individual Spearman's correlation is >0 [one-sided Wilcoxon signed-rank test; false-discovery rate (FDR) correction calculated across the number of ROIs, i.e., five]. Red asterisks indicate p values for age-related differences in Spearman's correlation [two-sided Wilcoxon rank-sum test; FDR correction calculated across the number of ROIs, i.e., five per group]. *p < 0.05, **p < 0.01, ***p < 0.001.

Turning to the semantic model, the one-sample Wilcoxon signed-rank tests revealed that in young adults, patterns of activity reflecting semantic object features were significantly coded in LOC (T = 164, p < 0.001, r = 0.806) and, more anteriorly, in FG (T = 147, p = 0.008, r = 0.631). None of the other three ROIs showed significant effects in young adults (respectively, T = 45, p = 0.963, r = 0.416, with strong Bayesian evidence for the null, BF = 0.07 for EVC; T = 95, p = 0.585, r = 0.098, with moderate Bayesian evidence for the null, BF = 0.20 for LG; and T = 57, p = 0.963, r = 0.293, with strong Bayesian evidence for the null, BF = 0.08 for vATL). Like young adults, older adults showed reliable model-brain fits for the semantic model in LOC (T = 153, p < 0.001, r = 0.878) and FG (T = 148, p < 0.001, r = 0.821), but they additionally displayed a significant pattern of activity in LG (T = 136, p = 0.003, r = 0.683). As in young adults, patterns of activity in EVC (T = 88, p = 0.306, r = 0.132, with anecdotal Bayesian evidence for the null, BF = 0.44) and vATL (T = 92, p = 0.305, r = 0.178, with anecdotal Bayesian evidence for the null, BF = 0.49) did not show a significant relationship with the semantic model in older adults. As for the visual model, a two-sample Wilcoxon rank-sum test was used to compare the quality of semantic representations in the two groups. In sharp contrast with the results for visual representations, older adults showed stronger semantic representations than young adults. This effect was found in LG and FG (respectively, T = 227, p = 0.018, r = 0.413 and T = 231, p = 0.018, r = 0.435), but not in the other three ROIs (respectively, T = 206, p = 0.060, r = 0.296, with anecdotal Bayesian evidence for the null, BF = 2.55 for EVC; T = 204, p = 0.060, r = 0.285, with anecdotal Bayesian evidence for the null, BF = 1.70 for LOC; and T = 200, p = 0.063, r = 0.262, with anecdotal Bayesian evidence for the null, BF = 1.55 for vATL).

In each ROI, we also checked which representations showed unique effects after controlling for effects of the other model using Spearman's partial correlation. However, given the low correlation between models (Fig. 4E), the results of the analysis revealed an identical pattern of findings and, thus, are not reported further. In sum, visual representations were of higher quality in the young than in the older adults (in EVC, LG, and vATL), consistent with the idea of dedifferentiation (Koen and Rugg, 2019). At the same time, semantic representations showed the opposite effect and were stronger in the older adults relative to the young adults (in LG and FG), consistent with the idea of hyperdifferentiation (Deng et al., 2021). This conclusion is supported by a two-way ANOVA that revealed a significant interaction between group (young adults, older adults) and type of model (visual, semantic; F(1,346) = 17.5, p < 0.001).

Effect of stored visual and semantic representation on subsequent memory vividness

To examine whether visual and semantic representations at encoding predicted memory vividness of the object pictures, we ran a generalized linear mixed effect model. The results of the estimated marginal means derived from the full model are reported below in Table 2 and displayed in Figure 7.

Table 2. Estimated marginal means for the visual and semantic models separated by age group

Figure 7. The line plots represent the effect of visual and semantic representations on the probability of recalling studied items with high vividness in young (purple lines) and older (green lines) adults. Shaded areas around the lines indicate the standard error of the mean (SEM) across subjects. Purple and green asterisks indicate p values for significant estimated marginal means of linear trends for each ROI in each group [false-discovery rate (FDR) correction calculated across the number of ROIs, i.e., five per group]. Red asterisks within the middle top of the panels indicate significant age-related differences [difference between estimated marginal means of linear trends; FDR correction calculated across the number of ROIs, i.e., five per contrast between groups]. *p < 0.05, **p < 0.01, ***p < 0.001.

In young adults, the estimated marginal means revealed that visual representations predicted subsequent memory vividness in the EVC (OR = 2.74, 95% CI [0.37, 1.64]), LG (OR = 3.55, 95% CI [0.51, 2.02]), FG (OR = 3.26, 95% CI [0.51, 1.85]), and LOC (OR = 3.27, 95% CI [0.45, 1.92]), but not in the vATL (OR = 2.29, 95% CI [−0.24, 1.90], with anecdotal Bayesian evidence for the null, BF = 1.12). In older adults, visual representations contributed to memory vividness only in the EVC (OR = 4.41, 95% CI [0.50, 2.46]). No other result was significant [respectively, LG (OR = 2.99, 95% CI [0.03, 2.16], with anecdotal Bayesian evidence for the null, BF = 1.71); FG (OR = 2.22, 95% CI [−0.13, 1.72], with anecdotal Bayesian evidence for the null, BF = 1.27); LOC (OR = 1.12, 95% CI [−0.93, 1.16], with anecdotal Bayesian evidence for the null, BF = 0.70); and vATL (OR = 1.18, 95% CI [−1.09, 1.41], with anecdotal Bayesian evidence for the null, BF = 0.75)]. Unlike visual representations, in young adults the estimated marginal means revealed that semantic representations did not predict memory vividness in any of the ROIs [respectively, EVC (OR = 0.50, 95% CI [−1.80, 0.40], with anecdotal Bayesian evidence for the null, BF = 0.95); LG (OR = 1.30, 95% CI [−0.82, 1.34], with anecdotal Bayesian evidence for the null, BF = 0.73); FG (OR = 0.74, 95% CI [−1.33, 0.74], with anecdotal Bayesian evidence for the null, BF = 0.73); LOC (OR = 0.46, 95% CI [−1.61, 0.04], with anecdotal Bayesian evidence for the null, BF = 1.51); and vATL (OR = 0.76, 95% CI [−1.35, 0.80], with anecdotal Bayesian evidence for the null, BF = 0.73)]. In older adults, in contrast, semantic representations predicted memory vividness in LG (OR = 17.30, 95% CI [1.72, 3.98]) and in FG (OR = 9.28, 95% CI [1.16, 3.30]). No other regions were significant [respectively, EVC (OR = 1.68, 95% CI [−0.60, 1.64], with anecdotal Bayesian evidence for the null, BF = 0.83); LOC (OR = 0.78, 95% CI [−1.02, 0.53], with anecdotal Bayesian evidence for the null, BF = 0.67); and vATL (OR = 0.82, 95% CI [−1.29, 0.90], with anecdotal Bayesian evidence for the null, BF = 0.72)].

The contrasts (older adults–young adults) derived from the estimated marginal means are reported below in Table 3 and displayed in Figure 7.

Table 3. Contrasts between the estimated marginal means reflecting significant differences between groups

Although more regions that coded for visual information contributed to later memory vividness in young than older adults, post hoc contrasts between age groups (older adults–young adults) revealed no age-related differences (respectively, OR = 1.61, 95% CI [−0.69, 1.64], with anecdotal Bayesian evidence for the null, BF = 0.81 for EVC; OR = 0.84, 95% CI [−1.48, 1.13], with anecdotal Bayesian evidence for the null, BF = 0.76 for LG; OR = 0.68, 95% CI [−1.53, 0.76], with anecdotal Bayesian evidence for the null, BF = 0.77 for FG; OR = 0.34, 95% CI [−2.35, 0.21], with anecdotal Bayesian evidence for the null, BF = 1.22 for LOC; OR = 0.51, 95% CI [−2.31, 0.98], with anecdotal Bayesian evidence for the null, BF = 0.87 for vATL). Unlike visual representations, post hoc contrasts between age groups (older adults–young adults) revealed that, in LG (OR = 13.33, 95% CI [1.02, 4.16]) and in FG (OR = 12.50, 95% CI [1.04, 4.01]), semantic representations predicted memory vividness to a greater extent in older than in young adults. No other regions showed age-related differences (respectively, OR = 3.39, 95% CI [−0.35, 2.80], with anecdotal Bayesian evidence for the null, BF = 1.13 for EVC; OR = 1.72, 95% CI [−0.58, 1.67], with anecdotal Bayesian evidence for the null, BF = 0.84 for LOC; OR = 1.08, 95% CI [−1.446, 1.61], with anecdotal Bayesian evidence for the null, BF = 0.79 for vATL).

Age-related dedifferentiation is compensated by hyperdifferentiation in the fusiform gyrus

Finally, to examine whether there is an association between regions that showed age-related dedifferentiation and regions that showed age-related hyperdifferentiation at encoding, we computed across-subject Pearson's correlation coefficients between (1) the visual Spearman's model-brain fit in EVC and the semantic Spearman's model-brain fit in LG and FG, and (2) the visual Spearman's model-brain fit in LG and the semantic Spearman's model-brain fit in LG and FG. In young adults, there was a nonsignificant correlation between EVC (visual) and LG (semantic), r(16) = 0.11, p = 0.652, with anecdotal Bayesian evidence for the null, BF = 0.54, and between EVC (visual) and FG (semantic), r(16) = 0.07, p = 0.769, with anecdotal Bayesian evidence for the null, BF = 0.51. However, in older adults there was a significant correlation between EVC (visual) and FG (semantic), r(15) = −0.61, p = 0.009, but not between EVC (visual) and LG (semantic), r(15) = −0.34, p = 0.186, with anecdotal Bayesian evidence for the null, BF = 0.99. Moreover, in young adults there was a nonsignificant correlation between LG (visual) and LG (semantic), r(16) = 0.41, p = 0.087, with anecdotal Bayesian evidence for the null, BF = 1.53, and between LG (visual) and FG (semantic), r(16) = 0.25, p = 0.323, with anecdotal Bayesian evidence for the null, BF = 0.72. Similarly, in older adults there was a nonsignificant correlation between LG (visual) and LG (semantic), r(15) = 0.14, p = 0.591, with anecdotal Bayesian evidence for the null, BF = 0.57, and between LG (visual) and FG (semantic), r(15) = −0.17, p = 0.515, with anecdotal Bayesian evidence for the null, BF = 0.60. The results are shown in Figure 8.

Figure 8. The plots represent Pearson's correlations between the visual Spearman's model-brain fit in early visual cortex (EVC) and lingual gyrus (LG) and the semantic Spearman's model-brain fit in LG and fusiform gyrus (FG) for both young (purple lines) and older (green lines) adults. Purple and green asterisks in the middle of the panels indicate p values for significant correlations. *p < 0.05, **p < 0.01, ***p < 0.001.

Overall, the results illustrated in Figure 8 suggest that, in older adults, as the quality of visual representations in EVC decreases, the quality of semantic representations in FG increases. To assess whether older adults' compensation at the neural level also contributes to the subjective experience of vividness, we related the neural data to memory vividness in a generalized linear mixed effect model using the IRAF values obtained earlier. Since the IRAF is an index of the specificity of the representation for single items, we were able to assess how the interaction of the specificity of these representations predicted memory vividness. Thus, we tested the interaction between the visual IRAF in EVC and the semantic IRAF in FG. The findings showed a significant negative interaction between the quality of visual representations in EVC and the quality of semantic representations in FG (β = −39.79, SE = 7.14, z = −5.57, p < 0.001). In other words, a decrease in the specificity of visual representations in EVC was associated with a higher probability of judging a memory as highly vivid only when high-quality semantic representations were available in FG and, crucially, this was unique to the older adults (OR = 0.02, 95% CI [0.01, 0.03] for FG; see Fig. 9).

Figure 9. The line plots represent the interaction of visual and semantic representations on the probability of memory vividness in older adults. The quality of semantic representations is divided into two panels representing the minimum and maximum semantic model-brain fit in the fusiform gyrus (FG). Shaded areas around the lines indicate the standard error (SE) across subjects. The red asterisks within the middle top of the panels indicate a significant interaction of visual and semantic representations, indicating successful compensation. *p < 0.05, **p < 0.01, ***p < 0.001.

Discussion

Our results showed that the specificity of visual and semantic representations differed in young and older adults and predicted memory vividness differently across age groups. Relative to young adults, the specificity of visual representations was reduced in older adults, consistent with dedifferentiation (Koen and Rugg, 2019), whereas the specificity of semantic representations was enhanced in older adults, consistent with hyperdifferentiation (Deng et al., 2021). Interestingly, the specificity of visual representations in the early and ventral visual pathway at encoding predicted memory vividness in young adults. Older adults showed this effect only in EVC, although the overall specificity of those representations was reduced in this group. Compared with the young group, older adults displayed higher specificity of semantic representations in more anterior regions at encoding, which contributed to their memory vividness. Finally, we found that reliance on semantic representations plays a compensatory role in older age. Specifically, a negative correlation between visual and semantic representational specificity showed that older adults with less specific visual representations in EVC displayed more specific semantic representations in FG, and this predicted higher subsequent memory vividness. Thus, the current findings strongly suggest that while young adults seem to recruit detailed visual representations in the service of episodic memory, older adults appear to over-rely on semantic information, presumably because of degraded visual representations early in the processing hierarchy.

Previous studies have suggested that aging is associated with neural dedifferentiation (Carp et al., 2011; Koen and Rugg, 2019; Koen et al., 2019, 2020; Trelle et al., 2019). Accordingly, our results revealed that the specificity of older adults' visual representations was reduced compared with young adults in EVC, LG, and vATL. The finding in EVC is in line with Deng et al. (2021), who also found impaired visual representations in older adults in this region. Extending those results, the current study additionally found an age-related reduction in visual representational specificity in vATL. Although often considered an amodal semantic hub (Bonner and Price, 2013), vATL is likely to retain some degree of modality specificity (Yi et al., 2007). Indeed, visual object features were significantly represented in vATL in young adults (Fig. 6). This finding is consistent with the results of a meta-analysis showing that visual object processing often recruits vATL (Visser et al., 2010). An open question is whether the age-related visual representational deficit we observed in vATL reflects a local effect or whether the reduced neural specificity found in EVC cascades downstream, influencing visuo-semantic processing along the ventral pathway and ultimately altering representational content in vATL. Although older adults showed reduced representational specificity of visual information in EVC and LG at encoding, our findings suggest that they could still use such information when attempting to recover vivid memories. This result fits with the behavioral data showing comparable levels of subjective vividness as well as comparable levels of objective memory accuracy between age groups. Thus, while neural dedifferentiation compromises the specificity of visual representations (Koen et al., 2020), overreliance on semantic representations presumably serves a compensatory role in episodic memory later in life.

Our results revealed that semantic representations were more specific in older than in young adults in LG and FG. The role of occipito-temporal regions in processing preexisting semantic knowledge is well documented in young adults (Koutstaal et al., 2001; Simons et al., 2003; Tyler et al., 2013). Davis et al. (2021) and Naspi et al. (2021) found that some of these regions code for both visual and semantic information. In particular, emerging data suggest that the FG processes observable (but also verbalizable) semantic features, supporting the extraction of meaning from vision. For example, Devereux et al. (2018) combined deep visual and semantic attractor networks to model the transformation from vision to semantics, revealing a confluence of late visual and semantic feature representations in FG (see also Tyler et al., 2013). Our findings clarify that both image-based visual features and semantic features are represented in FG during successful encoding in both young and older adults. Together, the data further suggest that this initial extraction of visual semantic features is more important for the effective encoding of memories of specific objects in older than in young adults.
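As an illustration of how such model representations can be operationalized, the sketch below builds a visual and a semantic model similarity structure in Python. The feature dimensionalities and random inputs are placeholders for demonstration only; in practice, visual features could come from a late layer of a deep convolutional network such as VGG-16 (e.g., extracted with THINGSvision; Muttenthaler and Hebart, 2021) and semantic features from concept property norms (Hovhannisyan et al., 2021).

  import numpy as np
  from scipy.spatial.distance import pdist, squareform

  def model_rdm(features: np.ndarray, metric: str = "correlation") -> np.ndarray:
      """features: items x feature dimensions; returns an items x items distance matrix."""
      return squareform(pdist(features, metric=metric))

  # Placeholder inputs standing in for real DCNN activations and norm vectors.
  dcnn_acts = np.random.rand(100, 4096)                        # 100 items, hypothetical layer size
  norm_vectors = np.random.randint(0, 2, (100, 300)).astype(float)  # binary property vectors

  visual_rdm = model_rdm(dcnn_acts)                 # visual model similarity structure
  semantic_rdm = model_rdm(norm_vectors, "cosine")  # semantic model similarity structure

These model matrices are the inputs that an item-wise fit, like the sketch above, compares against the neural similarity structure of a given region.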

An important and novel finding of our study is that the specificity of semantic representations in these occipito-temporal regions also predicted whether older adults recovered highly vivid memories, which is consistent with the results of univariate activation fMRI studies in young adults (Stern et al., 1996; Kirchhoff et al., 2000; Vaidya et al., 2002; Garoff et al., 2005; Kim, 2011). It is unclear why young adults did not benefit from semantic representations as much as older adults did. We believe that age-related differences in representational specificity partly reflect differences in the strategies used during encoding. Representations are not "hard-wired" into the brain, and the representational space can be warped by attention and hence by encoding strategies (Çukur et al., 2013; Martin et al., 2018; Wang et al., 2018). We speculate that young adults paid more attention to visual features, whereas older adults relied more on preexisting semantic knowledge. This is consistent with the notion of greater age-related reliance on gist at encoding, as hypothesized by fuzzy trace theory (Brainerd and Reyna, 2005), and predicts an enhanced fit of activation patterns with the semantic model.

Unlike in Deng et al. (2021), the vATL was not engaged in semantic processing in older adults and consequently did not contribute to memory vividness. The lack of engagement of this region may partly reflect differences in stimuli: we investigated semantic representations of everyday objects, whereas Deng et al. (2021) examined semantic representations of natural scenes. The semantic properties of a scene (e.g., a farm) are more complex because scenes contain several objects, which are also likely to appear in other scenes. Thus, integrating a series of objects into a specific scene may require a greater level of semantic abstraction. It is therefore reasonable to speculate that processing the meaning of scenes depends more on the abstract, interpretive function of vATL than does the processing of objects, whose semantic properties can be extracted earlier, in regions such as FG (Bonner and Price, 2013; Lambon Ralph et al., 2017). Further research is required to directly compare the semantic representations of objects and scenes.

Our finding that older adults' semantic information is represented in more anterior regions (e.g., FG) than young adults' is consistent with the posterior-anterior shift in aging (PASA; Davis et al., 2008). Evidence for PASA was first reported by Grady et al. (1994) in a positron emission tomography (PET) study of the perception of faces and locations. In both conditions, reduced processing efficiency in the EVC of older adults led to the recruitment of other cortical regions, including the FG (i.e., area 37) and the prefrontal cortex (PFC; see also Davis et al., 2008). Our results parallel these early findings and more directly suggest that the FG is relevant for semantic processing (Binney et al., 2010; Mion et al., 2010). The interaction between visual and semantic processes in this region could also explain why semantic recruitment in older adults enhanced performance in the postscan forced-choice task, whose accuracy varied with the in-scan vividness ratings. Although greater semantic processing in older adults may in part explain the absence of age-related differences in memory vividness, it is also possible that it lowers the vividness criterion through metamemory changes, leading to an overestimation of the richness of memory representations (Folville et al., 2020). Moreover, years of education can account for some of the variance in cognitive ability among older adults, as more years of education have been associated with a slower age-related decline in memory (Colsher and Wallace, 1991; Evans et al., 1993). The results of the cognitive tests suggest that older adults may efficiently use strategies that depend on executive functions to support memory, especially when the task is cognitively demanding.

In conclusion, visual and semantic representations allow both young and older adults to form memories of specific objects. The data support the idea that aging is associated with neural dedifferentiation of visual features, but they also show that older adults can rely on semantic representations of sufficient specificity to compensate for their memory deficits (Cabeza et al., 2018). This finding challenges the idea that aging is simply associated with a general decline in cognitive abilities and their underlying neural mechanisms (Fraundorf et al., 2019). Instead, our results revealed that some cognitive and brain mechanisms are not only spared but even enhanced by aging (Umanath and Marsh, 2014). One aspect of cognition that is spared in older age is preexisting semantic knowledge, but the neural mechanisms of these spared functions are largely unknown. Here, we showed that the LG and FG are part of a semantic network recruited to support episodic memory. We suggest that older adults use this circuit as a compensatory mechanism when task demands are high, as in a memory vividness task that requires the formation and recovery of highly detailed memory representations.

Footnotes

  • This work was supported by Duke University research funds and by the Einstein Foundation Berlin.

  • The authors declare no competing financial interests.

  • Correspondence should be addressed to Loris Naspi at loris.naspi@hu-berlin.de

SfN exclusive license.

References

  1. Amunts K, Malikovic A, Mohlberg H, Schormann T, Zilles K (2000) Brodmann's areas 17 and 18 brought into stereotaxic space–where and how variable? Neuroimage 11:66–84. https://doi.org/10.1006/nimg.1999.0516
  2. Avants BB, Epstein CL, Grossman M, Gee JC (2008) Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain. Med Image Anal 12:26–41. https://doi.org/10.1016/j.media.2007.06.004
  3. Beck AT, Ward CH, Mendelson M, Mock J, Erbaugh J (1961) An inventory for measuring depression. Arch Gen Psychiatry 4:561–571. https://doi.org/10.1001/archpsyc.1961.01710120031004
  4. Binney RJ, Embleton KV, Jefferies E, Parker GJM, Lambon Ralph MA (2010) The ventral and inferolateral aspects of the anterior temporal lobe are crucial in semantic memory: evidence from a novel direct comparison of distortion-corrected fMRI, rTMS, and semantic dementia. Cereb Cortex 20:2728–2738. https://doi.org/10.1093/cercor/bhq019
  5. Bone MB, Ahmad F, Buchsbaum BR (2020) Feature-specific neural reactivation during episodic memory. Nat Commun 11:1945. https://doi.org/10.1038/s41467-020-15763-2
  6. Bonner MF, Price AR (2013) Where is the anterior temporal lobe and what does it do? J Neurosci 33:4213–4215. https://doi.org/10.1523/JNEUROSCI.0041-13.2013
  7. Brainerd CJ, Reyna VF (2005) The science of false memory. Oxford: Oxford University Press.
  8. Cabeza R, Albert M, Belleville S, Craik FIM, Duarte A, Grady CL, Lindenberger U, Nyberg L, Park DC, Reuter-Lorenz PA, Rugg MD, Steffener J, Rajah MN (2018) Maintenance, reserve and compensation: the cognitive neuroscience of healthy ageing. Nat Rev Neurosci 19:701–710. https://doi.org/10.1038/s41583-018-0068-2
  9. Carp J, Park J, Polk TA, Park DC (2011) Age differences in neural distinctiveness revealed by multi-voxel pattern analysis. Neuroimage 56:736–743. https://doi.org/10.1016/j.neuroimage.2010.04.267
  10. Castel AD (2005) Memory for grocery prices in younger and older adults: the role of schematic support. Psychol Aging 20:718–721. https://doi.org/10.1037/0882-7974.20.4.718
  11. Castel AD, McGillivray S, Worden KM (2013) Back to the future: past and future era-based schematic support and associative memory for prices in younger and older adults. Psychol Aging 28:996–1003. https://doi.org/10.1037/a0034160
  12. Clark IA, Maguire EA (2020) Do questionnaires reflect their purported cognitive functions? Cognition 195:104114. https://doi.org/10.1016/j.cognition.2019.104114
  13. Clarke A, Tyler LK (2014) Object-specific semantic coding in human perirhinal cortex. J Neurosci 34:4766–4775. https://doi.org/10.1523/JNEUROSCI.2828-13.2014
  14. Clarke A, Tyler LK (2015) Understanding what we see: how we derive meaning from vision. Trends Cogn Sci 19:677–687. https://doi.org/10.1016/j.tics.2015.08.008
  15. Colsher PL, Wallace RB (1991) Longitudinal application of cognitive function measures in a defined population of community-dwelling elders. Ann Epidemiol 1:215–230. https://doi.org/10.1016/1047-2797(91)90001-s
  16. Cooper RA, Ritchey M (2022) Patterns of episodic content and specificity predicting subjective memory vividness. Mem Cognit 50:1629–1643. https://doi.org/10.3758/s13421-022-01291-5
  17. Cooper RA, Kensinger EA, Ritchey M (2019) Memories fade: the relationship between memory vividness and remembered visual salience. Psychol Sci 30:657–668. https://doi.org/10.1177/0956797619836093
  18. Cox RW, Hyde JS (1997) Software tools for analysis and visualization of fMRI data. NMR Biomed 10:171–178. https://doi.org/10.1002/(SICI)1099-1492(199706/08)10:4/5<171::AID-NBM453>3.0.CO;2-L
  19. Çukur T, Nishimoto S, Huth AG, Gallant JL (2013) Attention during natural vision warps semantic representation across the human brain. Nat Neurosci 16:763–770. https://doi.org/10.1038/nn.3381
  20. Davis SW, Dennis NA, Daselaar SM, Fleck MS, Cabeza R (2008) Que PASA? The posterior-anterior shift in aging. Cereb Cortex 18:1201–1209. https://doi.org/10.1093/cercor/bhm155
  21. Davis SW, Geib BR, Wing EA, Wang WC, Hovhannisyan M, Monge ZA, Cabeza R (2021) Visual and semantic representations predict subsequent memory in perceptual and conceptual memory tests. Cereb Cortex 31:974–992. https://doi.org/10.1093/cercor/bhaa269
  22. Deng L, Davis SW, Monge ZA, Wing EA, Geib BR, Raghunandan A, Cabeza R (2021) Age-related dedifferentiation and hyperdifferentiation of perceptual and mnemonic representations. Neurobiol Aging 106:55–67. https://doi.org/10.1016/j.neurobiolaging.2021.05.021
  23. Devereux BJ, Clarke A, Tyler LK (2018) Integrated deep visual and semantic attractor neural networks predict fMRI pattern-information along the ventral object processing pathway. Sci Rep 8:10636. https://doi.org/10.1038/s41598-018-28865-1
  24. Dienes Z (2014) Using Bayes to get the most out of non-significant results. Front Psychol 5:781. https://doi.org/10.3389/fpsyg.2014.00781
  25. Du Y, Buchsbaum BR, Grady CL, Alain C (2016) Increased activity in frontal motor cortex compensates impaired speech perception in older adults. Nat Commun 7:12241. https://doi.org/10.1038/ncomms12241
  26. Eickhoff SB, Stephan KE, Mohlberg H, Grefkes C, Fink GR, Amunts K, Zilles K (2005) A new SPM toolbox for combining probabilistic cytoarchitectonic maps and functional imaging data. Neuroimage 25:1325–1335. https://doi.org/10.1016/j.neuroimage.2004.12.034
  27. Esteban O, Markiewicz CJ, Blair RW, Moodie CA, Isik AI, Erramuzpe A, Kent JD, Goncalves M, DuPre E, Snyder M, Oya H, Ghosh SS, Wright J, Durnez J, Poldrack RA, Gorgolewski KJ (2019) fMRIPrep: a robust preprocessing pipeline for functional MRI. Nat Methods 16:111–116. https://doi.org/10.1038/s41592-018-0235-4
  28. Evans DA, Beckett LA, Albert MS, Hebert LE, Scherr PA, Funkenstein HH, Taylor JO (1993) Level of education and change in cognitive function in a community population of older persons. Ann Epidemiol 3:71–77. https://doi.org/10.1016/1047-2797(93)90012-s
  29. Folville A, D'Argembeau A, Bastin C (2020) Deciphering the relationship between objective and subjective aspects of recollection in healthy aging. Memory 28:362–373. https://doi.org/10.1080/09658211.2020.1720741
  30. Fraundorf SH, Hourihan KL, Peters RA, Benjamin AS (2019) Aging and recognition memory: a meta-analysis. Psychol Bull 145:339–371. https://doi.org/10.1037/bul0000185
  31. Garoff RJ, Slotnick SD, Schacter DL (2005) The neural origins of specific and general memory: the role of the fusiform cortex. Neuropsychologia 43:847–859. https://doi.org/10.1016/j.neuropsychologia.2004.09.014
  32. Grady CL, Maisog JM, Horwitz B, Ungerleider LG, Mentis MJ, Salerno JA, Pietrini P, Wagner E, Haxby JV (1994) Age-related changes in cortical blood flow activation during visual processing of faces and location. J Neurosci 14:1450–1462.
  33. Greve DN, Fischl B (2009) Accurate and robust brain image alignment using boundary-based registration. Neuroimage 48:63–72. https://doi.org/10.1016/j.neuroimage.2009.06.060
  34. Hovhannisyan M, Clarke A, Geib BR, Cicchinelli R, Monge Z, Worth T, Szymanski A, Cabeza R, Davis SW (2021) The visual and semantic features that predict object memory: concept property norms for 1,000 object images. Mem Cognit 49:712–731. https://doi.org/10.3758/s13421-020-01130-5
  35. Hoyer WJ, Verhaeghen P (2006) Memory aging. In: Handbook of the psychology of aging (Birren JE, Schaie KW, eds), pp 209–232. Amsterdam: Elsevier. https://doi.org/10.1016/B978-012101264-9/50013-6
  36. Jenkinson M, Smith S (2001) A global optimisation method for robust affine registration of brain images. Med Image Anal 5:143–156. https://doi.org/10.1016/s1361-8415(01)00036-6
  37. Jenkinson M, Bannister P, Brady M, Smith S (2002) Improved optimization for the robust and accurate linear registration and motion correction of brain images. Neuroimage 17:825–841. https://doi.org/10.1006/nimg.2002.1132
  38. Kim H (2011) Neural activity that predicts subsequent memory and forgetting: a meta-analysis of 74 fMRI studies. Neuroimage 54:2446–2461. https://doi.org/10.1016/j.neuroimage.2010.09.045
  39. Kirchhoff BA, Wagner AD, Maril A, Stern CE (2000) Prefrontal-temporal circuitry for episodic encoding and subsequent memory. J Neurosci 20:6173–6180. https://doi.org/10.1523/JNEUROSCI.20-16-06173.2000
  40. Koen JD, Rugg MD (2019) Neural dedifferentiation in the aging brain. Trends Cogn Sci 23:547–559. https://doi.org/10.1016/j.tics.2019.04.012
  41. Koen JD, Hauck N, Rugg MD (2019) The relationship between age, neural differentiation, and memory performance. J Neurosci 39:149–162. https://doi.org/10.1523/JNEUROSCI.1498-18.2018
  42. Koen JD, Srokova S, Rugg MD (2020) Age-related neural dedifferentiation and cognition. Curr Opin Behav Sci 32:7–14. https://doi.org/10.1016/j.cobeha.2020.01.006
  43. Koutstaal W, Wagner AD, Rotte M, Maril A, Buckner RL, Schacter DL (2001) Perceptual specificity in visual object priming: functional magnetic resonance imaging evidence for a laterality difference in fusiform cortex. Neuropsychologia 39:184–199. https://doi.org/10.1016/s0028-3932(00)00087-7
  44. Kriegeskorte N, Kievit RA (2013) Representational geometry: integrating cognition, computation, and the brain. Trends Cogn Sci 17:401–412. https://doi.org/10.1016/j.tics.2013.06.007
  45. Kriegeskorte N, Mur M, Bandettini P (2008) Representational similarity analysis: connecting the branches of systems neuroscience. Front Syst Neurosci 2:4. https://doi.org/10.3389/neuro.06.004.2008
  46. Lambon Ralph MA, Jefferies E, Patterson K, Rogers TT (2017) The neural and computational bases of semantic cognition. Nat Rev Neurosci 18:42–55. https://doi.org/10.1038/nrn.2016.150
  47. Lee MD, Wagenmakers E-J (2013) Bayesian cognitive modeling: a practical course. New York: Cambridge University Press.
  48. Martin CB, Douglas D, Newsome RN, Man LLY, Barense MD (2018) Integrative and distinctive coding of visual and conceptual object features in the ventral visual stream. Elife 7:e31873. https://doi.org/10.7554/eLife.31873
  49. Mion M, Patterson K, Acosta-Cabronero J, Pengas G, Izquierdo-Garcia D, Hong YT, Fryer TD, Williams GB, Hodges JR, Nestor PJ (2010) What the left and right anterior fusiform gyri tell us about semantic memory. Brain 133:3256–3268. https://doi.org/10.1093/brain/awq272
  50. Monge ZA, Madden DJ (2016) Linking cognitive and visual perceptual decline in healthy aging: the information degradation hypothesis. Neurosci Biobehav Rev 69:166–173. https://doi.org/10.1016/j.neubiorev.2016.07.031
  51. Mumford JA, Turner BO, Ashby FG, Poldrack RA (2012) Deconvolving BOLD activation in event-related designs for multivoxel pattern classification analyses. Neuroimage 59:2636–2643. https://doi.org/10.1016/j.neuroimage.2011.08.076
  52. Muttenthaler L, Hebart MN (2021) THINGSvision: a Python toolbox for streamlining the extraction of activations from deep neural networks. Front Neuroinform 15:679838. https://doi.org/10.3389/fninf.2021.679838
  53. Naspi L, Hoffman P, Devereux B, Morcom AM (2021) Perceptual and semantic representations at encoding contribute to true and false recognition of objects. J Neurosci 41:8375–8389. https://doi.org/10.1523/JNEUROSCI.0677-21.2021
  54. Nasreddine ZS, Phillips NA, Bédirian V, Charbonneau S, Whitehead V, Collin I, Cummings JL, Chertkow H (2005) The Montreal Cognitive Assessment, MoCA: a brief screening tool for mild cognitive impairment. J Am Geriatr Soc 53:695–699.
  55. Nili H, Wingfield C, Walther A, Su L, Marslen-Wilson W, Kriegeskorte N (2014) A toolbox for representational similarity analysis. PLoS Comput Biol 10:e1003553. https://doi.org/10.1371/journal.pcbi.1003553
  56. Park J, Carp J, Hebrank A, Park DC, Polk TA (2010) Neural specificity predicts fluid processing ability in older adults. J Neurosci 30:9253–9259. https://doi.org/10.1523/JNEUROSCI.0853-10.2010
  57. Penny WD, Trujillo-Barreto NJ, Friston KJ (2005) Bayesian fMRI time series analysis with spatial priors. Neuroimage 24:350–362. https://doi.org/10.1016/j.neuroimage.2004.08.034
  58. Reuter M, Rosas HD, Fischl B (2010) Highly accurate inverse consistent registration: a robust approach. Neuroimage 53:1181–1196. https://doi.org/10.1016/j.neuroimage.2010.07.020
  59. Rouder JN, Speckman PL, Sun D, Morey RD, Iverson G (2009) Bayesian t tests for accepting and rejecting the null hypothesis. Psychon Bull Rev 16:225–237. https://doi.org/10.3758/PBR.16.2.225
  60. Simons JS, Koutstaal W, Prince S, Wagner AD, Schacter DL (2003) Neural mechanisms of visual object priming: evidence for perceptual and semantic distinctions in fusiform cortex. Neuroimage 19:613–626. https://doi.org/10.1016/s1053-8119(03)00096-x
  61. Simonyan K, Zisserman A (2015) Very deep convolutional networks for large-scale image recognition. In: Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015), pp 1–14. San Diego, CA.
  62. Stern CE, Corkin S, González RG, Guimaraes AR, Baker JR, Jennings PJ, Carr CA, Sugiura RM, Vedantham V, Rosen BR (1996) The hippocampal formation participates in novel picture encoding: evidence from functional magnetic resonance imaging. Proc Natl Acad Sci U S A 93:8660–8665. https://doi.org/10.1073/pnas.93.16.8660
  63. Taylor KI, Devereux BJ, Acres K, Randall B, Tyler LK (2012) Contrasting effects of feature-based statistics on the categorisation and basic-level identification of visual objects. Cognition 122:363–374. https://doi.org/10.1016/j.cognition.2011.11.001
  64. Trelle AN, Henson RN, Simons JS (2019) Neural evidence for age-related differences in representational quality and strategic retrieval processes. Neurobiol Aging 84:50–60. https://doi.org/10.1016/j.neurobiolaging.2019.07.012
  65. Tustison NJ, Avants BB, Cook PA, Zheng Y, Egan A, Yushkevich PA, Gee JC (2010) N4ITK: improved N3 bias correction. IEEE Trans Med Imaging 29:1310–1320. https://doi.org/10.1109/TMI.2010.2046908
  66. Tyler LK, Chiu S, Zhuang J, Randall B, Devereux BJ, Wright P, Clarke A, Taylor KI (2013) Objects and categories: feature statistics and object processing in the ventral stream. J Cogn Neurosci 25:1723–1735. https://doi.org/10.1162/jocn_a_00419
  67. Umanath S, Marsh EJ (2014) Understanding how prior knowledge influences memory in older adults. Perspect Psychol Sci 9:408–426. https://doi.org/10.1177/1745691614535933
  68. Vaidya CJ, Zhao M, Desmond JE, Gabrieli JDE (2002) Evidence for cortical encoding specificity in episodic memory: memory-induced re-activation of picture processing areas. Neuropsychologia 40:2136–2143. https://doi.org/10.1016/s0028-3932(02)00053-2
  69. Visser M, Jefferies E, Lambon Ralph MA (2010) Semantic processing in the anterior temporal lobes: a meta-analysis of the functional neuroimaging literature. J Cogn Neurosci 22:1083–1094. https://doi.org/10.1162/jocn.2009.21309
  70. Wang WC, Brashier NM, Wing EA, Marsh EJ, Cabeza R (2018) Neural basis of goal-driven changes in knowledge activation. Eur J Neurosci 48:3389–3396. https://doi.org/10.1111/ejn.14196
  71. Yi HA, Moore P, Grossman M (2007) Reversal of the concreteness effect for verbs in patients with semantic dementia. Neuropsychology 21:9–19. https://doi.org/10.1037/0894-4105.21.1.9
  72. Zeiler MD, Fergus R (2014) Visualizing and understanding convolutional networks. In: Computer Vision – ECCV 2014 (Fleet D, ed), pp 813–833. Cham: Springer International Publishing.
  73. Zhang Y, Brady M, Smith S (2001) Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm. IEEE Trans Med Imaging 20:45–57. https://doi.org/10.1109/42.906424
Keywords

  • aging
  • compensation
  • episodic memory
  • fMRI
  • memory encoding
  • representational similarity analysis
