Viewpoints

Thinking as Analogy-Making: Toward a Neural Process Account of General Intelligence

Stephanie Theves
Journal of Neuroscience 30 April 2025, 45 (18) e1555242025; https://doi.org/10.1523/JNEUROSCI.1555-24.2025
Max Planck Institute for Empirical Aesthetics, Frankfurt am Main 60322, Germany

Abstract

What is the secret of human intelligence? A key discovery in psychology is that performance correlations across diverse cognitive tasks are explained by a few broad abilities and one overarching general factor, which is also predictive of real-life achievements. Whether these factors correspond to biological processes is a century-old debate. While previous research focused on localizing their correlates in brain structure, connectivity, and activation levels, the mechanisms of neural information processing related to intelligence are still unexplored. I outline a new approach integrating psychometrics with neuroscientific advances in identifying the computations underlying single tasks from their representational geometry to provide a novel perspective on this topic. In particular, I propose a neural process account of the general factor that builds on the central role of structure mapping—the process of abstracting and reasoning based on relational knowledge—in human cognition. Neural coding properties in the hippocampal and prefrontal–parietal systems that enable inferential leaps through structural abstraction might contribute to the general factor. In general, integrating neuro-representational and psychometric research has the potential to uncover core principles of natural intelligence.

The Challenge of Definition

What is the secret of human intelligence? The question is at the heart of the cognitive sciences, including psychology, neuroscience, and artificial intelligence. Addressing it is challenging because it first requires defining and measuring intelligence. In psychology, intelligence is an umbrella term for the sum of cognitive capacities (APA, 2015) and has been studied through performance patterns in multitask batteries (Box 1). Relatedly, a key goal is to explain the generality with which humans can perform multiple tasks across different domains. This article is concerned with psychometric research on the structure of cognitive abilities, particularly its quantification of the overlap between diverse cognitive tasks, and with how these insights can be integrated effectively with advances in cognitive and systems neuroscience.

Positive Manifold and the Structure of Intelligence

A key psychometric discovery is the positive manifold, which refers to the observation that individuals who perform well on one cognitive task tend to perform well on others and is quantified as the general factor g (Spearman, 1904). Specifically, it is derived through factor analysis, a statistical technique that identifies the extent of shared variance among performances across diverse tasks. Decades of psychometric research have converged on the well-validated (Caemmerer et al., 2020) Cattell–Horn–Carroll (CHC) taxonomy, which posits a hierarchical factor structure of cognitive abilities (Carroll, 1993): A few broad abilities that underlie a wide range of specific skills load to different degrees on a common factor g. Among these broad abilities, fluid intelligence (gf), the capacity for reasoning and problem solving, is statistically almost identical to g. Reasoning tasks, such as progressive matrices, series completion, and analogies, show the highest correlation with most other tasks and provide a strong indicator of g (Arendasy et al., 2008; Reynolds et al., 2013; Caemmerer et al., 2020). Because g represents the largest component of the common factor variance, is the best single predictor of educational, occupational, and health outcomes (Schmidt and Hunter, 2004; Deary et al., 2007; Strenze, 2007; Duncan, 2013; Roth et al., 2015; Calvin et al., 2017), and is the most stable trait across development (Deary et al., 2013), these tasks might tap into cognitive mechanisms of wide real-life importance. The hierarchical factor structure (Fig. 1A; CHC model) has been replicated across different samples and independently of the test battery used, with almost perfectly correlated g-factors (Johnson et al., 2004; 2008; Caemmerer et al., 2020). This implies that g does not depend on the specific selection of tasks and reinforces the notion of a unitary construct. However, there is no agreement on how to interpret g. While g is established as a measure of the positive manifold from a psychometric perspective (Johnson et al., 2008), its nature as a psychological construct remains debated. In particular, the debate revolves around whether g can be equated with cognitive or biological variables and whether a causal factor underlies this latent variable. The dominant view considers g an emergent unitary property of interacting neural systems or processes (Gottfredson, 2016). Theoretical accounts differ in the degree to which g can be equated with cognitive or neural variables that exist intraindividually and can be considered a trait (g-theory) or merely reflects a statistical index (e.g., sampling theory, mutualism; see Van der Maas et al., 2006; Gottfredson, 2016; Kovacs and Conway, 2016, for further discussion of these views).

Box 1.

Construction and application of intelligence tests. During the construction of an intelligence test, a battery of tasks is selected to cover a broad range of cognitive skills and is administered to a large sample. Factor analysis then identifies underlying factors that explain the patterns of performance correlations among the tasks. Factor loadings represent the strength and direction of the relationship between each task and the underlying factor. For example, a high positive factor loading indicates that performance on a specific task is strongly related to the factor being measured. Tasks with higher factor loadings have a greater influence on the factor score. Tests are constructed and evaluated against quality criteria (e.g., objectivity, reliability, and validity of test items) in large cohorts of participants, often grouped by broad demographic variables including age and educational attainment. Individual test scores can then be combined with the factor loadings obtained from the factor analysis to estimate an individual's factor scores.
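As an illustration of this procedure, the following is a minimal sketch (hypothetical, simulated task scores rather than data from any cited study) of how factor analysis recovers loadings and per-person factor scores from a small task battery, here using scikit-learn's FactorAnalysis.

```python
# Minimal sketch (simulated scores, one latent factor) of how factor analysis
# recovers loadings and factor scores from a multitask battery.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_people, n_tasks = 500, 8

# Simulate a positive manifold: each task score mixes a shared latent
# ability g with task-specific noise (loadings are hypothetical).
g = rng.normal(size=(n_people, 1))
loadings_true = rng.uniform(0.4, 0.8, size=(1, n_tasks))
scores = g @ loadings_true + 0.6 * rng.normal(size=(n_people, n_tasks))

fa = FactorAnalysis(n_components=1)
fa.fit(scores)

print("estimated loadings:", fa.components_.round(2))   # task-factor relations
g_hat = fa.transform(scores)                              # per-person factor scores
print("recovery of simulated g:", np.corrcoef(g_hat[:, 0], g[:, 0])[0, 1].round(2))
```

In a real battery, hierarchical models would additionally be fit to recover broad abilities and a higher-order g, rather than the single simulated factor used here.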

Figure 1.

Investigating the relation between the structure of cognitive abilities and neural mechanisms of structure mapping. A, Illustration of how neural structure mapping may contribute to the emergence of g. Depicted is a reduced version of the CHC taxonomy (Carroll, 1993). A few broad abilities that underlie a range of ∼80 narrow abilities (opaque) load to different degrees on a common factor g. Thickness of lines reflects g-loading. Gf, gc, glm, gv (reasoning, knowledge, learning and memory, visuospatial reasoning) show the highest g-loading among second-level factors (Caemmerer et al., 2020) with relational reasoning tasks being excellent proxies of g. Bold vertical arrows depict hypothesized direct effects, and horizontal arrows depict mutually reinforcing interactions between reasoning (gf), knowledge (gc), and learning (glm) via structure mapping. B, Structural priors (gray) and alignment with known structures (green) may help to draw inferences about novel concepts (blue). Figure logic adapted from Gentner and Smith (2012). C, Moving from localization-focused neuroimaging approaches to studying g-related differences in representational geometry (left; example of a decision-making task space from Okazawa et al., 2021) may offer insights at the intersections of cognitive science disciplines (right).

Neural Correlates of Intelligence Factors

While intelligence is one of the best-investigated traits in differential psychology, the neural underpinnings of this latent construct remain unresolved. This may invite a recalibration of what we aim to understand at this stage. Previous investigations of its neurobiological correlates focused on localizing the brain regions that show intelligence-related variation in anatomical structure [e.g., total or region-specific gray matter volume (Colom et al., 2006; Pietschnig et al., 2015), white matter integrity (Schmithorst et al., 2005; Penke et al., 2012), microstructure (Genç et al., 2018)], as well as intrinsic structural and functional connectivity (van den Heuvel et al., 2009; Barbey, 2018) and task-related activation strength (Jung and Haier, 2007; Duncan, 2010). Some functional studies contrasted highly g-loaded tasks with control tasks rather than studying interindividual differences. Overall, neuroimaging studies predominantly located g-related variance within brain structure and activation levels in the parietofrontal multiple demand system (Jung and Haier, 2007; Duncan, 2010). In a processing-related account, this has been attributed to attentional control mechanisms for goal-directed problem solving (Duncan, 2013). Another prominent hypothesis related g to lower energy consumption and more efficient processing, based on early PET and fMRI studies, some of which reported negative correlations between intelligence measures and activation in parietofrontal regions, while others observed the opposite relationship (Jung and Haier, 2007; Neubauer and Fink, 2009). Furthermore, several measures of network connectivity, on either a structural or functional level, have been related to g (e.g., how flexibly different network states reorganize; Barbey, 2018), as have most neural measures to date, each with relatively low correlation strength. Importantly, while the localization-centered brain imaging approach in this field of research has been instrumental, it is less well suited to address how, as opposed to where in the brain, information processing differs with respect to intelligence factors. To better understand the nature of human intelligence and how its arguably distinctive qualities are afforded by the brain, a fruitful approach might be to now study how common factors of multitask performance relate to interindividual differences in neural task representations, which can provide a window into the underlying computations.

The Quest for a Neural Process Account and the Neural Geometry of Task Representations

First considerations of a process account date back to the discovery of the positive manifold itself, as indirectly reflected in Spearman's noegenetic laws (Horn and McArdle, 2007). Advances in cognitive neuroscience now provide the tools to investigate the underlying computations via neural task representations. The approach rests on the premises that the format in which information (e.g., task variables) is represented can be more or less advantageous to mental operations (Bernardi et al., 2020; Ebitz and Hayden, 2021) and that representational geometries reflecting the same content or experience are in principle shared across individuals (Chen et al., 2021; Meshulam et al., 2021). For instance, in real-life learning scenarios, such as students taking a computer science course, alignment of individual learners' neural representations to those of experts and other learners increased with learning and predicted performance on the final exam (Meshulam et al., 2021). Thus, if g results from variance in a set of key neural information processing principles, interindividual differences in broad cognitive abilities might be reflected in deviations from (multi)task-optimal neural geometries. As outlined above, factor-analytical research revealed that the factor gf, representing the general ability for reasoning, is statistically almost identical to g (Caemmerer et al., 2020). Derived from these statistical patterns, it can be hypothesized that g is related to neural formats that facilitate reasoning and is particularly reflected in those factors that are the strongest indicators of overall intelligence (Fig. 1A). The emphasis on reasoning aligns with seemingly distinctive facets of human intelligence compared with AI or other species. AI can fall short of the human capacity to make context-appropriate inferences from ambiguous input that align with common sense (Marcus and Davis, 2019). What makes human thinking so flexible? A central perspective in cognitive science is that human reasoning often depends on analogical reasoning and that humans understand new situations in alignment with existing knowledge structures (Gentner, 1983; Tenenbaum et al., 2011). The following outlines why neural structure mapping, that is, the brain mechanisms for extracting structural information from experiences and retrieving it to predict, interpret, and learn in new situations, may reflect a neurocognitive processing trait related to the emergence of a general factor.
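The following is a minimal, hypothetical sketch (simulated activity patterns, not data from the cited studies) of how a representational geometry can be summarized and compared across individuals: each individual's condition-by-feature activity patterns are reduced to a representational dissimilarity matrix (RDM), and geometries are then compared through the correlation of those matrices.

```python
# Minimal sketch (hypothetical data) of comparing representational geometries
# across individuals: each subject's condition-by-feature activity patterns
# are summarized as an RDM, and geometries are compared by correlating RDMs
# between subjects.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_conditions, n_features = 12, 100

def simulate_subject(shared_geometry, idiosyncrasy):
    # Each subject's patterns are a noisy copy of a shared layout.
    return shared_geometry + idiosyncrasy * rng.normal(size=shared_geometry.shape)

shared = rng.normal(size=(n_conditions, n_features))
subj_a = simulate_subject(shared, idiosyncrasy=0.5)
subj_b = simulate_subject(shared, idiosyncrasy=0.5)

# Condition-by-condition dissimilarities (1 - Pearson correlation).
rdm_a = pdist(subj_a, metric="correlation")
rdm_b = pdist(subj_b, metric="correlation")

# Second-order (geometry-level) similarity between the two subjects.
rho, _ = spearmanr(rdm_a, rdm_b)
print(f"between-subject geometry similarity: rho = {rho:.2f}")
```

In the framework outlined here, deviations of an individual's geometry from a task-optimal or group-shared geometry, rather than the simulated idiosyncrasy used above, would be the quantity related to ability factors.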

Structure Mapping as a g-Related Mechanism

Analogical reasoning entails drawing inferences based on abstract relational similarities rather than perceptual similarity or statistical covariation between two situations. For instance, understanding the analogy “an atom is like the solar system” requires evaluating the higher-order similarity of the relations in both terms (Gentner and Smith, 2012). Recognizing and exploiting relational similarities foster inferential leaps in learning and reasoning (Gentner, 1983; Lake et al., 2015) and allow for the transfer of knowledge across domains (Gick and Holyoak, 1980; 1983), which is key to creative insight and scientific advance (Holyoak and Thagard, 1995; Dunbar and Klahr, 2012). Analogical reasoning has been considered uniquely advanced and central to human cognition (Richland et al., 2007; Penn et al., 2008; Gentner and Smith, 2012; Aho et al., 2023). For instance, it has been shown that participants were more likely to solve a novel problem when prompted with two problem analogues rather than one, which facilitated recognizing relational similarities and abstracting a solution structure beforehand (Gick and Holyoak, 1983). Analogical reasoning also appears as an unprompted natural strategy, for instance, employed by scientists in explaining and generating new scientific concepts (Dunbar, 1995; Dunbar and Klahr, 2012). The ability to reason analogically develops gradually, reflected in a shift from reliance on perceptual features to the use of relational similarities to solve problems in later childhood, and is considered a key driver in the development of abstract thinking and advanced reasoning (Gentner and Toupin, 1986; Gentner and Medina, 1998). Relatedly, relational reasoning is also the best single proxy measure of the general factor of intelligence (Snow et al., 1984). Without analogical reasoning, human learning would be confined to direct observation and trial and error. In contrast, analogical reasoning allows for inferences beyond immediate experience, positioning it as a mechanism that fosters creativity and abstract thought.

Accordingly, a cognitive theory of human reasoning holds that humans form abstract knowledge structures, i.e., concepts, and understand new situations by alignment with known structures. The aligned structures provide the basis to draw inferences via structural pattern completion, highlighting the elements missing in the new situation (Gentner and Smith, 2012; Fig. 1B). In a classic example, learning the concept of an atom is facilitated by discovering its relational similarity to the solar system, with respect to several electrons orbiting one nucleus. This relational alignment can foster inferences about further properties of atoms, such as the inference that the nucleus might be more massive than the electrons. In line with this idea, a computational framework for concept learning and abstraction (Tenenbaum et al., 2011) notes that many natural data sets can be described by a few basic structural patterns, such as grids, circles, or trees, which reflect organizational principles across domains. For instance, tree structures can represent taxonomic hierarchies in biology as well as authority relations in corporate structures. The recurrence of such structures may facilitate the learning of new concepts by inferring and evaluating generative models of the data (Lake et al., 2015) and highlights the efficiency of structure mapping as a processing mechanism.
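To make the alignment-and-inference logic concrete, here is a deliberately toy sketch (hypothetical relational triples, not a model from the cited work): two domains are described by the same relational predicates, entities are aligned by the roles they fill, and a relation known only in the base domain is projected onto the target as a candidate inference, i.e., structural pattern completion.

```python
# Toy sketch (hypothetical relations) of structure mapping: align entities
# across domains by their relational roles, then project base-domain relations
# onto the target as candidate inferences.
base = {                       # solar system
    ("orbits", "planet", "sun"),
    ("more_massive", "sun", "planet"),
}
target = {                     # atom: only the orbital relation is known
    ("orbits", "electron", "nucleus"),
}

# Align entities that fill the same role in a shared relation.
alignment = {}
for rel, x, y in base:
    for rel_t, xt, yt in target:
        if rel == rel_t:
            alignment[x], alignment[y] = xt, yt

# Project remaining base relations through the alignment.
inferences = {
    (rel, alignment[x], alignment[y])
    for rel, x, y in base
    if x in alignment and y in alignment
    and (rel, alignment[x], alignment[y]) not in target
}
print(inferences)  # {('more_massive', 'nucleus', 'electron')}
```

Full structure-mapping models evaluate competing alignments by their systematicity; the sketch only illustrates how an aligned structure licenses an inference beyond direct observation.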

If processing mechanisms underlie the general factor, their influence on the broad abilities might be proportional to the g-loading of those abilities. Neural mechanisms of relational reasoning may particularly affect those broad abilities that correlate most highly with the general factor (Fig. 1A). These are reasoning (gf), visuospatial reasoning (gv), the breadth and depth of knowledge (gc), and learning and memory (glm; Caemmerer et al., 2020). Structural knowledge can help predict, infer, and generalize, i.e., it aids reasoning (gf) and accelerates learning (glm) by enabling the acquisition of more information than is unambiguously provided by the environment. This leads to broader knowledge (gc), including more explicit concepts to draw upon, which in turn facilitates new learning (glm) and reasoning (gf). Thus, in addition to potential direct effects of structure mapping on the most strongly g-loaded broad abilities, mutually reinforcing interactions between them may contribute to the positive manifold via a virtuous cycle (Fig. 1A).

Neural Implementations of Structure Mapping: Cognitive Maps and Manifold Alignment

Structural abstraction via knowledge factorization in hippocampal–entorhinal cells

Ideas on how structure mapping could be implemented in the brain, in particular in the hippocampal system and prefrontal–parietal cortices, recently gained traction. The role of the hippocampus in relational inference in paired associative inference (Zeithamova and Preston, 2010; Koster et al., 2018; Morton et al., 2020) and transitive inference tasks (Bunsey and Eichenbaum, 1996; Eichenbaum et al., 1999; Devito et al., 2010) as well as in concept learning and categorization (Zaki, 2004; Kumaran et al., 2009; Mack et al., 2018; Theves et al., 2019; 2024; Zeithamova and Bowman, 2020; Broschard et al., 2024) is well established.

Recent computational and systems neuroscience studies further investigated the circuit-level implementation of these processes. Cells in the hippocampal–entorhinal system similarly encode mental maps of space (O'Keefe and Dostrovsky, 1971; Hafting et al., 2005) and relationships in nonspatial tasks (Aronov et al., 2017; Sun et al., 2020; Samborska et al., 2022). For instance, hippocampal place cells fire at a particular location, and the regularly repeating firing fields of entorhinal grid cells provide a general metric allowing vector computations. An integrative computational model of spatial and nonspatial inference (Whittington et al., 2020) suggests that hippocampal–entorhinal cells might support generalization by factorizing experiences into a representation of their structure, encoded by grid and vector cells in the medial entorhinal cortex, and their sensory specifics, encoded in the lateral entorhinal cortex. The benefit of this factorization is that structural knowledge can be flexibly recombined to map new situations (conjunction by place cells; Fig. 2A). When the model learned different knowledge structures through factorization, its components exhibited characteristics of hippocampal–entorhinal cell firing, suggesting these cells as a biological basis. Congruent with the notion that the medial entorhinal cortex encodes structural information abstractly, i.e., independent of sensory specifics, is the fact that grid cells keep their correlation structure across environments and do not remap. Due to the consistency of this medial entorhinal input, hippocampal remapping may not be as random as previously thought (Whittington et al., 2020). Related to this finding, it was shown that experiences in one rat can be decoded based on hippocampal representations of the same experience in another rat when place cell activity is geometrically aligned across subjects (Chen et al., 2021; Nieh et al., 2021). Also, border (Solstad et al., 2008) and object vector cells (Høydal et al., 2019) in the medial entorhinal cortex fire invariantly with respect to specific features, as they encode vectorial distances to any object or border and maintain their vectors across contexts. In sum, this invariance contributes to the idea that these firing patterns may reflect a process of abstraction (Whittington et al., 2020).
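As a simplified illustration of the factorization idea (a toy sketch with hypothetical random codes, not the model of Whittington et al., 2020), an abstract structural code that is reused across environments can be combined with environment-specific sensory codes into conjunctive codes: the conjunctions differ between environments, while the shared structural factor carries relational knowledge over to the new environment.

```python
# Minimal sketch (simplified, hypothetical codes) of knowledge factorization:
# an abstract structural code (reused across environments, grid/vector-like)
# is combined with environment-specific sensory codes (lateral-entorhinal-like)
# into conjunctive codes (place-cell-like). The structural factor is shared,
# so relations such as "one step east" need not be relearned in environment B.
import numpy as np

rng = np.random.default_rng(2)
n_locations, d_struct, d_sense = 5, 16, 16

# Abstract structural code: one vector per location, identical in every environment.
structure = rng.normal(size=(n_locations, d_struct))

def conjunctive_code(env_sensory):
    # Hippocampus-like conjunction: outer product of structural and sensory codes.
    return np.einsum("ls,lf->lsf", structure, env_sensory).reshape(n_locations, -1)

env_a = rng.normal(size=(n_locations, d_sense))   # sensory specifics, environment A
env_b = rng.normal(size=(n_locations, d_sense))   # sensory specifics, environment B

place_a = conjunctive_code(env_a)
place_b = conjunctive_code(env_b)

# The conjunctive codes differ between environments (they "remap") ...
print("conjunctive overlap A vs B:",
      round(float(np.corrcoef(place_a.ravel(), place_b.ravel())[0, 1]), 2))
# ... but the structural factor is identical across environments, so relational
# knowledge defined over it transfers without new learning.
```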

Figure 2.

Neural implementation of structure mapping in single-cell tuning and population geometry. A, Knowledge factorization (structural abstraction) in the hippocampal–entorhinal system (as formalized in Whittington et al., 2020). Grid cells, border cells, and object vector cells (OVC) in the medial entorhinal cortex are thought to encode structure abstractly (green), while hippocampal place cells are thought to reflect conjunctions of structural input from the medial entorhinal cortex and specific sensory input (blue). B, Understanding neural activity in geometric terms via state space representations. A state refers to the configuration of activity within a neural population (left), depicted as vectors within the neuron-dimensional activity space (top right). Dimensionality reduction methods (e.g., principal component analysis) identify orthogonal directions (bottom right) in the neural state space that explain most variance. Projecting activity onto those PCs condenses the neuron- (or voxel-/sensor-) dimensional representation into a low-dimensional depiction. Modified from Ebitz and Hayden (2021). C, Parallel and normalized representations can facilitate the transfer of relational knowledge (green) to another context (blue). If items from different contexts (e.g., drinks vs natural phenomena) are represented on a relative scale (top), not an absolute scale (bottom), a classifier (lilac) discriminating cold and hot in one context (drinks) can generalize to the other context (natural phenomena). Figure logic adapted from Sheahan et al. (2021). D, During a reversal learning task (Bernardi et al., 2020), where two sets of stimulus–response–outcome associations alternated (Context 1 and 2; top), hippocampus and PFC neurons exhibited both abstract and high-dimensional representational geometries (top right). Four conditions of task variable (e.g., context, value, and stimulus) combinations are arranged in a way (factorized on a distorted square) that a linear decoder (lilac) can separate all three dichotomies, two of them in an abstract form: the classifier can generalize to new conditions (blue) not seen during training. Deviations from this geometry can reduce generalization ability (bottom left) or reduce the number of decodable variables (bottom right). From Bernardi et al. (2020).

In particular, simulations of hippocampal cell recordings have described their representational schemes as compositional in nature: they would, in principle, allow for the representation of novel or alternative scenarios by recombining their representational elements (e.g., codes for borders, objects, etc.; Bakermans et al., 2023), as required for imagination and planning. This is in line with the observation that replay, the time-compressed sequential activation of place cells, applies not only to experienced trajectories but also to novel shortcuts or alternative routes and can thus be considered generative (Gupta et al., 2010).

Most importantly, if variation in these coding properties contributes to the emergence of general factors, the properties should be domain-general. In fact, hippocampal–entorhinal cells exhibit similar patterns in spatial navigation and in coding various nonspatial task features, such as sound frequency (Aronov et al., 2017), gaze position (Killian and Buffalo, 2018), and numbers (Kutter et al., 2018). Likewise, corresponding representational signatures in human neuroimaging, such as grid-like hexadirectional signal modulation and distance and boundary representations, have been observed for spatial environments (Doeller et al., 2010; Morgan et al., 2011) and other task spaces (Tavares et al., 2015; Bao et al., 2019; Theves et al., 2019; Park et al., 2021). For instance, fMRI studies have demonstrated a role of the hippocampal–prefrontal system in the abstraction of concepts in category-learning tasks, including the different features of a conceptual space (i.e., the structure of the feature space, category boundaries, and prototypes; Theves et al., 2020; Schäfer et al., 2024).
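For readers unfamiliar with the grid-like fMRI signature mentioned above, the following is a minimal sketch (simulated data, hypothetical parameters) of the standard hexadirectional analysis: a signal is regressed on cos(6θ) and sin(6θ) of movement direction through the task space, and a reliable sixfold component indicates grid-like modulation, with a putative grid orientation recoverable from the two weights.

```python
# Minimal sketch (simulated data) of a hexadirectional analysis: regress the
# signal on cos(6*theta) and sin(6*theta) of movement direction; the fitted
# weights give the sixfold amplitude and the grid orientation.
import numpy as np

rng = np.random.default_rng(3)
n_samples = 400
theta = rng.uniform(0, 2 * np.pi, n_samples)       # movement directions

grid_orientation = np.deg2rad(15)                   # hypothetical ground truth
signal = 0.8 * np.cos(6 * (theta - grid_orientation)) \
         + rng.normal(scale=0.5, size=n_samples)

# Design matrix with sixfold-periodic regressors plus an intercept.
X = np.column_stack([np.cos(6 * theta), np.sin(6 * theta), np.ones(n_samples)])
beta, *_ = np.linalg.lstsq(X, signal, rcond=None)

amplitude = np.hypot(beta[0], beta[1])              # sixfold modulation strength
orientation_hat = np.arctan2(beta[1], beta[0]) / 6  # recovered grid orientation
print(f"sixfold amplitude: {amplitude:.2f}, "
      f"estimated orientation: {np.rad2deg(orientation_hat):.1f} deg")
```

In practice, the orientation is estimated in one half of the data and the sixfold modulation is tested in the other half; the sketch only shows the core regressors.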

In sum, hippocampal–entorhinal representations are described as domain-general, abstract, compositional, and generative—qualities that align with human intelligence. Correspondingly, a specific relationship between interindividual differences in relational map-like encoding of task information in the hippocampus and fluid intelligence (gf) has been observed in humans (Tenderra and Theves, 2024a, b).

Structural generalization via manifold alignment in the hippocampus and prefrontal–parietal cortex

Activity patterns of neural populations can be described as manifolds in a high-dimensional neural state space, where coordinates represent the activity of individual neurons or other measurement units (Fig. 2B). The properties of a neural network can confine neural trajectories to a subregion of this space, the manifold. Quantifying geometric properties of manifolds has gained momentum as a tool to infer the computations underlying specific tasks (Ebitz and Hayden, 2021). In particular, recent studies provided the first neural evidence in support of structural abstraction and alignment at the population level in the hippocampus and in prefrontal–parietal cortices (Luyckx et al., 2019; Bernardi et al., 2020; Morton et al., 2020; Sheahan et al., 2021). These studies suggest that when sets of stimuli share relational structure across contexts (e.g., they can be ordered along a linear dimension), they are represented on parallel low-dimensional neural manifolds, such that a decoder trained in one context (e.g., to discriminate cold and hot drinks) could also perform in the other context (e.g., distinguish cold and hot natural phenomena; Luyckx et al., 2019; Sheahan et al., 2021; Fig. 2C,D). In particular, the parietal cortex encodes abstract linear representations of magnitude, also referred to as mental number lines (Hubbard et al., 2005; Nieder, 2016). Congruent with a structure mapping account, these neural representations might be repurposed to map newly learned ranks and afford inferences about unobserved pairs. Accordingly, it was shown that parietal EEG signals reflected parallel representations of numerical ranks and newly learned reward probabilities (Luyckx et al., 2019). Another study investigated cross-context generalization in a reversal learning paradigm (Bernardi et al., 2020), showing that inference of context-dependent stimulus–response–outcome mappings required an abstract (i.e., classifiers could generalize to conditions not seen during training) and simultaneously high-dimensional geometry of task variables in the hippocampus and PFC (Fig. 2D). Abstract representations of event structure in the hippocampus and parietal cortex can further afford vector-based retrieval in support of fast inference times (Morton et al., 2020). Lastly, analogical reasoning must account for the fact that relational patterns can manifest across different physical scales (e.g., drinks and natural phenomena differ in absolute temperature) or lack a physical scale, for instance, in metaphorical uses (e.g., hot and cold emotions). In such cases, the transfer of structural knowledge might require normalized abstract representations. Indicative of this principle, it was shown that during a number comparison task in which the relative magnitude within a context was informative, numbers were represented on parallel and normalized context manifolds in parietal EEG signals (Sheahan et al., 2021).
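The decoding logic behind these findings can be sketched in a few lines (simulated patterns and hypothetical parameters, not an analysis of the cited data): a linear classifier is trained on one context and tested on another whose absolute scale differs, and per-context normalization onto a relative scale is what lets the decoder transfer.

```python
# Minimal sketch (simulated patterns) of cross-context generalization: a linear
# decoder trained to separate "cold" vs "hot" items in one context is tested on
# a context with a different absolute scale. Normalizing each context onto a
# relative scale (parallel, normalized manifolds) enables transfer.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n_trials, n_features = 200, 50
axis = rng.normal(size=n_features)                 # shared "magnitude" direction
axis /= np.linalg.norm(axis)

def simulate_context(offset, scale):
    # Items span a magnitude range specific to this context (absolute scale).
    magnitude = rng.uniform(-1, 1, n_trials)
    patterns = np.outer(offset + scale * magnitude, axis) \
               + 0.3 * rng.normal(size=(n_trials, n_features))
    labels = (magnitude > 0).astype(int)           # cold (0) vs hot (1), relative to context
    return patterns, labels

drinks, y_drinks = simulate_context(offset=1.0, scale=0.5)      # e.g., drinks
weather, y_weather = simulate_context(offset=5.0, scale=2.0)    # e.g., natural phenomena

def zscore(x):
    # Per-context normalization onto a relative scale.
    return (x - x.mean(axis=0)) / x.std(axis=0)

clf = LogisticRegression(max_iter=1000).fit(zscore(drinks), y_drinks)
print("transfer accuracy (normalized):", clf.score(zscore(weather), y_weather))

clf_raw = LogisticRegression(max_iter=1000).fit(drinks, y_drinks)
print("transfer accuracy (raw):", clf_raw.score(weather, y_weather))
```

With raw patterns, the absolute offset of the second context pushes all items to one side of the learned boundary and transfer collapses to chance; after per-context normalization, the shared relational axis supports generalization, mirroring the logic of the cross-condition decoding analyses described above.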

While recent research has identified key roles of hippocampal and prefrontal–parietal regions in structure mapping, their involvement is not necessarily exclusive. In sum, advances in measuring neural task representations now provide the potential to test process theories of the positive manifold empirically, including, for instance, further exploration of the link between structure mapping and human intelligence.

Grounding Characteristics of g in Neural Structure Mapping

The general factor of intelligence features several characteristics (Kovacs and Conway, 2016) that can inform the investigation of putative neural processing mechanisms. Firstly, as outlined above, g and the secondary factor gf are statistically almost identical. Thus, candidate processing mechanisms should explain the pivotal component of gf within g. In particular, individual differences in structure mapping might manifest in measures of cognitive abilities through a virtuous cycle of learning, reasoning, and knowledge, as outlined in Figure 1A. Secondly, it has been suggested that complex tests correlate more strongly with g than less complex tests; for instance, reverse as compared with forward digit span tasks have higher g-loadings (Jensen, 1981). Whether interindividual differences in structure mapping manifest in performance would conceivably be influenced by task complexity, as complexity can be related to the number of possible mechanisms to solve a task (Newell and Simon, 1972). For instance, finding a single item in a new supermarket is easily learned from scratch, but locating 30 items benefits from prior structural knowledge of general supermarket layouts, which enables inferring some of the mappings and learning more quickly. In the first case, less effective structure mapping is compensated for by alternative processes, but in the second case its advantage becomes significant. Thirdly, the correlation between g and the worst performances in a test is higher than the correlation with the best performances, particularly in highly g-loaded tests (Larson and Alderton, 1990; Coyle, 2001; 2003). Evaluation of candidate processes may therefore take into account the constraints that lead to errors in intricate tasks (Kovacs and Conway, 2016; Griffiths, 2020). In complex tasks, differences in the recruitment of structure mapping can define a baseline performance, above which its relative contribution to score differences may decrease. Lastly, a putative neural processing mechanism should account for developmental trajectories of g. In average aging trajectories, gf declines from middle adulthood onward while gc continues to increase (Horn and Cattell, 1967), which has been hypothesized to reflect compensation for declining fluid abilities through reliance on knowledge (Reuter-Lorenz and Cappell, 2008). However, intraindividual aging trajectories revealed that individuals with less decline in gf show stronger increases in gc, which is better captured by a unitary g-factor (Tucker-Drob et al., 2022). Since structure mapping can be assumed to support reciprocal interactions between fluid (gf) and crystallized (gc) intelligence (Fig. 1A), it is conceivable that a developmental decline in forming structured and abstract neural task representations could relate to this observation. So far, there is evidence of age-related instability in the spatial tuning of place and grid cells. Hippocampal place cells lose their temporal stability in older as compared with younger animals, characterized by remapping between episodes in the same environment (Barnes et al., 1997) and reduced spatial tuning (Yan et al., 2003). Furthermore, elderly as compared with younger human participants showed weaker to no grid-like representations of space during virtual navigation, reflecting the temporal instability of orientation estimates in entorhinal cortex voxels (Stangl et al., 2018).

Concluding Remarks

I suggest that combining psychometric findings with cognitive-computational neuroscience approaches can offer a novel perspective on the nature of human intelligence. Investigating the structure of cognitive abilities, as inferred from multitask performance, at the level of neural task representations can advance our understanding of generality in key processes and contribute to an empirical definition of what lets individuals perform well across diverse tasks. Conversely, psychometric findings may provide an integrative view to guide investigations of general processing principles in the brain. Whether statistical g can be translated into biological terms (e.g., neural processing mechanisms) at all is debated (Gottfredson, 2016; Kovacs and Conway, 2016). This is less contentious for the broad abilities in the CHC taxonomy (Fig. 1A), and a combined approach may at least approximate process-generality at the domain level.

An influential view on the relation between g and cognitive processing has been based on the consistency of parietal and prefrontal effects (Jung and Haier, 2007) and focused on the role of attention and executive processing in cognition (Duncan, 2010; 2013). It was hypothesized that prefrontal–parietal cortices might contribute to intelligence by segmenting tasks into subgoals and selectively focusing attention on those (Duncan, 2013). While the present viewpoint shifts the focus to structural abstraction and generalization in the hippocampal–entorhinal and prefrontal–parietal systems as processes with a broad impact on cognition, both accounts might complement each other.

Could lesion studies provide insight into the role of hippocampal and parietofrontal structure coding in intelligence? While systematic data on pre- to postlesion intelligence tests are lacking, the case study of H.M. reported no reduction in IQ from before to after his medial temporal lobe resection, which, however, might be attributed to the reduction in epileptic activity and the cognitive impairment related to it. Postmortem histology further revealed less extensive and less selective hippocampal damage than previously reported (Annese et al., 2014). Inferring function from lesion cases is thus challenging, also because lesions can be compensated for by, or affect functioning in, interconnected regions. Structure mapping in particular could contribute to g also via interactive and long-term effects (Fig. 1A), which could obscure potential immediate effects of regional damage. For further related considerations, it is important to first understand the specific roles of, and interactions between, hippocampal and prefrontal–parietal areas in structure mapping.

While humans' analogical reasoning capacity is considered particularly advanced (Penn et al., 2008), the critical neural structures in the hippocampal system are similar across species. The interaction with a larger PFC in humans might contribute to this advance, potentially by facilitating higher levels of abstraction. In particular, long-term consolidation processes and the presence of language may enhance abstraction. Thus, while cells in the hippocampal system share a similar computational blueprint across species, their interconnectedness with the human neocortex may elevate the impact of these mechanisms on cognition.

This viewpoint highlights the potential of empirically testing process accounts of intelligence at the level of neural task representations. Insight into the underlying processing mechanisms may lead to a re-evaluation of previous beliefs. For instance, evidence that training on a specific task can generalize to other tasks (far transfer) is scarce and controversial (Guerin et al., 2021; Ripp et al., 2022). Having defined general factors at the level of neural computations, one might test whether training targeted to those computations can yield transfer effects.

From a methodological perspective, planning corresponding studies could benefit from characterizing the retest stability of relevant neural representational measures. Furthermore, as with research on any single cognitive faculty, multitask assessments are influenced by preconceptions about the tested construct during task selection. Since tasks are inherently culture-specific (e.g., with respect to the content and familiarity of the material and the form in which tests are administered), so is their potential to access the underlying construct of interest. While the generalizability of the discovered factor model can be explored further, combining psychometric and neurocomputational approaches may be a critical step toward identifying general principles underlying human cognition.

Footnotes

  • The author declares no competing financial interests.

  • Correspondence should be addressed to Stephanie Theves at stephanie.theves@ae.mpg.de.

SfN exclusive license.

References

  1. Aho K, Roads BD, Love BC (2023) Signatures of cross-modal alignment in children's early concepts. Proc Natl Acad Sci U S A 120:e2309688120. https://doi.org/10.1073/pnas.2309688120
  2. American Psychological Association (2015) APA dictionary of psychology, 2nd ed.
  3. Annese J, et al. (2014) Postmortem examination of patient H.M.'s brain based on histological sectioning and digital 3D reconstruction. Nat Commun 5:3122. https://doi.org/10.1038/ncomms4122
  4. Arendasy ME, Hergovich A, Sommer M (2008) Investigating the ‘g’-saturation of various stratum-two factors using automatic item generation. Intelligence 36:574–583. https://doi.org/10.1016/j.intell.2007.11.005
  5. Aronov D, Nevers R, Tank DW (2017) Mapping of a non-spatial dimension by the hippocampal-entorhinal circuit. Nature 543:719–722. https://doi.org/10.1038/nature21692
  6. Bakermans JW, Warren J, Whittington JCR, Behrens TEJ (2023) Constructing future behaviour in the hippocampal formation through composition and replay. bioRxiv 2023.04.07.536053.
  7. Bao X, Gjorgieva E, Shanahan LK, Howard JD, Kahnt T, Gottfried JA (2019) Grid-like neural representations support olfactory navigation of a two-dimensional odor space. Neuron 102:1066–1075.e5. https://doi.org/10.1016/j.neuron.2019.03.034
  8. Barbey AK (2018) Network neuroscience theory of human intelligence. Trends Cogn Sci 22:8–20. https://doi.org/10.1016/j.tics.2017.10.001
  9. Barnes CA, Suster MS, Shen J, McNaughton BL (1997) Multistability of cognitive maps in the hippocampus of old rats. Nature 388:272–275. https://doi.org/10.1038/40859
  10. Bernardi S, Benna MK, Rigotti M, Munuera J, Fusi S, Salzman CD (2020) The geometry of abstraction in the hippocampus and prefrontal cortex. Cell 183:954–967.e21. https://doi.org/10.1016/j.cell.2020.09.031
  11. Broschard MB, Kim J, Love BC, Halverson HE, Freeman JH (2024) Disrupting dorsal hippocampus impairs category learning in rats. Neurobiol Learn Mem 212:107941. https://doi.org/10.1016/j.nlm.2024.107941
  12. Bunsey M, Eichenbaum H (1996) Conservation of hippocampal memory function in rats and humans. Nature 379:255–257. https://doi.org/10.1038/379255a0
  13. Caemmerer JM, Keith TZ, Reynolds MR (2020) Beyond individual intelligence tests: application of Cattell–Horn–Carroll theory. Intelligence 79:101433. https://doi.org/10.1016/j.intell.2020.101433
  14. Calvin CM, et al. (2017) Childhood intelligence in relation to major causes of death in 68 year follow-up: prospective population study. Brit Med J 357:2708. https://doi.org/10.1136/bmj.j2708
  15. Carroll JB (1993) Human cognitive abilities: a factor-analytic study. Cambridge: Cambridge University Press.
  16. Chen HT, Manning JR, van der Meer MAA (2021) Between-subject prediction reveals a shared representational geometry in the rodent hippocampus. Curr Biol 31:4293–4304.e5. https://doi.org/10.1016/j.cub.2021.07.061
  17. Colom R, Jung RE, Haier RJ (2006) Distributed brain sites for the g-factor of intelligence. Neuroimage 31:1359–1365. https://doi.org/10.1016/j.neuroimage.2006.01.006
  18. Coyle TR (2001) IQ is related to the worst performance rule in a memory task involving children. Intelligence 29:117–129. https://doi.org/10.1016/S0160-2896(00)00044-1
  19. Coyle TR (2003) A review of the worst performance rule: evidence, theory, and alternative hypotheses. Intelligence 31:567–587. https://doi.org/10.1016/S0160-2896(03)00054-0
  20. Deary IJ, Strand S, Smith P, Fernandes C (2007) Intelligence and educational achievement. Intelligence 35:13–21. https://doi.org/10.1016/j.intell.2006.02.001
  21. Deary IJ, Pattie A, Starr JM (2013) The stability of intelligence from age 11 to age 90 years: the Lothian birth cohort of 1921. Psychol Sci 24:2361–2368. https://doi.org/10.1177/0956797613486487
  22. Devito LM, Kanter BR, Eichenbaum H (2010) The hippocampus contributes to memory expression during transitive inference in mice. Hippocampus 20:208–217. https://doi.org/10.1002/hipo.20610
  23. Doeller CF, Barry C, Burgess N (2010) Evidence for grid cells in a human memory network. Nature 463:657–661. https://doi.org/10.1038/nature08704
  24. Dunbar K (1995) How scientists really reason: scientific reasoning in real-world laboratories. In: The nature of insight (Sternberg RJ, Davidson JE, eds), pp 365–395. The MIT Press.
  25. Dunbar KN, Klahr D (2012) Scientific thinking and reasoning. In: The Oxford handbook of thinking and reasoning (Holyoak KJ, Morrison RG, eds), pp 701–718. New York: Oxford University Press.
  26. Duncan J (2010) The multiple-demand (MD) system of the primate brain: mental programs for intelligent behaviour. Trends Cogn Sci 14:172–179. https://doi.org/10.1016/j.tics.2010.01.004
  27. Duncan J (2013) The structure of cognition: attentional episodes in mind and brain. Neuron 80:35–50. https://doi.org/10.1016/j.neuron.2013.09.015
  28. Ebitz RB, Hayden BY (2021) The population doctrine in cognitive neuroscience. Neuron 109:3055–3068. https://doi.org/10.1016/j.neuron.2021.07.011
  29. Eichenbaum H, Dudchenko P, Wood E, Shapiro M, Tanila H (1999) The hippocampus, memory, and place cells: is it spatial memory or a memory space? Neuron 23:209–226. https://doi.org/10.1016/s0896-6273(00)80773-4
  30. Genç E, Fraenz C, Schlüter C, Friedrich P, Hossiep R, Voelkle MC, Ling JM, Güntürkün O, Jung RE (2018) Diffusion markers of dendritic density and arborization in gray matter predict differences in intelligence. Nat Commun 9:1905. https://doi.org/10.1038/s41467-018-04268-8
  31. Gentner D (1983) Structure-mapping: a theoretical framework for analogy. Cogn Sci 7:155–170. https://doi.org/10.1016/S0364-0213(83)80009-3
  32. Gentner D, Medina J (1998) Similarity and the development of rules. Cognition 65:263–297. https://doi.org/10.1016/S0010-0277(98)00002-X
  33. Gentner D, Smith L (2012) Analogical reasoning. Encycl Hum Behav 2:130–136. https://doi.org/10.1016/B978-0-12-375000-6.00022-7
  34. Gentner D, Toupin C (1986) Systematicity and surface similarity in the development of analogy. Cogn Sci 10:277–300. https://doi.org/10.1207/s15516709cog1003_2
  35. Gick ML, Holyoak KJ (1980) Analogical problem solving. Cogn Psychol 12:306–355. https://doi.org/10.1016/0010-0285(80)90013-4
  36. Gick ML, Holyoak KJ (1983) Schema induction and analogical transfer. Cogn Psychol 15:1–38. https://doi.org/10.1016/0010-0285(83)90002-6
  37. Gottfredson L (2016) A g theorist on why Kovacs and Conway’s process overlap theory amplifies, not opposes, g theory. Psychol Inq 27:210–217. https://doi.org/10.1080/1047840X.2016.120323
  38. Griffiths TL (2020) Understanding human intelligence through human limitations. Trends Cogn Sci 24:873–883. https://doi.org/10.1016/j.tics.2020.09.001
  39. Guerin JM, Wade SL, Mano QR (2021) Does reasoning training improve fluid reasoning and academic achievement for children and adolescents? A systematic review. Trends Neurosci Educ 23:100153. https://doi.org/10.1016/j.tine.2021.100153
  40. Gupta AS, van der Meer MA, Touretzky DS, Redish AD (2010) Hippocampal replay is not a simple function of experience. Neuron 65:695–705. https://doi.org/10.1016/j.neuron.2010.01.034
  41. Hafting T, Fyhn M, Molden S, Moser MB, Moser EI (2005) Microstructure of a spatial map in the entorhinal cortex. Nature 436:801–806. https://doi.org/10.1038/nature03721
  42. Holyoak KJ, Thagard P (1995) Mental leaps: analogy in creative thought. The MIT Press. https://doi.org/10.7551/mitpress/4549.001.0001
  43. Horn JL, Cattell RB (1967) Age differences in fluid and crystallized intelligence. Acta Psychol (Amst) 26:107–129. https://doi.org/10.1016/0001-6918(67)90011-x
  44. Horn JL, McArdle JJ (2007) In: Factor analysis at 100: historical developments and future directions (Cudeck R, MacCallum RC, eds), pp 205–247. Lawrence Erlbaum Associates Publishers.
  45. Høydal ØA, Skytøen ER, Andersson SO, Moser MB, Moser EI (2019) Object-vector coding in the medial entorhinal cortex. Nature 568:400–404. https://doi.org/10.1038/s41586-019-1077-7
  46. Hubbard EM, Piazza M, Pinel P, Dehaene S (2005) Interactions between number and space in parietal cortex. Nat Rev Neurosci 6:435–448. https://doi.org/10.1038/nrn1684
  47. Jensen AR (1981) Reaction time and intelligence. In: Intelligence and learning (Friedman M, Das JP, O’Connor N, eds), pp 39–50. New York: Plenum.
  48. Johnson W, Bouchard Jr TJ, Krueger RF, McGue M, Gottesman II (2004) Just one g: consistent results from three test batteries. Intelligence 32:95–107. https://doi.org/10.1016/S0160-2896(03)00062-X
  49. Johnson W, te Nijenhuis J, Bouchard Jr TJ (2008) Still just 1 g: consistent results from five test batteries. Intelligence 36:81–95. https://doi.org/10.1016/j.intell.2007.06.001
  50. Jung RE, Haier RJ (2007) The parieto-frontal integration theory (P-FIT) of intelligence: converging neuroimaging evidence. Behav Brain Sci 30:135–187. https://doi.org/10.1017/S0140525X0700118
  51. Killian NJ, Buffalo EA (2018) Grid cells map the visual world. Nat Neurosci 21:161–162. https://doi.org/10.1038/s41593-017-0062-4
  52. Koster R, Chadwick MJ, Chen Y, Berron D, Banino A, Düzel E, Hassabis D, Kumaran D (2018) Big-loop recurrence within the hippocampal system supports integration of information across episodes. Neuron 99:1342–1354.e6. https://doi.org/10.1016/j.neuron.2018.08.009
  53. Kovacs K, Conway ARA (2016) Process overlap theory: a unified account of the general factor of intelligence. Psychol Inq 27:151–177. https://doi.org/10.1080/1047840X.2016.1153946
  54. Kumaran D, Summerfield JJ, Hassabis D, Maguire EA (2009) Tracking the emergence of conceptual knowledge during human decision making. Neuron 63:889–901. https://doi.org/10.1016/j.neuron.2009.07.030
  55. Kutter EF, Bostroem J, Elger CE, Mormann F, Nieder A (2018) Single neurons in the human brain encode numbers. Neuron 100:753–761.e4. https://doi.org/10.1016/j.neuron.2018.08.036
  56. Lake BM, Salakhutdinov R, Tenenbaum JB (2015) Human-level concept learning through probabilistic program induction. Science 350:1332–1338. https://doi.org/10.1126/science.aab3050
  57. Larson GE, Alderton DL (1990) Reaction time variability and intelligence: a “worst performance” analysis of individual differences. Intelligence 14:309–325. https://doi.org/10.1016/0160-2896(90)90021-K
  58. Luyckx F, Nili H, Spitzer B, Summerfield C (2019) Neural structure mapping in human probabilistic reward learning. Elife 8:e42816. https://doi.org/10.7554/eLife.42816
  59. Mack ML, Love BC, Preston AR (2018) Building concepts one episode at a time: the hippocampus and concept formation. Neurosci Lett 680:31–38. https://doi.org/10.1016/j.neulet.2017.07.061
  60. Marcus G, Davis E (2019) Rebooting AI: building artificial intelligence we can trust. New York: Pantheon Books.
  61. Meshulam M, Hasenfratz L, Hillman H, Liu YF, Nguyen M, Norman KA, Hasson U (2021) Neural alignment predicts learning outcomes in students taking an introduction to computer science course. Nat Commun 12:1922. https://doi.org/10.1038/s41467-021-22202-3
  62. Morgan LK, Macevoy SP, Aguirre GK, Epstein RA (2011) Distances between real-world locations are represented in the human hippocampus. J Neurosci 31:1238–1245. https://doi.org/10.1523/JNEUROSCI.4667-10.2011
  63. Morton NW, Schlichting ML, Preston AR (2020) Representations of common event structure in medial temporal lobe and frontoparietal cortex support efficient inference. Proc Natl Acad Sci U S A 117:29338–29345. https://doi.org/10.1073/pnas.191233811
  64. Neubauer AC, Fink A (2009) Intelligence and neural efficiency. Neurosci Biobehav Rev 33:1004–1023. https://doi.org/10.1016/j.neubiorev.2009.04.001
  65. Newell A, Simon HA (1972) Human problem solving. Prentice-Hall.
  66. Nieder A (2016) The neuronal code for number. Nat Rev Neurosci 17:366–382. https://doi.org/10.1038/nrn.2016.40
  67. Nieh EH, Schottdorf M, Freeman NW, Low RJ, Lewallen S, Koay SA, Pinto L, Gauthier JL, Brody CD, Tank DW (2021) Geometry of abstract learned knowledge in the hippocampus. Nature 595:80–84. https://doi.org/10.1038/s41586-021-03652-7
  68. Okazawa G, Hatch CE, Mancoo A, Machens CK, Kiani R (2021) Representational geometry of perceptual decisions in the monkey parietal cortex. Cell 184:3748–3761.e18. https://doi.org/10.1016/j.cell.2021.05.022
  69. O'Keefe J, Dostrovsky J (1971) The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely-moving rat. Brain Res 34:171–175. https://doi.org/10.1016/0006-8993(71)90358-1
  70. Park SA, Miller DS, Boorman ED (2021) Inferences on a multidimensional social hierarchy use a grid-like code. Nat Neurosci 24:1292–1301. https://doi.org/10.1038/s41593-021-00916-3
  71. Penke L, Maniega SM, Bastin ME, Valdés Hernández MC, Murray C, Royle NA, Starr JM, Wardlaw JM, Deary IJ (2012) Brain white matter tract integrity as a neural foundation for general intelligence. Mol Psychiatry 17:1026–1030. https://doi.org/10.1038/mp.2012.66
  72. Penn DC, Holyoak KJ, Povinelli DJ (2008) Darwin's mistake: explaining the discontinuity between human and nonhuman minds. Behav Brain Sci 31:109–130. https://doi.org/10.1017/S0140525X08003543
  73. Pietschnig J, Penke L, Wicherts JM, Zeiler M, Voracek M (2015) Meta-analysis of associations between human brain volume and intelligence differences: how strong are they and what do they mean? Neurosci Biobehav Rev 57:411–432. https://doi.org/10.1016/j.neubiorev.2015.09.017
  74. Reuter-Lorenz PA, Cappell KA (2008) Neurocognitive aging and the compensation hypothesis. Curr Dir Psychol Sci 17:177–182. https://doi.org/10.1111/j.1467-8721.2008.00570
  75. Reynolds MR, Floyd RG, Niileksela CR (2013) How well is psychometric g indexed by global composites? Evidence from three popular intelligence tests. Psychol Assess 25:1314–1321. https://doi.org/10.1037/a0034102
  76. Richland LE, Zur O, Holyoak KJ (2007) Cognitive supports for analogies in the mathematics classroom. Science 316:1128–1129. https://doi.org/10.1126/science.1142103
  77. Ripp I, Emch M, Wu Q, Lizarraga A, Udale R, von Bastian CC, Koch K, Yakushev I (2022) Adaptive working memory training does not produce transfer effects in cognition and neuroimaging. Transl Psychiatry 12:512. https://doi.org/10.1038/s41398-022-02272-7
  78. Roth B, Becker N, Romeyke S, Schäfer S, Domnick F, Spinath FM (2015) Intelligence and school grades: a meta-analysis. Intelligence 53:118–137. https://doi.org/10.1016/j.intell.2015.09.002
  79. Samborska V, Butler JL, Walton ME, Behrens TEJ, Akam T (2022) Complementary task representations in hippocampus and prefrontal cortex for generalizing the structure of problems. Nat Neurosci 25:1314–1326. https://doi.org/10.1038/s41593-022-01149-8
  80. Schäfer T, Thalmann M, Schulz E, Doeller CF, Theves S (2024) The hippocampus support interpolation into new states during category abstraction. bioRxiv 2024.05.14.594185.
  81. Schmidt FL, Hunter J (2004) General mental ability in the world of work: occupational attainment and job performance. J Pers Soc Psychol 86:162–173. https://doi.org/10.1037/0022-3514.86.1.162
  82. Schmithorst VJ, Wilke M, Dardzinski BJ, Holland SK (2005) Cognitive functions correlate with white matter architecture in a normal pediatric population: a diffusion tensor MRI study. Hum Brain Mapp 26:139–147. https://doi.org/10.1002/hbm.20149
  83. Sheahan H, Luyckx F, Nelli S, Teupe C, Summerfield C (2021) Neural state space alignment for magnitude generalization in humans and recurrent networks. Neuron 109:1214–1226.e8. https://doi.org/10.1016/j.neuron.2021.02.004
  84. Snow RE, Kyllonen PC, Marshalek B (1984) The topography of ability and learning correlations. In: Advances in the psychology of human intelligence (Sternberg RJ, ed), pp 47–103. Hillsdale, NJ: Erlbaum.
  85. Solstad T, Boccara CN, Kropff E, Moser MB, Moser EI (2008) Representation of geometric borders in the entorhinal cortex. Science 322:1865–1868. https://doi.org/10.1126/science.1166466
  86. Spearman C (1904) General intelligence, objectively determined and measured. Am J Psychol 15:201–293. https://doi.org/10.2307/1412107
  87. Stangl M, Achtzehn J, Huber K, Dietrich C, Tempelmann C, Wolbers T (2018) Compromised grid-cell-like representations in old age as a key mechanism to explain age-related navigational deficits. Curr Biol 28:1108–1115.e6. https://doi.org/10.1016/j.cub.2018.02.038
  88. Strenze T (2007) Intelligence and socioeconomic success: a meta-analytic review of longitudinal research. Intelligence 35:401–426. https://doi.org/10.1016/j.intell.2006.09.004
  89. Sun C, Yang W, Martin J, Tonegawa S (2020) Hippocampal neurons represent events as transferable units of experience. Nat Neurosci 23:651–663. https://doi.org/10.1038/s41593-020-0614-x
  90. Tavares RM, Mendelsohn A, Grossman Y, Williams CH, Shapiro M, Trope Y, Schiller D (2015) A map for social navigation in the human brain. Neuron 87:231–243. https://doi.org/10.1016/j.neuron.2015.06.011
  91. Tenderra RM, Theves S (2024a) The relation between cognitive maps and cognitive abilities. Poster abstract, Society for Neuroscience (SfN) Annual Meeting, Chicago, US.
  92. ↵
    1. Tenderra RM,
    2. Theves S
    (2024b) Fluid intelligence relates to neural measures of cognitive map formation. bioarXiv. https://doi.org/10.1101/2024.12.28.630614
  93. ↵
    1. Tenenbaum JB,
    2. Kemp C,
    3. Griffiths TL,
    4. Goodman ND
    (2011) How to grow a mind: statistics, structure, and abstraction. Science 331:1279–1285. https://doi.org/10.1126/science.1192788
    OpenUrlAbstract/FREE Full Text
  94. ↵
    1. Theves S,
    2. Fernandez G,
    3. Doeller CF
    (2019) The hippocampus encodes distances in multidimensional feature space. Curr Biol 29:1226–1231.e3. https://doi.org/10.1016/j.cub.2019.02.035
    OpenUrlCrossRefPubMed
  95. ↵
    1. Theves S,
    2. Fernández G,
    3. Doeller CF
    (2020) The hippocampus maps concept space, not feature space. J Neurosci 40:7318–7325. https://doi.org/10.1523/JNEUROSCI.0494-20.2020 pmid:32826311
    OpenUrlAbstract/FREE Full Text
  96. ↵
    1. Theves S,
    2. Schäfer TAJ,
    3. Reisner V,
    4. de Cothi W,
    5. Barry C
    (2024) Category boundaries modulate memory in a place-cell-like manner. Curr Biol 34:5546–5553.e3. https://doi.org/10.1016/j.cub.2024.09.083
    OpenUrlCrossRefPubMed
  97. ↵
    1. Tucker-Drob EM,
    2. de la Fuente J,
    3. Köhncke Y,
    4. Brandmaier AM,
    5. Nyberg L,
    6. Lindenberger U
    (2022) A strong dependency between changes in fluid and crystallized abilities in human cognitive aging. Sci Adv 8:eabj2422. https://doi.org/10.1126/sciadv.abj2422 pmid:35108051
    OpenUrlCrossRefPubMed
  98. ↵
    1. van den Heuvel MP,
    2. Stam CJ,
    3. Kahn RS,
    4. Hulshoff Pol HE
    (2009) Efficiency of functional brain networks and intellectual performance. J Neurosci 29:7619–7624. https://doi.org/10.1523/JNEUROSCI.1443-09.2009 pmid:19515930
    OpenUrlAbstract/FREE Full Text
  99. ↵
    1. Van der Maas HLJ,
    2. Dolan CV,
    3. Grasman RPPP,
    4. Wicherts JM,
    5. Huizenga HM,
    6. Raijmakers MEJ
    (2006) A dynamical model of general intelligence: the positive manifold of intelligence by mutualism. Psychol Rev 113:842–861.
    OpenUrlCrossRefPubMed
  100. ↵
    1. Whittington JCR,
    2. Muller TH,
    3. Mark S,
    4. Chen G,
    5. Barry C,
    6. Burgess N,
    7. Behrens TEJ
    (2020) The Tolman-Eichenbaum machine: unifying space and relational memory through generalization in the hippocampal formation. Cell 183:1249–1263.e23. https://doi.org/10.1016/j.cell.2020.10.024 pmid:33181068
    OpenUrlCrossRefPubMed
  101. ↵
    1. Yan J,
    2. Zhang Y,
    3. Roder J,
    4. McDonald RJ
    (2003) Aging effects on spatial tuning of hippocampal place cells in mice. Exp Brain Res 150:184–193. https://doi.org/10.1007/s00221-003-1396-6
    OpenUrlCrossRefPubMed
  102. ↵
    1. Zaki SR
    (2004) Is categorization performance really intact in amnesia? A meta-analysis. Psychon Bull Rev 11:1048–1054. https://doi.org/10.3758/bf03196735
    OpenUrlCrossRefPubMed
  103. ↵
    1. Zeithamova D,
    2. Bowman CR
    (2020) Generalization and the hippocampus: more than one story? Neurobiol Learn Mem 175:107317. https://doi.org/10.1016/j.nlm.2020.107317 pmid:33007461
    OpenUrlCrossRefPubMed
  104. ↵
    1. Zeithamova D,
    2. Preston AR
    (2010) Flexible memories: differential roles for medial temporal lobe and prefrontal cortex in cross-episode binding. J Neurosci 30:14676–14684. https://doi.org/10.1523/JNEUROSCI.3250-10.2010 pmid:21048124
    OpenUrlAbstract/FREE Full Text