Abstract
What is the secret of human intelligence? A key discovery in psychology is that performance correlations across diverse cognitive tasks are explained by a few broad abilities and one overarching general factor, which is also predictive of real-life achievements. Whether these factors correspond to biological processes is a century-old debate. While previous research focused on localizing their correlates in brain structure, connectivity, and activation levels, the mechanisms of neural information processing related to intelligence remain unexplored. To provide a novel perspective on this topic, I outline a new approach that integrates psychometrics with neuroscientific advances in identifying the computations underlying single tasks from their representational geometry. In particular, I propose a neural process account of the general factor that builds on the central role of structure mapping, the process of abstracting and reasoning based on relational knowledge, in human cognition. Neural coding properties in the hippocampal and prefrontal–parietal systems that enable inferential leaps through structural abstraction might contribute to the general factor. In general, integrating neuro-representational and psychometric research has the potential to uncover core principles of natural intelligence.
The Challenge of Definition
What is the secret of human intelligence? The question is at the heart of the cognitive sciences, including psychology, neuroscience, and artificial intelligence. Addressing it is challenging because it first requires defining and measuring intelligence. In psychology, intelligence is an umbrella term for the sum of cognitive capacities (APA, 2015) and has been studied through performance patterns in multitask batteries (Box 1). Relatedly, a key goal is to explain the generality with which humans can perform multiple tasks in different domains. This article is concerned with psychometric research on the structure of cognitive abilities, particularly the quantification of overlap between diverse cognitive tasks, and with how these insights can effectively be integrated with advances in cognitive and systems neuroscience.
Positive Manifold and the Structure of Intelligence
A key psychometric discovery is the positive manifold, the observation that individuals who perform well on one cognitive task tend to perform well on others; it is quantified as the general factor g (Spearman, 1904). Specifically, g is derived through factor analysis, a statistical technique that identifies the extent of shared variance among performances across diverse tasks. Decades of psychometric research have converged on the well-validated (Caemmerer et al., 2020) Cattell–Horn–Carroll (CHC) taxonomy, which posits a hierarchical factor structure of cognitive abilities (Carroll, 1993): a few broad abilities that underlie a wide range of specific skills load to different degrees on a common factor g. Among these broad abilities, fluid intelligence (gf), the capacity for reasoning and problem solving, is statistically almost identical to g. Reasoning tasks, such as progressive matrices, series completion, and analogies, show the highest correlation with most other tasks and provide a strong indicator of g (Arendasy et al., 2008; Reynolds et al., 2013; Caemmerer et al., 2020). As g represents the largest component of the common factor variance, the best single predictor of educational, occupational, and health outcomes (Schmidt and Hunter, 2004; Deary et al., 2007; Strenze, 2007; Duncan, 2013; Roth et al., 2015; Calvin et al., 2017), and the most stable trait across development (Deary et al., 2013), these tasks might tap into cognitive mechanisms of wide real-life importance. The hierarchical factor structure (Fig. 1A; CHC model) has been replicated across different samples and independently of the test battery used, with almost perfectly correlated g-factors (Johnson et al., 2004; 2008; Caemmerer et al., 2020). This implies that g does not depend on the specific selection of tasks and reinforces the notion of a unitary construct. However, there is no agreement on how to interpret g. While g is established as a psychometric measure of the positive manifold (Johnson et al., 2008), its nature as a psychological construct remains debated. In particular, the debate revolves around whether g can be equated with cognitive or biological variables and whether a causal factor underlies this latent variable. The dominant view considers g an emergent unitary property of interacting neural systems or processes (Gottfredson, 2016). Theoretical accounts vary in the degree to which g is taken to correspond to cognitive or neural variables that exist intraindividually and can be considered a trait (g-theory) or to merely reflect a statistical index (e.g., sampling theory, mutualism; see Van der Maas et al., 2006; Gottfredson, 2016; Kovacs and Conway, 2016, for further discussion of these views).
Construction and application of intelligence tests. During the construction of an intelligence test, a battery of tasks is selected to cover a broad range of cognitive skills and is administered to a large sample. Factor analysis then identifies underlying factors that explain the patterns of performance correlations among the tasks. Factor loadings represent the strength and direction of the relationship between each task and the underlying factor. For example, a high positive factor loading indicates that performance on a specific task is strongly related to the factor being measured. Tasks with higher factor loadings have a greater influence on the factor score. Tests are constructed and evaluated based on certain quality criteria (e.g., objectivity, reliability, and validity of test items) on large cohorts of participants, often grouped by broad demographic variables including age and educational attainment. An individual's test scores can then be combined with the factor loadings obtained from the factor analysis to estimate that person's factor scores.
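To make this logic concrete, the following minimal sketch simulates a small battery of correlated task scores and recovers a single common factor; the task names, loadings, and sample size are illustrative and not drawn from any published battery.

```python
# Minimal sketch of the factor-analytic logic described in Box 1, on
# simulated data; all task names and parameter values are hypothetical.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_people = 2000
tasks = ["matrices", "analogies", "vocabulary", "digit_span", "rotation"]
loadings_true = np.array([0.8, 0.75, 0.6, 0.5, 0.55])  # hypothetical g-loadings

g = rng.standard_normal(n_people)                      # latent general factor
noise = rng.standard_normal((n_people, len(tasks)))
scores = g[:, None] * loadings_true + noise * np.sqrt(1 - loadings_true**2)

# Positive manifold: all between-task correlations are positive.
print(np.corrcoef(scores.T).round(2))

# A one-factor model recovers the loadings and yields factor (g) scores.
fa = FactorAnalysis(n_components=1, random_state=0).fit(scores)
print(dict(zip(tasks, fa.components_.ravel().round(2))))
g_hat = fa.transform(scores).ravel()                   # estimated factor scores
# The sign of a latent factor is arbitrary, so compare up to sign.
print(f"|corr(estimated g, true g)| = {abs(np.corrcoef(g_hat, g)[0, 1]):.2f}")
```

In real test construction, hierarchical models with several broad factors (as in the CHC taxonomy) would be fitted rather than this single-factor toy, but the extraction logic is the same.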
Investigating the relation between the structure of cognitive abilities and neural mechanisms of structure mapping. A, Illustration of how neural structure mapping may contribute to the emergence of g. Depicted is a reduced version of the CHC taxonomy (Carroll, 1993). A few broad abilities that underlie a range of ∼80 narrow abilities (opaque) load to different degrees on a common factor g. Thickness of lines reflects g-loading. Gf, gc, glm, gv (reasoning, knowledge, learning and memory, visuospatial reasoning) show the highest g-loading among second-level factors (Caemmerer et al., 2020) with relational reasoning tasks being excellent proxies of g. Bold vertical arrows depict hypothesized direct effects, and horizontal arrows depict mutually reinforcing interactions between reasoning (gf), knowledge (gc), and learning (glm) via structure mapping. B, Structural priors (gray) and alignment with known structures (green) may help to draw inferences about novel concepts (blue). Figure logic adapted from Gentner and Smith (2012). C, Moving from localization-focused neuroimaging approaches to studying g-related differences in representational geometry (left; example of a decision-making task space from Okazawa et al., 2021) may offer insights at the intersections of cognitive science disciplines (right).
Neural Correlates of Intelligence Factors
While intelligence is one of the most thoroughly investigated traits in differential psychology, the neural underpinnings of this latent construct remain unresolved. This may invite a recalibration of what we aim to understand at this stage. Previous investigations of its neurobiological correlates focused on localizing the brain regions that show intelligence-related variation in anatomical structure [e.g., total or region-specific gray matter volume (Colom et al., 2006; Pietschnig et al., 2015), white matter integrity (Schmithorst et al., 2005; Penke et al., 2012), microstructure (Genç et al., 2018)], as well as intrinsic structural and functional connectivity (van den Heuvel et al., 2009; Barbey, 2018) and task-related activation strength (Jung and Haier, 2007; Duncan, 2010). Some functional studies contrasted highly g-loaded tasks with control tasks rather than studying interindividual differences. Overall, neuroimaging studies predominantly located g-related variance within brain structure and activation levels in the parietofrontal multiple demand system (Jung and Haier, 2007; Duncan, 2010). In a processing-related account, this has been attributed to attentional control mechanisms for goal-directed problem solving (Duncan, 2013). Another prominent hypothesis related g to lower energy consumption and more efficient processing, based on early PET and fMRI studies, some of which reported negative correlations between intelligence measures and activation in parietofrontal regions, while others observed the opposite relationship (Jung and Haier, 2007; Neubauer and Fink, 2009). Furthermore, several measures of structural or functional network connectivity have been related to g (e.g., how flexibly different network states reorganize; Barbey, 2018), as have most neural measures to date, each with relatively low correlation strength. Importantly, while the localization-centered brain imaging approach in this field has been instrumental, it is less well suited to address how, as opposed to where in the brain, information processing differs with respect to intelligence factors. To better understand the nature of human intelligence and how its arguably distinctive qualities are afforded by the brain, a fruitful approach might be to now study how common factors of multitask performance relate to interindividual differences in neural task representations, which can provide a window into the underlying computations.
The Quest for a Neural Process Account and the Neural Geometry of Task Representations
First considerations of a process account date back to the discovery of the positive manifold itself, as indirectly reflected in Spearman's noegenetic laws (Horn and McArdle, 2007). Advances in cognitive neuroscience now provide the tools to investigate the underlying computations via neural task representations. The approach rests on two premises: that the format in which information (e.g., task variables) is represented can be more or less advantageous to mental operations (Bernardi et al., 2020; Ebitz and Hayden, 2021) and that representational geometries reflecting the same content or experience are in principle shared across individuals (Chen et al., 2021; Meshulam et al., 2021). For instance, in real-life learning scenarios, such as students taking a computer science course, the alignment of individual learners' neural representations with those of experts and other learners increased with learning and predicted performance on the final exam (Meshulam et al., 2021). Thus, if g results from variance in a set of key neural information processing principles, interindividual differences in broad cognitive abilities might be reflected in deviations from (multi)task-optimal neural geometries. As outlined above, factor-analytical research revealed that the factor gf, representing the general ability for reasoning, is statistically almost identical to g (Caemmerer et al., 2020). From these statistical patterns, it can be hypothesized that g is related to neural formats that facilitate reasoning and that such formats are particularly reflected in those factors that are the strongest indicators of overall intelligence (Fig. 1A). The emphasis on reasoning aligns with seemingly distinctive facets of human intelligence compared with AI or other species. AI can fall short of the human capacity to make context-appropriate inferences from ambiguous input that align with common sense (Marcus and Davis, 2019). What makes human thinking so flexible? A central perspective in cognitive science is that human reasoning often depends on analogical reasoning and that humans understand new situations in alignment with existing knowledge structures (Gentner, 1983; Tenenbaum et al., 2011). The following outlines why neural structure mapping, that is, the brain mechanisms that extract structural information from experiences and retrieve it to predict, interpret, and learn in new situations, may reflect a neurocognitive processing trait related to the emergence of a general factor.
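As a schematic illustration of the second premise (a toy sketch, not the analysis pipeline of Meshulam et al., 2021), the alignment of representational geometries between two individuals can be quantified by correlating their representational dissimilarity matrices:

```python
# Toy sketch: compare two individuals' representational geometries by
# correlating their representational dissimilarity matrices (RDMs).
# Data are simulated; item and unit counts are arbitrary.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_items, n_units = 12, 100            # e.g., 12 course concepts, 100 voxels

expert = rng.standard_normal((n_items, n_units))
# A "learner" whose item representations partially align with the expert's.
learner = 0.6 * expert + 0.4 * rng.standard_normal((n_items, n_units))

rdm_expert = pdist(expert, metric="correlation")    # pairwise item distances
rdm_learner = pdist(learner, metric="correlation")

rho, _ = spearmanr(rdm_expert, rdm_learner)
print(f"geometry alignment (Spearman rho): {rho:.2f}")
```

Comparing geometries (patterns of pairwise distances) rather than raw activity sidesteps the problem that neurons or voxels do not correspond one-to-one across brains.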
Structure Mapping as a g-Related Mechanism
Analogical reasoning entails drawing inferences based on abstract relational similarities rather than perceptual similarity or statistical covariation between two situations. For instance, understanding the analogy "an atom is like the solar system" requires evaluating the higher-order similarity of the relations in both terms (Gentner and Smith, 2012). Recognizing and exploiting relational similarities foster inferential leaps in learning and reasoning (Gentner, 1983; Lake et al., 2015) and allow for the transfer of knowledge across domains (Gick and Holyoak, 1980; 1983), which is key to creative insight and scientific advance (Holyoak and Thagard, 1995; Dunbar and Klahr, 2012). Analogical reasoning has been considered uniquely advanced and central to human cognition (Richland et al., 2007; Penn et al., 2008; Gentner and Smith, 2012; Aho et al., 2023). For instance, participants were more likely to solve a novel problem when prompted with two problem analogues rather than one, as comparing two analogues facilitates recognizing relational similarities and abstracting a solution structure beforehand (Gick and Holyoak, 1983). Analogical reasoning also appears as an unprompted natural strategy, employed, for instance, by scientists in explaining and generating new scientific concepts (Dunbar, 1995; Dunbar and Klahr, 2012). The ability to reason analogically develops gradually, reflected in a shift from relying on perceptual features to using relational similarities for problem solving in later childhood, and is considered a key driver in the development of abstract thinking and advanced reasoning (Gentner and Toupin, 1986; Gentner and Medina, 1998). Relatedly, relational reasoning is also the best single proxy measure of the general factor of intelligence (Snow et al., 1984). Without analogical reasoning, human learning would be confined to direct observation and trial and error. In contrast, analogical reasoning allows for inferences beyond immediate experience, positioning it as a mechanism that fosters creativity and abstract thought.
Accordingly, a cognitive theory of human reasoning holds that humans form abstract knowledge structures, i.e., concepts, and understand new situations by aligning them with known structures. The aligned structures provide the basis for drawing inferences via structural pattern completion, highlighting the elements missing in the new situation (Gentner and Smith, 2012; Fig. 1B). In a classic example, learning the concept of an atom is facilitated by discovering its relational similarity to the solar system, with respect to several electrons orbiting one nucleus. This relational alignment can foster inferences about further properties of atoms, such as the inference that the nucleus is more massive than the electrons. In line with this idea, a computational framework for concept learning and abstraction (Tenenbaum et al., 2011) notes that many kinds of natural data can be described by a few basic structural patterns, such as grids, circles, or trees, which reflect organizational principles across domains. For instance, tree structures can represent taxonomic hierarchies in biology as well as authority relations in corporate structures. The recurrence of such structures may facilitate the learning of new concepts by inferring and evaluating generative models of the data (Lake et al., 2015) and highlights the efficiency of structure mapping as a processing mechanism.
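The inference-by-alignment logic can be sketched in a few lines of code; the entities and relations below are illustrative stand-ins for the atom/solar system example, not a model from the cited literature.

```python
# Toy sketch of inference via structural alignment and pattern completion
# (in the spirit of Gentner and Smith, 2012); relations are hypothetical.
source = {  # known domain: solar system
    ("sun", "planet"): {"attracts", "more_massive_than", "orbited_by"},
}
target = {  # new domain: atom, only partially known
    ("nucleus", "electron"): {"attracts", "orbited_by"},
}

def align_and_infer(source, target):
    """Align source and target role pairs by their shared relations, then
    project source relations missing in the target as candidate inferences."""
    inferences = {}
    for s_pair, s_rels in source.items():
        for t_pair, t_rels in target.items():
            if s_rels & t_rels:          # relational overlap licenses alignment
                inferences[t_pair] = s_rels - t_rels
    return inferences

print(align_and_infer(source, target))
# -> {('nucleus', 'electron'): {'more_massive_than'}}
```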
If processing mechanisms underlie the general factor, their influence on the broad abilities might be proportional to the g-loading of those abilities. Neural mechanisms of relational reasoning may particularly affect those broad abilities that correlate most highly with the general factor (Fig. 1A). These are reasoning (gf), visuospatial reasoning (gv), the breadth and depth of knowledge (gc), and learning and memory (glm; Caemmerer et al., 2020). Structural knowledge can help predict, infer, and generalize, i.e., it aids reasoning (gf) and accelerates learning (glm) by enabling the acquisition of more information than is unambiguously provided by the environment. This leads to broader knowledge (gc), including more explicit concepts to draw upon, which in turn facilitates new learning (glm) and reasoning (gf). Thus, in addition to potential direct effects of structure mapping on the highly g-loaded broad abilities, mutually reinforcing interactions between them may contribute to the positive manifold via a virtuous cycle (Fig. 1A).
Neural Implementations of Structure Mapping: Cognitive Maps and Manifold Alignment
Structural abstraction via knowledge factorization in hippocampal–entorhinal cells
Ideas on how structure mapping could be implemented in the brain, in particular in the hippocampal system and prefrontal–parietal cortices, have recently gained traction. The role of the hippocampus in relational inference, as assessed with paired associative inference (Zeithamova and Preston, 2010; Koster et al., 2018; Morton et al., 2020) and transitive inference tasks (Bunsey and Eichenbaum, 1996; Eichenbaum et al., 1999; Devito et al., 2010), as well as in concept learning and categorization (Zaki, 2004; Kumaran et al., 2009; Mack et al., 2018; Theves et al., 2019; 2024; Zeithamova and Bowman, 2020; Broschard et al., 2024), is well established.
Recent computational and systems neuroscience studies further investigated the circuit-level implementation of these processes. Cells in the hippocampal–entorhinal system encode mental maps of space (O'Keefe and Dostrovsky, 1971; Hafting et al., 2005) and of relationships in nonspatial tasks alike (Aronov et al., 2017; Sun et al., 2020; Samborska et al., 2022). For instance, hippocampal place cells fire at a particular location, and the regularly repeating firing fields of entorhinal grid cells provide a general metric allowing vector computations. An integrative computational model of spatial and nonspatial inference (Whittington et al., 2020) suggests that hippocampal–entorhinal cells might support generalization by factorizing experiences into a representation of their structure, encoded by grid and vector cells in the medial entorhinal cortex, and sensory specifics, encoded in the lateral entorhinal cortex. The benefit of this factorization is that structural knowledge can be flexibly recombined to map new situations (with place cells encoding the conjunction; Fig. 2A). When the model learned different knowledge structures through factorization, its components exhibited characteristics of hippocampal–entorhinal cell firing, suggesting these cells as a biological basis. Congruent with the notion that the medial entorhinal cortex encodes structural information abstractly, i.e., independent of sensory specifics, grid cells keep their correlation structure across environments and do not remap. Given this consistency in medial entorhinal input, hippocampal remapping may also not be random, as was previously thought (Whittington et al., 2020). Relatedly, it was shown that experiences in one rat can be decoded from hippocampal representations of the same experience in another rat when place cell activity is geometrically aligned across subjects (Chen et al., 2021; Nieh et al., 2021). Also, border cells (Solstad et al., 2008) and object vector cells (Høydal et al., 2019) in the medial entorhinal cortex fire invariantly with respect to specific features, as they encode vectorial distances to any object or border and maintain their vectors across contexts. In sum, this invariance supports the idea that these firing patterns may reflect a process of abstraction (Whittington et al., 2020).
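The computational benefit of this factorization can be illustrated with a toy simulation (a sketch of the recombination logic only, not the actual model of Whittington et al., 2020; all dimensions are arbitrary):

```python
# Toy sketch of factorized coding: an abstract structural code is reused
# across environments and recombined with environment-specific sensory
# codes into place-cell-like conjunctions. Not the Tolman-Eichenbaum
# Machine itself; just the recombination logic it formalizes.
import numpy as np

rng = np.random.default_rng(2)
n_locations, d_struct, d_sense = 16, 8, 8

# Abstract structural code (medial entorhinal, grid-cell-like): one vector
# per location on a 4x4 grid, identical in every environment (no remapping).
structure = rng.standard_normal((n_locations, d_struct))

def conjunctive_code(sensory_env):
    """Place-cell-like conjunction: per-location outer product of the shared
    structural code with an environment-specific sensory code."""
    return np.einsum("ls,lq->lsq", structure, sensory_env).reshape(n_locations, -1)

env_a = rng.standard_normal((n_locations, d_sense))  # sensory specifics, env A
env_b = rng.standard_normal((n_locations, d_sense))  # a novel environment B

code_a, code_b = conjunctive_code(env_a), conjunctive_code(env_b)

# The conjunctive code "remaps" between environments (near-zero overlap),
# yet mapping environment B required no new structural learning, only new
# sensory input recombined with the existing structural code.
overlap = np.corrcoef(code_a.ravel(), code_b.ravel())[0, 1]
print(f"overlap of conjunctive codes across environments: {overlap:.2f}")
```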
Neural implementation of structure mapping in single-cell tuning and population geometry. A, Knowledge factorization (structural abstraction) in the hippocampal–entorhinal system (as formalized in Whittington et al., 2020). Grid cells, border cells, and object vector cells (OVC) in the medial entorhinal cortex are thought to encode structure abstractly (green), while hippocampal place cells are thought to reflect conjunctions of structural input from the medial entorhinal cortex and specific sensory input (blue). B, Understanding neural activity in geometric terms via state space representations. A state refers to the configuration of activity within a neural population (left), depicted as a vector within the neuron-dimensional activity space (top right). Dimensionality reduction methods (e.g., principal component analysis) identify orthogonal directions (bottom right) in the neural state space that explain most variance. Projecting activity onto these principal components condenses the neuron- (or voxel-/sensor-) dimensional representation into a low-dimensional depiction. Modified from Ebitz and Hayden (2021). C, Parallel and normalized representations can facilitate the transfer of relational knowledge (green) to another context (blue). If items from different contexts (e.g., drinks vs natural phenomena) are represented on a relative scale (top) rather than an absolute scale (bottom), a classifier (lilac) discriminating cold and hot in one context (drinks) can generalize to the other context (natural phenomena). Figure logic adapted from Sheahan et al. (2021). D, During a reversal learning task (Bernardi et al., 2020) in which two sets of stimulus–response–outcome associations alternated (Context 1 and 2; top), hippocampus and PFC neurons exhibited both abstract and high-dimensional representational geometries (top right). Four conditions, defined by combinations of task variables (e.g., context, value, and stimulus), are arranged (factorized on a distorted square) such that a linear decoder (lilac) can separate all three dichotomies, two of them in an abstract format: the classifier generalizes to conditions (blue) not seen during training. Deviations from this geometry can reduce generalization ability (bottom left) or reduce the number of decodable variables (bottom right). From Bernardi et al. (2020).
In particular, simulations of hippocampal cell recordings have characterized their representational scheme as compositional in nature: in principle, it allows novel or alternative scenarios to be represented by recombining representational elements (e.g., codes for borders, objects, etc.; Bakermans et al., 2023), as required for imagination and planning. This is in line with the observation that replay, the time-compressed sequential activation of place cells, applies not only to experienced trajectories but also to novel shortcuts or alternative routes and can thus be considered generative (Gupta et al., 2010).
Most importantly, if variation in these coding properties contributes to the emergence of general factors, the properties should be domain-general. In fact, hippocampal–entorhinal cells exhibit similar coding patterns during spatial navigation and when encoding various nonspatial task features, such as sound frequency (Aronov et al., 2017), gaze position (Killian and Buffalo, 2018), and numbers (Kutter et al., 2018). Likewise, corresponding representational signatures in human neuroimaging, such as grid-like hexadirectional signal modulation and distance and boundary representations, have been observed for spatial environments (Doeller et al., 2010; Morgan et al., 2011) and other task spaces (Tavares et al., 2015; Bao et al., 2019; Theves et al., 2019; Park et al., 2021). For instance, fMRI studies have demonstrated a role of the hippocampal–prefrontal system in the abstraction of concepts in category-learning tasks, including the different features of a conceptual space (i.e., the structure of feature space, category boundaries, prototypes; Theves et al., 2020; Schäfer et al., 2024).
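For illustration, the logic of a grid-like (hexadirectional) signal analysis can be sketched on simulated data; this follows the cross-validated regression idea introduced for fMRI by Doeller et al. (2010), but all values here are synthetic:

```python
# Schematic hexadirectional analysis: estimate a putative grid orientation
# on one half of the data and test 6-fold modulation on the other half.
# Simulated signal; not the published pipeline.
import numpy as np

rng = np.random.default_rng(4)
theta = rng.uniform(0, 2 * np.pi, 600)     # movement direction per event
phi_true = np.deg2rad(15)                  # hypothetical grid orientation
signal = 0.5 * np.cos(6 * (theta - phi_true)) + rng.standard_normal(600)

half = 300
# Estimation half: regress the signal on cos(6*theta) and sin(6*theta).
X = np.column_stack([np.cos(6 * theta[:half]), np.sin(6 * theta[:half])])
beta_c, beta_s = np.linalg.lstsq(X, signal[:half], rcond=None)[0]
phi_hat = np.arctan2(beta_s, beta_c) / 6   # recovered orientation (mod 60 deg)

# Test half: amplitude of modulation aligned to the estimated orientation.
aligned = np.cos(6 * (theta[half:] - phi_hat))
amp = aligned @ signal[half:] / (aligned @ aligned)
print(f"orientation: {np.rad2deg(phi_hat):.1f} deg, aligned amplitude: {amp:.2f}")
```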
In sum, hippocampal–entorhinal representations are described as domain-general, abstract, compositional, and generative—qualities that align with human intelligence. Correspondingly, a specific relationship between interindividual differences in relational map-like encoding of task information in the hippocampus and fluid intelligence (gf) has been observed in humans (Tenderra and Theves, 2024a, b).
Structural generalization via manifold alignment in the hippocampus and prefrontal–parietal cortex
Activity patterns of neural populations can be described as manifolds in a high-dimensional neural state space, where coordinates represent the activity of individual neurons or units of activation (Fig. 2B). The properties of a neural network can confine neural trajectories to a subregion of this space, the manifold. Quantifying the geometric properties of manifolds has gained momentum as a tool to infer the computations underlying specific tasks (Ebitz and Hayden, 2021). In particular, recent studies provided the first neural evidence in support of structural abstraction and alignment at the population level in the hippocampus and in prefrontal–parietal cortices (Luyckx et al., 2019; Bernardi et al., 2020; Morton et al., 2020; Sheahan et al., 2021). These studies suggest that when sets of stimuli share relational structure across contexts (e.g., they can be ordered along a linear dimension), they come to be represented on parallel low-dimensional neural manifolds, such that a decoder trained in one context (e.g., to discriminate cold and hot drinks) also performs in the other context (e.g., distinguishing cold and hot natural phenomena; Luyckx et al., 2019; Sheahan et al., 2021; Fig. 2C,D). In particular, the parietal cortex encodes abstract linear representations of magnitude, also referred to as mental number lines (Hubbard et al., 2005; Nieder, 2016). Congruent with a structure mapping account, these neural representations might be repurposed to map newly learned ranks and afford inferences about unobserved pairs. Accordingly, it was shown that parietal EEG signals reflected parallel representations of numerical ranks and newly learned reward probabilities (Luyckx et al., 2019). Another study investigated cross-context generalization in a reversal learning paradigm (Bernardi et al., 2020), showing that inference of context-dependent stimulus–response–outcome mappings required a geometry of task variables in the hippocampus and PFC that was abstract (i.e., classifiers could generalize to conditions not seen during training) and simultaneously high-dimensional (Fig. 2D). Abstract representations of event structure in the hippocampus and parietal cortex can further afford vector-based retrieval in support of fast inference times (Morton et al., 2020). Lastly, analogical reasoning must account for the fact that relational patterns can manifest across different physical scales (e.g., drinks and natural phenomena differ in absolute temperature) or lack a physical scale altogether, for instance, in metaphorical uses (e.g., hot and cold emotions). In such cases, the transfer of structural knowledge might require normalized abstract representations. Indicative of this principle, it was shown that during a number comparison task in which the relative magnitude within a context was informative, numbers were represented on parallel and normalized context manifolds in parietal EEG signals (Sheahan et al., 2021).
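The decoding logic behind these findings can be made concrete with a simulation (a schematic sketch in the spirit of these analyses, not the published pipelines; population size, noise level, and temperature ranges are arbitrary):

```python
# Schematic sketch of cross-context generalization: train a linear decoder
# on "cold vs hot" in one context and test it in the other. Inspired by the
# logic of Luyckx et al. (2019), Bernardi et al. (2020), and Sheahan et al.
# (2021); simulated population activity, hypothetical parameters.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_units, n_trials = 50, 200

w_temp = rng.standard_normal(n_units)                # shared "cold-hot" axis
offset_drinks = 0.5 * rng.standard_normal(n_units)   # context offsets yield
offset_weather = 0.5 * rng.standard_normal(n_units)  # parallel manifolds

def zscore(x):
    return (x - x.mean()) / x.std()

def population_activity(temps, offset):
    """Activity = normalized temperature along a shared axis, plus a context
    offset and noise (i.e., parallel, normalized context manifolds)."""
    return (np.outer(zscore(temps), w_temp) + offset
            + 0.5 * rng.standard_normal((len(temps), n_units)))

# The contexts span different absolute temperature ranges; within-context
# normalization places both on a common relative scale.
temps_drinks = rng.uniform(0, 60, n_trials)          # deg C, drinks
temps_weather = rng.uniform(-20, 40, n_trials)       # deg C, natural phenomena

X1 = population_activity(temps_drinks, offset_drinks)
X2 = population_activity(temps_weather, offset_weather)
y1 = temps_drinks > np.median(temps_drinks)          # labels: hot vs cold
y2 = temps_weather > np.median(temps_weather)

clf = LogisticRegression().fit(X1, y1)               # train on drinks only
print(f"decoding accuracy on natural phenomena: {clf.score(X2, y2):.2f}")
```

Because the simulated manifolds are parallel and share a normalized temperature axis, the decoder transfers across contexts; breaking either property (e.g., coding absolute temperatures or using context-specific coding axes) would degrade this generalization.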
While recent research has identified key roles of hippocampal and prefrontal–parietal regions in structure mapping, their involvement may not necessarily be exclusive. In sum, advances in measuring neural task representations now make it possible to test process theories of the positive manifold empirically, including, for instance, further exploration of the link between structure mapping and human intelligence.
Grounding Characteristics of g in Neural Structure Mapping
The general factor of intelligence has several characteristic properties (Kovacs and Conway, 2016) that can inform the investigation of putative neural processing mechanisms. Firstly, as outlined above, g and the secondary factor gf are statistically almost identical. Thus, candidate processing mechanisms should explain the pivotal component of gf within g. In particular, individual differences in structure mapping might manifest in measures of cognitive abilities through a virtuous cycle of learning, reasoning, and knowledge, as outlined in Figure 1A. Secondly, it has been suggested that complex tests correlate more strongly with g than less complex tests; for instance, backward as compared with forward digit span tasks have higher g-loadings (Jensen, 1981). Whether interindividual differences in structure mapping manifest in performance would conceivably be influenced by task complexity, as complexity can be related to the number of possible mechanisms for solving a task (Newell and Simon, 1972). For instance, finding a single item in a new supermarket is easily learned from scratch, but locating 30 items benefits from prior structural knowledge of general supermarket layouts, which makes it possible to infer some of the mappings and learn more quickly. In the first case, less effective structure mapping is compensated for by alternative processes; in the second, its advantage becomes substantial. Thirdly, the correlation between g and the worst performances in a test is higher than the correlation with the best performances, particularly in highly g-loaded tests (Larson and Alderton, 1990; Coyle, 2001; 2003). Evaluation of candidate processes may therefore take into account the constraints that lead to errors in intricate tasks (Kovacs and Conway, 2016; Griffith, 2020). In complex tasks, differences in the recruitment of structure mapping can define a baseline performance, above which its relative contribution to score differences may decrease. Lastly, a putative neural processing mechanism should account for developmental trajectories of g. In average aging trajectories, gf declines from middle adulthood onward while gc continues to improve (Horn and Cattell, 1967), which was hypothesized to reflect compensation of fluid abilities through reliance on knowledge (Reuter-Lorenz and Cappell, 2008). However, intraindividual aging trajectories revealed that individuals with less decline in gf show stronger increases in gc, which is better captured by a unitary g-factor (Tucker-Drob et al., 2022). Since structure mapping can be assumed to support reciprocal interactions between fluid (gf) and crystallized (gc) intelligence (Fig. 1A), it is conceivable that a developmental decline in forming structured and abstract neural task representations could relate to this observation. So far, there is evidence of age-related instability in the spatial tuning of place and grid cells. Hippocampal place cells lose their temporal stability in aged as compared with young rodents, characterized by remapping between episodes in the same environment (Barnes et al., 1997) and reduced spatial tuning (Yan et al., 2003). Furthermore, elderly as compared with younger human participants showed weak to absent grid-like representations of space during virtual navigation, reflecting the temporal instability of orientation estimates in entorhinal cortex voxels (Stangl et al., 2018).
Concluding Remarks
I suggest that combining psychometric findings with cognitive-computational neuroscience approaches can offer a novel perspective on the nature of human intelligence. Investigating the structure of cognitive abilities, as inferred from multitask performances, at the level of neural task representations can advance our understanding of generality in key processes and contribute to an empirical definition of what lets individuals perform well across diverse tasks. Conversely, psychometric findings may provide an integrative view to guide investigations of general processing principles in the brain. Whether statistical g can be translated into biological terms at all (e.g., neural processing mechanisms) is debated (Gottfredson, 2016; Kovacs and Conway, 2016). This is less contentious for the broad abilities in the CHC taxonomy (Fig. 1A), and a combined approach may at least approximate process-generality at the domain level.
An influential view on the relation of g to cognitive processing has been based on the consistency of parietal and prefrontal effects (Jung and Haier, 2007) and focused on the role of attention and executive processing for cognition (Duncan, 2010; 2013). It was hypothesized that prefrontal–parietal cortices might contribute to intelligence by segmenting tasks into subgoals and selectively focusing attention on them (Duncan, 2013). While the present viewpoint shifts the focus to structural abstraction and generalization in the hippocampal–entorhinal and prefrontal–parietal systems as processes with a broad impact on cognition, both accounts might complement each other.
Could lesion studies provide insight into the role of hippocampal and parietofrontal structure coding in intelligence? While systematic data on pre- to postlesion intelligence tests are lacking, the case study of H.M. reported no reduction in IQ from before to after his medial temporal lobe resection, which, however, might be attributable to the postsurgical reduction in epileptic activity and the cognitive impairment associated with it. Postmortem histology further revealed less extensive and less selective hippocampal damage than previously reported (Annese et al., 2014). Inferring function from lesion cases is thus challenging, also because lesions could be compensated for by, or affect functioning in, interconnected regions. Structure mapping in particular could contribute to g also via interactive and long-term effects (Fig. 1A), which could obscure potential immediate effects of regional damage. For further related considerations, it is important to first understand the specific roles of, and interactions between, hippocampal and prefrontal–parietal areas in structure mapping.
While humans' analogical reasoning capacity is considered particularly advanced (Penn et al., 2008), the critical neural structures in the hippocampal system are similar across species. The interaction with a larger PFC in humans might contribute to this advance, potentially by facilitating higher levels of abstraction. In particular, long-term consolidation processes and the presence of language may enhance abstraction. Thus, while cells in the hippocampal system share a similar computational blueprint across species, their interconnectedness with the human neocortex may elevate the impact of these mechanisms on cognition.
This viewpoint highlights the potential of empirically testing process accounts of intelligence at the level of neural task representations. Insight into the underlying processing mechanisms may lead to a re-evaluation of previous beliefs. For instance, evidence that training on a specific task can generalize to other tasks (far transfer) is scarce and controversial (Guerin et al., 2021; Ripp et al., 2022). Having defined general factors at the level of neural computations, one might test whether training targeted to those computations can yield transfer effects.
From a methodological perspective, planning corresponding studies could benefit from characterizing the retest stability of the relevant neural representational measures. Furthermore, as with research on any single cognitive faculty, multitask assessments are influenced by preconceptions about the tested construct during task selection. Because tasks are inherently culture-specific (e.g., with respect to the content and familiarity of the material and the form in which tests are administered), their potential to access the underlying construct of interest is culture-specific as well. While the generalizability of the discovered factor model can be explored further, the methodological benefit of combining psychometric and neurocomputational approaches may be a critical step toward identifying general principles underlying human cognition.
Footnotes
The author declares no competing financial interests.
Correspondence should be addressed to Stephanie Theves at stephanie.theves@ae.mpg.de.