Abstract
To what extent do the brain regions implicated in semantic processing contribute to the representation of amodal conceptual content rather than modality-specific mechanisms or mechanisms of semantic access and manipulation? Here, we propose that a brain region can be considered to represent amodal conceptual object knowledge if it is supramodal and plays a role in distinguishing among the conceptual representations of different objects. In an fMRI study, human participants made category typicality judgments about pictured objects or their names drawn from five different categories. Crossmodal multivariate pattern analysis revealed a network of six left-lateralized regions largely outside of category-selective visual cortex that showed a supramodal representation of object categories. These were located in the posterior middle/inferior temporal gyrus (pMTG/ITG), angular gyrus, ventral temporal cortex, posterior cingulate/precuneus (PC), and lateral and dorsomedial prefrontal cortex. Representational similarity analysis within these regions determined that the similarity between category-specific patterns of neural activity in the pMTG/ITG and the PC was consistent with the semantic similarity between these categories. This finding supports the PC and pMTG/ITG as candidate regions for the amodal representation of the conceptual properties of objects.
Introduction
Neuropsychological and neuroimaging studies have implicated multiple brain areas in the performance of semantic tasks (Hillis and Caramazza, 1991; Damasio et al., 2004; Patterson et al., 2007; Binder et al., 2009). It remains unclear, however, to what extent these brain regions contribute to the representation of semantic content rather than to modality-specific mechanisms or mechanisms that control, or otherwise make possible, semantic access (Thompson-Schill et al., 1997; Martin and Chao, 2001; see discussions in Martin, 2007; Mahon and Caramazza, 2009; Whitney et al., 2011).
A central assumption about amodal semantic/conceptual representations is that they can be accessed independently of modality: the word “dog” activates conceptual content generally similar to that evoked by a picture of a dog. This implies that the brain areas that represent such conceptual knowledge are supramodal in nature. However, not all of the supramodal brain areas that are activated by semantic tasks necessarily represent semantic content. A brain area can have a role in accessing or otherwise operating over semantic representations without necessarily being involved in representing conceptual content. One way to identify brain regions that represent amodal conceptual knowledge is to combine the supramodal criterion with the additional, conservative criterion that, for a brain area to be considered to represent conceptual knowledge, it must play a role in distinguishing among the conceptual representations of different objects and, by extension, different object categories.
In the visual modality, neuroimaging studies have shown that the spatial pattern of the fMRI response to images of objects encodes the similarity among objects and among object categories (Kriegeskorte et al., 2008; Connolly et al., 2012). Representational similarity analysis (RSA) of the neural activity produced by object categories, compared with the semantic similarity among those categories, provides an important test of the degree to which semantic content is encoded in the activity of a region. For example, Connolly et al. (2012) found that the representational similarity among the neural responses in higher-level visual cortex produced by six different biological classes matched subjective ratings of similarity among those categories. Although this correspondence suggests the possibility that higher-level visual cortex represents semantic content, the exclusive use of visual presentation makes it impossible to distinguish the contributions of semantic and visual properties to this pattern of neural activity. In other words, the supramodal criterion has not been met.
Previous work has successfully employed supramodal presentation and multivoxel pattern analysis (MVPA) to distinguish modality-specific versus modality-general representations (Peelen et al., 2010). In the present study, we used crossmodal presentation (words and pictures) and a typicality task (e.g., how typical is an eagle of the category “birds”?) for five semantic categories: fruits, tools, clothes, mammals, and birds. We then used crossmodal MVPA to identify the neural representation of the different semantic categories independently of the process through which these representations are accessed. Finally, we used RSA to determine where in the brain neural activity is sensitive to the conceptual similarity of the content being encoded.
Materials and Methods
Participants.
Twenty right-handed, native Italian speakers participated in this experiment (9 females, mean age 25.3 years, SD = 3.9). All procedures were approved by both the University of Trento Human Research Ethics Committee and the Harvard University Committee on the Use of Human Subjects in Research.
Procedure.
Five semantic categories were selected: fruits, tools, clothes, mammals, and birds. Each category contained 32 object exemplars that could be presented in either written or pictorial form. Word length was between 4 and 11 letters (mean 6.9, SD 1.7) and did not differ across categories (p > 0.15). Words subtended approximately 3–5.5 degrees of visual angle horizontally and 1.5 degrees vertically. Pictures were separated from their original background and presented against a square phase-scrambled background (8.3 degrees of visual angle) generated from other images in the stimulus set (selected equally from each category).
Stimuli were presented during four fMRI runs (9 min 20 s each). For every subject, the first two runs were word presentations and the last two were picture presentations. This order was chosen so that word presentation was not confounded by prior exposure to particular images. Each run comprised twenty-four 20 s blocks (four per category and four baseline blocks). Each block was preceded by a written cue indicating the upcoming category (1 s fixation, 1 s cue, 1 s fixation), followed by eight 2.5 s trials (Fig. 1). Each trial consisted of a 400 ms stimulus presentation followed by a 2100 ms fixation cross. Each object exemplar was presented once per run.
Participants rated the typicality of each item within its semantic category (e.g., the typicality of “apple” or “coconut” as a fruit). Subjects responded bimanually (two buttons for each hand) on a four-point typicality scale (button order was counterbalanced across participants). The baseline task was a one-back matching task on phase-scrambled images.
Univariate analysis.
Analysis was performed in SPM8 (http://www.fil.ion.ucl.ac.uk/spm/software/spm8/). The first four volumes of each run were discarded. All subsequent images were corrected for head movement. Slice-acquisition delays were corrected using the middle slice as a reference. All images were normalized to the standard SPM8 EPI template (MNI stereotactic space), resampled to a 3 mm isotropic voxel size, and spatially smoothed using an isotropic Gaussian kernel of 3 mm FWHM. The time series at each voxel for each participant were high-pass filtered at 128 s and prewhitened by means of an autoregressive model AR(1).
Subject-specific β weights were derived through a general linear model (GLM). For each subject, the data at every voxel were fitted with a linear combination of effects of interest. These were delta functions marking the onset of each experimental condition, convolved with the SPM8 hemodynamic response function. The six motion regressors, four response buttons, and reaction time (RT) were included as regressors of no interest (the RT and response-button regressors were convolved with the hemodynamic response function). Betas were extracted for each run for each category.
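For concreteness, a minimal numpy sketch of this kind of condition-wise GLM is given below. It uses a simplified double-gamma HRF and hypothetical function names; it illustrates the approach, not the SPM8 implementation used in the study.

```python
import numpy as np
from scipy.stats import gamma

def canonical_hrf(tr, duration=32.0):
    """Simplified double-gamma HRF sampled at the TR (peak ~6 s,
    undershoot ~16 s); SPM8's canonical HRF has the same general shape."""
    t = np.arange(0.0, duration, tr)
    hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
    return hrf / hrf.sum()

def fit_condition_betas(bold, onsets_by_condition, nuisance, tr):
    """Ordinary least-squares GLM returning one beta map per condition.

    bold: (n_scans, n_voxels) time series for one run
    onsets_by_condition: dict mapping condition name -> onset scan indices
    nuisance: (n_scans, k) motion/button/RT regressors of no interest
    """
    n_scans = bold.shape[0]
    hrf = canonical_hrf(tr)
    regressors = []
    for onsets in onsets_by_condition.values():
        stick = np.zeros(n_scans)      # delta function at each onset
        stick[onsets] = 1.0
        regressors.append(np.convolve(stick, hrf)[:n_scans])
    X = np.column_stack(regressors + [nuisance, np.ones(n_scans)])
    betas, *_ = np.linalg.lstsq(X, bold, rcond=None)
    return betas[: len(onsets_by_condition)]  # condition betas only
```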
Whole-brain crossmodal MVPA.
A searchlight analysis (Fig. 1) was performed (Kriegeskorte et al., 2006) using a logistic regression pattern classifier as implemented in the Princeton MVPA Toolbox (Princeton University, Princeton, New Jersey, http://www.csbmb.princeton.edu/mvpa). Classifiers were trained and tested on individual subject data. Beta values were used from each condition from each run. In this way, the spatial pattern information entered into the classifier for each category represented the average response to the presentation of the 32 exemplars (from either word or picture presentation). The searchlight analysis was implemented by extracting β values from a three-voxel-radius sphere centered on each voxel in the brain. A sphere of this size yielded ∼123 3 mm voxels and was selected for consistency with previous research (Connolly et al., 2012). The diameter of the sphere was 21 mm, substantially larger than the minimal smoothing kernel used during preprocessing (3 mm FWHM).
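To make the searchlight geometry concrete, the following sketch enumerates the voxels of a sphere in voxel space (a hypothetical helper, not code from the toolbox used here); ignoring truncation at the brain edge, a radius of 3 voxels on the 3 mm grid yields exactly 123 voxels.

```python
import numpy as np

def searchlight_sphere(center, shape, radius=3):
    """Coordinates of all voxels within `radius` voxels of `center`,
    clipped to the volume `shape`; radius 3 gives the 123-voxel sphere."""
    cx, cy, cz = center
    offsets = range(-radius, radius + 1)
    coords = [(cx + dx, cy + dy, cz + dz)
              for dx in offsets for dy in offsets for dz in offsets
              if dx * dx + dy * dy + dz * dz <= radius * radius
              and 0 <= cx + dx < shape[0]
              and 0 <= cy + dy < shape[1]
              and 0 <= cz + dz < shape[2]]
    return np.array(coords)
```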
The classifier was trained on each modality for subsequent testing on the other modality. In this way, only the category-specific information that was common to both modalities was informative to the classifier. Classification was performed in a pairwise fashion, comparing each category with every other category. Therefore, a total of eight β maps (2 runs × 2 modalities × 2 categories) were entered into the classifier for each pairwise iteration. There were 10 such iterations (i.e., the number of possible pairwise combinations of our five categories). For each searchlight, the performance of the classifier was summarized at the center voxel of the sphere (the “summary voxel”). In the unimodal analysis (presented in Fig. 2B,C), training and testing were performed on the first and second runs of that modality. Therefore, only half of the data were available compared with the crossmodal MVPA.
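In outline, the train/test scheme for a single sphere can be sketched in scikit-learn terms as follows. The data layout and function name are hypothetical (the study used the Princeton MVPA Toolbox in MATLAB), and averaging over both train/test directions is an assumption consistent with the description above.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression

CATEGORIES = ("fruits", "tools", "clothes", "mammals", "birds")

def crossmodal_accuracy(betas):
    """Mean pairwise crossmodal decoding accuracy for one sphere.

    betas: dict keyed by (modality, run, category) -> 1-D beta pattern,
    with modality in {"word", "picture"} and run in {1, 2}; eight patterns
    enter each pairwise iteration (2 runs x 2 modalities x 2 categories).
    """
    scores = []
    for cat_a, cat_b in combinations(CATEGORIES, 2):  # 10 category pairs
        for train_mod, test_mod in (("word", "picture"), ("picture", "word")):
            X_tr = [betas[(train_mod, r, c)] for r in (1, 2) for c in (cat_a, cat_b)]
            X_te = [betas[(test_mod, r, c)] for r in (1, 2) for c in (cat_a, cat_b)]
            y = [c for _ in (1, 2) for c in (cat_a, cat_b)]
            scores.append(LogisticRegression().fit(X_tr, y).score(X_te, y))
    return float(np.mean(scores))
```

The accuracy returned for each sphere would then be written to its summary voxel, producing the whole-brain accuracy maps entered into the group-level t test.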
For whole-brain analysis, the classification performance for each subject was averaged across all categories (the 10 pairwise iterations) and the resulting accuracy maps were entered into a one-sample t test. Brain maps were initially thresholded at p < 0.001 uncorrected, extent >60 voxels. All reported results have been corrected at the cluster level using the family-wise error (FWE) correction as implemented in SPM8.
Semantic distance template.
To assess the relationship between neural representational similarity and semantic similarity, we estimated the semantic relatedness among the stimuli used in this study. There are a number of different methods for determining semantic relatedness between object concepts (e.g., feature co-occurrence, subjective similarity, and corpus-based latent semantic analysis), and estimates vary significantly across these methods (for a recent comparison, see Dilkina and Lambon Ralph, 2012). For this study, the semantic relationship between objects was determined using Wordnet (http://wordnet.princeton.edu) and the Wordnet::Similarity interface (http://wn-similarity.sourceforge.net). The measure used was path length, which reflects the shortest hierarchical distance between objects and was selected because it produced the most stable estimates of the canonical taxonomic category structure. The distance was calculated between each object and every other object, and the average between-category distance was used as the template index of semantic distance.
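A comparable template can be sketched with nltk's Wordnet interface. Note two assumptions: the first noun sense of each word is taken naively, and nltk's path similarity is 1/(path length + 1), so its inverse is a monotone proxy for, rather than an exact reproduction of, the Wordnet::Similarity path measure.

```python
import numpy as np
from itertools import product
from nltk.corpus import wordnet as wn  # requires the nltk wordnet corpus

def path_distance(word_a, word_b):
    """Inverse path similarity between the first noun senses of two words,
    a monotone proxy for hierarchical path length."""
    sense_a = wn.synsets(word_a, pos=wn.NOUN)[0]
    sense_b = wn.synsets(word_b, pos=wn.NOUN)[0]
    return 1.0 / sense_a.path_similarity(sense_b)

def between_category_distance(items_a, items_b):
    """Average pairwise distance between the exemplars of two categories."""
    return float(np.mean([path_distance(a, b)
                          for a, b in product(items_a, items_b)]))

# e.g., between_category_distance(["apple", "peach"], ["hammer", "saw"])
```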
Crossmodal RSA.
RSA was performed on the classification confusion matrix (more dissimilar neural patterns are more accurately classified; more similar patterns are more readily confused). ROIs were identified from the whole-brain crossmodal MVPA described previously. For stability and normality, confusion matrices were averaged across the summary voxels attained from the searchlight analysis (Fig. 2A). Specifically, ROIs were formed by intersecting spheres centered on decoding accuracy peaks (Table 1) with the significant decoding accuracy map (p < 0.001) revealed in the whole-brain analysis (Fig. 2A). The radius of each sphere was adjusted to create a comparable number of voxels within each ROI (90–106 voxels). The relationship between semantic and neural similarity patterns was assessed using a weighted contrast. The vector of contrast weights was derived from the semantic distance template: the mean semantic distance was subtracted from each cell of this vector, which was then multiplied element-wise with a vector representing the 10 different cells of the confusion matrix. The resulting values were then averaged for each subject, and a one-sample t test was used to determine whether neural similarity increased as a function of semantic similarity in each ROI.
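The weighted contrast reduces to a mean-centered element-wise product. A minimal sketch, assuming one pairwise decoding accuracy per category pair serves as the neural dissimilarity index:

```python
import numpy as np
from scipy.stats import ttest_1samp

def rsa_contrast(pairwise_accuracy, semantic_distance):
    """Weighted-contrast RSA score for one subject and one ROI.

    pairwise_accuracy: length-10 vector of decoding accuracies, one per
    category pair (higher accuracy = more dissimilar neural patterns).
    semantic_distance: the Wordnet distance template for the same pairs.
    """
    weights = semantic_distance - semantic_distance.mean()  # zero-mean contrast
    return float(np.mean(weights * pairwise_accuracy))

# Across subjects, a reliably positive mean score (one-sample t test vs 0)
# indicates that semantically closer categories are more confusable:
# t, p = ttest_1samp([rsa_contrast(acc, dist) for acc in subject_accs], 0.0)
```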
Data acquisition.
Participants lay in the scanner (BioSpin MedSpec 4T; Bruker) and viewed the stimuli through a mirror system. Data collection was conducted at the Center for Mind/Brain Sciences, University of Trento, using a USA Instruments eight-channel phased-array head coil. A total of 1168 volumes of 34 anterior/posterior-commissure-aligned slices were acquired over four runs (image matrix = 70 × 64, repetition time = 2000 ms, echo time = 33 ms, flip angle = 76°, slice thickness = 3 mm, gap = 0.45 mm, with 3 × 3 mm in-plane resolution).
An additional high-resolution (1 × 1 × 1 mm³) T1-weighted MPRAGE sequence was acquired (sagittal slice orientation, centric phase encoding, image matrix = 256 × 224 [read × phase], field of view = 256 × 224 mm [read × phase], 176 slices with 1 mm thickness, GRAPPA acquisition with acceleration factor = 2, duration = 5.36 min, repetition time = 2700 ms, echo time = 4.18 ms, TI = 1020 ms, flip angle = 7°).
Results
Behavioral results
Mean typicality ratings did not differ across categories (Fig. 1A; all t-values <1). RTs differed significantly between some categories (Fig. 1B). For words (range, 977–1181 ms), ratings for fruits were faster than for the other four categories (p < 0.01) and ratings for birds were faster than for both mammals and tools (p < 0.05). For pictures (range, 910–1050 ms), fruit responses were faster than tool, clothes, and mammal responses (p < 0.05), and bird responses were faster than tool responses (p < 0.05). Subjects also performed a perceptual control task, a one-back matching task on phase-scrambled images. RTs in this task (mean, 790.5 ms) were significantly faster than in each of the conditions of interest (p < 0.001). Mean accuracy was 81.0%.
Crossmodal sensitivity to semantic category
First, the average overall crossmodal classification performance (collapsed across category pairs) attained by the whole-brain crossmodal MVPA was determined. This revealed ROIs sensitive to semantic category in the posterior middle/inferior temporal gyrus (pMTG/ITG), angular gyrus (AG), ventral temporal cortex (VTC; fusiform, parahippocampal, and perirhinal cortex), posterior cingulate/precuneus (PC), and lateral and dorsomedial prefrontal cortex (latPFC and dmPFC; Fig. 2A, Table 1). The apparent left laterality of this network is quantitative rather than qualitative, with weaker effects being evident in geometrically equivalent right-hemisphere regions (mean classification accuracy 56–57%; t-values 2.1–2.6, p < 0.05).
To assess the influence of susceptibility artifacts on the identified ROIs (particularly VTC), we analyzed the temporal signal-to-noise ratio (tSNR) for each participant in each ROI. tSNR was calculated by dividing the mean intensity of a voxel by the SD of that voxel's signal over time, separately for each fMRI run (Friedman et al., 2006). Mean tSNRs for each ROI were as follows: pMTG/ITG, 112.9; AG, 104.9; VTC, 125.6; PC, 144.1; latPFC, 105.3; and dmPFC, 138.3. These values are comparable across ROIs and well above the minimum “good” value for signal detection (>20; Binder et al., 2011). To assess our ability to detect effects in regions of the anterior temporal lobe (ATL) previously linked to semantic processing, we calculated the tSNR in three-voxel-radius spheres centered on coordinates described previously (Binney et al., 2010; Visser and Lambon Ralph, 2011). The tSNR was again robust (118.4 and 98.6, respectively). tSNR dropped off only in more anterior regions of the ventral ATL, as reported by Peelen and Caramazza (2012), who found a tSNR of 51.6. However, even here, tSNR remained sufficiently high for signal detection. These values indicate that signal quality was good and detectability was relatively high in this study.
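The tSNR computation itself is straightforward; a sketch of the per-run, per-voxel calculation (illustrative function name):

```python
import numpy as np

def tsnr(run_timeseries):
    """Temporal SNR per voxel for one run: the voxel's mean signal over
    time divided by its SD over time (Friedman et al., 2006).

    run_timeseries: (n_scans, n_voxels) array.
    """
    return run_timeseries.mean(axis=0) / run_timeseries.std(axis=0)

# ROI summaries average the per-voxel tSNR across the ROI's voxels and runs.
```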
Might this multivariate supramodal sensitivity to category also be evident in the univariate magnitude of the response? Within our ROIs, overall univariate responses were comparable between words and pictures, with the exception of a stronger response to pictures in the left pMTG/ITG (t(19) = 2.4, p < 0.05). To determine whether subject-specific variations in the magnitude of the response predicted semantic category, we next averaged the responses across each searchlight sphere (removing the spatial pattern) and trained and tested the classifier on the average response within the sphere. This analysis revealed two frontal clusters in the latPFC (xyz [MNI] = −42, 17, 31; 33 voxels) and dmPFC (−15, 17, 52; 58 voxels; p < 0.001, extent >30 voxels), but not elsewhere. Therefore, only in these regions did both the overall magnitude of the response and its spatial pattern carry information about semantic category.
Prior research has indicated a selective response to visually presented tools in the left pMTG. This motivated further examination of this cluster to ensure that the effects obtained here for this region are not merely the result of its sensitivity to tools relative to the other categories. We first replicated the MVPA considering only the four nontool categories. The pMTG/ITG cluster remained highly significant (p < 0.001 FWE corrected, 165 voxels), indicating that this region is sensitive to semantic categories beyond tools. We then identified the “classic” tool-selective region in the pictorial modality through the contrast tools > mammals. The peak of the supramodal category-sensitive cluster was ∼1.5 cm anterior to that of the conventional tool-selective cluster (xyz [MNI] = −48, −64, −5; 439 voxels). Although the two clusters partially overlapped, the supramodal category-sensitive activation extended more anteriorly (<40% overlap; 102 of 259 voxels).
For comparison, we also performed the complementary MVPA for unimodal (word and picture) presentation (Fig. 2B,C). MVPA performed within the picture modality revealed widespread sensitivity to category in visual cortex (lateral occipital cortex, sections of the fusiform and parahippocampal gyri, and early visual cortex). Within the word modality, sensitivity to semantic category was observed only in the pMTG/ITG and latPFC. There was little overlap among picture, word, and crossmodal decoding. This may reflect not only modality-specific information (particularly in the case of images), but also reduced power due to the smaller train/test sets available to the within-modality classifiers. Examination of unimodal (word or picture) effects in the six supramodal ROIs revealed some evidence for unimodal categorical information in each of these regions: when searchlight performance was averaged across category pairs separately for the word and picture presentation runs, classification performance was above chance in all six ROIs (p < 0.05).
Comparing semantic and visual category sensitivity
Category sensitivity evident during image presentation may reflect semantic processing (Connolly et al., 2012). However, with the exception of the left pMTG/ITG and anterior parts of the left mid-fusiform, there appeared to be little overlap between visual and supramodal category-sensitive neural populations (Fig. 2A,B). To investigate this relationship more fully, we examined the performance at each summary voxel for picture and word MVPA. This analysis was performed in those brain regions showing robust image-based sensitivity to category (Fig. 2B) while excluding areas showing supramodal sensitivity (Fig. 2A) and early visual cortex. The correlation between the local categorical representation for pictures and words revealed a significant dissociation between sensitivity to category in the word and image modality (r = −0.19, p < 0.001). Therefore, outside of the supramodal ROIs, stronger image-based category sensitivity was associated with decreased word-based sensitivity to category. This indicates that there is a division of labor within the ventral stream, with those brain regions most sensitive to image-based categorical distinctions being separate from supramodal representations of category.
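This masked comparison amounts to a voxelwise correlation over summary voxels; a sketch under those assumptions (hypothetical variable and function names):

```python
from scipy.stats import pearsonr

def modality_dissociation(picture_acc, word_acc, mask):
    """Correlate picture- and word-based searchlight accuracies over the
    summary voxels in the boolean `mask` (image-sensitive regions with
    supramodal ROIs and early visual cortex excluded); a negative r
    reflects the dissociation reported here."""
    return pearsonr(picture_acc[mask], word_acc[mask])
```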
Is supramodal categorical information present at a spatial scale larger than that sampled by our searchlight, such as the medial (inanimate) to lateral (animate) organization of the ventral stream? To determine this, all visually responsive voxels were extracted from ventral stream regions (all pictures > phase-scrambled, p < 0.001) and an MVPA was performed at this broad spatial scale (>5000 voxels). Predictably, decoding was highly significant for the picture task (p < 0.001), but was not evident for the word task (t < 1). Therefore, there was no indication that supramodal categorical representations are present in the ventral stream at broad spatial scales.
Representational and semantic similarity
The preceding analysis allowed us to identify regions with a high overall sensitivity to semantic category, but did not allow us to assess the relationships between categories. To evaluate these relationships in the six supramodal regions, we next performed RSAs using the confusion matrix among categories as an index of neural representational similarity (Fig. 3). The representational similarity of each category was then compared with the semantic template derived from the Wordnet database. Of the six ROIs, only in the pMTG/ITG and PC was there a relationship between semantic similarity and representational neural similarity (Fig. 3; all other regions, t < 1). This indicates that, in these ROIs, semantically more similar categories have more similar neural representations. Inspection of the dendrograms in Figure 3 shows that a broad semantic taxonomy is evident and that these effects are not driven by differences between one single category and the others. Nonetheless, there may be some biases embedded within this structure: there appeared to be some weighting toward tools in the pMTG/ITG, and the PC seemed to contain relatively little information about artifacts (tools and clothes).
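Dendrograms of this kind can in principle be produced by hierarchically clustering the pairwise confusability values. A sketch, assuming a 5 × 5 symmetric dissimilarity matrix of pairwise decoding accuracies with a zero diagonal (average linkage is our assumption; the linkage method is not specified above):

```python
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

def plot_category_tree(dissimilarity, labels):
    """Average-linkage clustering of a square, symmetric, zero-diagonal
    dissimilarity matrix (e.g., pairwise crossmodal decoding accuracies)."""
    condensed = squareform(dissimilarity, checks=False)  # 10-element vector
    dendrogram(linkage(condensed, method="average"), labels=labels)
    plt.show()

# plot_category_tree(acc_matrix, ["fruits", "tools", "clothes", "mammals", "birds"])
```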
As a final check for RT confound effects, the representational similarity was compared with the RT similarities among categories. Neural similarity across categories did not vary with RT similarity (t-values <1).
Discussion
In this study, we employed crossmodal MVPA and RSA to identify where in the brain the pattern of neural representation was consistent with amodal conceptual content. We identified six (predominantly) left-lateralized regions that exhibited overall supramodal sensitivity to semantic category. RSA determined that in two of these regions, pMTG/ITG and PC, the neural representational similarity among categories conformed to the semantic relationship among objects within those categories. This concordance between neural and semantic relationships within pMTG/ITG and PC supports the view that these regions encode modality-neutral conceptual information.
The role of pMTG/ITG in semantic processing has been supported by convergent evidence from lesion studies (Hillis and Caramazza, 1991; Damasio et al., 2004), neuroimaging studies (Thompson-Schill et al., 1997; Binder et al., 2009), and transcranial magnetic stimulation using word (Whitney et al., 2011) and picture stimuli (Hoffman et al., 2012). Moreover, during rest, individual differences in spontaneous activation in pMTG most strongly predict conceptual processing efficiency, suggesting a central role in semantic processing (Wei et al., 2012). However, this central role may reflect either representation or retrieval/selection operations performed on semantic content (Whitney et al., 2011). The relationship between neural patterns of activation and semantic distance patterns observed in our study supports the former possibility: the representation of conceptual content in pMTG/ITG, at least with regard to the semantic content that is accessed by both visual objects and their names.
One possible explanation for the observed results is that the pMTG/ITG effects merely reflect tool selectivity. Multiple studies have reported tool selectivity in the left pMTG (Chao et al., 2002; Mahon et al., 2007; Simmons et al., 2010; Fairhall et al., 2011) and there is some evidence that this extends to crossmodal presentation (Noppeney et al., 2006). To rule out this possibility, a secondary analysis was performed, which showed that (1) supramodal sensitivity to semantic category persists when the tool category is removed from the analysis and (2) the anatomical locus of this category-sensitive cluster is anterior to the classical tool-selective region. The more anterior locus of our supramodal ROI is consistent with prior reports of non-category-selective semantic effects (Simmons et al., 2010; Wei et al., 2012) and may reflect a functional subdivision of pMTG/ITG.
Although less widely recognized than pMTG, PC is one of the most consistently reported brain regions in studies of semantic processing (Binder et al., 2009). For example, PC is activated when people report semantic properties of animals (Binder et al., 1999) or the nationality or occupation of famous people (Fairhall et al., 2013). At rest, spontaneous activity in PC is coupled with the section of pMTG associated with conceptual processing efficiency (Wei et al., 2012). The PC's strong connections to frontal and association cortices (such as pMTG) and its central involvement in internalized “default-mode” cognition have led to the proposal that it is involved in elaborating highly integrated information (Cavanna and Trimble, 2006). Access to such highly integrated information is consistent with a functional role in the abstract representation of semantic content. The recruitment of PC in semantic tasks has previously been attributed to the incidental retrieval of episodic memories (Gobbini and Haxby, 2007; Binder et al., 2009). However, in this study, the minimal episodic demands of the task (typicality judgment), the type of stimuli used (basic, everyday objects), and the preservation of between-category semantic relationships in the neural representation suggest that PC plays a role in conceptual representation that extends beyond episodic memory. The relationship between conceptual and neural similarity supports a role in conceptual representation, but evidence on the nature of semantic deficits in the early stages of Alzheimer's disease suggests that PC may have an additional role in semantic control (Corbett et al., 2012).
We also identified four regions (VTC, AG, latPFC, dmPFC) that were sensitive to category but insensitive to the semantic relationship among categories. This pattern of response may reflect involvement in guiding the selection and retrieval of task-relevant semantic content, particularly in the case of the latPFC and dmPFC (Thompson-Schill et al., 1997; Martin and Chao, 2001; Wagner et al., 2001; Jefferies and Lambon Ralph, 2006; Binder et al., 2009). Although the response profile of neural populations fulfilling this function is uncertain, in the macaque latPFC, neurons respond to ad hoc, arbitrarily formed categories based upon the online task set rather than to natural semantic categories (Cromer et al., 2010). In specific situations, such as in the current study, ad hoc categories can correspond to semantic categories (e.g., when a task requires a decision to be made for the category tools), but their functional role in PFC is not dependent on the semantic content of the categories, merely on their task relevance. If such a process of ad hoc (and not semantic) category formation were occurring in the latPFC and dmPFC during the typicality task, one would expect the pattern of decoding performance we have observed: distinct neural patterns associated with the task-defined categories but no systematic relationship among them. Similar processes may occur in the AG, which has been implicated in the fluent integration of conceptual information (Binder et al., 2009), resulting in ad hoc, novel semantic representations.
A surprising result was the lack of sensitivity to semantic relationships in VTC. Damage to this region is frequently associated with semantic deficits (Patterson et al., 2007) and, in this study, the overall classification performance (averaging across category pairs) was most reliable in this region (compare t-values in Table 1). However, classification performance was relatively uniform between categories and was not influenced by semantic similarity. Analysis of the tSNR in the VTC ROI indicated that signal quality was robust. However, poorer signal-to-noise ratios in more anterior aspects of the ventral ATL (Devlin et al., 2000) may have hindered the detection of semantic relationships (post hoc analysis revealed a potential distinction between animate and inanimate categories; t(19) = 1.81, p = 0.043 uncorrected). Alternatively, conceptual representations in these regions might not reflect classical taxonomic semantic relationships. Different aspects of semantics may be represented in different brain regions: for example, VTC might represent semantic knowledge about object form, whereas the AG might be involved in the representation of motor knowledge. This pattern of representation provides another potential explanation for sensitivity to category in the absence of sensitivity to overall semantic distance.
With the exception of the left pMTG/ITG and anterior parts of the left fusiform gyrus, an interesting dissociation was seen between supramodal and image-based sensitivity to semantic category: those regions most sensitive to image-based category differences actually showed decreased sensitivity to supramodal semantic category differences. Objects drawn from the same semantic category share not only more conceptual properties, but also a greater number of visual features (Caramazza, 1994; Dilkina and Lambon Ralph, 2012). The results observed in the present study suggest that reports of semantic-like similarity structure for images of objects (Kriegeskorte et al., 2008) and phylogenetic classes of animals (Connolly et al., 2012) may reflect the processing of such shared visual features.
Might category sensitivity have arisen due to differences in task? MVPA is a highly sensitive measure, and small differences in task that are undetectable in univariate analysis might produce reliable effects with MVPA. We do not think this to be the case here. The typicality judgment task was the same across all categories. Moreover, typicality ratings were equivalent across categories, RT and responses were included in the fMRI GLM as regressors of no interest, and representational similarity was not related to similarity in RT. These factors make it unlikely that task differences produced spurious category sensitivity.
In this study, we have emphasized the importance of semantic and neural distances in uncovering brain areas involved in conceptual processing. However, standard subtraction techniques (balancing for sensory/phonological factors or manipulating the level of required cognitive control) and criteria such as the capacity to generalize conceptual knowledge across different exemplars even when surface characteristics are very different (Lambon Ralph and Patterson, 2008; Lambon Ralph et al., 2010) remain highly effective tools for identifying the neural correlates of conceptual representation. It is also possible that some semantic representations in the brain do not encode the semantic distances between categories, at least at the spatial scales measured by fMRI, and that controlled subtraction approaches may be more suitable for identifying such representations.
To summarize the results of this study, the supramodal criterion was met in a network of six left-lateralized regions largely outside of category-selective visual cortex. The sensitivity to semantic category in the latPFC, dmPFC, AG, and VTC may arise for a number of reasons, including category-sensitive variations in the cognitive operations performed over semantic content. In contrast, the pattern of neural activity in pMTG/ITG and PC not only respected the supramodal criterion, but also reflected the semantic distances among the categories being encoded, a strong test of the criterion that neural representation should reflect the conceptual content being encoded. It is unlikely that the relationship between semantic and neural representational distance would be apparent in neural populations not encoding conceptual content. This finding supports the PC and pMTG/ITG as candidate regions, probably among others, for the supramodal representation of the conceptual properties of objects.
Footnotes
This work was supported by the Provincia Autonoma di Trento and the Fondazione Cassa di Risparmio di Trento e Rovereto.
The authors declare no competing financial interests.
Correspondence should be sent to Scott L. Fairhall, Center for Mind/Brain Sciences, University of Trento, Corso Bettini 31, I-38068 Rovereto, Italy. E-mail: fairhall@wjh.harvard.edu