Neuropsychological studies suggest that the human brain is endowed with two body representations: the body schema (BS), coding the orientation of one's body parts in space, and the body structural description (BSD), coding the location of body parts relative to a standard body. We used fMRI to disentangle the neural mechanisms underlying these putatively distinct body representations. Participants saw an arm or a pot's handle (stimulus: arm, handle) rotated at different angles (angle: 30–150°). If the stimulus was an arm, subjects were instructed to imagine (1) rotating their own arm until it matched the stimulus orientation (comparing the seen arm to their own) or (2) seeing the stimulus moving toward its appropriate position on a simultaneously presented human body [comparing the arm to that of a standard body (strategy: motor, visual imagery)]. If the stimulus was a handle, subjects were instructed to imagine (1) placing the handle on its appropriate position on a simultaneously presented pot or (2) seeing it moving toward its position on the pot. The analysis of the interaction stimulus × strategy revealed activation of the left secondary somatosensory cortex (SII), specifically when comparing the stimulus arm to one's own. The analysis of the parameters describing the linear effect of angle revealed that neural activity of the left posterior intraparietal sulcus was modulated by the stimulus rotation, but only when relating the arm to a standard body. The results associate BS and BSD with differential neural substrates, thereby suggesting that these are independent body representations, and furthermore extend current concepts of SII function.
The human brain is endowed with multiple body representations. One is the body schema (BS), an egocentric map coding the position of one's body segments in space and time (Head and Holmes, 1911). Evidence for the BS can be found, for instance, in Parsons (1987a,b), who discovered that participants, when asked to assess the handedness of arm stimuli, implicitly compared them with a representation of their own arm, after having imagined rotating their own limb toward the orientation depicted by the stimulus.
A second representation is the body structural description (BSD), which codes the position of each body segment within a standard body. Evidence for the BSD is provided by autotopagnosic patients (for review, see Corradi-Dell'Acqua and Rumiati, 2007), who, following left-hemisphere damage, become unable to locate parts of a body shape. Interestingly, these patients correctly locate parts of animals or objects, suggesting that their deficit is not due to a generalized spatial impairment. Furthermore, they effortlessly identify the very body parts they cannot locate, suggesting that their deficit does not involve visual processing of isolated body parts, but rather the knowledge of where these parts belong within an allocentric body model.
Previous studies reported dissociated performances in tasks tapping these putative body models. For instance, Schwoebel and Coslett (2005) described seven patients impaired in a handedness task but exhibiting spared performance in those tasks used to diagnose autotopagnosia, and two patients showing the opposite dissociation. In contrast, neuroimaging studies identified the left intraparietal sulcus (IPS) and the premotor cortex (PM) as neural underpinnings of both BS (Bonda et al., 1995; Parsons et al., 1995; Ehrsson et al., 2004, 2005) and BSD (Le Clec'H et al., 2000; Felician et al., 2004; Corradi-Dell'Acqua et al., 2008). This may reflect the fact that these body representations have never been compared directly in one neuroimaging study.
Accordingly, we used fMRI to disentangle the neural substrates of BS and BSD. Although participants in a handedness task spontaneously compare the displayed body part to their own through motor imagery (Parsons, 1987b), they can also be instructed to solve the task by comparing the same part with an allocentric body-part model through visual imagery (Kosslyn et al., 2001; Tomasino and Rumiati, 2004). Therefore, subjects assessed the handedness of an arm by comparing it, in one condition, to their own arm (and, therefore, to an egocentric body model) and, in a second condition, to a simultaneously presented body stimulus (and, therefore, to an allocentric body model). As a control, the handle of a pot was shown, and subjects assessed its laterality either by imagining grasping it and placing it in its appropriate position on a simultaneously presented pot or by simply relying on its visual properties. This yields the factors stimulus (arm, handle) and strategy (motor, visual imagery). The key analysis relies on the interaction stimulus × strategy, which allows us to disentangle the neural processes associated with a specific body model while controlling for the use of a given strategy per se.
Materials and Methods
Since gender differences in handedness tasks have been reported in both behavioral (Voyer et al., 1995) and neuroimaging (Jordan et al., 2002) studies, only male subjects (n = 17, right handed, aged between 22 and 48 years, average age 28.31 years) took part in the present experiment. None of the participants had any history of neurological or psychiatric illness. Written informed consent was obtained from all subjects, who were naive to the purpose of the experiment. The study was conducted in accordance with the Declaration of Helsinki and approved by the local ethics committee.
On each trial, two horizontally aligned black and white photographs were presented on a white background on a computer screen. The photographs could show either a human arm, placed to the right (25% of the trials) or left (25% of the trials) of an upright human body whose arms had been removed (Fig. 1A), or a pot handle, placed to the right (25% of the trials) or left (25% of the trials) of a pot whose handles had been removed (Fig. 1B). The human body was facing away from the observer and had principal axes of 6.95° × 21.41° (horizontal × vertical) of visual angle. The pot had principal axes of 9.75° × 10.27° of visual angle. The arm was shown with the palm and the inner part of the forearm pointing upward and had principal axes of 3.6° × 10.23° of visual angle. The pot handles were simple loop handles with a curly lower attachment and a top attachment that merged smoothly into the pot; they had principal axes of 3.4° × 5.29° of visual angle.
Both arms and handles could be right or left and could appear rotated, either clockwise or counterclockwise, at one of five possible orientations (Fig. 1C), namely, at 30°, 60°, 90°, 120°, and 150° from their upright canonical orientation.
Participants lay supine with their head immobilized by firm foam pads and were asked to keep their arms along the body with the palms toward the legs for the whole duration of the experiment. Stimuli were presented using Presentation 9.0 (Neurobehavioral Systems) and projected to a VisuaStim Goggles system (Resonance Technology) that subtended 30° × 22.5° (horizontal × vertical) of visual angle. Behavioral responses were recorded by pressing the corresponding keys of an MRI-compatible response device (Lumitouch, Lightwave Medical Industries, CST Coldswitch Technologies) mounted on a custom-made foot support.
For each experimental trial, the stimuli were presented for 2000 ms, followed by an intertrial interval that ranged from 2700 to 4300 ms in incremental steps of 400 ms. Participants were asked to perform a handedness task: if the stimulus was a right arm they had to press the pedal corresponding to the right foot, whereas if the stimulus was a left arm they had to press the pedal corresponding to the left foot. This response mapping was chosen to avoid interference effects between stimulus handedness and the response given. Moreover, even though the arm's laterality could, in principle, be assessed by inspecting the thumb's position, participants were instead asked to accomplish the task using different strategies, that is, either by imagining the arm stimulus rotating until it could be wedged into the human photograph (visual strategy) or by ignoring the human photograph and imagining rotating their own arm until it reached the position depicted on the screen (motor strategy). The experiment was designed so that those orientations that were closest to the position of the participants' arm (namely, 120° and 150°) were far from the position that the arm would have if wedged into the visual model of the body. Conversely, those orientations that were close to the position that the arm would have if wedged into the visual model of the body (namely, 30° and 60°) were likewise far from the position of the participants' arm (Fig. 1C,D). As a control, participants faced the handle of a pot and were asked to assess whether it belonged to the right/left side of a simultaneously presented pot (Fig. 1B). Subjects were asked to accomplish this task either by imagining the handle rotating until it could be wedged onto the pot surface (visual strategy) or by imagining grasping the handle and placing it on the pot surface (motor strategy).
Consistent with previous experiments (Parsons, 1987a,b; Tomasino and Rumiati, 2004), we defined the factor orientation by labeling as lateral those trials that had arms/handles rotated away from one's midsagittal plane (i.e., both right arms/handles rotated clockwise and left arms/handles rotated counterclockwise) and as medial those trials that had arms/handles rotated toward one's midsagittal plane (i.e., both right arms/handles rotated counterclockwise and left arms/handles rotated clockwise) (Fig. 1E). This yielded a 2 × 2 × 2 design with the factors stimulus [arm (A) vs handle (H)], strategy [visual (V) vs motor (M)], and orientation [lateral (L) vs medial (M)] and eight conditions: (1) AVL, participants assessed the handedness of a laterally oriented arm using a visual strategy; (2) AVM, participants assessed the handedness of a medially oriented arm using a visual strategy; (3) AML, participants assessed the handedness of a laterally oriented arm using a motor strategy; (4) AMM, participants assessed the handedness of a medially oriented arm using a motor strategy; (5) HVL, participants assessed the handedness of a laterally oriented handle using a visual strategy; (6) HVM, participants assessed the handedness of a medially oriented handle using a visual strategy; (7) HML, participants assessed the handedness of a laterally oriented handle using a motor strategy; and (8) HMM, participants assessed the handedness of a medially oriented handle using a motor strategy.
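The lateral/medial labeling rule above can be sketched as a small function (illustrative only; the function name and the string codes for handedness and rotation direction are hypothetical):

```python
def orientation_label(handedness: str, rotation: str) -> str:
    """Label a trial 'lateral' (rotated away from the midsagittal plane)
    or 'medial' (rotated toward it), given the stimulus handedness
    ('right'/'left') and rotation direction ('cw'/'ccw')."""
    lateral = (handedness == "right" and rotation == "cw") or \
              (handedness == "left" and rotation == "ccw")
    return "lateral" if lateral else "medial"

# The four stimulus/rotation combinations:
for hand in ("right", "left"):
    for rot in ("cw", "ccw"):
        print(hand, rot, "->", orientation_label(hand, rot))
```

Combined with the two stimulus levels and two strategy levels delivered blockwise, this labeling yields the eight cells of the 2 × 2 × 2 design.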
The factors stimulus and strategy were delivered, during the experimental session, in four separate blocks, the order of which was counterbalanced across subjects. The factor orientation was instead delivered within each block, together with the angle (30°, 60°, 90°, 120°, and 150°) previously defined. Thus, each block comprised 90 randomized trials, including 80 experimental trials [5 angles × 2 orientations × 8 repetitions (in which the position of the arm/handle with respect to the body/pot, i.e., right vs left, was counterbalanced)] and 10 “null events” in which a white screen replaced the stimuli. Each block was introduced by instructions (8000 ms) informing the subjects about the upcoming stimuli and the strategy to use.
For each subject, a training session was performed before the fMRI experiment. This consisted of a shortened version of the experiment comprising four different blocks (one for each combination of stimulus and strategy), in each of which 40 randomized trials were delivered. Each of these blocks was repeated until participants felt confident that they understood the task and were using the correct strategy.
Behavioral data processing.
For each subject and each condition, the average accuracy rate and the median values of the reaction times (RTs) of the correct trials were calculated from all eight repetitions. For each subject, and for each combination of stimulus, strategy, and orientation, accuracies and RTs were regressed against the factor angle. The coefficients of each regression (namely, the slope and the intercept) were then entered into a 2 (stimulus: arm vs handle) × 2 (strategy: visual vs motor) × 2 (orientation: lateral vs medial) repeated-measures ANOVA. For each effect, the partial eta squared (ηp2) value was calculated as an estimate of effect size. Statistical analyses were performed with SPSS 14.0 software.
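The per-cell regression step can be sketched as follows (a minimal illustration with made-up median RTs; the actual analysis was run in SPSS, and the resulting slopes and intercepts were entered into the repeated-measures ANOVA):

```python
import numpy as np

# One subject, one stimulus/strategy/orientation cell: median RTs of
# correct trials at each of the five angles (values are hypothetical).
angles = np.array([30, 60, 90, 120, 150], dtype=float)            # deg
median_rts = np.array([980, 1015, 1060, 1098, 1140], dtype=float) # ms

# Least-squares line RT = slope * angle + intercept; np.polyfit returns
# the highest-order coefficient first.
slope, intercept = np.polyfit(angles, median_rts, deg=1)
print(f"slope = {slope:.2f} ms/deg, intercept = {intercept:.1f} ms")
```

Repeating this for every subject and every cell of the design yields one slope and one intercept per cell, the dependent measures of the 2 × 2 × 2 ANOVA.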
fMRI data acquisition.
A Siemens Trio 3-T whole-body scanner was used to acquire both T1-weighted anatomical images and gradient-echo echo-planar T2*-weighted images with blood oxygenation level-dependent (BOLD) contrast. The scanning sequence was a trajectory-based reconstruction sequence with a repetition time (TR) of 2200 ms, an echo time (TE) of 30 ms, a flip angle of 90°, a slice thickness of 3 mm, and a 0.3 mm interval between slices. For each subject, 929 volumes were acquired during the whole experimental session (∼34 min). For the anatomical images the following parameters were used: TR = 2.25 s, TE = 3.93 ms, number of sagittal slices = 128, slice thickness = 1 mm, interslice gap = 0.5 mm, and flip angle = 9°.
fMRI data processing.
Statistical analysis was performed using the SPM5 software package (http://www.fil.ion.ucl.ac.uk/spm/). For each subject, the first six volumes were discarded. To correct for subject motion, the functional images were realigned to the first functional image (Ashburner and Friston, 2003), normalized to a template based on 152 brains from the Montreal Neurological Institute (MNI), and then smoothed by convolution with an 8 mm full-width at half-maximum (FWHM) Gaussian kernel.
The data were analyzed using the general linear model framework (Kiebel and Holmes, 2003) implemented in SPM5. On the first level, for each individual subject, we fitted a linear regression model to the data by modeling the event sequence of each of the eight conditions of the 2 × 2 × 2 design and its first-order temporal derivative. Trials in which participants omitted a response were excluded from further analysis. Moreover, for each of the eight conditions tested, additional regressors modeling both the parametric modulation by the factor angle and its time derivative were included. All the regressors were convolved with a canonical hemodynamic response function. To account for movement-related variance, we included the six differential realignment parameters as regressors of no interest. Low-frequency signal drifts were filtered using a cutoff period of 128 s.
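The construction of a condition regressor and its parametric modulation by angle can be sketched as follows (illustrative only, not SPM code; the onsets, per-trial angles, run length, and the double-gamma HRF parameters are hypothetical stand-ins for SPM's internals):

```python
import numpy as np
from scipy.stats import gamma

TR = 2.2        # s, repetition time, as in the acquisition above
n_scans = 200   # hypothetical run length in volumes
dt = 0.1        # s, fine time grid on which regressors are built

# Canonical double-gamma HRF: a positive response peaking around 5-6 s
# minus a scaled late undershoot.
t = np.arange(0, 32, dt)
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6

onsets = np.array([10.0, 40.0, 75.0, 120.0])   # s, hypothetical trial onsets
angles = np.array([30.0, 90.0, 150.0, 60.0])   # deg, hypothetical per-trial angles
pmod = angles - angles.mean()                  # mean-centre the modulator

# Stick functions: unit impulses for the condition regressor, and impulses
# weighted by the centred angle for the parametric-modulation regressor.
n_fine = round(n_scans * TR / dt)
main = np.zeros(n_fine)
para = np.zeros(n_fine)
idx = np.round(onsets / dt).astype(int)
main[idx] = 1.0
para[idx] = pmod

# Convolve with the HRF and resample at the scan times.
step = round(TR / dt)
main_reg = np.convolve(main, hrf)[:n_fine][::step]
para_reg = np.convolve(para, hrf)[:n_fine][::step]
```

Because the modulator is mean-centred, the parametric regressor captures angle-related variance over and above the mean condition response, which is what the second-level analysis of the linear effect of angle tests.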
For each subject, simple main effects for each regressor were computed by applying appropriate baseline contrasts. The first-level individual images describing both the eight experimental conditions and the eight parametric modulations were then fed to a second-level one-way ANOVA with “conditions” as factor with eight levels, using a random effects analysis (Penny and Holmes, 2003). Specific effects were then tested by applying appropriate contrasts to the parameter estimates for each condition. Areas of activation were identified as significant if they passed a threshold of p < 0.05, corrected for multiple comparisons at the cluster-level, with an underlying voxel-level height threshold of t = 3.16 (corresponding to p < 0.001, uncorrected).
The localization of the functional activations with respect to cytoarchitectonic areas was analyzed based on probabilistic cytoarchitectonic maps derived from the analysis of cortical areas in a sample of 10 human postmortem brains, which were subsequently normalized to the MNI reference space. The significant results of the random effects analysis were compared with the cytoarchitectonic maps using the SPM Anatomy toolbox (Eickhoff et al., 2005).
For each activated region, the percentage signal changes were calculated over the averaged parameter estimates using the MarsBar toolbox (Brett et al., 2002). t tests were performed over the extracted percentage signal change values to further investigate the functional properties of the areas of activation. Statistical analysis was performed with SPSS 14.0 software.
Over all conditions, participants took on average 1134 ms to give a correct response (F(1,16) = 794.34, p < 0.001, ηp2 = 0.98). The main effect of strategy was significant (F(1,16) = 46.06, p < 0.001, ηp2 = 0.74), with the use of the motor strategy (1237 ± 42 ms, SEM) (Fig. 2C,D) being significantly more time consuming than the use of the visual strategy (1031 ± 43 ms) (Fig. 2A,B). The main effect of orientation was significant (F(1,16) = 23.53, p < 0.001, ηp2 = 0.59), with the stimuli rotated laterally (1162 ± 41 ms) eliciting higher RTs than the stimuli rotated medially (1107 ± 40 ms). The stimulus × strategy interaction was significant (F(1,16) = 11.48, p < 0.01, ηp2 = 0.42), with participants being ∼325 ms slower when using the motor (with respect to the visual) strategy for assessing handles, but only ∼88 ms slower when using the motor (with respect to the visual) strategy for assessing arms. The stimulus × orientation interaction was significant (F(1,16) = 6.48, p < 0.05, ηp2 = 0.22), with the arms eliciting 69 ms slower responses when rotated laterally (with respect to arms rotated medially), and the handles eliciting equal responses regardless of whether they were rotated laterally or medially (lateral vs medial ∼2 ms). Finally, the strategy × orientation interaction was significant (F(1,16) = 5.96, p < 0.05, ηp2 = 0.27), with stimuli rotated laterally eliciting 80 ms longer RTs than stimuli rotated medially during the use of the motor strategy (Fig. 2C,D), but only 30 ms longer RTs during the use of the visual strategy (Fig. 2A,B). None of the remaining effects in the ANOVA were significant.
The analysis of the slopes revealed, over all conditions, a positive linear relation between RTs and angle (grand mean: ∼1.1 ms/deg, F(1,16) = 35.48, p < 0.001, ηp2 = 0.69). The main effect of stimulus was significant (F(1,16) = 9.36, p < 0.01, ηp2 = 0.37), with RTs increasing on average 1.5 ms/deg (±0.1) for the handles and 0.66 ms/deg (±0.3) for the arms. The main effect of strategy was significant (F(1,16) = 12.40, p < 0.01, ηp2 = 0.44), with the use of the visual strategy eliciting on average 1.7 ms/deg (±0.2) and the use of the motor strategy eliciting 0.5 ms/deg (±0.3). Finally, the stimulus × strategy interaction was significant (F(1,16) = 5.82, p < 0.05, ηp2 = 0.27), reflecting a positive effect of the angle of the rotated arms on the RTs (1.8 ± 0.4 ms/deg) during the use of the visual strategy, but not of the motor strategy (−0.4 ± 0.5 ms/deg), and a positive effect of the angle of the rotated handles on the RTs regardless of the strategy used (motor: 1.4 ± 0.3 ms/deg; visual: 1.7 ± 0.2 ms/deg). None of the remaining effects in the ANOVA were found to be significant.
The analysis of accuracy showed that, over all conditions, participants were on average 95.8% correct in their assessments. We did not find any significant effect of stimulus, strategy, orientation, or any interaction thereof.
With regard to the main effect of stimulus, arms as opposed to handles [i.e., (AVL + AVM + AML + AMM) − (HVL + HVM + HML + HMM)] led to differential activation of the posterior part of the middle temporal gyrus bilaterally (Fig. 3A), over and around the extrastriate body area (EBA), which has previously been described as responding to human bodies and parts thereof in a category-selective manner (Downing et al., 2001). In contrast, the visual processing of handles, relative to arms [i.e., (HVL + HVM + HML + HMM) − (AVL + AVM + AML + AMM)], revealed an activation in the left primary visual cortex. Neither the main effect of strategy nor the main effect of orientation led to any suprathreshold activation.
With regard to the stimulus × strategy interaction, the assessment of the arm's (but not the handle's) laterality using motor (but not visual) strategy (i.e., [(AML + AMM) − (HML + HMM)] − [(AVL + AVM) − (HVL + HVM)]) led to a significant increase in the BOLD response in the inferior portion of the left parietal operculum corresponding to cytoarchitectonic area OP 1 (Eickhoff et al., 2006) (Fig. 4). This interaction was due to an increase of neural activity while assessing the arm's laterality using the motor strategy, which was significantly higher than that associated with each of the other three conditions [(AML + AMM) − (AVL + AVM): t(16) = 3.35, p < 0.01; (AML + AMM) − (HML + HMM): t(16) = 3.50, p < 0.01; (AML + AMM) − (HVL + HVM): t(16) = 2.55, p < 0.05]. No suprathreshold activation was found associated with the opposite contrast (i.e., [(AVL + AVM) − (AML + AMM)] − [(HVL + HVM) − (HML + HMM)]). The analysis of the stimulus × orientation interaction did not lead to any suprathreshold activation. The analysis of the strategy × orientation interaction led to a suprathreshold activity in the vermis extending to the left anterior cerebellar lobe (Fig. 3B). Finally, the analysis of the stimulus × strategy × orientation three-way interaction led to suprathreshold activation in the right middle temporal gyrus (supplemental material, available at www.jneurosci.org).
Region of interest analysis
Previous neuroimaging studies reported regions in the left primary motor cortex and in both PM and supplementary motor area bilaterally (but predominantly on the left hemisphere) associated with the use of motor imagery (Parsons et al., 1995; Stephan et al., 1995; Roth et al., 1996; Parsons and Fox, 1998; Schubotz and von Cramon, 2004; de Lange et al., 2006; Wolfensteller et al., 2007). Thus, a region of interest (ROI) analysis was performed in which we tested significant increases of neural activity in the left precentral gyrus and supplementary motor area [AAL atlas (Tzourio-Mazoyer et al., 2002)], for both the main effect of strategy and the strategy × orientation interaction. We found significant activation within the left PM (MNI coordinates: x = −56, y = 4, z = 32; t = 4.00, p < 0.05, small volume corrected) associated with the strategy × orientation interaction (Fig. 3B). This region was significantly more active when subjects assessed the laterality of laterally (with respect to medially) rotated stimuli while using the motor strategy (t(16) = 3.79, p < 0.01), but not the visual strategy (t(16) = 0.75, n.s.). This left premotor activation was identified by testing the same interaction contrast, which, at a voxelwise level, identified the cerebellar activation displayed in Figure 3B. However, in the left premotor cortex the interaction term was driven, as predicted, by an enhanced activity during the assessment of laterally oriented stimuli while using the motor strategy relative to the other three conditions. On the other hand, in the cerebellum, the interaction term was driven by an enhanced activity during the assessment of medially oriented stimuli while using the visual strategy. Thus, in contrast to the premotor activation, the cerebellar activation cannot be interpreted as due to the use of motor imagery.
Parametric modulation of angle
Differential linear effects of angle were found analyzing the main effects of stimulus and strategy (supplemental material, available at www.jneurosci.org). As to the stimulus × strategy interaction, the assessment of the arms' (but not the handles') laterality using the motor (but not the visual) strategy (i.e., [(AML + AMM) − (HML + HMM)] − [(AVL + AVM) − (HVL + HVM)]) did not reveal any suprathreshold activation. The opposite contrast (i.e., [(AVL + AVM) − (HVL + HVM)] − [(AML + AMM) − (HML + HMM)]) led to a differential linear effect of angle on the neural response in (1) the supplementary motor area extending to the right middle frontal gyrus, (2) the superior parietal cortex bilaterally, extending to the intraparietal sulcus, (3) the superior portion of the left precentral gyrus, (4) the right supramarginal gyrus, and (5) the left middle occipital gyrus. The neural activity in all these regions grew linearly with the angle at which arms (but not handles) were rotated while using the visual strategy. Moreover, when participants were using the motor strategy, the neural activity of these regions grew linearly with the angle at which the handles were rotated and decreased linearly with the angle at which the arms were rotated (Fig. 5, red activations). Commonalities between this neural behavior and our reaction time data suggest that the neural activity of these regions was modulated by the amount of imagery used. Crucially, however, this was not the case in the left superior parietal cortex (extending from z = 46 to z = 70, thus including the left caudal IPS as well), which was also the only region in the whole brain showing a suprathreshold activity when testing against zero the parametric modulation of angle only in those trials in which participants were assessing the arm's laterality using a visual strategy [i.e., AVL + AVM (Fig. 5, green activations)].
The analysis over the extracted parameters testing the linear relation between BOLD response and angle showed that the activity of this region increased monotonically with the angle at which the arms were rotated while using the visual strategy (t(16) = 4.20, p < 0.001). No significant linear relation was found during the assessment of the arms' laterality while using the motor strategy (t(16) = −1.44, n.s.) or during the assessment of the handles' laterality, regardless of the strategy used (visual: t(16) = 1.29, n.s.; motor: t(16) = 1.93, n.s.). This was not the case in the other regions, in which the interaction was also driven by the motor strategy and, in particular, by a significant negative linear relation between angle and BOLD activity during the assessment of arms, together with a positive linear relation between angle and BOLD activity during the assessment of handles (for detailed statistics for each of the eight regions significantly associated with the interaction term, see supplemental material, available at www.jneurosci.org). Neither the stimulus × orientation nor the stimulus × strategy × orientation three-way interaction led to any suprathreshold activation.
In the present study we disentangled the neural mechanisms underlying two body maps: the body schema, an egocentric model coding the location of one's own body parts in space and time, and the body structural description, an allocentric model coding the position of parts of a standard body. We asked participants to assess the laterality of an arm or a handle by using either visual or motor imagery. Consistent with previous studies, the use of motor imagery over laterally (relative to medially) oriented stimuli (either arms or handles) is both more time consuming (Parsons, 1987a,b) and associated with stronger PM activity (de Lange et al., 2006), thus confirming that imagined movements respect the biomechanical constraints that bind real movements, even though mental space is, at least in principle, unconstrained (Tomasino and Rumiati, 2004; Vargas et al., 2004; Stevens, 2005) [for similar effects in the domain of action observation, see Urgesi et al. (2006) and Candidi et al. (2008)]. No orientation effects were found while using visual imagery, consistent with previous accounts suggesting that the task engages object-based transformations of visual mental images (Shepard and Metzler, 1971).
More importantly, the reference frame used to assess the arm's laterality changed with the kind of imagery used. While using visual imagery, RTs increased with the angular discrepancy between the arm's orientation and the orientation that the arm would have if wedged on the body (ideally described by 0°), thus confirming that this assessment was based on an allocentric body model. This relationship disappeared while using motor imagery: the angles that minimized the angular discrepancy between the orientation of the displayed arm and the orientation of one's own arm (ideally described by 180°) were the ones associated with the shortest RTs, thus confirming that this assessment was based on an egocentric body model. This was not the case for the handles, whose orientation, regardless of the imagery used, was assessed according to an allocentric pot model, as shown by RTs increasing with the angular discrepancy between the handle's orientation and the orientation that the handle would have on the pot surface (described by 0°).
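The two reference frames can be illustrated with a toy computation (hypothetical helper; 0° stands for the orientation an arm would have if wedged on the body, 180° for the orientation of the observer's own arm):

```python
def angular_discrepancy(stim_angle: float, model: str) -> float:
    """Distance (deg) between the stimulus orientation and the model's
    reference orientation: 0 deg for the allocentric (visual) model,
    180 deg for the egocentric (motor) model."""
    reference = 0.0 if model == "allocentric" else 180.0
    return abs(stim_angle - reference)

# Under the allocentric model the 30 deg stimuli lie closest to the
# reference; under the egocentric model the 150 deg stimuli do.
for a in (30, 60, 90, 120, 150):
    print(a, angular_discrepancy(a, "allocentric"),
          angular_discrepancy(a, "egocentric"))
```

On this reading, RTs that grow with the allocentric discrepancy diagnose an allocentric body (or pot) model, whereas RTs that shrink as the stimulus approaches 180° diagnose an egocentric body model.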
The body schema and the secondary somatosensory cortex
The analysis of the BOLD responses significantly associated with the stimulus × strategy interaction isolated those brain regions involved in an egocentric (rather than allocentric) body model, controlling for strategy differences. This was the case in the left parietal operculum. Previous data obtained from both macaques (Burton et al., 1995; Krubitzer et al., 1995) and humans (Disbrow et al., 2000; Eickhoff et al., 2007) suggest that the parietal operculum is organized into distinct areas, among which the secondary somatosensory cortex (SII) exhibits a somatotopic body map. In particular, the hand area of left SII has recently been identified at the coordinates x = −56, y = −21, z = 21 [by applying tactile stimuli over the hand, as opposed to other body parts (Eickhoff et al., 2007)]. Similar coordinates (x = −51, y = −21, z = 19) were obtained by a metaanalysis reporting SII activations consequent to somatosensory stimulation of the hand, hand motion, and manipulation of complex objects (Eickhoff et al., 2006). Each of these two sets of coordinates is ∼5 mm distant from our opercular activation (x = −58, y = −24, z = 18), thus implicating the hand area of SII in our study as well.
SII has often been associated with high-level tactile processing. For example, Fitzgerald et al. (2006a,b) reported SII neurons responding to complex tactile stimuli, such as bars of a given orientation. Crucially, the neurons' responses were not limited to cases in which the bar was applied to a contiguous skin surface (e.g., the hand's palm), but extended to cases in which it spanned skin portions belonging to different body parts (e.g., different digits), thus showing how SII integrates inputs arising from different touched points. Haggard (2006) suggested that, to do so, SII needs to process information about the intrinsic position of one's own body parts relative to one another. Although behavioral studies describe tactile processing as bound by implicit information about one's own body (de Vignemont et al., 2005, 2009), no study thus far has been able to show that such information is coded by SII.
However, SII has been implicated in processes related to the BS. For instance, SII is a polysensory region: it receives cutaneous input from peripheral receptors and information from the primary somatosensory cortex, extrastriate visual areas, and polymodal regions within IPS and PM (Cipolloni and Pandya, 1999). Bremmer et al. (2001) found SII responding to moving stimuli regardless of whether they were perceived through vision, audition, or touch, whereas Keysers et al. (2004) found the left SII active during the processing of touches regardless of whether these were seen on others or felt on one's own skin. Moreover, SII has been shown to respond to proprioceptive modulations of tactile stimuli both in monkeys (Fitzgerald et al., 2004) and in humans, by using the rubber-hand illusion (Botvinick and Cohen, 1998): in this task, participants see a dummy hand while their real hand is hidden from sight; when the experimenters brush both real and dummy fingers simultaneously, participants may report feeling the touch where the fake hand is located. Tsakiris et al. (2007) recently associated the right insula with the sense of ownership toward the dummy “seen” hand, and the left primary and secondary somatosensory cortex with the sense of ownership toward the real “felt” hand (described by those trials in which participants, although stimulated, did not feel the illusion). Our study extends the results described above by implicating the hand area of SII in coding the intrinsic position of one's own arm within the whole body. Moreover, such information was elicited exclusively by visual stimuli, i.e., in the absence of tactile stimulation, thus providing the first reliable association between SII and a supramodal representation of one's own body.
Allocentric body model and the left posterior IPS
We found that the neural activity of the left superior parietal cortex, extending to the posterior IPS, was modulated by the factor angle when the arm's position was assessed using visual imagery. In contrast, no effect of angle was found in any of the other conditions, showing that this region is not involved in the mental transformation of object parts according to an allocentric object model, but specifically in that of body parts according to an allocentric body model. This activation is reminiscent of the one identified by Corradi-Dell'Acqua et al. (2008), who implicated the left posterior IPS in processing visual stimuli depicting body parts according to a body-centered reference frame. This was not the case for the posterior IPS in the right hemisphere, which has instead been implicated in object-centered processing of both body and nonbody parts (Harris and Miniussi, 2003; Zacks et al., 2003; Urgesi et al., 2007). We extend the results of this previous study by showing that the body-centered effects with which the left posterior IPS is associated reflect the involvement of an allocentric (but not egocentric) visual model of the body.
The BSD is the cognitive process supposedly damaged in autotopagnosia. Patients affected by this neurological syndrome usually exhibit extensive lesions involving the left parietal lobe and, in particular, the posterior parietal cortex (Ogden, 1985; Semenza, 1988; Denes et al., 2000; Schwoebel et al., 2001; Felician et al., 2003). A recent study, moreover, found activation of the left posterior IPS when healthy participants performed a pointing task similar to the one used to diagnose autotopagnosia (Felician et al., 2004). Our results thus extend previous findings associating the left IPS with the BSD, thereby explaining why damage to this region may lead to autotopagnosia.
The notions of BS and BSD overlap to some degree with the notions of self and other. Whereas the BS is a representation of one's own body and is therefore related to the concept of self, the BSD supposedly draws a large part of its input from visual processing of others' bodies, thereby relating it to the other. The neural underpinnings of self/other processing have been studied extensively using different kinds of tasks, which converge in implicating the right inferior parietal lobe, the temporoparietal junction, and the medial prefrontal cortex (Decety and Sommerville, 2003; Vogeley and Fink, 2003; Blanke and Arzy, 2005), thus rendering less likely the alternative account that self/other processing underlies our activations. To the best of our knowledge, no study has explicitly disentangled the neural underpinnings of BSD and BS from those of self and other. However, David et al. (2006) showed participants self- and other-generated actions seen from either an egocentric or an allocentric perspective. They found that the right inferior parietal lobe, precuneus, and middle frontal cortex were modulated by the agent (self–other), whereas bilateral regions such as IPL, PM, and insula were modulated by the perspective from which the action was observed. Moreover, bilateral occipital–temporal regions and the left superior frontal cortex were modulated by both factors, suggesting that these processes, though independent, exhibit a certain degree of overlap in their neural representations.
G.R.F. was supported by the Deutsche Forschungsgemeinschaft (KFO 112, TP1). We are grateful to our colleagues at the Institute of Neuroscience and Medicine, Research Center Jülich. In particular, we thank Simon Eickhoff for his support in all methodological matters and Roland Sparing for helping us recruit the participants. Finally, we thank two anonymous reviewers for suggesting a number of significant improvements to earlier versions of this manuscript.
- Correspondence should be addressed to Dr. Corrado Corradi-Dell'Acqua, Institute of Neuroscience and Medicine (INM-3), Cognitive Neurology Section, Research Center Jülich, Leo-Brandt-Strasse 5, D-52425 Jülich, Germany.