Abstract
Planning object-directed hand actions requires successful integration of the movement goal with the acting limb. Exactly where and how this sensorimotor integration occurs in the brain has been studied extensively with neurophysiological recordings in nonhuman primates, yet to date, because of limitations of non-invasive methodologies, examining the same types of planning-related signals in humans has been challenging. Here we show, using a multivoxel pattern analysis of functional MRI (fMRI) data, that the preparatory activity patterns in several frontoparietal brain regions can be used to predict both the limb to be used and the hand action to be performed in an upcoming movement. Participants performed an event-related delayed movement task whereby they planned and executed grasp or reach actions with either their left or right hand toward a single target object. We found that, although the majority of frontoparietal areas represented hand actions (grasping vs reaching) for the contralateral limb, several areas additionally coded hand actions for the ipsilateral limb. Notable among these were subregions within the posterior parietal cortex (PPC), dorsal premotor cortex (PMd), ventral premotor cortex, dorsolateral prefrontal cortex, presupplementary motor area, and motor cortex, a region more traditionally implicated in contralateral movement generation. Additional analyses suggest that hand actions are represented independently of the intended limb in PPC and PMd. In addition to providing a unique mapping of limb-specific and action-dependent intention-related signals across the human cortical motor system, these findings uncover a much stronger representation of the ipsilateral limb than expected from previous fMRI findings.
Introduction
An overarching feature of primate cortex, at the level of both sensory inputs and motor outputs, is its strong contralateral organization. For instance, in primary visual cortex, the main cortical recipient of incoming retinal information, the right hemisphere supports representations of the left visual field and vice versa. Similarly, in primary motor cortex (M1), the main source of descending outputs to spinal structures (Porter and Lemon, 1993), the right hemisphere primarily supports movements of the left side of the body and vice versa. However, elsewhere throughout cortex, because of the integration of information across the hemispheres via the corpus callosum (or other subcortical pathways; Colby et al., 2005), the contralateral topography becomes less pronounced. In the case of vision, this interhemispheric transfer is thought to facilitate unified spatial perception. In the case of action, it is thought to allow sophisticated bimanual coordination of the limbs. Movement preparation is a particular case in which crosstalk between the hemispheres is required, particularly when the target object crosses both visual hemifields and/or permits interaction with either hand. Correspondingly, neurons in monkey frontoparietal cortex involved in planning eye and hand movements, although primarily tuned to the contralateral field or limb, respectively, also show ipsilateral or bilateral responses (Barash et al., 1991; Ben Hamed et al., 2001; Cisek et al., 2003; Chang et al., 2008).
A second prominent feature of primate cortex is its hierarchical organization. Central to many of our higher-level cognitive abilities is the capacity to link abstract action goals with the underlying behaviors needed to achieve them. Monkey neurophysiological evidence suggests that this capacity relies on actions being represented with increasing levels of abstraction throughout sensorimotor circuits. For instance, the planning-related activity in some areas appears linked to the representation of intentions and goals (e.g., grasping an object), whereas in other areas, the activity seems to correspond with lower-level features of the movement (e.g., muscles, forces, and direction; Rizzolatti et al., 1988; Scott and Kalaska, 1997; Kakei et al., 1999, 2001; Hoshi and Tanji, 2006).
Fundamental questions in sensorimotor neuroscience concern where and how in the human brain action- and limb-related signals become combined. Given the increase in hemispheric specialization and corresponding asymmetries in the lateralization of certain cognitive/perceptual functions (Liepmann, 1907; Corballis, 1991; Gazzaniga, 2000), the integration of action- and limb-related signals may differ in humans compared with monkeys. However, the capacity to examine this issue first requires the ability to map and characterize planning-related brain signals in humans with a level of detail similar to that shown in monkeys.
Here we found that a multivariate analysis of the functional MRI (fMRI) activity patterns that precede movement onset could be used to decode which limb would be used (the contralateral vs ipsilateral arm) to perform one of two upcoming actions (grasping vs reaching). Using this approach, we show where and to what extent in human frontoparietal cortex the integration of limb-specific and action-dependent signals occurs during movement planning. Notably, we uncover a much stronger representation of the ipsilateral limb than expected from previous fMRI findings.
Materials and Methods
Our task required participants to perform either a grasp or reach movement with their left or right hand toward a centrally located target object on each trial; participants were first cued to the action to be performed, and then, after a delay period, they executed the action (Fig. 1B,C). On all trials, the target object was presented throughout the sequence, such that the visual presentation remained constant and only the instructions (and prepared movements) differed. The delayed timing of the paradigm allowed us to isolate the sustained intention-related neural activity that evolves before movement (plan phase) from the transient visual (preview phase) and movement execution responses (execute phase; Fig. 1D) that directly accompany presentation of the target object and initiation of the movement, respectively (Gallivan et al., 2011a,b).
Note that here we define the motor action as a function of “how” the object will be interacted with during the task (grasping vs reaching). This definition differs from that frequently used in both the human and monkey sensorimotor control literature, which commonly defines a specific motor action or goal with respect to the spatial location of a target (Basso and Wurtz, 1997; Snyder et al., 1997; Beurze et al., 2007, 2009; Stark and Zohary, 2008; Gallivan et al., 2011a). Importantly, with this latter definition, the ubiquitous finding that many frontoparietal neurons respond strongly to spatial information with some level of invariance to the acting effector (e.g., eyes vs limb, left vs right hand) (Hoshi and Tanji, 2000; Calton et al., 2002; Cisek et al., 2003; Dickinson et al., 2003; Chang et al., 2008; but see Bisley and Goldberg, 2010 for a review) necessitates that caution be applied in interpreting any spatially selective response at the single-neuron level as a planning-related, motor-specific signal (Snyder et al., 1997; Cui and Andersen, 2007). However, it is worth noting that, at the level of neural populations, effector-specific representations appear to emerge (Quian Quiroga et al., 2006). The current study avoided any potential ambiguity by having the motor action (performed on a single centrally located target object) tightly linked to highly motor-specific aspects of the planned hand action, such as whether or not it required finger preshaping.
We investigated the planning-related fMRI spatial activity patterns [using multivoxel pattern analysis (MVPA)] in specific regions of interest (ROIs) to determine whether we could decode (1) upcoming left- versus right-hand movements and (2) upcoming grasping versus reaching movements. More specifically, in the former case, we examined the dependence of limb decoding based on the intended hand action: limb specificity. This included two decoding tests: (1) grasp-left (GraspL) versus grasp-right (GraspR); and (2) reach-left (ReachL) versus reach-right (ReachR). In the latter case, we examined the dependence of hand action decoding based on the limb to be used: action specificity. This included two separate decoding tests: (1) GraspL versus ReachL; and (2) GraspR versus ReachR. Notably, this second set of decoding tests allowed us to examine whether the preparatory activity in a specific ROI represents planned ipsilateral movements in addition to contralateral movements.
We further examined whether the patterns of brain activity for one set of movements (e.g., grasping vs reaching with the left hand) could be used to predict the patterns of brain activity for a different set of movements (e.g., grasping vs reaching with the right hand). To do this, we trained pattern classifiers using one set of trials and tested the accuracy of the classifiers using a different set of trials (cross-decoding) (Dinstein et al., 2008; Formisano et al., 2008; Harrison and Tong, 2009; Gallivan et al., 2011a). If cross-decoding is successful in a particular ROI, it suggests that the representation of information is to some degree shared between the separate conditions being tested. For instance, to test for “limb-specific, action-independent” representations, we trained the pattern classifier to discriminate the limb being used for one set of hand actions (e.g., GraspL vs GraspR) and then tested it with trials for the other set of hand actions (e.g., ReachL vs ReachR). If successful, this particular cross-decoding pairing would indicate that some of the signals in an ROI represent the intended limb (left vs right hand) with some invariance to the hand action (grasping vs reaching). Similarly, we tested for “action-specific, limb-independent” representations by training the pattern classifier to discriminate hand actions for one limb (e.g., GraspL vs ReachL) and then testing it with trials from the other limb (e.g., GraspR vs ReachR). According to the same logic, this second cross-decoding test can assess whether the signals in an ROI represent hand actions (grasping vs reaching) that generalize across the intended limb (left vs right hand).
We first localized a common set of action-related ROIs within each individual subject for use in MVPA. These a priori ROIs were defined by performing a whole-brain voxelwise search in each subject to find areas where the activity during movement execution (execute phase) was higher than the activity elicited during simple visual object presentation (preview phase; when participants were unaware of which action they would eventually perform). This [execute > preview] contrast, in addition to revealing activity throughout a well-documented frontoparietal network of areas (Fig. 2, Table 1), allowed us to isolate brain areas involved in movement generation and then examine with MVPA the preparatory activity (plan phase) that forms between the two trial phases (note that the [execute > preview] contrast also allowed us to even localize movement-related areas containing baseline activity levels during planning and yet still examine whether predictive movement information is represented in the corresponding spatial voxel patterns). Within this network, we focused MVPA on 11 commonly described neuroanatomical ROIs in both the left and right hemispheres (22 ROIs total), each examined in our previous studies (Gallivan et al., 2011a,b) and each previously implicated in visuomotor processing in both the human and monkey: (1) superior parieto-occipital cortex (SPOC), a general region thought to be involved in reach preparation and execution (Galletti et al., 1997; Prado et al., 2005; Cavina-Pratesi et al., 2010); (2) posterior intraparietal sulcus (pIPS), a general area involved in attention and hand movement-related processes (Calton et al., 2002; Beurze et al., 2009; Chang and Snyder, 2010; Szczepanski et al., 2010); (3) middle IPS (midIPS), an area involved in limb movements (Culham et al., 2006; Gallivan et al., 2011a,b); (4) anterior IPS (aIPS), a region involved in hand grasping (Culham et al., 2003; Frey et al., 2005; Tunik et al., 2005; Davare et al., 2007a); (5) an area posterior to anterior IPS (post. aIPS), an area involved in object-related processing and hand preshaping (Valyear et al., 2007; Gallivan et al., 2011b); (6) motor cortex, a region predominantly involved in contralateral limb movement (Tanji et al., 1988; Kim et al., 1993; Cisek et al., 2003) and the primary source of descending projections to spinal cord (Porter and Lemon, 1993); (7) dorsal premotor cortex (PMd), a region primarily involved in reach-related processes (Caminiti et al., 1990a,b; Pesaran et al., 2006; Cavina-Pratesi et al., 2010), although also implicated in grasping (Raos et al., 2004; Cavina-Pratesi et al., 2010); (8) ventral premotor (PMv) cortex, an ROI most often implicated in hand-related actions (Rizzolatti et al., 1988; Graziano et al., 1994; Cavina-Pratesi et al., 2010; Davare et al., 2010); (9) supplementary motor area (SMA) and (10) preSMA, two medial frontal areas involved in internally generated actions and sequencing together limb movements (Kermadi et al., 1997; Hoshi and Tanji, 2004b; for review, see Nachev et al., 2008); and (11) dorsolateral prefrontal cortex (DLPFC), an ROI most often implicated in several aspects of higher-level cognitive processing (for review, see Miller and Cohen, 2001), including maintaining items of interest in working memory (Funahashi et al., 1989; Goldman-Rakic, 1996). 
One additional area, somatosensory cortex (SS cortex), was selected as a sensory control region, not expected to accurately decode movements until stimulation of the mechanoreceptors of the hand at movement onset (i.e., at execute phase; Johansson and Flanagan, 2009).
The critical question of interest here was whether we would be able to predict the limb to be used and hand action to be performed (as a function of limb) in an upcoming movement from the preparatory activity patterns that form before movement onset in each of the prespecified areas noted above. In doing so, our goal was to provide a unique characterization of the integration of limb-specific and action-dependent planning-related signals across human frontoparietal cortex. Although previous fMRI work in humans has reported bilateral and even ipsilateral movement-related representations (Beurze et al., 2007; Stark and Zohary, 2008; for review, see Filimon, 2010), very little work has demonstrated, similar to that with neural recording methods in nonhuman primates (Cisek et al., 2003; Chang et al., 2008), whether these neural representations are behaviorally relevant (i.e., whether they contain information pertaining to the type of hand movement to be performed). In contrast to the majority of studies that have used pattern classification methods to examine the perceptual representations elicited by the presentation of visual or auditory stimuli (Haxby et al., 2001; Haynes and Rees, 2005; Kamitani and Tong, 2005; Formisano et al., 2008), here we use MVPA to predict upcoming behaviors, future motor events that, at the time of decoding, have yet to occur.
Participants
Eleven right-handed volunteers participated in this study (five females; mean age, 25.7 years) and were recruited from the University of Western Ontario (London, Ontario, Canada). Informed consent was obtained in accordance with procedures approved by the Health Sciences Research Ethics Board of the university.
Setup and apparatus
Each subject's workspace consisted of a black platform placed over the waist and tilted away from the horizontal at an angle (∼10–15°) to maximize comfort and target visibility. To facilitate direct viewing of the workspace, we also tilted the head coil (∼20°) and used foam cushions to give an approximate overall head tilt of ∼30° (Fig. 1A). Participants performed individual grasp or reach movements with the left or right hand toward a centrally located target object when required (Fig. 1C). To minimize limb-related artifacts, participants had the right and left upper arms braced, limiting movement of each limb to the elbow and creating an arc of reachability for each hand. The target object was made of opaque plastic (length × depth × height, 11.5 × 5 × 7.5 cm) and was secured to the workspace (via black tape) at a midpoint location common to the arcs of reachability for each hand (Fig. 1B). The exact placement of the target object on the platform was adjusted to match each participant's arm length such that all required movements were comfortable. During the experiment, the target object was illuminated from the front by a bright white light emitting diode (LED) attached to a flexible plastic stalk (Loc-Line; Lockwood Products), located over the participant's left shoulder. Each trial was preceded by a period in which participants were in complete darkness. During participant setup, the illuminator LED was positioned so as to provide good illumination of the target object and the visual workspace. Experimental timing and lighting were controlled with in-house software created with MATLAB (MathWorks). For an overview of the experimental setup and task, see Figure 1. To control for eye movements during scanning, a small green fixation LED attached to a flexible plastic stalk was placed above and behind the target object, and participants were required to always foveate the fixation LED during experimental testing. Throughout the experiment, subjects' eye and arm movements were recorded using two MR-compatible infrared-sensitive cameras (bore cameras; MRC Systems), each attached to a flexible plastic stalk. One bore camera was positioned over the subject's left shoulder and directed toward the subject's hand to verify that subjects performed the movement task correctly. The second bore camera was positioned directly beside the subject's right eye and directed toward the pupil to verify that eye position during fixation remained stable (note that both of these bore cameras are not shown in Fig. 1A). The videos captured during the experiment were then analyzed offline to exclude error trials from fMRI analysis. A more rigorous tracking of the eyes was not performed in the MRI scanner because our eye-tracking software does not work while the head is tilted because of a partial occlusion from the eyelids.
For each trial, the participants were required to perform one of four actions on the target object, after a delay period: (1) reach toward and precision grasp the object (between their thumb and index finger) with the left hand, without lifting (GraspL, “grasp left” auditory command given to subject); (2) reach toward and precision grasp the object with the right hand, without lifting (GraspR, “grasp right” auditory command given to subject); (3) reach toward and manually touch the top of the object (without hand preshaping) with the left hand (ReachL, “reach left” auditory command given to subject); and (4) reach toward and manually touch the top of the object with the right hand (ReachR, “reach right” auditory command given to subject). Thus, whereas on grasp trials hand preshaping for precision grasping was required, on reach trials subjects were simply required to transport their hand from the starting position, without any preshaping, and touch the top of the object (with their middle phalanges and proximal and distal interphalangeal joints). Note that, although the grasp action also required a distinct reach component, exactly how the object was interacted with during grasp versus reach actions remained significantly different. Participants were instructed to keep the timing of hand movements for grasping and reaching trials and the left and right hand as similar as possible. For grasping movements, to reduce trial-to-trial variability in grasp points, a small black dot sticker was placed at the center of the target object (on the surface that faced the subject), and, when cued, participants were required to place the thumb of the grasping hand at this prespecified dot location. Other than the execution of these hand actions, the left and right hands throughout all other phases of the trial [preview phase, plan phase, and intertrial interval (ITI); for trial phases, see Fig. 1D] were to remain still and in “home” positions on the left and right surfaces of the platform. For each participant, these home/starting positions were marked with a small elevated plastic nib secured to the platform, and participants were required to always return to these same positions after execution of the instructed movement (for these two locations in a representative subject, see Fig. 1B). For each trial, the target object never changed its position, thus eliminating any retinal differences across the experiment. Importantly, from trial-to-trial and before movement, it was only the subject's movement intentions that changed.
Experiment design and timing
To extract the sustained sensorimotor planning response from the transient visual and motor execution responses, we used a slow event-related planning paradigm with 34 s trials, each consisting of three distinct phases: preview, plan, and execute (Fig. 1D). We adapted this paradigm from our previous work (Gallivan et al., 2011a,b), as well as from studies with eye and arm movements that have successfully isolated delay period premovement activity (Curtis et al., 2004; Beurze et al., 2007, 2009; Chapman et al., 2011; Pertzov et al., 2011).
In our object-directed movement task, each trial began with the preview phase, in which the subject's workspace was illuminated, revealing the centrally located target object. After 6 s of the preview phase, participants were given an auditory cue (0.5 s duration), “grasp left,” “grasp right,” “reach left,” or “reach right,” informing them of the upcoming movement required; this cue marked the onset of the plan phase. Although there were no visual differences between the preview and plan phases of the trial (i.e., the target object was always visually present), only in the plan phase did participants have all the information necessary (i.e., conjunction of limb and hand action) to prepare the upcoming action. After 12 s of the plan phase, a 0.5 s auditory beep cued participants to immediately execute the planned action (for a duration of ∼2 s), initiating the execute phase of the trial. Two seconds after the beginning of this “go” cue, the illuminator was turned off, providing the cue for participants to return the hand to its respective starting position. After the illuminator was turned off, participants then waited in the dark while maintaining fixation for 14 s, allowing the blood oxygenation level-dependent (BOLD) response to return to baseline before the next trial (ITI phase). The four trial types, with five repetitions per condition (20 trials total), were randomized within a run and balanced across all runs so that each trial type was preceded and followed equally often by every other trial type across the entire experiment.
Separate practice sessions were performed to familiarize participants with the paradigm, namely the delay timing that required the cued action to be performed only at the beep (go) cue. These sessions were performed before the participants entered the scanner as well as during the anatomical scan (collected at the beginning of every experiment). A testing session for one participant included setup time (∼45 min), eight functional runs, and one anatomical scan, and lasted ∼3 h in total. Each subject also participated in an additional scan session in which high-resolution anatomical images and functional localizers were collected (note that the functional localizers were not used for analysis in the current study).
Eye-tracking control experiment
Four of the participants completed an additional testing session outside of the MRI scanner in which their eye movements were monitored using an Eye-Link II eye tracker (SR Research). Participants completed two full experimental runs, each identical to those performed in the scanner. Monocular eye tracking was performed at a sample rate of 500 Hz, with maximum calibration errors ranging from 0.68° in participant 1 to 0.78° in participant 4.
We quantified eye stability using the variance in eye position in both horizontal and vertical dimensions. After the removal of blinks and their related artifacts, we computed the SD for each condition and phase of the trial (preview, plan, and execute) and then subjected each of the horizontal and vertical eye position SDs to a 3 × 4 repeated-measures ANOVA (i.e., 3 trial phases × 4 trial types). This analysis revealed negligible evidence of eye movements in our participants. Specifically, we found very low variability (average SDs for horizontal axes across subjects were just fractions of a degree at 0.32°, 0.28°, and 0.58° for the preview, plan, and execute phases of the trial, respectively; average SDs for vertical axes across subjects were slightly higher at 1.03°, 1.08°, and 1.15° for the same trial phases), and, correspondingly, none of the main effects or their interaction were statistically significant (all p > 0.5). This demonstrates that, in general, participants were able to reliably maintain fixation over the course of a full experimental run and, as such, subtle differences in eye stability between the trial types are unlikely to account for any accurate decoding performance found throughout frontoparietal cortex.
MRI acquisition and preprocessing
Imaging was performed on a 3 tesla Siemens TIM MAGNETOM Trio MRI scanner. The T1-weighted anatomical image was collected using an Alzheimer's Disease Neuroimaging Initiative magnetization-prepared rapid-acquisition gradient echo sequence [repetition time (TR), 2300 ms; echo time (TE), 2.98 ms; field of view, 192 × 240 × 256 mm; matrix size, 192 × 240 × 256; flip angle, 9°; 1 mm isotropic voxels]. BOLD MRI volumes were collected using a T2*-weighted single-shot gradient-echo echo-planar imaging acquisition sequence [TR, 2000 ms; slice thickness, 3 mm; in-plane resolution, 3 × 3 mm; TE, 30 ms; field of view, 240 × 240 mm; matrix size, 80 × 80; flip angle, 90°; and acceleration factor (integrated parallel acquisition technologies) of 2 with generalized autocalibrating partially parallel acquisitions reconstruction]. Each volume comprised 34 contiguous (no gap) oblique slices acquired at a ∼30° caudal tilt with respect to the plane of the anterior and posterior commissure (ACPC), providing near whole-brain coverage. We used a combination of imaging coils to achieve a good signal/noise ratio and to enable participants' direct viewing of the workspace without mirrors or occlusion. Specifically, we tilted (∼20°) the posterior half of the 12-channel receive-only head coil (six channels) and suspended a four-channel receive-only flex coil over the anterior–superior part of the head. The cortical surface from one subject was reconstructed from a high-resolution anatomical image, a procedure that included segmenting the gray and white matter and inflating the boundary surface between them. This inflated cortical surface was used to overlay group activation for figure presentation (Fig. 2; also note that voxel activity for this figure was spatially interpolated from 3 mm functional iso-voxel resolution to 1 mm functional iso-voxel resolution). All preprocessing and univariate analyses were performed using Brain Voyager QX version 2.21 (Brain Innovation).
After slice scan-time correction, 3D motion correction (such that each volume was aligned to the volume of the functional scan closest to the anatomical scan), high-pass temporal filtering (4 cycles/run), and functional-to-anatomical coregistration, functional and anatomical images were rotated such that the axial plane passed through the ACPC space and then transformed into Talairach space. Other than the sinc interpolation inherent in all transformations, no additional spatial smoothing of the data was performed. Talairach data were only used for group voxelwise random-effects (RFX) analyses to display the predefined action-related ROIs (Fig. 2). For MVPA, these same areas were defined anatomically within each subject's ACPC data. Given that MVPA discriminates spatial patterns across voxels, we have found it beneficial to select ROIs at the single-subject level using the ACPC data in lieu of the Talairach data (Gallivan et al., 2011a,b), which involve additional resampling that distorts the voxel sizes.
For each participant, functional data from each session were screened for motion and/or magnet artifacts by examining the time course movies and the motion plots created with the motion-correction algorithms. None of the runs revealed head motion that exceeded 1 mm translation or 1° rotation. Error trials—trials in which the participant fumbled with the object (one trial, one participant), performed the incorrect instruction (three trials, two participants), or contaminated the plan phase data by slightly moving their limb or eyes or by performing the action before the go cue (eight trials, four participants)—were identified offline from the videos recorded during the session and were excluded from analysis by assigning these trials predictors of no interest. This generally low error rate more than likely reflects the fact that participants were well trained on the movement task before entering the scanner.
ROIs
To localize the specific a priori action-related areas in individual participants in which to apply MVPA, we used a general linear model (GLM) with predictors created from boxcar functions convolved with the Boynton (Boynton et al., 1996) hemodynamic response function (HRF). For each trial, a boxcar function was aligned to the onset of each phase: (1) three volumes for the preview phase, (2) six volumes for the plan phase; and (3) one volume for the execute phase. After convolution with the HRF, the height of the predictor functions took into account the duration of each phase. The ITI was excluded from the model; therefore all regression coefficients (β values) were defined relative to the baseline activity during the ITI. In addition, the time course for each voxel was converted to percentage signal change before applying the RFX GLM.
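For illustration, the following MATLAB-style sketch shows how one such convolved predictor could be constructed. It is a minimal sketch only: the gamma parameters of the Boynton-style HRF, the run length, the onset volumes, and all variable names are assumptions for the example rather than values taken from the actual analysis code.

% Illustrative construction of one convolved GLM predictor (plan phase).
% TR and phase durations follow the design described above; the HRF gamma
% parameters, run length, and onset volumes are assumed example values.
TR         = 2;                                  % s
nVolumes   = 340;                                % e.g., 20 trials x 17 volumes per run (assumed)
t          = (0:TR:30)';                         % HRF support (s)
n = 3; tau = 1.25;                               % assumed gamma-variate parameters
hrf        = (t / tau).^(n - 1) .* exp(-t / tau) ./ (tau * factorial(n - 1));
boxcar     = zeros(nVolumes, 1);
planOnsets = [4 21 38];                          % example plan-phase onset volumes
for v = planOnsets
    boxcar(v:v + 5) = 1;                         % six volumes for the plan phase
end
predictor = conv(boxcar, hrf);                   % convolve the boxcar with the HRF
predictor = predictor(1:nVolumes);               % trim back to the run length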
To define our prespecified ROIs and select voxels for MVPA, in each subject's ACPC-aligned data, we searched for brain areas involved in movement execution (collapsed across all conditions) compared with the simple visual response accompanying object presentation before instruction: [execute (GraspL + GraspR + ReachL + ReachR) > preview (GraspL + GraspR + ReachL + ReachR)]. The resulting statistical map of all positively active voxels in each subject [t = 3, p < 0.005, each subject's activation map was cluster threshold corrected (corrected, p < 0.05) so that only voxels passing a minimum cluster size were selected; average minimum cluster size across participants was 108.5 mm3; for details, see below, ROI selection] was then used to define 12 different ROIs within both the left and right hemispheres (see below, ROI selection). The voxels included in each ROI were selected based on all significant activity within a 3375 mm3 cube centered on predefined anatomical landmarks that corresponded with functional activity (for criteria, see below, ROI selection). These ROI sizes were chosen because they allowed the inclusion of several functional voxels for pattern classification while at the same time ensuring that adjacent ROIs did not substantially overlap (for the average number of functional voxels selected across the 11 participants, see Table 1). Rather importantly, given the orthogonal contrast used to select these ROIs (i.e., execute > preview), the selected activity is not directionally biased to show any preview-, plan-, or execute-related pattern differences between any of the experimental conditions.
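As a rough sketch of this voxel selection step (hypothetical variable names, and assuming the voxel coordinates and the execute > preview t map are available in the same millimeter space), the cube-based selection might look like the following:

% Illustrative voxel selection for one ROI (hypothetical names): keep all voxels
% exceeding the execute > preview threshold that fall inside a 15 x 15 x 15 mm
% cube (3375 mm^3) centered on an anatomically defined landmark coordinate.
halfWidth = 7.5;                                                   % mm (half the cube side)
inCube    = all(abs(bsxfun(@minus, voxelXYZ, landmarkXYZ)) <= halfWidth, 2);  % voxelXYZ: nVoxels x 3 (mm)
roiVoxels = find(inCube & (tMap > 3));                             % t > 3 from the execute > preview contrast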
ROI selection procedures
The ROI selection procedures were as follows: (1) left and right SPOC: defined by selecting voxels located medially and directly anterior to (or sometimes within) the parieto-occipital sulcus (Gallivan et al., 2009); (2) left and right pIPS: defined by selecting activity at the caudal end of the IPS (Beurze et al., 2009); (3) left and right midIPS: defined by selecting voxels halfway up the length of the IPS, centered on the medial bank, near a characteristic “knob” landmark observed consistently within each subject (Gallivan et al., 2011a,b); (4) left and right region located post. aIPS: defined by selecting voxels just posterior to the junction of the IPS and post-central sulcus (PCS), on the medial bank of the IPS (Culham, 2004; Gallivan et al., 2011b); (5) left and right aIPS: defined by selecting voxels directly at the junction of the IPS and PCS (Culham et al., 2003); (6) left and right SS cortex: defined by selecting voxels encompassing the post-central gyrus and PCS, medial and anterior to aIPS (Gallivan et al., 2011b); to provide a similar number of SS cortex voxels to those of other areas, the cluster size for this ROI only was increased (up to 5832 mm3); (7) left and right motor cortex: defined by selecting voxels around the “hand knob” landmark in the central sulcus (CS) (Yousry et al., 1997); (8) left and right PMd: defined by selecting voxels at the junction of the pre-central sulcus (preCS) and superior frontal sulcus (Picard and Strick, 2001); (9) left and right PMv: defined by selecting voxels posterior to the junction of the inferior frontal sulcus and preCS (Tomassini et al., 2007); (10) left and right SMA: defined by selecting voxels adjacent and anterior to the medial end of the CS, posterior to the plane of the anterior commissure (Picard and Strick, 2001; Gallivan et al., 2011b); (11) left and right preSMA: defined by selecting voxels superior to the anterior segment of the cingulate sulcus, anterior to the plane of the anterior commissure, and more anterior and inferior than those voxels selected for SMA (Picard and Strick, 2001; Gallivan et al., 2011b); (12) left and right DLPFC: defined by selecting voxels anteriorly located along the intermediate frontal sulcus (Curtis and D'Esposito, 2003).
See Table 1 for details about ROI coordinates and sizes and Figure 2 for representative anatomical locations on one subject's brain.
To provide an additional control, we also tested the performance of our pattern classifiers in ROIs outside of our action-related network in which no statistically significant classification should be possible. To select these ROIs, we further reduced our statistical threshold (after specifying the [execute > preview] network within each subject) to t = 0, p = 1, and selected all positive activation within 3375 mm3 centered on a consistent point (1) within each subject's right ventricle and (2) at a location just outside the skull of the right hemisphere, in the ACPC plane, directly in line with the posterior commissure.
MVPA
We used the fine-grained sensitivity afforded by MVPA to examine whether we could decode limb-specific and action-dependent movement plans from different frontoparietal brain regions, in which little or no premovement signal amplitude differences may exist. If successful, our aim was to then determine whether we could characterize the representation of hand actions for the contralateral and ipsilateral limb across frontoparietal cortex based on the types of upcoming movements the preparatory activity in each area can predict.
Support vector machine classifiers.
MVPA was performed with a combination of in-house software (using MATLAB) and the Princeton MVPA Toolbox for MATLAB (http://code.google.com/p/princeton-mvpa-toolbox/) using a support vector machine (SVM) binary classifier (LIBSVM; http://www.csie.ntu.edu.tw/∼cjlin/libsvm/). The SVM model used a linear kernel function and a constant cost parameter, C = 1, congruent with many other fMRI studies (Mitchell et al., 2003; LaConte et al., 2003; Mourão-Miranda et al., 2005; Haynes et al., 2007; Pessoa and Padmala, 2007), to compute the hyperplane that best separated the trial responses. To assess whether classification of the trials into the two stimulus classes generalized to independent data, we used an iterative cross-validation procedure in which several independent subsets of trials were used to train and test the classifier. The separability of the sets of trials into the correct stimulus classes was then assessed by comparing the average accuracy of the classifier over N iterations to the chance level (Duda et al., 2001).
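A minimal sketch of the classification step with these settings, using the LIBSVM MATLAB interface, is shown below; all variable names are hypothetical and the snippet is illustrative rather than the actual analysis code.

% Illustrative single train/test split with the LIBSVM MATLAB interface
% ('-s 0' = C-SVC, '-t 0' = linear kernel, '-c 1' = cost parameter C = 1).
% Xtrain/Xtest: trials x voxels matrices of rescaled activity patterns;
% ytrain/ytest: condition labels (e.g., 1 = GraspL, 2 = GraspR). Hypothetical names.
model     = svmtrain(ytrain, Xtrain, '-s 0 -t 0 -c 1 -q');
predicted = svmpredict(ytest, Xtest, model);
accuracy  = mean(predicted == ytest);    % compared against 50% chance decoding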
Voxel pattern preparation.
To prepare the data for spatial pattern classification, the percentage signal change activity was computed from a windowed average of the time course at a time point of interest (e.g., preview, plan, or execute) with respect to an average of the time course at a common baseline, for each voxel in the ROI (a procedure similar to that used for analyzing event-related average time courses). The baseline window was defined as volume −1, a time point before the current trial that avoided contamination from responses of the previous trial. For the preview phase time points, we extracted the mean of volumes 3–4, time points corresponding to the peak of the visual transient response (see percentage signal change time courses in Figs. 3–8). [Note that, although volumes 3–4 encompass time points both one volume before and after the auditory instruction, the activity during this time window—because of the sluggishness of the BOLD response—can only be attributable to a simple visual response and cannot reflect any plan-related activity initiated by the auditory cue]. For the execute phase time points, we extracted the average of volumes 12–13, time points generally corresponding to the peak (and activity before the peak) of the transient movement response, after the subject's action (see percentage signal change time courses in Figs. 3–8). Last, for the plan phase, we extracted the average of volumes 8–9 (the final two volumes of the plan phase), generally corresponding to the sustained activity of a planning response (see percentage signal change time courses in Figs. 3–8) and, rather importantly, a two-volume window before the subject has initiated any movement. After the extraction of the percentage signal change activity of each trial, these values were rescaled between −1 and +1 across all trials for each individual voxel within an ROI. It is worth noting that, with this time-dependent analysis approach, in addition to revealing which types of movements could be decoded, we could also examine specifically when in time predictive movement information was available (i.e., within the preview, plan, or execute phase).
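A sketch of this pattern-preparation step, for one ROI and the plan phase, might look as follows. The window indices follow the volumes described above; the exact indexing convention for the baseline volume and all variable names are assumptions for the example.

% Illustrative extraction of plan-phase voxel patterns for one ROI.
% roiTimeCourse: nVolumes x nVoxels raw signal; trialOnsets: first volume of
% each trial (nTrials x 1). All names are hypothetical.
planWindow = 8:9;                                   % final two plan-phase volumes
nTrials    = numel(trialOnsets);
patterns   = zeros(nTrials, size(roiTimeCourse, 2));
for tr = 1:nTrials
    baseVol  = trialOnsets(tr) - 1;                 % "volume -1" (indexing convention assumed)
    baseline = roiTimeCourse(baseVol, :);
    window   = roiTimeCourse(trialOnsets(tr) + planWindow - 1, :);
    % percentage signal change of the windowed average relative to baseline
    patterns(tr, :) = 100 * (mean(window, 1) - baseline) ./ baseline;
end
% rescale each voxel between -1 and +1 across all trials
mn = min(patterns, [], 1);
mx = max(patterns, [], 1);
patterns = 2 * bsxfun(@rdivide, bsxfun(@minus, patterns, mn), mx - mn) - 1;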
Pairwise discriminations.
SVMs are designed for classifying differences between two stimuli and LIBSVM (the SVM package implemented here) uses the “one-against-one method” for each pairwise discrimination. Although it is often the case that multiple pairwise results are combined to produce multiclass discriminations (Hsu and Lin, 2002) (i.e., distinguish among more than two stimuli), to map brain regions according to the types of upcoming movements that could be decoded, we found it imperative to examine the individual pairwise discriminations separately. For instance, a right-hemisphere brain area that discriminates contralateral but not ipsilateral hand actions (i.e., GraspL vs ReachL trials but not GraspR vs ReachR trials)—an important theoretical finding here—would be essentially obscured in a multiclass discrimination approach, requiring the individual pairwise comparisons to be assessed independently in any case.
Single-trial classification.
For each subject and each action-related ROI, 12 separate binary SVM classifiers were estimated for MVPA (i.e., for each of the preview, plan, and execute phases and each pairwise comparison; GraspL vs GraspR, ReachL vs ReachR, GraspL vs ReachL, and GraspR vs ReachR). We used a “leave-one-trial-pair-out” N-fold cross-validation to test the accuracy of the SVM classifiers [i.e., one trial from each of the conditions being compared (two trials total) was reserved for testing the classifier, and the remaining (N − 1) trial pairs were used for classifier training (i.e., 39 remaining trials per condition)]. We performed this N-fold cross-validation procedure until all trial pairs were tested and then averaged across N iterations to produce a classification accuracy measure for each pairwise discrimination and subject (Duda et al., 2001). We statistically assessed decoding significance across participants using a two-tailed t test versus 50% chance decoding. To control for the problem of multiple comparisons, a false discovery rate (FDR) correction of q ≤ 0.05 was applied based on the p values of all t tests performed across all ROIs for a specific comparison (e.g., GraspL vs GraspR) (Benjamini and Yekutieli, 2001).
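Schematically, the leave-one-trial-pair-out procedure can be sketched as follows (hypothetical names; the train/test call mirrors the LIBSVM sketch above, and equal trial numbers per condition are assumed):

% Illustrative leave-one-trial-pair-out cross-validation (hypothetical names).
% X: nTrials x nVoxels pattern matrix; y: condition labels (1 or 2); trials of
% the two conditions are assumed to be equal in number and matched into pairs.
idx1    = find(y == 1);
idx2    = find(y == 2);
nPairs  = numel(idx1);
foldAcc = zeros(nPairs, 1);
for p = 1:nPairs
    testIdx    = [idx1(p); idx2(p)];                % hold out one trial per condition
    trainIdx   = setdiff((1:numel(y))', testIdx);   % train on the remaining N - 1 pairs
    model      = svmtrain(y(trainIdx), X(trainIdx, :), '-s 0 -t 0 -c 1 -q');
    predicted  = svmpredict(y(testIdx), X(testIdx, :), model);
    foldAcc(p) = mean(predicted == y(testIdx));
end
subjectAccuracy = mean(foldAcc);   % one accuracy per subject, ROI, and comparison,
                                   % later tested across subjects against 50% chance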
Permutation tests.
In addition to the t test, we separately assessed the statistical significance of our decoding accuracies with nonparametric randomization tests (Golland and Fischl, 2003; Etzel et al., 2008; Smith and Muckli, 2010; Chen et al., 2011; Gallivan et al., 2011a,b). For each subject, ROI, and pairwise comparison, after classifier training (and testing) with the true trial identities, we permuted the correspondence between the test trial identities and data 100 separate times before testing the classifier (i.e., the test labels were swapped with 50% probability on each individual permutation) and then computed classifier performance the same as before (i.e., average across N iterations) for each individual permutation of the test labels. This produced 100 mean accuracies (the one “true” mean accuracy containing the correct test trial labeling was appended to this permuted distribution). We then generated a randomized population of 1000 group mean accuracies, based on 1000 combinations of randomly drawn accuracies from each subject's permuted distribution (of 101 accuracies), and then found the empirical probability of the true group mean accuracy based on its place in a rank ordering of this randomized distribution. Note that the peak percentiles of significance (p < 0.001) are limited by the number of samples producing the randomized probability distribution at the group level. This nonparametric randomization test yielded significant results at considerably higher levels of significance than those found with the standard parametric t test (a finding also noted by Smith and Muckli, 2010; Chen et al., 2011; Gallivan et al., 2011a,b), indicating the highly conservative nature of the conventional parametric t test. The important finding highlighted by these permutation tests is that the brain areas showing significant decoding with the one-sample parametric t tests (vs 50%) also show significant decoding (generally at even higher significance) with the empirical nonparametric permutation tests.
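The group-level randomization step can be sketched as follows; the 100 within-subject label permutations are assumed to have been computed already, and all variable names are hypothetical.

% Illustrative group-level randomization test (hypothetical names).
% permAcc: nSubjects x 101 matrix; column 1 holds each subject's true mean
% accuracy, columns 2:101 the accuracies obtained after randomly relabeling
% the test trials 100 times.
nResamples = 1000;
nSubjects  = size(permAcc, 1);
groupNull  = zeros(nResamples, 1);
for r = 1:nResamples
    picks        = randi(size(permAcc, 2), nSubjects, 1);   % one random draw per subject
    groupNull(r) = mean(permAcc(sub2ind(size(permAcc), (1:nSubjects)', picks)));
end
trueGroupMean = mean(permAcc(:, 1));
pEmpirical    = mean(groupNull >= trueGroupMean);            % rank-based empirical probability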
Cross-decoding.
To test whether an SVM pattern classifier trained to discriminate between two trial types could then be used to accurately predict pattern differences when tested on a different set of trials (e.g., train set: GraspL vs GraspR; test set: ReachL vs ReachR), instead of using the N-fold cross-validation procedure (implemented above), we used all the available single-trial data for both classifier training and testing [i.e., one single train-and-test iteration (Smith and Muckli, 2010; Gallivan et al., 2011a)]. Cross-decoding accuracies for each subject were computed by averaging together the two accuracies generated by using each pair of trial types for classifier training and testing (for example, when testing for action-specific, limb-independent effects, right-hand trials were used to train the classifier in one analysis, with left-hand trials used for testing, and the right-hand trials were then used to test the classifier in the other analysis, in which the left-hand trials were used for classifier training). The means across participants of this cross-decoding procedure are reported in Figures 3–8. As above, we statistically assessed decoding significance with a two-tailed t test versus 50% chance decoding. An FDR correction of q ≤ 0.05 was applied based on the p values of all the t tests performed across all the ROIs for a specific comparison.
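One such cross-decoding test (here the action-specific, limb-independent case) can be sketched as follows, with the two train/test directions averaged as described; all variable names are hypothetical.

% Illustrative action-specific, limb-independent cross-decoding (hypothetical
% names). Labels code grasp (1) vs reach (2); Xleft/yleft hold all left-hand
% trials and Xright/yright all right-hand trials for one ROI.
model1 = svmtrain(yleft,  Xleft,  '-s 0 -t 0 -c 1 -q');    % train: GraspL vs ReachL
pred1  = svmpredict(yright, Xright, model1);               % test:  GraspR vs ReachR
model2 = svmtrain(yright, Xright, '-s 0 -t 0 -c 1 -q');    % train: GraspR vs ReachR
pred2  = svmpredict(yleft,  Xleft,  model2);               % test:  GraspL vs ReachL
crossAccuracy = mean([mean(pred1 == yright), mean(pred2 == yleft)]);  % average of the two directions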
Results
Across the network of frontoparietal ROIs, we found a wide range of pattern classification profiles during movement planning: some areas predicted the limb to be used (contralateral vs ipsilateral hand), some the hand actions to be performed (grasping vs reaching), but in the overwhelming majority of regions, preparatory signals discriminated some conjunction of both types of motor information (for a summary of the findings, see Fig. 2). These decoding results are discussed in detail below and organized according to the general neuroanatomical region in which these multiplexed preparatory signals were revealed. It is worth noting that, although we do in fact observe several interesting pattern classification profiles during movement execution (i.e., execute phase of the trial), some level of caution needs to be applied when interpreting these findings. For instance, not only can the execution-related activations reflect the sensory/proprioceptive feedback responses that accompany movement onset (and object contact), but motion artifacts may be introduced into the data when the limb perturbs the magnetic field of the scanner (Culham, 2004). For these and other reasons, the primary focus of the current study was to reveal the integration of limb-specific and action-dependent signals during movement planning. As such, for the sake of concision, the execution-related findings, when relevant, are only briefly discussed.
Posterior parietal cortex decoding
In posterior parietal cortex (PPC), we found a general gradient of contralateral-to-bilateral limb-related signals. For instance, in both L- and R-SPOC, we found what can be described as contralateral preparatory responses: both regions discriminated the limb to be used and upcoming hand action (grasping vs reaching) for the contralateral hand (Fig. 3, top). That is, left SPOC could discriminate grasping versus reaching with the right hand, whereas right SPOC could do the same for the left hand. This finding resonates with the grasp- and reach-related responses often reported in human SPOC and monkey V6A, its putative homolog (Prado et al., 2005; Fattori et al., 2009, 2010; Cavina-Pratesi et al., 2010; Gallivan et al., 2011b) and adds to emerging evidence from nonhuman primates that the PPC operates at a limb-dependent stage of processing, further along the continuum of sensory-to-motor transformations for arm movement than traditionally assumed (Chang et al., 2008; Chang and Snyder, 2012). Given that subjects maintained central fixation on an LED for the duration of each trial, these findings may also support the view that human SPOC forms part of a specialized network of areas involved in planning arm movements to peripheral visual targets (Prado et al., 2005).
In contrast to the limb-specific contralateral responses observed in SPOC, we found that the planning-related activity in pIPS and midIPS predicted all conjunctions of the limb to be used and hand action to be performed (Fig. 3, middle and bottom). Although it is perhaps unexpected to find that PPC, particularly the most caudal aspect of the IPS, represents hand actions for the ipsilateral limb, these fMRI results do in fact closely correspond with those observed at the single-neuron level in macaque monkeys. For instance, although comparatively far fewer in number, monkey PPC contains reach-related neurons preferentially and exclusively tuned to movements of the ipsilateral limb (Chang et al., 2008). Although the exact functional role that ipsilateral limb-specific neurons play in movement planning remains speculative (Chang et al., 2008; see also Discussion), these findings nonetheless suggest, given the connectivity of macaque frontoparietal cortex, that PPC provides both contralateral and ipsilateral limb-specific information to premotor cortex areas, such as PMd, a primary recipient of its direct projections (Tanne et al., 1995; Wise et al., 1997; Tanné-Gariépy et al., 2002).
The observed gradient of contralateral-to-bilateral limb-specific representations in PPC seemingly contrasts with recent human transcranial magnetic stimulation (TMS) findings that report a strong contralateral limb representation in midIPS but not SPOC (Vesia et al., 2010). This discrepancy, although potentially reflecting differences in the methodologies, tasks used, or functional localization of these areas, may also relate to a nuance of the pattern classification approach; although our analyses permit the detection of signals coding for the contralateral and ipsilateral limb, they necessarily fall short of being able to determine whether representations of the contralateral limb are stronger than those of the ipsilateral limb (except, of course, for the obvious cases in which we simply cannot decode movements with the ipsilateral limb, like in SPOC). However, an examination of the signal amplitude responses suggests a trend for there to be more contralateral hand movement representations in anterior–lateral (e.g., midIPS) than medial–posterior (e.g., SPOC) PPC foci (see percentage time courses in Fig. 3; for additional confirmation of this fact, see below, Anterior parietal cortex decoding), consistent with the findings from Vesia et al. (2010).
For each PPC area (and also each area examined below), by crossing classifier training and testing between trials of different types, we also characterized the limb-specific, action-independent and action-specific, limb-independent nature of the underlying representations (see Materials and Methods, Cross-decoding; for additional details, see Fig. 3 legend). In ROIs in which we found statistically significant cross-decoding, we took this as suggestive evidence that the corresponding activity patterns being examined contained some degree of mutual or shared information (at least to the extent that accurate cross-classification could be achieved). Interestingly, we found that, within L-pIPS, R-pIPS, and L-midIPS, the plan-related activity patterns showed significant cross-decoding for the hand action to be performed (Fig. 3, blue bars in cross-decoding plots). This suggests that some portion of the signals in each of these areas reflect more abstract, limb-invariant representations of the upcoming movement. In contrast to these action-specific, limb-independent signals, we found limb-specific, action-independent cross-decoding in R-midIPS during movement planning (Fig. 3, red bars in cross-decoding plots). Together, these findings correspond well with recent neurophysiological findings in nonhuman primates that report a mixture of limb-dependent and limb-independent signals in PPC (Chang et al., 2008), as well as with more general observations that monkey PPC signals tend to be highly multiplexed, representing some conjunction of both the action (or goal) and movement effector (Snyder et al., 1997; Batista et al., 1999; Calton et al., 2002). We also found that, during the execution phase of movement, limb-specific, action-independent representations primarily dominated the activity patterns across all six PPC regions (see cross-decoding plots in Fig. 3). This latter finding likely reflects a trend for there to be stronger contralateral responses at the signal amplitude response level during movement execution (see percentage signal change time courses in Fig. 3).
Anterior parietal cortex decoding
Although some past studies have found differences in coding between aIPS and post. aIPS (Valyear et al., 2007; Gallivan et al., 2011b), here we found an essentially identical pattern of preparatory responses: discrimination of the intended limb and representation of the hand action for the contralateral limb only (Fig. 4). The differentiation of grasping versus reaching movements is consistent with the grasp-related activity observed previously in the anterior parietal cortex of both humans (Culham et al., 2003; Cavina-Pratesi et al., 2010; Gallivan et al., 2011b) and monkeys (Taira et al., 1990; Murata et al., 2000; Baumann et al., 2009). In contrast to the posterior aspects of the IPS discussed above (i.e., pIPS and midIPS), our finding suggests a primarily contralateral organization for anterior parietal cortex in movement planning. This accords with the strongly biased fMRI responses for the contralateral limb commonly observed within anterior parietal cortex during movement execution (Beurze et al., 2007; Stark and Zohary, 2008; see the percentage signal change time courses in Fig. 4).
The strong bias of anterior parietal signals toward the contralateral limb would also be consistent with the accurate cross-decoding results (see red bars in cross-decoding plots for L- and R-post. aIPS and L-aIPS during the plan phase and see all cross-decoding plots during the execute phase). However, we note some difficulty and reservation in interpreting the significant action-specific, limb-independent responses found with cross-decoding (e.g., blue bars in the cross-decoding plots for L-aIPS), given that the corresponding GraspL versus ReachL and GraspR versus ReachR pairwise comparisons are not themselves both significant (see dark blue and light blue bars in the within-trial decoding figure).
Decoding in the motor and sensory cortices
We found a rather intriguing result in both left and right motor cortex: in addition to encoding the limb to be used, planned hand actions for both the contralateral and ipsilateral limb were represented (Fig. 5, top). Although there is certainly some neurophysiological evidence in monkeys suggesting the presence of movement-related signals for the ipsilateral arm in motor cortex (Tanji et al., 1988; Donchin et al., 1998; Cisek et al., 2003; Ganguly et al., 2009; but see Soteropoulos et al., 2011), compared with the contralateral limb, these neurons tend to be in the significant minority (Cisek et al., 2003). Nevertheless, stroke patients with unilateral lesions involving M1 will typically show deficits in coordinating movements of the ipsilateral limb (Colebatch and Gandevia, 1989; Desrosiers et al., 1996; Hermsdörfer et al., 1999a,b; Yarosh et al., 2004), and microstimulation studies in monkeys (Aizawa et al., 1990; Boudrias et al., 2010) and rodents (Brus-Ramer et al., 2009), as well as TMS studies in humans (Chen et al., 1997; Stedman et al., 1998; Tinazzi and Zanette, 1998; Foltys et al., 2001; Davare et al., 2007b), all suggest a role for ipsilateral motor cortex in limb movements. However, what remains vague across many of these studies is the specific role/function of these ipsilateral representations.
Ipsilateral movement-related activity in motor cortex has been linked to a variety of sources, features, and interrelated purposes (see Discussion). For instance, one possibility is that it reflects an efference copy of a contralateral motor command, delivered via transcallosal (or subcortical) projections. One purpose of these efference signals could be to facilitate coordination of the two limbs in goal-directed behavior. There is some evidence suggesting, particularly within M1, SMA, and PMd, that bimanual coupling requires both contralateral and ipsilateral signals to be present within the same population of neurons (Donchin et al., 1998; 2002; Kermadi et al., 1998, 2000). If this is the case, then transcallosal coordination with ipsilateral motor cortex by the contralateral hemisphere may simply reflect automatic, naturally occurring cortical dynamics during unimanual actions, in line with observations of trial-by-trial synchronicity in contralateral and ipsilateral M1 activity (Verstynen and Ivry, 2011). There is also evidence suggesting that ipsilateral activity may be tied to specific features of the task (e.g., movement complexity, timing of muscle recruitment; Verstynen et al., 2005; Davare et al., 2007b) or context (Diedrichsen et al., 2012). Notably, the fact that we can discriminate differences in the intention to perform ipsilateral distal finger movements (grasping vs reaching) from preparatory motor cortex signals, similar to that shown in macaque monkeys, suggests that ipsilateral motor cortex is at some level involved in movement planning. From the perspective of fMRI, this particular result conveys the importance of examining distributed activity patterns in addition to signal amplitude responses alone (as in conventional analyses). For instance, based solely on the fMRI signal amplitude response after movement onset and baseline-level activity during planning (see percentage signal change time courses in Fig. 5, top), a reasonable assumption would be that motor cortex is primarily involved in movement execution (rather than planning) and contains only representations of the contralateral limb.
Consistent with motor cortex operating at a stage of processing closer to the final motor output (Todorov, 2000; Scott, 2003, 2008; Kalaska, 2009) and indicative of substantial differences in the coding of the four movement conditions, we found no significant cross-decoding for the limb or hand actions during planning (see the cross-decoding plots in Fig. 5, top, red and blue bars, respectively). However, what clearly emerges during action execution is a robust limb-specific, action-independent representation [presumably driven by the strong limb-dependent signal amplitude differences that arise in the percentage signal change time courses at that stage of the trial, highly consistent with previous studies (Kim et al., 1993; Beurze et al., 2007; Stark and Zohary, 2008)].
In SS cortex, consistent with expectations, we found no significant decoding for any of the intended movements (Fig. 5, bottom). This suggests that, at least during movement planning, discriminatory signals are limited to sensorimotor circuits within frontoparietal cortex (Gallivan et al., 2011a,b). During action execution, however, we did find evidence for robust limb-specific signals (in both the decoding and time course activity), consistent with the mechanoreceptors of the hand being stimulated only at object contact and with the basic role attributed to SS cortex in processing tactile sensory information.
Premotor cortex decoding
In PMd and PMv, we found evidence for somewhat dissociable neural representations during movement planning: PMd contained limb-specific signals and represented hand actions for both the contralateral and ipsilateral limb, whereas PMv, while also representing actions for both the contralateral and ipsilateral hands, showed limited discrimination for the intended limb (Fig. 6). This reported dissociation in humans is consistent with findings from monkey neurophysiology showing that PMd integrates both effector- and goal-related signals (Cisek et al., 2003; Hoshi and Tanji, 2006; Pesaran et al., 2006), whereas PMv primarily represents more goal-related properties of the task (Gentilucci et al., 1988; Rizzolatti et al., 1988; Hoshi and Tanji, 2002, 2006). As such, our findings suggest a similar planning-related role for these two regions across both primate species.
Cross-decoding further revealed the presence of action-specific, limb-independent preparatory signals in L-PMd (Fig. 6, blue bars in cross-decoding plots), commensurate with the more abstract goal-related signals often reported in monkey PMd (Wise et al., 1997, 1998; Cisek et al., 2003). In PMv, by contrast, despite the absence of limb-specific signals, the lack of action-specific, limb-independent cross-decoding suggests that the representation of actions there is still at some level linked to the acting effector. In additional support of differences in the functional role of each region, we found prominent limb-dependent responses during movement execution in PMd but not PMv, revealed both at the level of distributed activity patterns (compare decoding plots for execute phase; Fig. 6) and at the level of signal amplitude responses (see biased contralateral limb responses in the time course activity of PMd but not PMv; Fig. 6). These differences aside, and consistent with past fMRI work from our laboratory (Gallivan et al., 2011b) as well as work from other groups using different methodologies (e.g., TMS, neural recordings; Raos et al., 2004, 2006; Davare et al., 2006; Stark et al., 2007), we were able to decode grasp- and reach-specific planning-related activity from both PMd and PMv.
Decoding in the SMA
Similar to premotor cortex, we also found evidence for dissociable planning-related responses in the SMAs: L- and R-preSMA discriminated hand actions for both the contralateral and ipsilateral limbs (and for the most part, the limb to be used), whereas L- and R-SMA primarily showed contralateral limb-dependent representations (i.e., decoding the intended limb and hand action for the contralateral limb only; Fig. 7). This differentiation of activity profiles approximately corresponds to dissociations reported in monkey neurophysiology, whereby preSMA neurons appear to serve a more integrative function by combining action goals with the movement effector (and generally show a preferential tuning to movement goals), whereas SMA neurons instead appear predominantly linked to representing the intended limb (Fujii et al., 2002; Hoshi and Tanji, 2004b). In addition, similar to the results reported here (Fig. 7), monkey SMA during movement execution shows strong selectivity for the acting limb, whereas preSMA neurons largely fail to make this same distinction (Hoshi and Tanji, 2004b). However, in contrast to these and other previous findings in monkeys, we failed to reveal any ipsilateral limb-related preparatory signals in SMA (Tanji et al., 1988; Donchin et al., 2002; Hoshi and Tanji, 2004b). This may relate to subtle differences in the organization of the SMAs between the two primate species or, alternatively, reflect differences in the methodologies used (fMRI vs neural recordings). Nonetheless, mostly in line with these previous investigations in nonhuman primates, our findings indicate some degree of specialization between the two SMAs in sensorimotor processing.
Prefrontal cortex decoding
In both L- and R-DLPFC, we observed a decoding profile similar to that found in PMv: some representation of hand actions for the contralateral and ipsilateral hands but no discrimination of the limb to be used (Fig. 8). This pattern of activity, although not a direct match with previous findings in monkeys, seems broadly consistent with responses in more ventral aspects of monkey DLPFC, which show more goal-related properties, rather than with the more dorsal aspect, which contains limb-specific signals (Hoshi and Tanji, 2004a). One speculative possibility is that the activity we observe here is predominantly linked to maintaining or storing the rule/goal of the task (White and Wise, 1999; Wallis et al., 2001) rather than the specific way (left vs right limb) in which that action will be achieved.
Characterizing intention-related signals
Critical to several of the interpretations made here is the notion that our findings, specifically the preparatory spatial activity patterns that accrue during the plan phase, actually reflect the intention of an individual to perform a specific movement. We make this distinction because nearly all previous human fMRI characterizations of the cortical topography supporting the integration of limb- and goal-related signals have relied on differentiating the signal amplitude responses that accompany (or follow from) initiation of the movement (i.e., during the execute phase; Culham et al., 2006; Filimon, 2010) rather than those that precede movement (but see Bursztyn et al., 2006; Beurze et al., 2007; Chapman et al., 2011; Pertzov et al., 2011). This relates primarily to the fact that premovement signal amplitude responses tend to be highly overlapping and essentially indistinguishable between different movement conditions (for verification that this is indeed the case, see the percentage signal change time courses during the plan phase in Figs. 3-8).
The absence of decoding during the preview phase, before the subject was aware of which movement to perform, supports our interpretation of the findings (i.e., accurate decoding during the plan phase = decoding of movement intentions). It also suggests that the current paradigm and analysis approach can reveal planning-related signals in the human brain similar to those well documented in macaque monkeys with invasive neural recording techniques (Gallivan et al., 2011a,b), although admittedly at far lower spatial and temporal resolution. It is also worth noting that the absence of decoding during the preview phase, in and of itself, provides a good control of data quality and further validates that the signals discriminated with MVPA during the plan and execute phases of the trial are unlikely to arise simply by chance (note that we have separately validated our findings using a nonparametric permutation test and, for more cautious interpretations, we also apply an FDR correction).
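To make these safeguards more concrete, the sketch below illustrates one way a label-permutation test and a Benjamini-Hochberg FDR correction can be implemented (the classifier, fold structure, and iteration count are illustrative assumptions, not necessarily the exact pipeline used here): shuffling the condition labels many times yields an empirical null distribution of accuracies for each ROI, and the FDR step then accounts for the many ROI-by-comparison tests.

```python
# Minimal sketch: permutation test for decoding accuracy plus BH-FDR control.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score


def decoding_accuracy(X, y, cv=5):
    """Cross-validated accuracy for one ROI and one pairwise comparison."""
    return cross_val_score(LinearSVC(dual=False), X, y, cv=cv).mean()


def permutation_p(X, y, n_perm=1000, seed=0):
    """P-value: fraction of label-shuffled accuracies >= the observed one."""
    rng = np.random.default_rng(seed)
    observed = decoding_accuracy(X, y)
    null = np.array([decoding_accuracy(X, rng.permutation(y))
                     for _ in range(n_perm)])
    return (np.sum(null >= observed) + 1) / (n_perm + 1)


def fdr_bh(pvals, q=0.05):
    """Benjamini-Hochberg: boolean mask of tests surviving FDR at level q."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    thresholds = q * np.arange(1, p.size + 1) / p.size
    passed = p[order] <= thresholds
    k = (np.max(np.nonzero(passed)[0]) + 1) if passed.any() else 0
    survives = np.zeros(p.size, dtype=bool)
    survives[order[:k]] = True
    return survives
```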
The capacity to study the preparatory components of sensorimotor neural processing in humans, as demonstrated here, is important not only because of the insights it bears on higher-level cognitive function (and ultimately how neural activity leads to goal-directed behavior) but also because, from a pragmatic standpoint, the responses of cortical motor neurons before movement may contain different information (and relay different signals) from those same neurons after movement onset (Churchland et al., 2010; for a more expansive notion of what exactly is being represented in preparatory neural activity before movement, see Churchland et al., 2012). Indeed, with respect to the latter point, this underlying neural basis may help explain some of the discrepancies we observe between premovement and postmovement pattern decoding (for example, compare the plan- and execute-phase decoding profiles in both PMv and preSMA).
To further ensure that our decoding accuracies could not result from spurious factors (e.g., task-correlated head or arm movements), we ran the exact same classification analyses as performed throughout the frontoparietal network in two non-brain ROIs in which decoding is highly unlikely: the right ventricle and outside the brain. Critically, MVPA in these two areas showed no accurate decoding for any trial phase (Fig. 9).
One aspect of our experimental design and results requires additional clarification. In previous work (Gallivan et al., 2011a), we demonstrated that it is possible to decode movement direction for hand and eye movements from the preparatory activity in a similar set of ROIs to those analyzed here. In the current study, because right and left limb movements were linked to spatial direction (i.e., all left arm actions involved rightward movements and vice versa), one possibility is that some of the limb-specific decoding results may also relate to the direction of the movement being planned. However, based on comparisons with our previous work, we consider this alternative explanation unlikely. Visual examination of the activation time courses across nearly all frontoparietal regions shows rather distinct contralateral limb-related responses during the execute phase of the trial (Figs. 3-8), highly consistent with previous fMRI work (Beurze et al., 2007, 2009; Stark and Zohary, 2008; for review, see Filimon, 2010). Similarly, in our past study (which used many of the same subjects), the only time course effects visually identifiable during movement execution were differences in the effector being used (hand vs eye; Gallivan et al., 2011a, their Fig. 3). In stark contrast, in none of the regions examined in our previous study did we ever find time course differences related to hand movement direction. Together, this strongly suggests that any limb-specific decoding found in the current study predominantly relates to differences in the effector being recruited in the upcoming movement (left vs right hand) rather than differences related to the spatial direction of movement.
In addition, given the nature of our task design and findings, accurate decoding is unlikely to simply reflect low-level visual attention-related responses for two main reasons. First, the hand action, as defined here, was with respect to “how” the object was to be interacted with (grasping vs reaching) and not the spatial location of the target (as done in most studies, for example, Snyder et al., 1997; Cisek et al., 2003). As such, action-related decoding during planning is more likely to reflect motor-specific aspects of the task (e.g., amount of hand-preshaping required) rather than differences in the location of spatial attention. Similarly, the fact that action-specific decoding emerges in so many cortical areas (and in some cases, independent of the hand used) makes it unlikely that differences in spatial attention linked to initial limb position can directly account for our results. Second, consistent with intention versus attention distinctions made from monkey neural activity (Snyder et al., 1997; Andersen and Buneo, 2002; Quian Quiroga et al., 2006; Cui and Andersen, 2007), across most frontoparietal ROIs we find decoding of the movement effector (in this case, the limb) to be used in an upcoming action toward a single target location.
Discussion
Understanding the transition of contralateral-to-bilateral limb representations throughout cortex is fundamental to understanding the sensorimotor transformations that support movement planning. Using an object-directed delayed movement task and fMRI decoding techniques, here we found that a substantial number of frontoparietal areas, in addition to discriminating the acting arm and representing hand actions (grasping vs reaching) for the contralateral limb, also represented hand actions for the ipsilateral limb. Whereas in many regions this finding is consistent with expectations from neurophysiological recordings in nonhuman primates (e.g., PMd), in other brain areas, such as motor cortex, given the relatively small proportion of ipsilaterally tuned neurons (Cisek et al., 2003), this finding was not necessarily expected (or at the very least not anticipated given the coarser spatial resolution of fMRI). In addition, consistent with the more abstract representation of movement goals in monkey PPC and PMd (Cisek et al., 2003; Chang et al., 2008), we found supporting evidence for action-specific, limb-independent representations in corresponding human regions (pIPS, midIPS, and PMd).
Comparison with previous human and monkey studies
Previous human fMRI studies report limb- and goal-related gradients of activity across frontoparietal cortex. A prominent feature of these activation gradients is their reciprocal organization: stronger representations of the movement goal (e.g., target location) tend to correspond with weaker representations of the limb and vice versa. For instance, moving posteriorly to anteriorly in parietal cortex, one tends to find an orderly transition from spatial-to-effector-related signals (Beurze et al., 2007, 2009; Stark and Zohary, 2008), and the same spatial-to-effector gradient appears mirrored (i.e., rostrally to caudally) in frontal cortex. However, this general topography has received mixed support from monkey neurophysiology. For instance, although it is true that M1 contains neural responses more closely coupled to the contralateral limb than PMd does (Cisek et al., 2003), this, by itself, does not indicate that PMd represents limb information to any lesser extent. In fact, the decrease in the proportion of PMd neurons tuned to the contralateral limb (compared with M1) is to a certain extent matched by a corresponding increase in the proportion of neurons tuned to the ipsilateral limb (or both limbs; Cisek et al., 2003). This has important implications for interpreting fMRI findings because, in brain areas in which contralaterally and ipsilaterally tuned neurons are highly intermingled (e.g., in parietal or premotor areas), comparisons between the average plan-related activity evoked across the entire region may fail to reveal signals specific to one limb versus the other. That is, despite there being effector specificity (contralateral and ipsilateral responses) or even goal specificity (representations of contralateral and ipsilateral target locations) at the level of individual neurons or subpopulations, an examination of the overall signal amplitude response, as measured at the coarser resolution of BOLD fMRI, may instead suggest little or no effector or goal specificity in the region.
Here we show through an analysis of preparatory fMRI activity patterns that limb-specific signals extend all throughout frontoparietal cortex. Although these findings differ from the previous gradient-related descriptions of limb specificity in humans (noted above), they in fact closely correspond with the general observation in monkeys that limb-specific neurons are ubiquitous across frontoparietal cortex (Hoshi and Tanji, 2004b, 2006), even within more posterior aspects of parietal cortex (Chang et al., 2008).
In contrast to the prominent limb specificity found across frontoparietal cortex, we observed a somewhat more distributed topography for the representation of planned hand actions. Interestingly, what primarily differentiated planning-related signals across several regions was whether, in addition to representing hand actions for the contralateral limb, the area also represented hand actions for the ipsilateral limb. For instance, whereas SPOC, post. aIPS, aIPS, and SMA represented grasping versus reaching actions with the contralateral limb only, the majority of areas (pIPS, midIPS, PMd, PMv, preSMA, DLPFC, and motor cortex) differentiated those actions with either limb. This characterization of action-dependent coding across frontoparietal cortex (i.e., with respect to limb) differs from previous fMRI descriptions of goal- and limb-dependent representations (for review, see Filimon, 2010) as being primarily separable, independent phenomena (but see below).
An intriguing finding to emerge from monkey neurophysiology is that a small proportion of neurons in some regions (e.g., PPC and PMd) will not only fail to differentiate the limb used in an upcoming movement (i.e., have equal firing rates before moving either limb) but also show similar directional tuning across the limbs (Cisek et al., 2003; Chang et al., 2008; but see Chang and Snyder, 2012). These limb-invariant preparatory signals may code higher-level representations of the movement plan, abstractly removed from muscle-specific details related to motor output (Wise et al., 1997; Cisek et al., 2003; but see Scott, 2008). Consistent with this notion, in addition to showing here that L-pIPS and L-PMd represent planned hand actions for both the contralateral and ipsilateral limbs, we find, based on our cross-decoding results, that these representations are also to some degree limb invariant.
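The cross-decoding logic behind this limb-invariance claim is simple to state: train a grasp-versus-reach classifier on plan-phase patterns from one limb and test it on patterns from the other limb, so that above-chance accuracy can only reflect action information that generalizes across limbs. A minimal sketch follows (the linear SVM and variable names are assumptions for illustration, not the study's actual code):

```python
# Minimal sketch of across-limb (cross-decoding) classification.
import numpy as np
from sklearn.svm import LinearSVC


def cross_limb_accuracy(X_left, y_left, X_right, y_right):
    """Train grasp-vs-reach on one limb's trials, test on the other's.

    X_* : (n_trials, n_voxels) plan-phase voxel patterns for each limb
    y_* : action labels (0 = grasp, 1 = reach)
    Accuracy above 0.5 implies an action code that is, to some degree,
    invariant to the limb being planned for.
    """
    acc_lr = LinearSVC(dual=False).fit(X_left, y_left).score(X_right, y_right)
    acc_rl = LinearSVC(dual=False).fit(X_right, y_right).score(X_left, y_left)
    return (acc_lr + acc_rl) / 2
```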
Possible roles for ipsilateral limb activity
What might action-specific signals for the ipsilateral limb represent? Rodent research suggests that ipsilateral activity depends, in part, on intact connectivity with the contralateral hemisphere (Brus-Ramer et al., 2009), and one possibility is that the ipsilateral signals reflect an efference copy of a contralateral motor command. Shared information related to the state of the other (contralateral) limb could represent a naturally occurring mechanism that facilitates coordination between both limbs in bimanual behavior.
Another possibility is that the ipsilateral signals represent some form of contingency planning (Chang et al., 2008; Chang and Snyder, 2012). That is, despite the instruction to perform a contralateral limb movement, a plan to execute an ipsilateral limb movement may nonetheless be maintained in parallel. This secondary plan could reflect an automatic strategy to prepare for unexpected events in the environment (e.g., if one limb is obstructed, the second limb can be used immediately without additional planning), possibly conferring survival advantages (Cisek, 2007; Andersen and Cui, 2009). The simultaneous preparation of multiple potential movements, even when not explicitly required by the task, has provided a useful heuristic for explaining neural activity in a variety of cortical (Kalaska and Crammond, 1995; Snyder et al., 1997; Cisek and Kalaska, 2005) and subcortical structures (Basso and Wurtz, 1998) and may provide a neurophysiological basis for decision-making processes (Cisek, 2006; Beck et al., 2008; Wang, 2008).
A less compelling explanation of our ipsilateral results is that the non-task limb musculature may have been active during movement preparation (Cisek et al., 2003; Ganguly et al., 2009; Soteropoulos et al., 2011). However, this explanation is unlikely to provide a complete account of the present findings. First, classic lesion evidence indicates that ipsilateral M1 can exclusively control arm movements (Brinkman and Kuypers, 1973). Second, ample neural evidence indicates that proximal and distal limb movements activate ipsilateral M1 (Tanji et al., 1988). Moreover, we carefully monitored for trials in which there were small movements of the limb during planning and, when detected, removed these trials from analysis (see Materials and Methods). Although we cannot exclude the possibility that undetected (subthreshold) movements of the non-task arm may explain some component of our results (Soteropoulos et al., 2011), given considerable previous work and findings of a similar nature in monkeys (Cisek et al., 2003; Oristaglio et al., 2006; Ganguly et al., 2009), this explanation seems an unlikely basis for our findings.
One remaining possibility is that the ipsilateral signals actually reflect an internally generated planning response (i.e., originating from the ipsilateral hemisphere). At the anatomical level, M1 has the capacity to directly or indirectly influence ipsilateral muscle activity through uncrossed descending projections to spinal structures (Glees and Cole, 1952), contralateral descending spinal fibers that re-cross the midline to the ipsilateral side (Rosenzweig et al., 2009), or transcallosal connections to the contralateral hemisphere (Rouiller et al., 1994). Furthermore, evidence from brain-damaged patients (Hermsdörfer et al., 1999b) and stimulation studies in both monkeys (Boudrias et al., 2010) and humans (Davare et al., 2007b) all implicate ipsilateral cortex in limb movement. However, the precise localization of stimulation effects (and certainly lesion effects) is a matter of significant debate (Xu-Wilson et al., 2011), and some lines of neurophysiological evidence from the monkey (Snyder et al., 2006; Chang et al., 2008; Soteropoulos et al., 2011) argue against a direct role for the ipsilateral hemisphere in movement generation. We expect that studies using reversible lesions (e.g., cortical cooling), which avoid the confounds associated with permanent lesions (e.g., connectivity changes; Jankowska and Edgley, 2006), will further help clarify the nature of these ipsilateral responses.
Conclusions and implications
The current findings provide substantive insights into how and where limb-specific and action-dependent signals are integrated in the human brain during planning. Notably, we reveal a much stronger representation of the ipsilateral limb than expected from previous fMRI findings. From the perspective of developing neural prosthetics (devices that convert intention-related brain signals into output commands for controlling robotic limbs), we report several brain regions in which limb-specific signals can be decoded and potentially harnessed to restore motor function in movement-impaired patient populations.
Footnotes
This work was supported by Canadian Institutes of Health Research Operating Grant MOP84293 (J.C.C.) and by a Banting Postdoctoral Fellowship, an Ontario Ministry of Research and Innovation Postdoctoral Fellowship, and Natural Sciences and Engineering Research Council Collaborative Research and Training Excellence funding (J.P.G.). We thank Derek Quinlan for his assistance in collecting the eye movement data.
Correspondence should be addressed to Jason P. Gallivan, Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada K7L 3N6. jasongallivan@gmail.com