Abstract
To achieve a certain sensory outcome, multiple actions can be executed. For example, unlocking a door might require clockwise or counterclockwise key turns depending on regional norms. Using fMRI in healthy human subjects, we examined the neural networks that dissociate intended sensory outcome from underlying motor actions. Subjects controlled a figure on a computer screen by performing pen traces on an MR-compatible digital tablet. Our design allowed us to dissociate intended sensory outcome (moving the figure in a certain direction) from the underlying motor action (horizontal/vertical pen traces). Using multivoxel pattern analysis and a whole-brain searchlight strategy, we found that activity patterns in left (contralateral) motor and parietal cortex, and also in right (ipsilateral) motor cortex, significantly discriminated the direction of pen traces regardless of the intended direction of figure movement. Conversely, activity patterns in right superior parietal lobule and premotor cortex, and also in left frontopolar cortex, significantly discriminated the intended direction of figure movement regardless of the underlying direction of hand movement. Together, these results highlight the role of ipsilateral motor cortex in coding movement direction and point to a network of brain regions involved in the high-order representation of intended sensory outcome, dissociated from specific motor plans.
Introduction
The same intention can be achieved by different and possibly conflicting motor acts, such as pushing or pulling to open a door. The opposite is also true: a specific motor act can correspond to multiple (and sometimes opposite) consequences, as in the case of pressing a button to turn an appliance on or off. These examples highlight both the flexibility and the redundancy in the link between actions and their underlying intentions. The neural mechanisms governing such mappings, translating intentions into specific motor commands, are largely unknown, although accumulating evidence points to the important role of neural circuits in the frontal and parietal lobes (Cisek and Kalaska, 2010).
Imaging studies in humans have demonstrated that information regarding the direction of hand movement is available from spatial activity patterns in contralateral primary motor cortex (Eisenberg et al., 2010). Similarly, tuning curves for direction of hand movement have been shown in dorsal premotor cortex (PMd), the intraparietal sulcus (IPS), and the parietal reach region (Fabbri et al., 2010). Other types of actions, such as grasping, hand gestures, or isometric pressing, have also been successfully decoded from spatial fMRI activity patterns in multiple frontal and parietal regions (Dinstein et al., 2008; Gallivan et al., 2011b; Diedrichsen et al., 2013), supporting the view that these regions are involved in the implementation of motor acts. Interestingly, fMRI activity patterns have also been shown to contain sufficient information to discriminate movement types even before their actual execution (Gallivan et al., 2011a, 2013), suggesting that these regions also play a role in action planning.
Previous studies have examined the link between performed actions and the sensory consequences they generate by introducing perturbations to the expected sensory feedback. Perturbations in time (i.e., a delay between the motor act and its evoked sensory consequence) or space (e.g., a joystick providing rotated visual feedback) have been found to engage regions in primary motor and posterior parietal cortices (Farrer et al., 2003; Graydon et al., 2005; Ogawa et al., 2007; Farrer et al., 2008; Eisenberg et al., 2011; Ogawa and Imamizu, 2013).
In the current study, we introduced spatial perturbations to manipulate the link between actions and their sensory consequences. We took a data-driven approach to dissociate brain regions that are sensitive to low-level aspects of actions (direction of hand movement) from those sensitive to their intended sensory outcome. We hypothesized that a distinct network of regions represents the intended sensory outcome and that another, partially overlapping network translates it into specific motor commands. To that end, we examined patterns of fMRI activity while subjects were engaged in a visuomotor tracking task, using a tablet pen to control a cartoon figure on a screen. Specifically, we searched for (1) regions that are selective to direction of hand movement regardless of intended direction of sensory outcome and (2) regions that are selective to the desired direction of sensory outcome (direction of cursor movement) regardless of the underlying motor act performed to achieve it.
Materials and Methods
Subjects.
Ten healthy, right-handed volunteers participated in the experiment (6 males; mean age 26.9 years, range 20–34 years) and were reimbursed for their time. All subjects had normal or corrected-to-normal vision and provided written informed consent to participate in the experiment. The experiment protocol was approved by the Ethical Committee of Tel Aviv University and the Helsinki committee at the Tel Aviv Souraski Medical Center.
Experimental design.
Subjects were scanned under two experimental conditions: hand tracking and eye tracking. In the “hand tracking” condition, two oval markers moved on the screen, one horizontally and one vertically, at constant speed (1 s from one side of the screen to the other, corresponding to 16.4 deg/s) (see Fig. 1A). At the beginning of each trial, a cartoon figure appeared, facing either right or down. Subjects controlled the cartoon figure's position using a touch pen on an MR-compatible tablet positioned at an angle of 30° on their abdomen, and their task was to track the relevant marker by maintaining minimal distance from it. Throughout the experiment, subjects could not see their hand. At trial onset, when the cartoon figure faced right and was located on the horizontal line (half of the trials), subjects had to track the horizontal marker. When the figure faced down on the vertical line (the other half of the trials), they had to track the vertical marker using appropriate pen movements. Movement of the cartoon figure on the screen was limited to a single axis, either horizontal or vertical. On half of the trials, the direction of pen movement on the tablet and the resulting direction of cartoon figure movement were congruent (i.e., left/right pen movements resulted in left/right horizontal movement of the cartoon figure, respectively, and upward/downward pen movements resulted in upward/downward vertical movement of the cartoon figure, respectively). On the other half of trials, directions of pen movement and figure movement were incongruent (e.g., subjects had to perform left/right pen movements on the tablet for the cartoon figure to move upward/downward, respectively, and upward/downward pen movements to move the figure left/right, respectively). Subjects were cued as to the trial type (congruent, incongruent) by the type of cartoon figure that appeared on the screen (princess or troll, respectively). Therefore, tracking on a given axis (e.g., horizontal) required different motor acts (either horizontal or vertical pen movements) depending on trial type (congruent or incongruent). In this fashion, the intended direction of cartoon movement (sensory outcome) was decoupled from the direction of hand movement; a sketch of this mapping follows below. The order of trial types (horizontal/vertical, congruent/incongruent) was counterbalanced. In the “eye tracking” condition, subjects were instructed to visually track a single oval marker that moved along either the horizontal or vertical axis of a plus-shaped track (see Fig. 1B). Subjects practiced both tasks before fMRI scanning until they reached ceiling performance.
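To make the congruency manipulation concrete, the pen-to-figure mapping can be summarized in a few lines of code. This is an illustrative sketch only; the function and axis names are ours and do not come from the authors' stimulus software.

```python
# Illustrative mapping from pen-movement axis to cartoon-figure axis.
# Names are hypothetical; the actual stimulus code is not published.
def figure_axis(pen_axis: str, congruent: bool) -> str:
    """Return the axis along which the cartoon figure moves."""
    if congruent:
        return pen_axis                     # e.g., horizontal pen -> horizontal figure
    swap = {"horizontal": "vertical", "vertical": "horizontal"}
    return swap[pen_axis]                   # incongruent: axes are orthogonal

assert figure_axis("horizontal", congruent=True) == "horizontal"
assert figure_axis("horizontal", congruent=False) == "vertical"
```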
The experiment consisted of four functional runs of 48 trials each. A trial block started with a 2 s presentation of a static image cueing the condition (a princess or troll facing either right or down in the “hand tracking” condition; a single oval marker presented along the horizontal or vertical axis in the “eye tracking” condition). This instruction phase was followed by an active phase during which the oval markers moved and subjects performed the appropriate tracking task for 7.5 s (corresponding to 3.5 back-and-forth movements). The active phase was followed by a 5.5 s fixation period during which subjects were instructed to fixate on a cross and refrain from moving their hands (see Fig. 1C). Across all runs, a total of 192 trials were obtained per subject, of which 144 were in the “hand tracking” condition (36 trials for each direction and congruency type) and 48 in the “eye tracking” condition (24 trials for each direction). Trial order was pseudo-randomized.
Data acquisition.
fMRI data were acquired using a 3T GE scanner, located at the Tel Aviv Souraski Medical Center. Functional scans were acquired using an EPI sequence (TR = 3000 ms, TE = 40 ms, FOV = 220 mm, flip angle = 30°). We acquired 39 slices parallel to the AC-PC plane (4 mm thickness, 0 mm gap, in-plane resolution of 1.7 mm, ascending interleaved) to obtain coverage of the entire brain, including the cerebellar lobes. For anatomical reference, a whole-brain high-resolution T1-weighted scan (voxel size, 1 × 1 × 1 mm) was acquired for each subject. Visual stimuli were back-projected onto a screen behind the scanner and viewed by the subjects using a mirror attached to the head coil.
Data analysis: fMRI data preprocessing.
Data were preprocessed using Brain Voyager QX 2.6 (Brain Innovation). Functional images were corrected for slice acquisition time and head motion (sinc interpolation), and linear trends were removed (high-pass filter, 2 cycles per run). The functional data were then registered to the anatomical data, which were subsequently transformed into Talairach space (Talairach and Tournoux, 1988).
Classification.
We used a multivoxel pattern analysis (MVPA) whole-brain searchlight approach (Kriegeskorte et al., 2006; Tong and Pratte, 2012). This approach was chosen over a univariate voxel-based analysis because of its greater sensitivity to subtle differences in activation patterns across groups of voxels. To identify brain regions that differentiate between experimental conditions, we used a linear binary classification algorithm (support vector machine [SVM]) implemented in MATLAB (MathWorks, version 7.14) and Java (http://www.csie.ntu.edu.tw/∼cjlin/libsvm) (Chang and Lin, 2011).
For each trial, we calculated the percent signal change between time point 2 and time point 0 (corresponding to 6 and 0 s from trial onset, respectively). To account for possible differences in signal intensity across runs, the percent signal changes of all trials from each voxel were z-score normalized within each run separately. Thus, for each subject, the brain activation of all voxels in a given condition across all trials was represented by a 4D matrix of the form X_dim × Y_dim × Z_dim × N_trials, where X, Y, and Z represent brain dimensions in Talairach space and N represents the number of trials (e.g., N = 36 across all runs for the vertical congruent condition in the hand tracking task).
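In code, this per-trial feature extraction amounts to a percent-signal-change computation followed by within-run z-scoring. The following numpy sketch assumes a voxels × timepoints data array and onset indices in TR units (TR = 3 s); the variable names are ours, not the authors'.

```python
import numpy as np

def trial_features(bold: np.ndarray, trial_onsets: np.ndarray) -> np.ndarray:
    """Percent signal change between time point 2 (6 s) and time point 0 (onset).

    bold: voxels x timepoints array for one run; trial_onsets: onset indices in TRs.
    Returns a voxels x trials matrix, z-scored per voxel within the run.
    """
    base = bold[:, trial_onsets]        # time point 0 (trial onset)
    peak = bold[:, trial_onsets + 2]    # time point 2 (6 s after onset, TR = 3 s)
    psc = 100.0 * (peak - base) / base
    mu = psc.mean(axis=1, keepdims=True)
    sd = psc.std(axis=1, keepdims=True)
    return (psc - mu) / sd              # z-score across trials, per voxel, per run
```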
All analyses were performed using a searchlight strategy (see Fig. 2). For each center-voxel, common to all subjects in Talairach space, data of the 125 closest voxels from two conditions were used as input to the classifier. To increase the signal-to-noise ratio, the trials in each condition were randomly divided into triplets, and the data of each triplet were averaged to serve as a single data point in the matrix (e.g., Dinstein et al., 2008). Thus, for a given condition (e.g., horizontal hand movement), a 72 × 125 matrix corresponding to 72 trials (collapsed across congruency) and 125 voxels (center-voxel + neighbors) was reduced to a 24 × 125 matrix. To obtain a prediction score for the center-voxel, a repeated leave-two-out cross-validation analysis was performed in which one trial from each condition was used as the test set and all remaining trials were used as the training set. Therefore, matrices were divided into a training set (23 vectors of length 125 from each condition) and a test set (one vector of length 125 from each condition). An SVM was trained on the two training sets (corresponding to the different conditions to be classified), and a prediction score between 0 and 2 was obtained by examining the SVM performance on the test sets (according to the number of correct classifications). The leave-two-out process was repeated 100 times (the triplet averaging was performed anew once every 24 iterations) to obtain a score representing the average classification performance of a particular center-voxel between the two conditions. A high score indicates that the center-voxel contains information associated with the examined property (e.g., differentiating horizontal from vertical pen movements). This process was iterated for each voxel in the dataset, and a multisubject classification map (MSCM) was obtained by calculating for each voxel in Talairach space the median prediction score across all subjects.
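The searchlight scoring step can be sketched as follows, substituting scikit-learn's linear SVM for the authors' MATLAB/libsvm implementation. Array shapes follow the description above (two 72 × 125 trial matrices per center-voxel); the exact randomization scheme is our assumption.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def triplet_average(X: np.ndarray) -> np.ndarray:
    """Randomly group trials into triplets and average them (72 x 125 -> 24 x 125)."""
    idx = rng.permutation(len(X)).reshape(-1, 3)
    return X[idx].mean(axis=1)

def searchlight_score(A: np.ndarray, B: np.ndarray, n_iter: int = 100) -> float:
    """Repeated leave-two-out SVM accuracy for one 125-voxel searchlight.

    A, B: trials x voxels matrices for the two conditions to be classified.
    """
    correct, total = 0, 0
    for it in range(n_iter):
        if it % 24 == 0:                       # redo triplet averaging every 24 iterations
            a, b = triplet_average(A), triplet_average(B)
        i, j = rng.integers(len(a)), rng.integers(len(b))
        X_train = np.vstack([np.delete(a, i, axis=0), np.delete(b, j, axis=0)])
        y_train = np.r_[np.zeros(len(a) - 1), np.ones(len(b) - 1)]
        clf = SVC(kernel="linear").fit(X_train, y_train)
        correct += int(clf.predict(a[i:i + 1])[0] == 0)   # held-out trial, condition A
        correct += int(clf.predict(b[j:j + 1])[0] == 1)   # held-out trial, condition B
        total += 2
    return correct / total
```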
Statistical analysis.
We used a cluster-based approach to robustly assess the statistical significance of classification prediction levels obtained with searchlight MVPA. In this approach, we compare cluster size and the minimum prediction score (the lowest prediction score of a voxel within the cluster) against the prediction scores obtained from randomly labeled data (Stelzer et al., 2013). A total of 100 MSCMs were created by repeating the classification process (see Fig. 2) with shuffled labels. The threshold classification level was chosen as the highest performance level for which, across all 100 shuffle-labeled MSCMs, fewer than 100 voxels in total had a higher classification level (i.e., fewer than one voxel on average per MSCM).
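Assuming the 100 shuffle-labeled MSCMs are stacked into a single array, the threshold rule reduces to a one-liner. The sketch below uses assumed array names; it returns the score above which fewer than 100 null voxels survive.

```python
import numpy as np

def shuffle_threshold(shuffled_mscms: np.ndarray, max_null_voxels: int = 100) -> float:
    """Classification threshold from shuffle-labeled MSCMs.

    shuffled_mscms: (n_maps, n_voxels) array of median prediction scores.
    Returns the score t such that fewer than max_null_voxels voxels across
    all null maps exceed t (i.e., < 1 voxel per map on average).
    """
    ranked = np.sort(shuffled_mscms.ravel())[::-1]   # scores in descending order
    return ranked[max_null_voxels - 1]               # strictly above this lie < 100 voxels
```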
A cluster of voxels in the original-data MSCM was considered significant and declared a region of interest (ROI) if it satisfied the following two criteria: (1) the prediction level of every voxel in the cluster exceeded the threshold prediction score defined above, and (2) the cluster size exceeded 50 voxels. We chose a minimum of 50 voxels as a conservative measure to obtain robust clusters.
The spatial distribution of significant clusters was defined at the multisubject level (MSCM); however, to quantify a cluster's performance in differentiating between two conditions, we went back to the single-subject level. For each subject, we computed summary measures of the prediction scores across all voxels in the cluster, and each measure was then averaged across all subjects. We computed three such measures: the mean classification score across all voxels in the cluster, the mean across the top 20% of voxels in the cluster, and the score of the highest voxel within the cluster. This was performed for each ROI in the MSCM. Throughout the text, the measure we report as the classification level of an ROI is the classification level of the top 20% of voxels averaged across subjects. The other measures are provided in the tables.
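The three ROI summaries are straightforward to compute; a short numpy sketch (input: one subject's prediction scores for the voxels of a single cluster):

```python
import numpy as np

def roi_summaries(scores: np.ndarray) -> dict:
    """Mean, top-20%, and peak classification scores across a cluster's voxels."""
    k = max(1, int(round(0.2 * scores.size)))    # number of voxels in the top 20%
    top20 = np.sort(scores)[::-1][:k]
    return {"mean": float(scores.mean()),
            "top20": float(top20.mean()),        # the measure reported in the text
            "max": float(scores.max())}
```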
Exclusion of brain areas associated with eye movements.
During the hand tracking task, direction of eye movement was coupled with the intended direction of sensory outcome. An alternative explanation for significant classification levels in this condition is that voxels within the ROI are sensitive to direction of eye movement rather than intended direction of sensory outcome. To examine this issue, we went back to the data of single subjects and calculated the classification level of eye movement direction in each voxel within the ROIs during the eye-tracking experiment. A p value of the classification score in each voxel was assigned by repeating this calculation across 500 iterations using shuffle-labeled data and comparing the actual classification score with this null distribution. Voxels within the ROI with a p value <0.05 were declared as coding direction of eye movements. In this fashion, we obtained a Boolean map for each subject representing an exclusion matrix of voxels within each ROI.
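The voxel-wise permutation test described here amounts to ranking the true score within its 500-sample null distribution. A minimal sketch (names assumed):

```python
import numpy as np

def voxel_pvalue(actual_score: float, null_scores: np.ndarray) -> float:
    """Fraction of shuffle-labeled scores (500 iterations) that meet or exceed
    the actual eye-movement classification score of this voxel."""
    return float(np.mean(null_scores >= actual_score))

# a voxel is excluded from the ROI if it significantly decodes eye-movement direction:
# excluded = voxel_pvalue(score, null) < 0.05
```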
Each subject and each ROI had a unique, partially overlapping exclusion matrix; therefore, we generated a probability map displaying the degree of overlap across subjects. We computed all three classification measures described above for each ROI across subjects, both before and after exclusion of eye-movement-related voxels. An ROI was defined as significantly coding intended sensory outcome if, after exclusion of eye-movement-related voxels, the following two conditions were satisfied: (1) the classification level exceeded the level obtained using 100 shuffle-labeled maps, and (2) in each individual subject, the minimum cluster size remained >50. We examined these criteria using each of our three classification measures.
Generalization of classification across conditions.
In the previous analysis, we collapsed trials across conditions (for example, collapsing across the two directions of hand movement and training a classifier to discriminate intended direction of cursor movement: horizontal/vertical). In this procedure, classification performance is examined on a set of new test trials that are similar in type to the training set (i.e., also containing both horizontal and vertical hand movement trials). In a more conservative cross-condition classification, a classifier is trained to discriminate one dimension (e.g., intended direction of cursor movement: horizontal/vertical) using all trials from one type of hand movement (e.g., horizontal), and performance is examined on trials from the other type of hand movement (e.g., vertical). In this case, classification of intended direction of cursor movement is examined on trials with a different type of hand movement than that used for training. This analysis is more demanding and examines generalization of the discrimination rule learned from one type of trial (e.g., horizontal hand movement) to another (e.g., vertical hand movement).
We took all trials with a common direction of cursor movement (e.g., horizontal) and trained a classifier to discriminate these trials according to the actual direction of hand movement (either horizontal or vertical). Next, we examined whether the discrimination rule learned on this dataset generalizes to trials in which the intended direction of cursor movement was orthogonal (e.g., vertical). These trials were obviously not used in the training procedure and, most importantly, the direction of cursor movement in these trials differs from that in the trials used for training. In this fashion, a classifier is built to discriminate horizontal/vertical hand movements using all data with one intended direction of cursor movement (e.g., horizontal) and tested on all remaining trials, in which the intended direction of cursor movement was orthogonal (vertical in the current example). Therefore, significant decoding in these trials implies a similar neural representation of hand movement direction for both intended directions of cursor movement. For each subject, this procedure was performed in both directions: once by training on trials in which the intended direction of cursor movement was horizontal and testing on trials in which it was vertical, and once with the roles reversed. The single-subject classification maps in each direction were used to generate an MSCM, and a conjunction of the significant clusters in the two MSCMs (one in each direction) was performed.
In a similar fashion, we examined whether the spatial patterns of activity discriminating intended direction of cursor movement during a specific direction of hand movement (e.g., horizontal) generalize to trials in which the direction of hand movement was different (e.g., vertical). We trained a classifier to discriminate intended direction of cursor movement (horizontal/vertical) from all trials with a common direction of hand movement and tested classifier performance on trials with the orthogonal direction of hand movement. Again, significant classification performance in this case implies that the spatial pattern of activity in a given region discriminates intended direction of cursor movement in a manner that generalizes to trials with a different direction of hand movement.
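Both generalization tests share a single train-on-one-half, test-on-the-other scheme; only the labels and the splitting condition differ. A sketch using scikit-learn's linear SVM (data arrays and label vectors are assumed names):

```python
import numpy as np
from sklearn.svm import SVC

def cross_condition_accuracy(X_train: np.ndarray, y_train: np.ndarray,
                             X_test: np.ndarray, y_test: np.ndarray) -> float:
    """Train on all trials from one condition (e.g., horizontal cursor movement)
    and test on trials from the orthogonal condition; y holds the decoded
    dimension (e.g., hand-movement direction). Returns test accuracy."""
    clf = SVC(kernel="linear").fit(X_train, y_train)
    return float(np.mean(clf.predict(X_test) == y_test))

# Performed in both directions, followed by a conjunction of the two maps:
# acc_hv = cross_condition_accuracy(X_curs_horiz, y_hand_h, X_curs_vert, y_hand_v)
# acc_vh = cross_condition_accuracy(X_curs_vert, y_hand_v, X_curs_horiz, y_hand_h)
```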
In this type of cross-condition analysis, the amount of available data is reduced because the classifier is trained on half of the trials and tested on the other half (as opposed to the leave-two-out scheme used in the original analysis, in which data from all trials, except the two in the test set, are used for training). We used a classification threshold of 55% and assessed statistical significance by generating 100 shuffle-labeled MSCMs. Using this threshold, we created a histogram of the maximum cluster size found in each shuffle-labeled MSCM. Clusters in the real map with a size greater than the largest 5% of clusters in the histogram were defined as significant (p = 0.05).
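This cluster-level test reduces to taking a percentile of the null distribution of maximum cluster sizes; a sketch in numpy:

```python
import numpy as np

def cluster_size_threshold(max_null_sizes: np.ndarray, alpha: float = 0.05) -> float:
    """max_null_sizes: largest suprathreshold (>55%) cluster size in each of the
    100 shuffle-labeled MSCMs. Real clusters exceeding the returned size are
    significant at the given alpha (p = 0.05 here)."""
    return float(np.percentile(max_null_sizes, 100.0 * (1.0 - alpha)))
```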
Results
Subjects performed four variants of a tracking task, performing directional pen movements on a tablet to control a cartoon figure on the screen. In half of the trials, the directions of hand movement and figure movement were congruent, whereas in the other half, the direction of hand movement was orthogonal (incongruent) to the desired direction of figure movement (Fig. 1A).
To identify brain regions with activity patterns that distinguish between the experimental tasks, we applied MVPA in a searchlight strategy across all voxels in the brain (Fig. 2; see Materials and Methods). An MSCM was generated by calculating the median performance score of each voxel across all subjects in discriminating between two experimental conditions. To assess statistical significance, we also generated 100 MSCMs using the same dataset after randomly shuffling the labels identifying the correct trial type. A significance threshold was determined by taking the classification score at which, on average, only one voxel remained across the 100 shuffle-labeled maps (see Materials and Methods). In our dataset, this threshold corresponded to an accuracy level of 64.3%, and the number of voxels exceeding this threshold within single shuffle-labeled MSCMs ranged between 0 and 5. We therefore used this classification level as a threshold, together with a minimum cluster size of 50 neighboring voxels, as our significance criterion.
Classification of hand movement direction
To examine brain regions that are sensitive to the direction of hand movement, we collapsed separately all trials in which subjects performed vertical hand movements (i.e., vertical congruent + vertical incongruent) and all trials in which subjects performed horizontal hand movements (i.e., horizontal congruent + horizontal incongruent). Thus, within each group of trials, the hand movement was fixed (either vertical or horizontal) but intended visual feedback varied (horizontal in half of the trials and vertical in the other half). This dissociation allowed us to search for regions sensitive to direction of hand movement regardless of direction of intended visual feedback.
We identified three significant clusters (Fig. 3): (1) left sensorimotor regions, including postcentral and precentral gyri with a mean classification score of 78.1 ± 5.2% across subjects; (2) left parietal cortex (superior parietal lobule) with average classification score of 70.9 ± 7.1%; and (3) right sensorimotor cortex surrounding the central sulcus with average classification score of 71.6 ± 5.7%. For Talairach coordinates and classification levels, see Tables 1 and 2.
Classification of intended direction of cursor movement
To examine significant clusters coding for sensory outcome (intended direction of figure movement) regardless of hand movement direction, we collapsed all trials with a common direction of figure movement (horizontal or vertical) to use as input to the classifier. In other words, we separately collapsed all horizontal congruent and vertical incongruent trials (which resulted in horizontal figure movement) and all vertical congruent and horizontal incongruent trials (which resulted in vertical figure movement).
We identified four significant ROIs that discriminate between the two conditions (Fig. 4A): (1) the right superior parietal lobule with an average classification score of 75.6%; (2) right sensorimotor cortex (including precentral and postcentral gyri) with an average classification score of 72.6%; (3) the left medial frontal gyrus (frontopolar cortex) with an average classification score of 73.3%; and (4) a large cluster consisting of early visual regions, including cuneus and lingual gyri, with an average classification score of 94.7% across subjects.
Although the results described above point to significant decoding of intended direction of cursor movement, an alternative explanation might be eye movements. Because subjects tracked the cartoon figure, high classification scores might reflect decoding of the direction of eye movement rather than the intended direction of sensory consequence. Therefore, our experiment also included an eye-tracking condition in which subjects performed horizontal or vertical eye movements (see Materials and Methods). Figure 4B depicts the same map as in Figure 4A, overlaid with a probability map (green-white color bar) representing the number of subjects in which a particular voxel significantly classified direction of eye movements. Dark green colors in the map correspond to voxels that significantly classified direction of eye movement in only a few subjects, whereas bright colors correspond to voxels that significantly coded direction of eye movement in most subjects. The bright voxels in the map are therefore most likely related to direction of eye movements rather than the intended direction of sensory outcome. As can be seen in Figure 4B, early visual regions exhibited high classification scores in both the hand- and eye-tracking experiments across many subjects, supporting the interpretation that the high classification levels in these regions are probably due to their sensitivity to direction of eye movement. However, other regions higher in the visual hierarchy, including fusiform and lingual gyri ventrally and inferior parietal and occipitotemporal gyri dorsally, were much less sensitive to eye movement.
To address this issue, we examined in each subject the classification level of all voxels in the above clusters after excluding all voxels that significantly discriminated direction of eye movements (see Materials and Methods). This exclusion process reduced the average classification scores for the clusters. Therefore, to verify that the clusters remain significant following exclusion of these eye-movement-related voxels, the classification level of the remaining voxels in each cluster was compared with that obtained using shuffle-labeled data (see Materials and Methods). Importantly, all four regions reported with significant classification of intended direction of cursor movement remained significant after the exclusion process (Tables 1 and 2).
Figure 4C depicts an overlay of the clusters related to direction of hand movement (motor; blue) and clusters related to intended direction of cursor movement (intention; red). Green voxels represent voxels in which direction of eye movement could be significantly classified in at least 3 subjects (Fig. 4B). In the right hemisphere, voxels associated with intended direction of cursor movement (red) are rostral to those associated with direction of hand movement (blue), with partial overlap in the central sulcus.
Generalization of classification
In the previous analyses, we built a classifier to discriminate between two experimental conditions using data that were collapsed across the other experimental dimension. For example, we built a classifier to discriminate direction of hand movement based on trials collapsed across the two intended directions of cursor movement. In the current analysis, we further examined whether a model trained to discriminate between two experimental conditions (e.g., horizontal/vertical hand movement) using trials from one direction of intended cursor movement, generalizes and allows successful discrimination of horizontal/vertical hand movement from trials in which intended direction of cursor movement was orthogonal. Similarly, we examined generalization of classification of intended direction of cursor movement across the different directions of hand movement (see Materials and Methods).
Figure 5 presents an overlay of the maps obtained when classifying intended direction of cursor movement (intention; red) and classifying direction of hand movement (motor; blue). Successful transfer of classification of hand movement direction was obtained in several clusters, including (1) left sensorimotor cortex with 58.2% accuracy and (2) right sensorimotor cortex with 56.5% accuracy. After excluding voxels related to eye movements (green), successful classification of intended direction of cursor movement was obtained in two clusters: right superior parietal lobule with 56.9% accuracy and high-order visual cortex, including bilateral lingual gyri, with 66.4% accuracy (Table 3). These regions are compatible with the maps obtained using the leave-two-out procedure, even though only half of the data were available for training here.
Classification of rule
Finally, we also examined regions sensitive to the rule linking motor actions (direction of hand movement) to intended sensory outcome (direction of figure movement). In our experiment, this link was either congruent or incongruent (orthogonal). To that end, we separately collapsed all congruent trials (i.e., trials with the princess cartoon figure, either horizontal or vertical) and all incongruent trials (i.e., trials with the troll cartoon figure, either horizontal or vertical). We identified one significant cluster in the precuneus that was sensitive to congruency, with an average classification level of 71.5 ± 9.3% across subjects (Fig. 6). At the behavioral level, we found a significant difference in the average distance between the figure and the tracked oval between the congruent and incongruent conditions (7.5 ± 0.8 vs 9.2 ± 1.1 pixels, respectively; p < 0.002, paired t test).
Discussion
In the current study, we recorded fMRI signals in healthy subjects while dissociating the direction of performed hand movements (horizontal/vertical pen traces) from their intended sensory outcome (horizontal/vertical cursor movement). Previous fMRI studies in humans using an ROI approach have demonstrated that activity patterns in contralateral primary motor cortex are sensitive to direction of joystick movement (Eisenberg et al., 2010). Other studies using whole-brain repetition suppression analysis (Fabbri et al., 2010) and correlational pattern analysis (Fabbri et al., 2014) extend these findings and report direction selectivity during execution of reaching movements also in dorsal premotor cortex, the intraparietal sulcus, the parietal reach region, and the supplementary motor area. Our results, using a tracking task, are compatible with these reports and show that the fMRI signal in contralateral sensorimotor cortex and superior parietal lobule is sensitive to the direction of hand movement. Beyond this, our results demonstrate that right-hemisphere (ipsilateral) sensorimotor cortex also contains information regarding direction of hand movement. Previous studies examining activity of individual voxels using univariate approaches (e.g., GLM) have reported a global reduction of the fMRI signal in ipsilateral motor cortex during unimanual movements (Hamzei et al., 2002). Recently, careful inspection of the spatial activity patterns in ipsilateral motor cortex during isometric button presses has revealed that local groups of voxels contain information regarding finger identity (Diedrichsen et al., 2013). Similarly, activity in ipsilateral motor cortex has been found to be modulated by movement complexity (Verstynen et al., 2005). fMRI activity patterns in ipsilateral as well as contralateral parietal cortex have also been successfully used to discriminate complex gestures performed by subjects (Dinstein et al., 2008). One study using repetition suppression reports directional tuning in anterior IPS and PMd that is stronger in the right hemisphere than in the left; this was found for directional movements with the right (ipsilateral) and also the left (contralateral) hand (Fabbri et al., 2010), suggesting a right-hemisphere bias in coding movement direction. Finally, electrocorticography recordings of local field potentials from motor cortex in patients allow successful decoding of ipsilateral limb position (Ganguly et al., 2009). Together, these results demonstrate that ipsilateral activity in motor and parietal cortices contains information about limb identity, position, and type of movement. Our observation of a significant cluster of voxels in ipsilateral motor cortex coding direction of hand movement adds to these previous reports and strengthens the notion that local patterns of neural activity in both contralateral and ipsilateral motor cortex participate in coding movement parameters. It should be noted that electrophysiological studies in monkeys provide direct evidence for neural responses in ipsilateral motor regions, including primary motor cortex, inferior area 6, and premotor cortex, during movement (Gentilucci et al., 1988; Rizzolatti et al., 1988; Cisek et al., 2003).
The main incentive for using the current experimental design was to identify brain regions that are sensitive to the intended sensory outcome (direction of cursor movement) regardless of the direction of hand movement. We identified significant clusters discriminating intended direction of cursor movement in four regions (right parietal cortex, right motor/premotor cortex, left frontopolar cortex, and visual cortex, bilaterally). In right parietal cortex, we found a cluster of significant voxels spanning the superior parietal lobule. The involvement of parietal cortex in coding intentions has been demonstrated in multiple studies (Haggard, 2008; Desmurget and Sirigu, 2009). Electrical stimulation of the parietal cortex in humans has been reported to elicit subjective feelings of an urge to perform a movement (Desmurget et al., 2009). The time point at which subjects report being aware of their intention to move is delayed in patients with parietal lesions and coupled with a reduced readiness potential, which is a physiological marker of voluntary actions (Sirigu et al., 2004). Successful classification of movement type based on activity in a distributed network of parietal and frontal regions has been shown not only during the movement phase but also during the planning phase before actual movement (Gallivan et al., 2011a, b, 2013). Interestingly, the cluster of voxels we found coding intention, spanning motor/premotor cortex in the right hemisphere, was more anterior than the cluster of voxels coding the actual direction of hand movement (Fig. 4C). This may underscore a functional separation between regions coding the direction of hand movement and those coding its intended sensory consequences. A previous study focusing on contralateral M1 has shown that activity patterns are similar not only when hand movements are performed in the same direction but also when hand movements are performed in slightly different directions while the intended direction of sensory outcome is the same (Eisenberg et al., 2011). In our whole-brain strategy, we did not find a pattern of activity discriminating intended direction of cursor movement in contralateral M1. This difference may be attributed to the larger deviation between direction of hand movement and visual feedback in our experiment (90° vs 45°). In addition, Eisenberg et al. (2011) examined correlations between activity patterns across different sets of trials, whereas in the current study we used SVM classification of single trials. Given these differences, the involvement of contralateral M1 in the network coding intended direction of sensory outcome should be further examined.
Regarding the cluster we found in left frontopolar cortex, recent studies using a similar whole-brain searchlight approach have also implicated this region in coding the intention to perform an upcoming left or right finger movement several seconds before actual movement (Soon et al., 2008). Moreover, it has been demonstrated that voxels in left frontopolar cortex can also code abstract nonmotor intentions, such as the decision to add or subtract two numbers (Haynes et al., 2007; Soon et al., 2013). Our results provide further evidence that this region holds information regarding intended sensory outcome regardless of its underlying motor implementation. Taken together, it seems that frontopolar cortex in the left hemisphere takes part in coding abstract goals and predicting action outcomes.
Our classification analysis of sensory outcome also revealed a large cluster of voxels in visual areas. Since subjects tracked the cartoon figure with their eyes, this is not surprising, especially in early retinotopic visual areas. We further examined this issue using our eye-tracking condition. Indeed, most voxels in early visual areas that were found to significantly code sensory outcome were also found to significantly code direction of eye movement. This suggests that the high classification levels in these regions are most probably due to the different directions of eye movement. However, after excluding these voxels, significant voxels remained in high-order visual areas, including the fusiform and lingual gyri in the ventral stream, and also in the precuneus and superior dorsal sections of the occipital gyri (Fig. 4B). A possible mechanism explaining this result might be a forward model in which frontoparietal regions send information to high-order visual areas regarding the expected sensory outcome corresponding to the desired sensory goal (Astafiev et al., 2004; Crapse and Sommer, 2008; Orlov et al., 2010). Importantly, the other clusters in parietal and frontal cortex that were found to code intended direction of cursor movement were only marginally influenced by the exclusion of voxels sensitive to direction of eye movement (Table 1).
We also examined the generalizability of training a classifier to discriminate intended direction of cursor movement using trials with one direction of hand movement and testing on trials with the orthogonal direction of hand movement. The number of trials that can be used for training in this type of analysis is drastically reduced relative to the leave-two-out analysis performed earlier. Nonetheless, we found a similar set of regions in parietal cortex and high-order visual cortex coding intended direction of cursor movement. This implies that the intended direction of sensory outcome is represented in these regions with similar activity patterns across different underlying movements. This analysis did not reveal clusters in left frontopolar and right sensorimotor cortex, pointing to possible functional differences between subregions within the network discriminating intended sensory outcome.
Coding intentions in humans has also been examined using observation tasks. Such studies have typically presented subjects with videos depicting actions with different intentions and have implicated the parietal and frontal cortices in coding intentions during observation of actions performed by others (Iacoboni, 2005; Hamilton and Grafton, 2006, 2008; Ramsey and Hamilton, 2010). Interestingly, these observation studies show a right hemisphere bias in both frontal and parietal regions, similar to the right hemisphere bias we report using an execution task.
Finally, we also examined regions sensitive to the mapping between actions and their consequences. We found one significant cluster, in the left precuneus, discriminating congruent from incongruent trials. This is compatible with previous studies demonstrating modulation of precuneus activity during learning of visuomotor transformations (Graydon et al., 2005). However, an alternative interpretation of this result is task difficulty: one might argue that the high discrimination level reflects the fact that the incongruent trials were more difficult. Indeed, the average distance between the figure and the tracked oval marker was larger in the incongruent condition. Furthermore, significant classification could potentially be driven by differences in the visual cues in the two conditions (princess vs troll). Although we cannot completely rule out these interpretations, we note that, using standard GLM methods, we did not observe global signal differences between the two conditions, making these interpretations less likely. A recent study (Ogawa and Imamizu, 2013) reported the supplementary motor area and sensorimotor cortex as regions discriminating trials with different types of incongruent visuomotor mappings. In the current study, we did not find voxels in frontal regions discriminating congruent/incongruent trials. Although differences in task might explain these discrepancies, further studies are needed to examine the role of these regions in mapping motor acts to their sensory consequences.
In conclusion, using a whole-brain data-driven approach, we find that, in addition to contralateral motor and parietal cortices, hand movement direction is also coded in ipsilateral motor cortex. Additionally, we find a network of clusters in left frontopolar, right motor/premotor, and parietal cortex that is sensitive to the intended sensory outcome regardless of the motor actions performed to achieve it. Finally, our data suggest that high-order visual areas might also take part in such coding. Activity patterns in this network of brain regions are compatible with a role in abstract goal representation.
Footnotes
This work was supported by the I-CORE Program of the Planning and Budgeting Committee and the Israel Science Foundation Grant 51/11 to R.M. and Y.Y., Human Frontiers Science Project Organization CDA00078/2011-C and Israel Science Foundation Grants 1771/13 and 2043/13 to R.M., Sagol School of Neuroscience, and Israeli Ministry of Absorption Fellowship to R.G. We thank Dr. Jonathan Rosenblatt and Prof. Yoav Binjamini for fruitful discussions.
The authors declare no competing financial interests.
Correspondence should be addressed to Dr. Roy Mukamel, Sagol School of Neuroscience, Tel Aviv University, Tel Aviv 69978, Israel. rmukamel@tau.ac.il