Abstract
Neurophysiology and neuroimaging evidence shows that the brain represents multiple environmental and body-related features to compute transformations from sensory input to motor output. However, it is unclear how these features interact during goal-directed movement. To investigate this issue, we examined the representations of sensory and motor features of human hand movements within the left-hemisphere motor network. In a rapid event-related fMRI design, we measured cortical activity as participants performed right-handed movements at the wrist, with either of two postures and two amplitudes, to move a cursor to targets at different locations. Using a multivoxel analysis technique with rigorous generalization tests, we reliably distinguished representations of task-related features (primarily target location, movement direction, and posture) in multiple regions. In particular, we identified an interaction between target location and movement direction in the superior parietal lobule, which may underlie a transformation from the location of the target in space to a movement vector. In addition, we found an influence of posture on primary motor, premotor, and parietal regions. Together, these results reveal the complex interactions between different sensory and motor features that drive the computation of sensorimotor transformations.
Introduction
Executing a goal-oriented movement requires the brain to transform goal-relevant sensory information into specific motor commands. Neurophysiological and neuroimaging assays of reaching and pointing actions implicate a network distributed over parietal, premotor, and primary motor regions in computing transformations from sensory to motor information (Kalaska et al., 1997; Kalaska, 2009; Beurze et al., 2010). These regions represent a complicated combination of sensory and motor features, posing a challenge in identifying loci where actual transformations take place.
For example, neurons in the superior parietal lobule (SPL), dorsal (PMd) and ventral (PMv) premotor cortex, and primary motor cortex (M1) represent the target's spatial location, although in M1, this information is strongly linked to movement execution (Lacquaniti et al., 1995; Shen and Alexander, 1997a, 1997b). Neurons in these regions also show directional tuning, with stronger modulation from kinetic variables in M1 than in PMd or SPL (Scott and Kalaska, 1997; Scott et al., 1997; Hamel-Pâquet et al., 2006). Arm posture representations in M1 are linked to the active maintenance of limb position, whereas representations in PMd and SPL relate to planning and updating posture (Kalaska and Hyde, 1985; Crammond and Kalaska, 1996; Graziano et al., 2005; Kurtzer et al., 2005). In humans, neural representations have been identified for movement direction in M1, PMd, and parietal areas (Eisenberg et al., 2010; Fabbri et al., 2010) and for body posture in SPL (Parkinson et al., 2010).
Here, we aimed to disentangle the functional relationships linking these representations. We reasoned that loci for transformations would represent both the input and output features of a transformation and thus could be operationally identified by the extent to which a region's cortical activity reflects the interaction between pairs of features. To address this challenge, we designed an fMRI experiment to directly investigate sensorimotor feature representations and their hypothesized interactions within a region.
Participants performed right-handed wrist movements to vertical or horizontal targets in one of two postures, providing a structured set of movements that differ systematically in terms of target location, movement direction, movement amplitude, posture (wrist orientation), and wrist angle. Movement-related BOLD activity on each trial was evaluated using multivoxel pattern analysis (MVPA; Haynes and Rees, 2006). In our MVPA protocol, a classifier was first trained to decode two movements, A and B, that differed across multiple features (e.g., target location, direction, or posture). We hypothesized that the activity evoked by each feature's neural representation would contribute differentially to the resulting classifier. To parse the independent contributions of individual features, we tested the classifier's ability to generalize to the decoding of movements, A′ and B′, that shared only some features of A and B (e.g., direction and posture), respectively, but not others (e.g., target location). A critical test was whether the features that contributed most to generalization performance varied across regions.
Within this novel framework, we report evidence identifying SPL as a locus in the transformation between target location and movement direction and show the ubiquity of posture-dependent representations in the motor system.
Materials and Methods
Participants.
Fifteen healthy, right-handed participants (six men; mean age 22.73 years, range 20–26) with normal or corrected-to-normal vision participated in the 2 d experiment. All participants provided written consent approved by the Human Subjects Committee, Office of Research, University of California, Santa Barbara and were paid for their participation. One participant's data were not used due to excessive head motion during scanning.
Apparatus.
Participants were positioned supine in the MRI scanner with their legs shifted to the left side of the scanner table on a plastic board to allow extra room for wrist movements. A photograph of the typical positioning arrangement for a participant before being moved into the scanner is shown in Figure 1. Each participant's head and neck were padded with foam to minimize head motion, and headphones and earplugs were worn for ear protection and noise cancellation. The participant's right arm was placed on a foam cushion such that the wrist extended slightly in front of the cushion and the elbow rested at an angle. To prevent individual finger movement and to provide comfort, participants wore a contoured swim paddle on their right hand. A custom-made rigid-body tracking device was secured on the back of the right hand with an elastic bandage (Fig. 1). The device consisted of three light-emitting diodes (LEDs) mounted on a Plexiglas frame. Each LED was connected to a battery pack placed on the scanner table. Three shielded cameras (Precision Point Tracking System; WorldViz) placed outside the scanner recorded the six-degrees-of-freedom position of the LED array at 150 Hz, allowing precise movement kinematics to be measured.
Figure 1. Experimental setup. Participants were positioned with their legs to the left of the scanner table to allow more room for wrist movement. A custom-made motion-tracking device was placed on the back of the participant's right hand. Cameras recorded the rigid-body positions of the tracking device during the task. The participant is shown in the neutral position (corresponding to the middle target) for the palm-down posture.
All stimuli were back-projected onto a screen using an LCD projector (1024 × 768 resolution, CPX505; Hitachi) housed in a Faraday cage behind the magnet bore. Participants viewed the stimuli via an angled mirror on top of the head coil. The experiment was conducted in darkness so that participants received visual input only from the stimuli on the screen. A sheet was placed over the front of the magnet bore during the experimental sessions to ensure that the hand and the LEDs on the hand were not visible and to prevent the motion-tracking cameras from picking up extraneous light sources. In addition, we recorded the monocular eye position of eight of the participants using an Eyelink 1000 eye tracker (SR Research) at a 1000 Hz sampling rate during the fMRI sessions.
Stimuli.
Visual stimuli were projected on the rear screen using the Vizard Virtual Reality Software Toolkit (version 4.0; WorldViz). A target consisted of a large, white disc 200 pixels in diameter containing a small, concentric blue circle that was 100 pixels in diameter. The target was displayed in one of five positions—middle, top, bottom, left, and right—relative to the center of the display. A schematic of these target positions and their correspondence to the participant's hand position is shown in Figure 2A. The participant's hand position was marked on the screen by a cursor (yellow circle 50 pixels in diameter) that had its movements yoked to the instantaneous values of the pitch and yaw of the wrist obtained from the tracking device, as shown schematically in Figure 2B. A 7.8° deviation of the wrist moved the cursor 300 pixels from the center of the middle target to the center of the top, bottom, left, or right targets. When the participant made an error (see Procedure), the white part of the target turned gray for the remainder of the trial. Throughout the task, a red fixation cross (40 × 40 pixels) was displayed at the center of the screen. In addition, to make sure that the participants maintained the desired posture, the background on the screen turned red any time the roll position of the wrist was >10° away from the initially calibrated posture.
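To make the geometry concrete, the wrist-to-cursor mapping described above reduces to a fixed gain of 300/7.8 ≈ 38.5 pixels per degree. The following minimal Python sketch illustrates this mapping and the posture warning; the function and variable names are ours, not those of the Vizard task code.

```python
# Minimal sketch of the wrist-to-cursor mapping described above.
# Names are illustrative, not from the original task software.
PIXELS_PER_DEGREE = 300 / 7.8  # a 7.8 deg wrist deviation spans 300 pixels

def wrist_to_cursor(pitch_deg: float, yaw_deg: float) -> tuple:
    """Map instantaneous wrist pitch/yaw (degrees) to cursor x/y
    (pixels, relative to the center of the display)."""
    x = yaw_deg * PIXELS_PER_DEGREE    # horizontal cursor position
    y = pitch_deg * PIXELS_PER_DEGREE  # vertical cursor position
    return x, y

def posture_warning(roll_deg: float, calibrated_roll_deg: float) -> bool:
    """True when wrist roll deviates >10 deg from the calibrated posture,
    which turned the screen background red in the task."""
    return abs(roll_deg - calibrated_roll_deg) > 10.0
```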
Figure 2. Task design. A, All wrist movements were completed across four different run types. Each run was performed with either a palm-down or palm-mid posture and to vertical or horizontal targets. The targets are depicted next to the wrist position required to reach the target for the given posture (note that the wrist positions shown exaggerate the actual deviation necessary to reach the target). Each movement within a run was either of small (center-out and to-the-center movements) or large amplitude. B, Task progression. CT, Completion time. Participants were instructed to move their wrist to guide a yellow cursor to the inner blue target after target onset. The cursor and blue target disappeared after reaching the target and participants were required to hold their position until the next target appeared. The boxes depict an example of the visual stimuli seen by the participant when performing a large-amplitude movement to the top target. Below each box are the wrist positions in the palm-down posture associated with each cursor position (the actual wrist deviation required for a large-amplitude movement was 15.6°). The length of the hold depended on the CT such that the next target onset or hold trial followed 4 s after the previous target onset. BOLD responses were estimated from both the target onset of the movement and the onset of the 2 s extra hold trials.
Trial design.
We used a rapid event-related design. Eight functional runs were completed during 2 separate sessions over 2 d. The first session consisted of four functional runs that varied in the posture used and the targets presented. The same runs were completed in a different order (counterbalanced for each participant) in the second session. Repeating the runs each day allowed a sufficient number of trials to be collected for the classification analyses. The run order was also counterbalanced across participants. All moves in each run were performed in the same posture: either palm-down (thumb pointing horizontally) or palm-mid (thumb pointing vertically). In addition, all moves in a run were to one of three targets: either vertical targets (top/middle/bottom) or horizontal targets (left/middle/right). This resulted in a two-posture (palm-down, palm-mid) × two-target (vertical, horizontal) factorial design for run type. Blocking the runs in this manner (as shown in Fig. 2A) was intended to simplify the overall task demands.
There were six different types of movements per functional run (24 total movement types across the four runs). The movement types varied in start position, end position, movement direction, and movement amplitude. Movements were in one of two directions (up or down in the vertical target run and left or right in the horizontal target run) and of either small (e.g., top to middle target, 7.8° wrist deviation) or large (e.g., top to bottom target, 15.6° wrist deviation) amplitude. The targets were presented in a pseudorandom order according to a customized maximum length sequence (or m-sequence) to ensure that the six different movements in each run were counterbalanced across trials and to improve statistical efficiency in the BOLD signal estimation (Buracas and Boynton, 2002; Liu and Frank, 2004; Liu, 2004).
Each run contained 186 (±1) trials. Of these trials, 128 were movement trials and 58 were “hold” trials that simply required the participant to maintain their current position for an additional 2 s. We included additional hold trials as null events to vary (i.e., “jitter”) the intervals randomly between the movement trials and to obtain estimates of the BOLD responses evoked by holding a specific posture without any movement. Of the movement trials, 86 were small amplitude and 42 were large amplitude. Participants were provided breaks every 48 trials (three breaks per run). After each break, the movement started at the middle target again, leading to slightly more moves (approximately three) from the middle target than from the peripheral targets in each run. Otherwise, trials were equally divided in terms of start position, end position, and direction of movement.
Procedure.
On the first day, participants completed a 10 min training session inside the scanner for both postures and all target types. Before each functional run, the participant's wrist was aligned to the correct posture and position such that the participant's arm and hand formed a straight line, as shown in Figure 1. We calibrated this position to correspond to the center of the middle target. At the beginning of each run, participants were required to move their wrist to this neutral position and hold it there for 14 s. As shown in Figure 2B, once a new target appeared, participants were instructed to move their wrist to guide the cursor to the inner blue area of the target as smoothly and as quickly as possible without sacrificing accuracy. When they reached the blue area of the target successfully, the entire target turned white and the cursor disappeared. At this point, participants were required to hold their current position until the next target appeared. The hold period was composed of the time remaining in a 4 s window following the end of a movement (4 s minus completion time, CT), plus 0, 1, or 2 additional 2 s hold trials (0, 2, or 4 s of holding) before the next movement trial began. The number of additional hold trials was determined according to the m-sequence distribution. Therefore, between each movement, participants held their current position for a minimum of 2 s up to a maximum of 8 s minus CT. Participants were instructed to fixate on the red cross located in the center of the display at all times during the experiment.
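The inter-movement hold timing can be made explicit with a short worked sketch (hypothetical function name; the task itself was implemented in Vizard):

```python
# Worked sketch of the hold timing between movements (assumed names).
def hold_duration(completion_time_s: float, n_extra_hold_trials: int) -> float:
    """Holding time between movements: the remainder of the 4 s window after
    the movement (4 - CT), plus 0, 1, or 2 additional 2 s hold trials drawn
    from the m-sequence distribution."""
    assert 0 <= n_extra_hold_trials <= 2
    return (4.0 - completion_time_s) + 2.0 * n_extra_hold_trials

# Example: CT = 1.2 s with one extra hold trial gives 2.8 + 2.0 = 4.8 s;
# the possible range is 4 - CT (at least 2 s) up to 8 - CT.
print(hold_duration(1.2, 1))  # 4.8
```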
In addition to online feedback from the cursor, participants received dynamic error feedback during the task and an accuracy score to help them make the desired smooth movements. The target turned gray if the participant missed or overshot the target, moved too slowly, or drifted away from the target during the hold period. If the cursor drifted outside the target during holds, it reappeared until it was guided back to the center. Throughout the run, if the participant's posture differed by >10° from the initially calibrated posture, the background on the screen turned red until the posture was realigned correctly. An accuracy score for each trial was computed based on whether the participant moved to the target without error and on how well the end position was held. A moving average of the accuracy scores from the previous 20 trials was displayed every seven trials. A final average accuracy was displayed at the end of each run.
MRI scanning and preprocessing.
fMRI data were collected using a Siemens 3T Magnetom TIM Trio system (12 channel phased-array head coil). BOLD contrast was measured with a T2*-weighted gradient-echo echoplanar imaging sequence (TR = 2000 ms; TE = 30 ms; FA = 90°; FOV = 192 mm). Each volume consisted of 37 slices acquired parallel to the AC-PC plane (interleaved acquisition; 3 mm slice thickness with 0.5 mm gap; 3 × 3 mm in-plane resolution). Target onset always coincided with the beginning of a new functional image. Before the functional runs, a high-resolution T1-weighted sagittal sequence image of the whole brain was acquired (TR = 15 ms; TE = 4.2 ms; FA = 9°; FOV = 256 mm).
Preprocessing of the MRI data was performed with the AFNI software package (Cox, 1996). Volumes were motion corrected and coregistered to an EPI template in one interpolation step using a weighted sinc function to minimize the smoothing introduced by lower-order interpolation methods. To identify artifacts, time series diagnostics were used to assess image quality (http://imaging.mrc-cbu.cam.ac.uk/imaging/DataDiagnostics) and detect excessive head motion. Images with especially high movement variability or acquisition problems (such as nonuniform signal distribution or aberrant slice intensity) were individually inspected and, if defective, replaced with the average of the preceding and following images. If any images were replaced for a participant, the volumes underwent motion correction and coregistration again. Overall, 0.15% of the images acquired per subject during trial performance were replaced (∼4.7 images per subject). A brain mask for each participant was created by multiplying binary masks from each run based on signal-to-noise ratios in each voxel. The intersection of the binary masks retained only those voxels likely to contain brain tissue in all of the runs.
Extraction of regions of interest.
To relate our work to previous neurophysiological and neuroimaging data demonstrating sensory or movement-related representations (Kalaska et al., 1997; Kakei et al., 2001; Fabbri et al., 2010), we restricted our analysis to regions that are broadly accepted to be involved in the planning and control of arm movements (Picard and Strick, 2001; Filimon, 2010). This conservative, hypothesis-driven approach (rather than a data-driven, exploratory approach) was adopted to establish whether our novel analysis strategy could indeed identify sensorimotor representations and transformations and to avoid potential biases and false positives associated with the large number of comparisons in an exploratory approach. Regions of interest (ROIs) were defined a priori from anatomy rather than from the functional data obtained from the experiment because the latter would have restricted the analysis to a network showing movement-related differences in the BOLD signal.
We analyzed six left-hemisphere (LH) ROIs and one right-hemisphere (RH) ROI, individually defined on each participant's cortical surface. The LH ROIs were the anterior portion of motor cortex (M1a), the posterior portion of motor cortex (M1p), PMd, PMv, SPL, and the calcarine sulcus (CA; Fig. 3). Motor, premotor, and parietal areas were chosen based on predefined anatomical parcellations (Geyer et al., 1996; Picard and Strick, 2001; Tomassini et al., 2007; Destrieux et al., 2010). The primary motor cortex ROI was split into anterior and posterior portions because these subregions have known anatomical differences and differ electrophysiologically during sensorimotor tasks (Scott and Kalaska, 1997; Rathelot and Strick, 2009). The LH CA was chosen as a positive control region because it responds selectively to lateralized visual input (Tootell et al., 1998). The RH CA was used as an additional control for one analysis to test the validity of applying MVPA to the dataset. RH sensorimotor ROIs were not included in the current set of analyses because we focused on regions that directly control movements of the right hand.
Figure 3. Anatomical ROIs for a typical participant. The RH CA (data not shown) was used for one test as a control region. Otherwise, all ROIs were in the LH.
SPL and CA were identified from individual T1-weighted scans using an automated parcellation scheme in FreeSurfer (Destrieux et al., 2010). All participants were mapped to a standard sphere using cortex-based alignment. Regions were then drawn on the standard sphere and mapped back to voxels in each participant's native space. M1a, M1p, PMd, and PMv were free-drawn on a standardized surface mesh in SUMA (Saad et al., 2004) according to gross anatomical landmarks (Geyer et al., 1996; Picard and Strick, 2001; Tomassini et al., 2007), projected to the standard sphere in FreeSurfer, and finally mapped back to native space using each participant's T1-weighted image.
On average, across all participants, M1a contained 156 voxels (SE = 6, range 128–218), M1p contained 130 voxels (SE = 4, range 109–158), PMd contained 351 voxels (SE = 10, range 279–418), PMv contained 392 voxels (SE = 10, range 297–437), SPL contained 381 voxels (SE = 17, range 279–473), LH CA contained 210 voxels (SE = 11, range 130–283), and RH CA contained 196 voxels (SE = 10, range 121–263).
MVPA.
We used a standard multivariate binary classification protocol (Pereira et al., 2009) to assess how patterns of neural activity in the ROIs differed depending on the type of movement performed. All classification analyses were performed using the single-trial β estimates of the BOLD response (Rissman et al., 2004). Single-trial β-values for all move and hold trial types were estimated independently using the least-squares-all method (Mourão-Miranda et al., 2006), with seventh-order Legendre polynomial regressors in the baseline to reduce scanner drift artifacts and physiological noise. The onsets of the trials were modeled as delta functions and convolved with the canonical SPM double-gamma hemodynamic response function. Single-trial estimates were based on unsmoothed data because smoothing could blur potentially relevant information (Mur et al., 2009). In addition, serial autocorrelation was corrected for using an ARMA(1,1) model. To normalize the estimates, the β-values across all conditions were z-scored voxelwise. As described in the Trial design section, an m-sequence was used to order the trials, ensuring first-order counterbalancing and limiting any systematic spillover effects between movement types.
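As an illustration of the structure of this estimation step, the sketch below builds a least-squares-all design matrix (one regressor per trial plus Legendre drift terms) and fits it by ordinary least squares. It is a simplified stand-in for the AFNI pipeline actually used: the ARMA(1,1) noise model is omitted and the run length is an assumed placeholder.

```python
# Simplified sketch of least-squares-all single-trial beta estimation.
import numpy as np
from scipy.stats import gamma
from numpy.polynomial import legendre

TR, N_VOLS = 2.0, 330  # TR from the paper; N_VOLS is an assumed run length

def spm_hrf(tr, duration=32.0):
    """Canonical SPM double-gamma HRF sampled at the TR."""
    t = np.arange(0.0, duration, tr)
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

def design_matrix(onsets_s, tr=TR, n_vols=N_VOLS, poly_order=7):
    """One regressor per trial (delta at onset convolved with the HRF) plus
    Legendre polynomial drift regressors up to the given order."""
    hrf = spm_hrf(tr)
    X = np.zeros((n_vols, len(onsets_s)))
    for j, onset in enumerate(onsets_s):
        stick = np.zeros(n_vols)
        stick[int(round(onset / tr))] = 1.0
        X[:, j] = np.convolve(stick, hrf)[:n_vols]
    grid = np.linspace(-1.0, 1.0, n_vols)
    drift = np.column_stack([legendre.Legendre.basis(d)(grid)
                             for d in range(poly_order + 1)])
    return np.hstack([X, drift])

# betas, *_ = np.linalg.lstsq(design_matrix(onsets), voxel_ts, rcond=None)
# The per-trial betas are then z-scored voxelwise before classification.
```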
Classification analyses were performed using a soft-margin linear support vector machine implemented in LIBSVM (Chang and Lin, 2011) in MATLAB. The regularization parameter, C, had a fixed value equal to 1 in all our analyses. For each movement type, trials from both sessions were pooled to increase the number of samples available for training, which reduces overfitting (Hughes, 1968; Trunk, 1979; Raudys and Jain, 1991).
The classification analyses were based on a two-step logic. The first step is to determine whether a classifier can successfully distinguish between two classes of movements, A and B. For example, class A might include movements in the leftward direction and to the left target, whereas class B might include movements in the rightward direction and to the right target. We refer to this first step as "base classification." If such a base classifier exists, the second step is to determine whether that same classifier can generalize to distinguish between two different movement classes, A′ and B′, which have a hypothesized similarity to the movements in classes A and B, respectively. In the example, class A′ might include leftward movements to the center target and class B′ might include rightward movements to the center target. Here, movement direction is a shared feature because classes A and A′ are both leftward movements and classes B and B′ are both rightward movements. Base classifiers for regions that represent movement direction would be hypothesized to generalize to distinguish between classes A′ and B′. We refer to this second step as "generalization."
Base classification was evaluated using a “stratified k-fold cross-validation” procedure. The number of folds, k, was dictated by the internal structure of each class, which could consist of several different kinds of movements from runs during different sessions. Stratified cross-validation was used in favor of leave-one-out cross-validation to ensure that trials of each movement type and from each of the two sessions were always equally represented in the training data (Kohavi, 1995). This constraint was achieved as follows: Across the different analyses, each class always consisted of N = n × k × 2 trials, where n is the number of movement types (of the 24 possible movement types) in each class, k is the number of trials of each movement type available for analysis from each run, and 2 is the number of sessions containing trials of each movement type. To ensure that this balanced structure was always maintained, the number of folds was equal to k, with one trial from each of the n movement types and each of the 2 sessions being left out on each fold. The cross-validation accuracy for one participant was the mean of the classification accuracy of the left-out trials across the k folds. Two classes were deemed to be significantly discriminable if the mean group-level cross-validation accuracy was statistically greater than random chance (here, equal to 50% for binary classification) as assessed by a one-tailed t test (Bode and Haynes, 2009; Oosterhof et al., 2012; Gallivan et al., 2013). Because the same classification protocol for base classification was applied to multiple ROIs, the statistical evaluations of mean base classification performance were corrected for multiple comparisons using the false discovery rate (FDR) correction (Benjamini and Yekutieli, 2001).
The generalization test was nested within the cross-validation procedure. The classifier obtained on each fold was subsequently used to classify the trials from two classes having a different composition of movement types from the classes in the base classification step. The generalization accuracy was the mean of the classification accuracy of each of the k classifiers on these new, previously unseen classes. A classifier was deemed to generalize successfully if the mean group-level generalization accuracy was greater than random chance (equal to 50%) as assessed by a one-tailed t test. Importantly, successful classification of the base classes (i.e., above-chance cross-validation accuracy across subjects) is a necessary precondition for evaluating generalization. If the base classifier cannot successfully discriminate two movement classes, A and B, then it would be meaningless to interpret the generalization performance of that classifier on some other classes, A′ and B′.
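Schematically, the base-classification and generalization steps can be sketched as follows (illustrative Python/scikit-learn code; the reported analyses used LIBSVM in MATLAB). The sketch assumes a `strata` vector labeling each base trial by its movement type × session combination, with exactly k trials per stratum.

```python
# Schematic sketch of stratified k-fold base classification with a nested
# generalization test (structure only; not the original MATLAB pipeline).
import numpy as np
from sklearn.svm import SVC

def base_and_generalize(X_base, y_base, strata, X_gen, y_gen, k):
    """Stratified k-fold cross-validation on base classes A vs B; each fold's
    classifier is also tested on the generalization classes A' vs B'. Each
    stratum (movement type x session) contributes exactly one held-out trial
    per fold, keeping the training data balanced."""
    order = {s: np.flatnonzero(strata == s) for s in np.unique(strata)}
    base_acc, gen_acc = [], []
    for i in range(k):
        test_idx = np.array([idx[i] for idx in order.values()])
        train_idx = np.setdiff1d(np.arange(len(y_base)), test_idx)
        clf = SVC(kernel='linear', C=1.0)   # soft-margin linear SVM, fixed C
        clf.fit(X_base[train_idx], y_base[train_idx])
        base_acc.append(clf.score(X_base[test_idx], y_base[test_idx]))
        gen_acc.append(clf.score(X_gen, y_gen))  # previously unseen classes
    return float(np.mean(base_acc)), float(np.mean(gen_acc))
```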
As with base classification, the statistical evaluations of mean generalization performance were also corrected for multiple comparisons (using an FDR correction) based on the number of ROIs that were evaluated for each generalization. Strictly speaking, this criterion (i.e., correcting by the number of ROIs tested) is more stringent than required, because the appropriate correction for generalization is based on the number of generalization tests evaluated with a particular base classifier.
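A minimal sketch of this group-level inference, assuming per-participant accuracies have already been computed for each ROI:

```python
# Group-level test sketch: one-tailed t test of participant accuracies
# against chance (50%), FDR corrected (Benjamini-Yekutieli) across ROIs.
from scipy.stats import ttest_1samp
from statsmodels.stats.multitest import multipletests

def group_level_test(acc_by_roi, chance=0.5, alpha=0.05):
    """acc_by_roi: dict mapping ROI name -> array of participant accuracies.
    Returns, per ROI, whether it survives FDR and its adjusted p-value."""
    rois = list(acc_by_roi)
    pvals = [ttest_1samp(acc_by_roi[r], chance, alternative='greater').pvalue
             for r in rois]
    reject, p_adj, _, _ = multipletests(pvals, alpha=alpha, method='fdr_by')
    return {r: (rej, p) for r, rej, p in zip(rois, reject, p_adj)}
```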
Movement kinematics.
A key demand of the MVPA analyses is that individual movements that are defined to be of the same type are in fact similar to each other. Therefore, we sought to eliminate outlier movements from classification analyses based on kinematics. To identify potential outliers, we first applied a fourth-order two-way Butterworth low-pass filter at 5 Hz to the kinematic data. The start and end of the movement were defined as the times when the velocity first exceeded and subsequently dropped below 10% of the peak velocity, respectively. For each trial, we calculated reaction time, movement time, mean tangential velocity, movement length, movement error at maximum tangential velocity, and the polar angle between the start and movement offset positions.
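A sketch of this filtering and onset/offset detection, assuming a samples × 2 array of wrist pitch/yaw angles sampled at the 150 Hz tracker rate:

```python
# Sketch of kinematic preprocessing and movement-bound detection
# (assumed array shapes; not the original analysis code).
import numpy as np
from scipy.signal import butter, filtfilt

FS = 150.0  # tracker sampling rate, Hz

def movement_bounds(position_deg):
    """Low-pass filter (4th-order, zero-phase Butterworth, 5 Hz cutoff) a
    (samples x 2) pitch/yaw trace, then return the first and last samples
    at which tangential velocity exceeds 10% of its peak."""
    b, a = butter(4, 5.0 / (FS / 2.0))          # normalized 5 Hz cutoff
    pos = filtfilt(b, a, position_deg, axis=0)  # "two-way" = zero-phase
    vel = np.linalg.norm(np.gradient(pos, 1.0 / FS, axis=0), axis=1)
    above = np.flatnonzero(vel > 0.1 * vel.max())
    return above[0], above[-1]                  # onset, offset (sample indices)
```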
Movement trials were discarded from further analyses for any one of the following six reasons: (1) reaction time exceeded 750 ms; (2) movement length was >50% greater or less than the ideal path length for that trial (7.8° for small amplitude, 15.6° for large amplitude); (3) the start position occurred outside of the outer white circle of the start target (i.e., the wrist angle was >2.6° from the start); (4) the position at movement offset occurred outside of the outer white circle of the goal target (i.e., the wrist angle was >2.6° from the goal); (5) the polar angle between the start and offset positions was >10° off from the ideal straight-path angle of 0°; or (6) posture (forearm rotation, as measured by the roll angle) drifted >10° for longer than 15 ms. Hold trials were discarded for any one of three reasons: (1) the length of the hold trial was <1900 ms or >2100 ms; (2) movement length was >3 SDs above or below the median movement length for holds; or (3) posture (forearm rotation, as measured by the roll angle) drifted >10° for longer than 15 ms. In addition, when multiple hold trials followed one movement trial, only the first hold trial was included in the analysis. The inclusion criteria for movements and holds were strict to minimize differences among movements of the same type, because such differences could influence classification results.
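Expressed as a single predicate, the six movement-trial criteria take roughly the following form (field names are illustrative, not from the original analysis code; the posture-drift flag is assumed to be precomputed):

```python
# Hedged sketch of the movement-trial inclusion rules (illustrative names).
def keep_movement_trial(t, ideal_len_deg):
    """Return True if a trial passes all six inclusion criteria.
    `t` is a dict of per-trial kinematic measures; `ideal_len_deg` is 7.8
    for small-amplitude and 15.6 for large-amplitude movements."""
    return (t['reaction_time_ms'] <= 750
            and abs(t['path_length_deg'] - ideal_len_deg) <= 0.5 * ideal_len_deg
            and t['start_error_deg'] <= 2.6        # start inside start target
            and t['end_error_deg'] <= 2.6          # offset inside goal target
            and abs(t['polar_angle_deg']) <= 10.0  # near the straight path
            and not t['posture_drift'])            # roll stayed within 10 deg
```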
Eye tracking.
We obtained usable eye-tracking data for each of the 24 different movement types for six of the 15 participants. The primary motivation for eye tracking was qualitative—we sought to assess participants' ability to comply with the instructions to fixate under the demanding conditions of the current experiment and to assess whether certain movement types were associated with large, systematic differences in eye movements that could influence the interpretation of the classification results. Because eye tracking was only used for a subset of participants, we did not individually remove any trials from the classification analyses based on eye movements.
During each trial, a saccade was recorded if the eye movement velocity exceeded 30°/s and the eye movement acceleration exceeded 8000°/s². We included the entire movement period—from target onset to the beginning of the first hold trial or the next movement trial—in the analysis. For each participant with usable eye-tracking data, we analyzed the number of saccades per trial, saccade amplitude (in degrees of visual angle), and the number of saccades in each direction (rightward, leftward, upward, or downward).
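The saccade criterion is a simple conjunction of the two thresholds, sketched below assuming precomputed gaze velocity and acceleration traces:

```python
# Sketch of the saccade criterion (velocity AND acceleration thresholds).
import numpy as np

def saccade_samples(gaze_vel_dps, gaze_acc_dps2):
    """Boolean mask: a sample belongs to a saccade when eye velocity exceeds
    30 deg/s and eye acceleration exceeds 8000 deg/s^2."""
    return ((np.asarray(gaze_vel_dps) > 30.0) &
            (np.asarray(gaze_acc_dps2) > 8000.0))
```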
Results
Kinematic performance
A variety of kinematic variables (see Movement kinematics) were used to exclude individual trials to maintain homogeneity between trials of the same movement type. Based on these exclusion criteria, on average, 8.1% (SE = 0.96%) of movement trials and 0.3% (SE = 0.11%) of hold trials were discarded per participant, leaving an average of 39.1 (SE = 0.32) trials for each of the 24 movement types across all participants. The three panels of Figure 4A show the complete set of hand paths that met the inclusion criteria for one participant. As can be seen, hand paths of the same category (center-out, to-the-center, and large-amplitude) are similar in length and consistent across the two postures.
Figure 4. Kinematic performance. A, Example movement paths from one participant for each of the 24 movement types, organized into subplots according to movement category (center-out, to-the-center, or large-amplitude). Each path is depicted as a line connecting a green dot to a red dot, where the green dot indicates the hand position at target onset and the red dot indicates the hand position at movement offset. The gray lines indicate movements made with a palm-mid posture and the black lines indicate movements made with a palm-down posture. The paths are overlaid on the five different target positions to indicate how the wrist positions map onto the cursor positions seen by the participant. B, Average positions of the wrist relative to the center target as a function of time from movement onset, plotted separately for horizontal trajectories (rightward and leftward movements) and vertical trajectories (upward and downward movements). Positions for leftward and downward movements were mirror reflected about 0° before trial averaging so that all movements in each movement category could be plotted with the same start point. Each trajectory is plotted up to the mean movement time for the given movement category. The shaded areas indicate the SE of the trajectories across participants at each time point. C, Average tangential velocity (±SE) across all participants for each of the 24 movement types.
The qualitative kinematic similarity across the movement types was present at the group level as well. Figure 4B plots the average (±SE) hand position across participants as a function of time for both horizontal (Fig. 4B, top) and vertical (Fig. 4B, bottom) movement trajectories. Each of the solid lines indicates the position for a different direction and posture combination. The trajectories start at movement onset and end at the mean movement time for the given category. For interpretative convenience, we mirror reflected the leftward and downward trajectories about the 0° position before averaging so that all movements of the same category could each be plotted relative to the same start point. As can be seen in each of the plots, the instantaneous positions for movements in the same category are qualitatively similar across posture and direction of movement.
However, given that the task conditions imposed different biomechanical constraints on the wrist movements, we also expected kinematic deviations between movement types. We probed the extent of these deviations using average tangential velocity as a representative kinematic parameter. The mean tangential velocities for the different movement categories are shown in Figure 4C. As is readily evident from the figure, the mean velocity for large-amplitude movements is noticeably higher than that for small-amplitude movements (i.e., center-out and to-the-center). Furthermore, the velocity for large-amplitude movements is in turn modulated by posture (i.e., palm-down or palm-mid) and movement direction (i.e., right, left, up, or down).
These qualitative observations were confirmed by a repeated-measures ANOVA with three factors: movement category (center-out, to-the-center, and large-amplitude) × posture (palm-down, palm-mid) × movement direction (right, left, up, and down). There was a main effect of movement category (F(2,26) = 164.7, p < 0.001). Consistent with this main effect, large-amplitude velocities were greater than center-out and to-the-center velocities (t(13) = 13.94, p < 0.001; t(13) = 12.23, p < 0.001, FDR corrected) and to-the-center velocities were greater than center-out velocities (t(13) = 2.32, p = 0.037, FDR corrected). Furthermore, there was a statistically significant two-way interaction between posture and direction (F(3,29) = 10.69, p < 0.001). Velocity was significantly higher in the palm-down posture than the palm-mid posture for both upward (t(13) = 4.28, p = 0.004) and downward movements (t(13) = 2.76, p = 0.033), but not for leftward (t(13) = 1.35, p = 0.265) or rightward (t(13) = 1.14, p = 0.277) movements (all t test p-values FDR corrected). Finally, there was a statistically significant three-way interaction of posture, direction, and movement category (F(3,29) = 8.36, p < 0.001). This interaction is consistent with the prominent role of posture and direction in the velocities of large-amplitude movements, as shown in Figure 4C. No other main effects or interactions were significant.
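For reference, this analysis corresponds to a three-way repeated-measures ANOVA of roughly the following form (a sketch with assumed column names; per-cell velocities are averaged via `aggregate_func`):

```python
# Sketch of the three-way repeated-measures ANOVA on tangential velocity.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# `df` is a long-format table with columns (names assumed): participant,
# category (center-out / to-the-center / large-amplitude),
# posture (palm-down / palm-mid), direction (right / left / up / down),
# and velocity (mean tangential velocity for that trial).
def velocity_rm_anova(df: pd.DataFrame):
    """Fit category x posture x direction within-subject ANOVA."""
    return AnovaRM(df, depvar='velocity', subject='participant',
                   within=['category', 'posture', 'direction'],
                   aggregate_func='mean').fit()

# print(velocity_rm_anova(df))  # ANOVA table with F and p for each effect
```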
Although these results indicate that there are clear kinematic differences among the movement types, we sought to determine whether any of these differences would affect the interpretation of our classification analyses. A potential concern is that successful generalization performance for a feature of interest (e.g., movement direction or posture) may be due to neural signals relating to kinematic parameters that covary with that feature. To account for this possibility, we conducted paired t tests to determine whether the base classification categories (tested in the upcoming sections) differed in any of three kinematic parameters: CT (reaction time plus movement time), tangential velocity, and movement error at peak velocity. If there is a significant kinematic difference between the two base classes, A and B, and that same pattern exists in the generalization classes, A′ and B′, then we cannot rule out that kinematic parameter as a confounding factor for generalization.
For the three parameters tested, we only found significant kinematic differences among the base classes relevant to the decoding of posture. These differences, and their possible implications for generalization, are further discussed in the corresponding decoding section (see Decoding Posture across Movement Direction). All other differences were nonsignificant (all t < 2.35, all p > 0.05).
Eye movements
To assess the possibility that the classifier may have been influenced by differences in eye movements across conditions, we analyzed the eye movements from six of the participants. Overall, participants made an average of 2.90 (SE = 0.52) saccades per trial, with an average saccade amplitude of 0.83° (SE = 0.17) of visual angle. Of the saccades, an average of 29.9% (SE = 6.3%) were in the rightward direction, 31.9% (SE = 7.3%) were in the leftward direction, 22.8% (SE = 5.8%) were in the upward direction, and 15.4% (SE = 4.9%) were in the downward direction; these proportions did not differ significantly (F(5,3) = 1.21, p = 0.341).
Because the control of eye and hand movements involves premotor and posterior parietal regions (Snyder, 2000), we were concerned that eye movements might systematically covary with the sensorimotor features of interest, which would then influence classification results. Even though participants were instructed to fixate on the center of the screen, it is possible that participants made saccades to the target. Therefore, we conducted paired t tests to determine whether the number of saccades, saccade amplitude, or saccade direction differed among the base classification categories. For example, if more leftward eye movements were made to targets on the left than on the right for both center-out movements (base classes) and large-amplitude movements (generalization classes), then any successful classification of spatial target location or movement direction might be due solely to eye movements.
Across all eye-movement measures tested, we found only one significant difference in saccade amplitude among the base classes (see Decoding Direction and Joint across Posture). No other significant differences were found for saccade amplitude and there were no significant differences in number of saccades or saccade direction for any of the base classification pairs (all t < 2.86, all p > 0.05 FDR corrected for number of base classification pairs in a decoding section). This suggests that successful classification performance cannot be entirely explained by systematic differences in eye movements.
Decoding spatial target location
In the first analysis, we evaluated the ability of the CA to decode spatial target information. It is well established that CA responds to lateralized stimuli—that is, the LH CA responds to targets on the right side of space and the RH CA responds to targets on the left side of space (Tootell et al., 1998). We used this fact to determine whether we could successfully apply the generalization approach to measure specificity in the BOLD response at the scale of single trials and to evaluate the validity of our data. All reported p-values for base classification and generalization are FDR corrected for two ROIs.
The base classification involved large-amplitude movements, as depicted in Figure 5A, top. The movements differed in target location: one movement was to the left target and the other was to the right target. Across participants, each of the two classes contained an average of 64 trials (SE = 2.67), drawn from movements in both the palm-down and palm-mid postures. The mean accuracy of base classification was well above chance in CA in both hemispheres (LH CA: 79.0%, t(13) = 13.43, p < 0.001; RH CA: 78.1%, t(13) = 11.84, p < 0.001). These high accuracies provided preliminary evidence that the lateralized location of the spatial target could be decoded from CA. However, factors other than target location could be driving the difference between these two movements. To evaluate the specificity of the classification to target location, we tested the extent to which the classifier could accurately decode target location even when the movements were different.
Figure 5. Decoding spatial target location. A, Visualization of base classification and generalization tests. Targets (large white circle and blue inner circle) show the goal position for the given movement (red fixation cross is in the center of the screen). The white arrows depict the directionality and length of the movement. For both base classification and generalization, movements in both palm-down and palm-mid postures were included. Base classification accuracy was computed using a stratified k-fold cross-validation procedure. Generalization accuracy was based on how well the base classifier decoded the two movement types in the generalization test. For example, in Generalization 1, a small-amplitude rightward movement to the right target (bottom movement in the center panel) was counted as correct if the base classifier decoded it as a large-amplitude rightward movement to the right target (bottom movement in the top panel). B, Classification accuracies for the two generalization tests for LH and RH CA. Both regions had significant above-chance accuracies for base classification (see Results). Dashed line represents at-chance performance (50%). Error bars represent within-subject SEM (Morey, 2008). Black asterisks indicate significantly above-chance classification after FDR correction (Benjamini and Yekutieli, 2001).
The trained classifier was first tested on how well it could distinguish small-amplitude leftward movements to the center target from small-amplitude rightward movements to the right target (average of 68 trials, SE = 2.35, per class; Fig. 5A, center). Here, the movement directions are the same, but the movement amplitudes differ from those in base classification. Importantly, the target locations are matched to the locations in the base classification only on the right side of space. If CA responds only to lateralized targets, then the classifier trained on data from the LH CA, but not the RH CA, should be able to discriminate between these novel movements. This was indeed the case. The LH CA classified significantly above chance (t(13) = 9.07, p < 0.001), but the RH CA did not (t(13) = 1.54, p = 0.07). For completeness, we also evaluated the left-field version of the test—namely, we tested the classifier's ability to distinguish small-amplitude leftward movements to the left target from small-amplitude rightward movements to the center target (average of 66 trials, SE = 2.13, per class; Fig. 5A, bottom). In this case, only the RH CA should generalize above chance. Indeed, the RH CA, but not the LH CA, classified significantly above chance (RH CA: t(13) = 9.80, p < 0.001; LH CA: t(13) = 0.64, p = 0.27; Fig. 5B).
From these two generalization tests, we conclude that the base classifier's model was driven by differential responses in CA in each hemisphere to the target's location in the visual field. If this were not the case, then generalization to novel movements would not have been possible. Furthermore, the clear hemispheric differences in CA provide initial evidence supporting our analysis strategy of using a generalization procedure to detect preferential coding biases within a region. The results suggest that generalization performance reflects task-relevant physiological factors rather than movement-related or measurement-related noise. LH CA can therefore continue to serve as a positive control region for assessing the validity of the following generalization tests and for comparing the performance of CA with that of the LH sensorimotor ROIs.
Relative decoding of movement direction and target location
Central to sensorimotor transformations is the conversion of sensory inputs into a movement vector. To identify candidate sites for these transformations, we investigated whether regions differed in their relative sensitivity to the spatial location of the target (i.e., a sensory property) and to movement direction and amplitude (i.e., movement-relevant properties). Because the task involved right-hand movements, we restricted our attention to LH sensorimotor regions. All reported p-values for base classification and generalization are FDR corrected for these six ROIs.
The base classification involved the center-out movements shown in Figure 6A, top. Each class had an average of 66 trials (SE = 2.50), which included movements in both palm-down and palm-mid postures. The movements differed both in the location of the targets and in the movement directions. Applying the cross-validation procedure, the two movements were successfully classified in all ROIs (M1a: t(13) = 2.00, p = 0.033; M1p: t(13) = 2.52, p = 0.019; PMd: t(13) = 2.77, p = 0.016; PMv: t(13) = 2.20, p = 0.028; SPL: t(13) = 7.34, p < 0.001; and CA: t(13) = 10.31, p < 0.001; Fig. 6B).
Figure 6. Relative decoding of movement direction and target location. A, Visualization of base classification and generalization tests. Base classification and generalization tests are depicted in the same way as in Figure 5A. In this case, in Generalization 1, a large-amplitude leftward movement (top movement in the center panel) was counted as correct if the base classifier decoded it as a center-out leftward movement (top movement in the top panel). In Generalization 2, a to-the-center leftward movement (top movement in the bottom panel) was correct if it was decoded as a center-out leftward movement. Movements included were pooled across palm-down and palm-mid postures. B, Base classification accuracies for the LH regions. C, Classification accuracies for the two generalization tests. Note that although PMd, PMv, and SPL all show significant above-chance classification in both generalization tests, there is an interaction such that SPL has higher accuracy when movement direction and spatial target location are the same as in base classification. In contrast, PMd and PMv have higher accuracies when movement direction and movement amplitude are the same as in base classification. Dashed line represents at-chance performance (50%). Error bars represent within-subject SEM (Morey, 2008). Black asterisks indicate significantly above-chance classification after FDR correction (Benjamini and Yekutieli, 2001).
Because the movements differed in both target location and movement direction, an accurate base classifier's model could rely on differences in the BOLD response related to either feature or an interaction of the two features. To parse out the contributions of movement direction and spatial target location, we used two generalization tests. The obtained base classifiers were evaluated for their ability to generalize to new classes of movements that: (1) shared the same target locations and movement direction but differed in movement amplitude (average of 65 trials, SE = 2.78, per class; Fig. 6A, center) and (2) differed in target locations but shared the same movement direction and amplitude (average of 68 trials, SE = 1.75, per class; Fig. 6A, bottom).
The first generalization test evaluated how well the base classifier could dissociate large-amplitude movements to the left from large-amplitude movements to the right. In this test, both spatial target location and movement direction matched the movements in base classification, but movement amplitude differed. As shown in Figure 6C, generalization accuracy was above chance in PMd (t(13) = 4.69, p < 0.001), PMv (t(13) = 2.33, p = 0.028), SPL (t(13) = 6.70, p < 0.001), and CA (t(13) = 10.65, p < 0.001). Generalization performance for M1a (t(13) = 1.68, p = 0.070) and M1p (t(13) = 1.20, p = 0.125) was not statistically above chance. This implies that, for the ROIs with above-chance generalization, representations of spatial target location and/or movement direction contributed to the classifier's model in base classification. However, this first test alone cannot distinguish the contribution of target location from that of movement direction.
We used the second generalization test to determine the contribution of movement direction, independent of target location, to the base classification model. Leftward to-the-center movements were compared with rightward to-the-center movements. Here, movement direction and amplitude matched the directions and amplitudes in base classification, but the spatial target locations did not match. In this case, generalization accuracy was above chance in M1a (t(13) = 3.22, p = 0.004), M1p (t(13) = 6.05, p < 0.001), PMd (t(13) = 5.42, p < 0.001), PMv (t(13) = 7.15, p < 0.001), and SPL (t(13) = 3.93, p = 0.001), but not in CA (t(13) = 0.90, p = 0.193). Above-chance generalization implies that representations of movement direction and/or movement amplitude play a role in the base classification model for those ROIs.
Based on the combined results of both generalization tests, we can infer the likely representations contributing to evoked responses in the base classification model. First, the at-chance performance in CA when the spatial target in generalization did not match base classification again shows that, as predicted, this region represents mainly lateralized visual input (Fig. 6C). Second, M1a and M1p generalized only in the second test, in which both movement direction and movement amplitude were the same as in base classification, and not in the first, in which amplitude differed. This suggests that responses in M1 depend on movement-specific properties but are independent of target location. Finally, we found evidence that PMd, PMv, and SPL represent movement direction, because these regions showed above-chance generalization across both tests and direction was the only feature that was always matched to movements in base classification. Nonetheless, the question remains whether spatial target location contributes additional information to the base classifier's model in each of these three ROIs—a question we tested by evaluating the relative differences in generalization accuracy across the two generalization tests. Specifically, we conducted two separate two-way, repeated-measures ANOVA tests to compare generalization accuracies for movement direction and spatial target location in SPL and PMd and in SPL and PMv.
Strikingly, there was an interaction between region and generalization test such that generalization accuracy for SPL was higher when spatial target and direction matched those in base classification, whereas PMd had higher accuracy when direction and amplitude matched but spatial target did not (F(1,13) = 16.51, p = 0.002). The same interaction was present when comparing SPL with PMv: whereas SPL exhibited higher accuracy for spatial target, PMv had higher accuracy for direction independent of spatial target (F(1,13) = 12.91, p = 0.003; p-values were FDR corrected for two comparisons). These results suggest that SPL codes for both spatial target and direction, because accuracy was higher when visual signals could contribute to generalization performance. In contrast, the sensitivity to movement properties in PMd and PMv was largely invariant to differences in visual signals, suggesting that these regions contain representations of direction that are distinct from the representations of spatial target location (Fig. 6C).
We also investigated whether there was similar relative decoding of spatial target position and movement direction for vertical movements. The base classifier was trained to discriminate between upward center-out movements and downward center-out movements. An average of 62 trials (SE = 2.47) across both palm-down and palm-mid postures was included in each class. Of the sensorimotor ROIs tested, base classification was significantly above chance in M1a (55.1%, t(13) = 3.15, p = 0.010), PMv (54.4%, t(13) = 2.95, p = 0.009), and SPL (58.8%, t(13) = 4.43, p = 0.002). The classification performance in M1p (52.5%, t(13) = 1.19, p = 0.127) and PMd (53.2%, t(13) = 1.62, p = 0.064) was not statistically above chance, so we could not test generalization performance in these two ROIs.
The base classifier was first tested to determine how well it could distinguish large-amplitude upward movements from large-amplitude downward movements (average of 57 trials, SE = 4.25, per class; Fig. 6A, center). Similar to the horizontal case, both PMv (53.9%, t(13) = 2.99, p = 0.008) and SPL (56.2%, t(13) = 3.52, p = 0.006) exhibited significant above-chance generalization, whereas M1a did not reach above-chance accuracy (52.0%, t(13) = 1.67, p = 0.059). In the second generalization test, upward to-the-center movements were tested against downward to-the-center movements. Here, all three ROIs that had significant base classification also had above-chance generalization accuracies: M1a (52.4%, t(13) = 2.17, p = 0.025), PMv (53.1%, t(13) = 2.62, p = 0.016), and SPL (52.8%, t(13) = 3.01, p = 0.015). Furthermore, SPL had a higher generalization accuracy in the first generalization test, when the spatial target matched that of base classification (t(13) = 2.01, p = 0.033), suggesting that, as in the horizontal case, spatial target information enhanced decoding performance in SPL.
Decoding posture across movement direction
In the previous classification tests, movements using different postures were grouped together. However, posture could influence decoding within a region as well (Parkinson et al., 2010). To determine how representations of posture in both dynamic and static conditions might influence the BOLD response, we tested both large-amplitude movements and hold trials at peripheral targets. In the base classification for movements, we trained the classifier to dissociate all large-amplitude right and left movements in the palm-down posture from all large-amplitude right and left movements in the palm-mid posture (Fig. 7A, top). Pooling both directions gave the classifier an average of 65 trials (SE = 2.78) in each class. A separate base classification evaluated the vertical version of the test (Fig. 7A, bottom) and base classification results reflect the average of both tests. All reported p-values for base classification and generalization are FDR corrected for the six LH ROIs. Overall, as shown in Figure 7B, all LH ROIs exhibited above-chance accuracy: M1a, t(13) = 14.47, p < 0.001; M1p, t(13) = 12.00, p < 0.001; PMd, t(13) = 20.16, p < 0.001; PMv, t(13) = 21.57, p < 0.001; SPL, t(13) = 29.05, p < 0.001; and CA, t(13) = 15.75, p < 0.001.
Figure 7. Decoding posture across movement direction. A, Visualization of base classification and the generalization test. Base classification and generalization are depicted in the same way as in Figure 5A except that movements were not pooled across postures. Note that, here, postures are matched for base classification and generalization. For example, a palm-down up movement (one of the left movements in the bottom panel) is correctly classified if the base classifier decodes it as a palm-down left or right movement (top movements in the top panel). To test for static posture, we did not use the movement trials as shown, but rather used 2 s "hold trials" to peripheral targets that occurred after some movements. B, Base classification accuracies for LH regions. Note that there are two separate base classifications: one for movements and one for holds. C, Classification accuracies for the two generalization tests. Generalization accuracies reflect the average of the base classification/generalization pair shown and the opposite pair (i.e., base classification contains up/down movements and generalization contains left/right movements). Because generalization accuracies for posture during movement and posture during holds used different base classifiers, they cannot be compared directly. Dashed line represents at-chance performance (50%). Error bars represent within-subject SEM (Morey, 2008). Black asterisks indicate significantly above-chance classification after FDR correction (Benjamini and Yekutieli, 2001).
Although the main difference between the movements evaluated in base classification was posture, it is possible that the classifier's model was driven exclusively by non-movement-related factors. To rule out this possibility, we conducted a generalization test in which movements were aimed at the top and bottom targets. This test evaluated the base classifier's ability to generalize to posture (palm-down vs palm-mid) across movement direction and target location.
In the generalization test, the base classifier that was trained to distinguish between postures for large-amplitude horizontal movements was then tested on its ability to distinguish between postures for large-amplitude vertical movements (average of 57 trials per class, SE = 4.25; Fig. 7A, bottom). Overall generalization results reflect the average of this test and the case in which the base classification and generalization sets were switched. As shown in Figure 7C, M1a (t(13) = 2.30, p = 0.023), M1p (t(13) = 3.47, p = 0.006), PMd (t(13) = 3.08, p = 0.009), PMv (t(13) = 2.45, p = 0.022), and SPL (t(13) = 3.75, p = 0.001) showed significant above-chance generalization. CA generalization was not significantly above chance (t(13) = 1.72, p = 0.054).
These results suggest that the sensorimotor regions contain representations of posture and rule out the possibility that the base classification results in those regions were entirely driven by non-movement-related factors such as run effects. Note that these generalization results are consistent with a combination of posture and the gravity plane, not with the muscle groups involved in the movements or the specific movement plane within a wrist-centered frame of reference (Kakei et al., 1999).
Furthermore, the generalization to posture is in fact the opposite of what would be expected if base classification and generalization were based on kinematic properties. For pooled up and down movements, we found significant differences between palm-down and palm-mid movements, such that palm-down movements had higher movement velocity (t(13) = 4.81, p < 0.001), shorter CT (t(13) = 3.95, p = 0.003), and smaller movement error (t(13) = 6.18, p < 0.001) than palm-mid movements (all p-values are FDR corrected for the two base classification pairs tested). However, pooled right and left movements showed the opposite pattern: palm-down movements exhibited slightly lower movement velocity, slightly longer CT, and significantly larger movement error (t(13) = 2.37, p = 0.034) than palm-mid movements. These kinematic results possibly reflect the greater difficulty of performing radial and ulnar deviations (i.e., palm-mid up and down, palm-down right and left) relative to extensions and flexions (i.e., palm-down up and down, palm-mid right and left). Therefore, if base classification and generalization had been based on these kinematic properties, generalization accuracies for posture would have been in the opposite direction from what we observed.
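A minimal sketch of this kinematic comparison follows: paired t-tests between postures on per-subject means, FDR corrected across the tests performed. The simulated values and the specific set of corrected tests are illustrative assumptions.

```python
# Paired posture comparisons of kinematic measures with FDR correction.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(3)
vel_palm_down = rng.normal(30.0, 3.0, 14)   # per-subject mean velocities
vel_palm_mid = rng.normal(27.0, 3.0, 14)
ct_palm_down = rng.normal(0.9, 0.1, 14)     # per-subject completion times (s)
ct_palm_mid = rng.normal(1.0, 0.1, 14)

pvals = [stats.ttest_rel(vel_palm_down, vel_palm_mid).pvalue,
         stats.ttest_rel(ct_palm_down, ct_palm_mid).pvalue]
reject, p_fdr, _, _ = multipletests(pvals, alpha=0.05, method="fdr_by")
print(p_fdr, reject)
```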
To test for potential decoding of static posture, we modeled the prespecified 2 s hold trials in which the participant held their position at one of the four peripheral targets. The base classification problem was to distinguish all holds at the right and left targets in the palm-down posture from all holds at the right and left targets in the palm-mid posture. In the generalization test, the base classifier was tested on how well it could dissociate holds at the top and bottom targets in the palm-down posture from holds at the top and bottom targets in the palm-mid posture. Note that the target positions in the hold case were the same as those in the movement tests. On average, 48 trials (SE = 1.48) were used in each posture class for base classification and 46 trials (SE = 1.31) in each class for generalization. The results from this case and the case in which the base classification and generalization sets were switched were averaged to give the overall classification accuracies. As with movements, all ROIs had above-chance cross-validation accuracy: M1a, t(13) = 7.50, p < 0.001; M1p, t(13) = 8.90, p < 0.001; PMd, t(13) = 16.96, p < 0.001; PMv, t(13) = 15.14, p < 0.001; SPL, t(13) = 13.46, p < 0.001; and CA, t(13) = 6.01, p < 0.001 (Fig. 7B). However, no ROI exhibited significant above-chance generalization (Fig. 7C). Because the base classification model differed for moves and holds, we cannot directly compare the generalization accuracies for dynamic and static posture.
Decoding direction and joint across posture
In the final analysis, we examined the decoding of movement direction and joint angle representations that are independent of posture. This allowed us to measure both the main effects of direction and joint angle and potential posture–direction interactions. A posture–direction interaction in a region is likely if the region can generalize to direction when postures are pooled in both base classification and generalization (Fig. 6C), but cannot generalize to direction when the generalization test contains movements in different postures from the movements in base classification.
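This decision rule can be stated compactly in code. The sketch below is an informal screen based on the pattern of significant and nonsignificant generalization, not a formal statistical interaction test; the threshold and inputs are assumptions.

```python
# Informal screen for a posture-direction interaction from the two
# generalization outcomes described in the text.
from scipy import stats

def interaction_likely(acc_pooled, acc_cross, chance=0.5, alpha=0.05):
    """acc_pooled/acc_cross: per-subject generalization accuracies for the
    pooled-posture and cross-posture direction tests, respectively."""
    p_pooled = stats.ttest_1samp(acc_pooled, chance, alternative="greater").pvalue
    p_cross = stats.ttest_1samp(acc_cross, chance, alternative="greater").pvalue
    # Interaction is suggested when direction generalizes with postures pooled
    # but fails to generalize across postures.
    return p_pooled < alpha and p_cross >= alpha
```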
Base classification involved two large-amplitude movements performed in the same posture but in opposite directions (example shown in Fig. 8A, top). Therefore, each class in base classification differed in both external features (i.e., spatial target, movement direction) and body-related features (i.e., joint angle, muscle). Four possible base classifications satisfied this constraint: (1) left palm-mid versus right palm-mid, (2) left palm-down versus right palm-down, (3) up palm-mid versus down palm-mid, and (4) up palm-down versus down palm-down. The overall base classification accuracy reflected the average of the four individual accuracies, and all reported p-values for base classification and generalization are FDR corrected for the six LH ROIs. For each base classification (and generalization), an average of 35 trials (SE = 1.65) was used in each class. This is approximately half the number of trials used in the previous classifications because trials were not pooled across posture or direction. As shown in Figure 8B, all LH ROIs had above-chance cross-validation accuracies: M1a, t(13) = 9.60, p < 0.001; M1p, t(13) = 7.30, p < 0.001; PMd, t(13) = 9.73, p < 0.001; PMv, t(13) = 10.25, p < 0.001; SPL, t(13) = 8.41, p < 0.001; and CA, t(13) = 14.18, p < 0.001.
Decoding direction and joint across posture. A, Visualization of a representative base classification and generalization pair. Base classification and generalization are depicted in the same way as in Figure 5A except that movements were not pooled across postures. In the example shown, in Generalization 1, a large-amplitude leftward movement in the palm-down posture (top movement in the center panel) was correctly classified if it was decoded as a large-amplitude leftward movement in the palm-mid posture (top movement in the top panel). In Generalization 2, a large-amplitude downward movement in the palm-down posture (i.e., a flexion; left movement in the bottom panel) was correctly classified if it was decoded as a large-amplitude leftward movement in the palm-mid posture (i.e., a flexion). B, Base classification accuracies for LH regions. Overall accuracies reflect the average of the individual accuracies for the four possible base classifications (see Results for details). C, Classification accuracies for the two generalization tests. Accuracies shown reflect the average of four base classification/generalization pairs. Note that M1a, M1p, PMd, and PMv do not have significant generalization accuracies when movement direction is matched in the base classification/generalization pairs. This is in contrast to the significant generalization accuracies for movement direction shown in Figure 6C, when classification was based on pooled postures. Dashed line represents at-chance performance (50%). Error bars represent within-subject SEM (Morey, 2008). Black asterisks indicate significantly above-chance classification after FDR correction (Benjamini and Yekutieli, 2001).
There was a significant difference in eye movements for one of the base classification pairs: saccade amplitude during palm-down movements in the downward direction (M = 0.95, SE = 0.19) was significantly larger than saccade amplitude during palm-down movements in the upward direction (M = 0.80, SE = 0.20) (t(5) = 3.94, p = 0.044, FDR corrected for the four base classification pairs tested). Note, however, that this was the only significant eye movement difference for all four base classification pairs and that the four classification accuracies were similar. Therefore, though we cannot rule out the possibility, it is unlikely that the significant base classification accuracies were driven by differences in saccade amplitude.
Because spatial target, movement direction, and body-related features all differed in base classification, the classifier's model could be based on differences in the BOLD response stemming from any one of these features or from an interaction between features. To determine whether the model was based on external features or on joint-based features, we performed two separate generalization tests. In the first, the base classifiers for each region were tested on how well they could generalize to movements that shared spatial target location and movement direction but not joint angle (Fig. 8A, center); in the second, to movements that shared joint angle but differed in movement direction and target location (Fig. 8A, bottom). Note that both generalization tests differed in posture from base classification, so any successful generalization could not draw on responses related to postural representations. Conversely, if representations were posture dependent, then generalization would be unlikely in either test.
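To make the construction of the two tests explicit, the labels below show one representative base/generalization assignment, with each movement tagged by (direction, posture, joint action). The tuple format is an illustrative assumption; the condition contents follow the examples given in the text.

```python
# Representative condition labels for the two cross-posture generalization tests.
base = {"class0": ("right", "palm_down", "ulnar_deviation"),
        "class1": ("left",  "palm_down", "radial_deviation")}

# Test 1 (extrinsic): same target/direction, different posture and joint action.
gen_extrinsic = {"class0": ("right", "palm_mid", "extension"),
                 "class1": ("left",  "palm_mid", "flexion")}

# Test 2 (intrinsic): same joint action, different direction/target and posture.
gen_intrinsic = {"class0": ("down", "palm_mid", "ulnar_deviation"),
                 "class1": ("up",   "palm_mid", "radial_deviation")}
```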
In the first generalization test, the base classifier was tested on how well it could distinguish between movements that matched in terms of spatial target and movement direction (e.g., base classification: right palm-down vs left palm-down; generalization: right palm-mid vs left palm-mid). The overall decoding accuracy reflects the average of the four tests. Here, only SPL (t(13) = 4.65, p < 0.001) and CA (t(13) = 5.92, p < 0.001) showed significant above-chance generalization accuracies (Fig. 8C). This implies that the base classifier detected BOLD responses related to target location/movement direction representations in SPL and CA, but not in the other ROIs. Based on this result and the previous results for decoding of spatial target, it is likely that CA detects visual target features, whereas SPL may detect spatial target location and/or movement direction (Fig. 6C). M1a, M1p, PMd, and PMv all classified above chance for direction in the previous test, when postures were pooled (Fig. 6C), but not here, when the test was across postures. This suggests that representations of movement direction in these regions are posture dependent.
The second generalization test evaluated the extent to which the base classifier in an ROI could distinguish between movements based on the joint angle of the movement (e.g., base classification: a right palm-down ulnar deviation vs a left palm-down radial deviation; generalization: a down palm-mid ulnar deviation vs an up palm-mid radial deviation). No ROI classified significantly above chance in this intrinsic generalization test, suggesting that, at least with the present analysis, intrinsic joint angle cannot be decoded from these sensorimotor regions independently of posture (Fig. 8C). Note, however, that for both generalization tests of direction and joint angle across posture, chance-level generalization does not strictly imply that these ROIs do not code for the features tested. It is possible that the base classifier's model was based predominantly on spurious factors or on features not specifically tested. Alternatively, the classifier's model may have been unable to discriminate between mixed representations present within a single ROI.
Discussion
We applied MVPA to fMRI data to identify representations of individual movements in the human motor system in terms of sensory or motor properties. Our goal was to characterize these representations and their interactions to identify regions involved in sensorimotor transformations. Using a robust generalization procedure, we found evidence that SPL is likely involved in facilitating a transformation between spatial target location and movement direction. Furthermore, we found that both PMd and PMv showed sensitivity to movement direction independently of differences in target location or movement amplitude, suggesting that these regions are broadly tuned to direction. In contrast, directional tuning in M1a and M1p depended on movement amplitude. Finally, posture was a modulating factor across all sensorimotor regions examined, highlighting the importance of body-specific components in movement planning and execution.
Previous neuroimaging studies have used a variety of behaviors to dissociate brain maps related to sensorimotor transformation. These include tasks that manipulate movement direction (Cowper-Smith et al., 2010; Eisenberg et al., 2010; Fabbri et al., 2010; Toxopeus et al., 2011), spatial target location (Eisenberg et al., 2011), static posture (Suminski et al., 2007), dynamic posture (Pellijeff et al., 2006; Parkinson et al., 2010), movement goal (Gallivan et al., 2013), and reference frames (Ogawa and Inui, 2012). The present results build on previous work by identifying how multiple features are coded together in different regions of the human motor system. With an MVPA generalization approach, we simultaneously studied both sensory- and movement-related features using the same set of movements. For example, we successfully distinguished between representations of target location and movement direction without the need to change the task by remapping direction via visuomotor rotation (Eisenberg et al., 2011). In addition, our participants made free wrist movements without the use of a manipulandum, allowing movement representations to be established with respect to the hand, rather than with respect to a tool.
SPL contained representations for both spatial target location and movement direction, which is consistent with single-neuron recordings in nonhuman primates (Lacquaniti et al., 1995; McGuire and Sabes, 2011; Bremner and Andersen, 2012), fMRI results in humans (Connolly et al., 2003; Medendorp et al., 2003), and behavioral findings in optic ataxia patients (Khan et al., 2013). Furthermore, the interaction between spatial target and movement direction in SPL (i.e., the enhanced generalization accuracy when target information was available) suggests that SPL might perform computations involving both features that are crucial for a transformation. Although the present data are consistent with the idea that information about target and direction is combined within SPL, we cannot conclude that this is a forward transformation because the results might alternatively reflect feedback from upstream processing or contributions from independent modules (Buneo et al., 2002; Chang and Snyder, 2010; Konen et al., 2013). It is important to consider that, although the present study focuses on SPL, similar transformations between visual and motor information have been found in humans across multiple subregions in the posterior parietal cortex, including those along the intraparietal sulcus (Grefkes et al., 2004; Tanabe et al., 2005; Blangero et al., 2009; Erickson and Kayser, 2013). Future work is needed to clarify the interplay between these subregions in processing the transformation from target location to movement direction.
Unlike in SPL, target location did not affect generalization accuracies for movement direction in PMv and PMd. This finding is consistent with results from single-neuron recordings in PMd and PMv in nonhuman primates, in which neurons show sensitivity to direction independently of target location (Kurata, 1993; Shen and Alexander, 1997b). Furthermore, direction was reliably decoded in PMd and PMv even when movement amplitude differed from base classification, suggesting a general representation of direction. However, generalization accuracies for movement direction were higher when movement amplitude was consistent across the base classification and generalization sets, which may imply that these regions are involved in the generation of specific movement vectors (Pesaran et al., 2006). Note that movement direction as discussed here is an arbitrary feature that correlates with other movement factors, such as force output and goal state, which we cannot disentangle in the present study.
Previous studies have characterized differences in spinal connectivity (Rathelot and Strick, 2009) and functionality (Geyer et al., 1996) between the putative M1a and M1p subdivisions, motivating our interest in analyzing these areas separately. Neurons in M1p exhibit more muscle-like properties, likely because of their direct projections to motoneurons. In contrast, M1a neurons are more related to external features and lack direct projections onto motoneurons (Scott and Kalaska, 1997; Rathelot and Strick, 2009). Our results show that the two M1 subdivisions behave similarly in that both generalized to movement direction when movement amplitude, but not spatial target, matched the movements in base classification. This suggests that M1 representations of direction are amplitude dependent, supporting recent MVPA findings that M1 can distinguish between different amplitudes during drawing movements (Kadmon Harpaz et al., 2014).
Movement direction was successfully decoded in motor regions when postures were pooled together, but testing for generalization across posture revealed that movement representations are posture specific. Previous fMRI and neurophysiology studies have found evidence for posture-dependent mappings throughout the motor system (Scott and Kalaska, 1997; Ajemian et al., 2001; Graziano et al., 2002; Parkinson et al., 2010; Zimmermann et al., 2013). In our task, we found direct evidence for representation of posture in all sensorimotor regions tested. Conversely, the purely visual CA failed to generalize across direction to dissociate palm-down from palm-mid postures, indicating that the above-chance generalization for posture in sensorimotor ROIs was not due to visual or other spurious factors. Furthermore, testing across posture eliminated generalization to movement direction in M1a, M1p, PMd, and PMv (cf. Figs. 6C, 8C). SPL generalized across posture, but this may be more related to visual coding of the target than to posture-invariant directional tuning (Fernandez-Ruiz et al., 2007).
During movement, SPL in particular is thought to combine visual and proprioceptive signals to continually update an internal estimate of the limb (Wolpert et al., 1998; Shi et al., 2013). Under static conditions, M1 shows more posture-related activity than premotor or parietal areas (Kalaska and Hyde, 1985; Crammond and Kalaska, 1996). Surprisingly, we found that all sensorimotor regions could decode posture during movement, but no ROI successfully decoded posture during hold trials. This suggests that representations of posture are related to posture-dependent planning rather than to the static maintenance of position (Ajemian et al., 2001). However, it is important to keep in mind that, here, lack of generalization does not strictly imply that static posture is not coded in the motor system. It is possible that the base classifiers were not sufficiently trained to detect the relevant features, especially given that fewer trials were available for posture during holds than during movement.
Although previous evidence suggests that neural activity in motor and premotor cortices is related to intrinsic patterns of muscle activity (Kakei et al., 1999; Kakei et al., 2001), which would vary with joint angle, we found no significant decoding of joint angle in any of the ROIs tested. However, decoding “pure” intrinsic features from fMRI data is difficult given our task design. To perform free wrist movements, participants also had to maintain forearm posture against gravity during each stage of movement (execution and the hold period). Therefore, each movement required motor cortex activity for postural control as well as for flexion or extension across a given joint. We speculate that the neuronal populations for postural and dynamic control are closely admixed in the motor cortex, in which case the generalization approach would be unable to detect a truly mixed representation.
In general, mixed representations in cortical regions pose a challenge for standard MVPA algorithms, which assume singular representations within a region. In the present study, we used MVPA to detect representations of multiple features and their interactions within a region, using the generalization procedure as a strict criterion: generalization exceeds chance only if the classifier's behavior is based on the particular feature(s) tested. With this approach, we found that the interactions between input and output features in SPL are consistent with the idea that this region is involved in computations underlying a transformation from spatial target to movement vector. We also identified important interactions among movement direction, amplitude, and posture in premotor and primary motor cortex. Although the present study focused on transformations between features, a future challenge is to identify loci for transformations of the same feature between multiple frames of reference (Ogawa and Inui, 2012). Because reference frames are often mixed within neural populations (Kakei et al., 1999; Chang and Snyder, 2010; McGuire and Sabes, 2011), parsing out these representations with fMRI will require a more sophisticated approach, such as finding voxel subsets that best code for a movement property or reference frame of interest. The ability to characterize both between-feature and within-feature interactions will give a more complete understanding of the large-scale representations underlying sensorimotor transformations for goal-directed movement.
Footnotes
This work was supported by the National Science Foundation (Grant DGE-1144085 to D.A.B.), by Public Health Service Grant NS44393 (to S.T.G.), and by the Institute for Collaborative Biotechnologies (Contract W911NF-09-0001 from the U.S. Army Research Office to S.T.G.). We thank Felix Rudert and Joe Jablonski for making the motion tracker MRI-compatible; Mario Mendoza for MRI technical assistance; Tim Preston and Eamon Caddigan for assistance with collecting and analyzing eye-tracking data; Nicholas Wymbs for the freehand drawing of the region-of-interest parcellations; and Pavithra Rajesh, Chad Davis, and Michael Roe for assisting with fMRI data collection.
The authors declare no competing financial interests.
Correspondence should be addressed to Scott T. Grafton, MD, UCSB Brain Imaging Center, Department of Psychological and Brain Sciences, University of California, Santa Barbara, UCEN Drive, Santa Barbara, CA 93106-9660. grafton@psych.ucsb.edu