Abstract
We tested human patients with cerebellar degeneration in a task designed to isolate different aspects of motor planning and found a specific relationship between their ability to perform inverse kinematic transformations and the sparing of Crus I. Our approach was based on an experimental design introduced by Sober and Sabes (2003, 2005). Their paradigm allows behavioral deficits in planning of movement direction to be dissociated from deficits in generation of motor commands and also allows the relative roles played by visual and proprioceptive information to be quantified. Perturbation of visual information about hand position affected cerebellar degeneration patients (N = 12) and age-matched controls equally in determining movement direction, but had a smaller effect in both groups on the transformation of movement direction into a motor command. However, when provided with vision of the joints, control participants were more strongly affected by the perturbation when generating the motor command, whereas cerebellar degeneration participants were not. Thus, cerebellar patients were less able to use visual information about the joints in generating motor commands. Voxel-based morphometric analysis showed that this inability was primarily correlated with degeneration of Crus I. These results show that the cerebellum plays a role in motor planning, and specifically in the generation of inverse kinematic models for sensorimotor processing. The involvement of Crus I is consistent with an emerging picture in which increasingly posterior lobules of the anterior cerebellar cortex are associated with increasingly complex and abstract aspects of motor behavior.
Introduction
Many recent studies point to a role for the cerebellum in the generation of internal models (Wolpert et al., 1998). While there might be different types of internal models, the focus in the literature has been on forward models of dynamics, useful in achieving stable control in the face of sensorimotor delays (Shadmehr et al., 2010). Here, we focus on the hypothesis that the cerebellum also plays a role specifically in internal kinematic models (Gentili et al., 2009; Lisberger, 2009). A kinematic model would be necessary, for instance, when transforming visual information into coordinates necessary for motor planning (Shadmehr and Wise, 2005).
We explored the hypothesis that the cerebellum plays a role in inverse kinematic transformations necessary for generating motor commands that rely on visual information. To do this, we relied on earlier work that identified errors caused specifically by a failure in this transformation. Sober and Sabes (2003) proposed that arm position is estimated twice in the course of planning a reaching movement: first, in calculating the desired direction of movement (MV) toward the target; second, in calculating the appropriate motor command to produce the required joint changes and muscle activations (INV). The transformation from desired movement direction to the appropriate motor command is referred to as an “inverse transformation” (Jordan, 1996). For both estimates, the motor control system relies primarily on sensory information from proprioception and vision. If sensory information is misleading, this will generate errors in the estimations of arm position and cause two different types of errors in movement (Sober and Sabes, 2003): (1) MV errors occur if a wrong movement vector is calculated; (2) INV errors occur if the transformation of the movement vector into a motor command is incorrect. Because these two types of errors vary differently with movement direction, it is possible to distinguish MV errors from INV errors (Fig. 1). Sober and Sabes used shifted visual information to produce errors in movement, and thus isolated the relative importance of vision in the two estimates of hand position. The MV estimate depends mostly on visual information, while the INV estimate depends more heavily on proprioception. However, the relative weight of vision and proprioception depends on the task (Sober and Sabes, 2005). For instance, when providing visual information of joint position of the whole arm instead of only hand position feedback (Fig. 1), the weight of vision in the calculation of motor commands increases.
We predicted that this increased reliance on vision would not be seen in patients with cerebellar atrophy, because use of visual information for generating motor commands requires an inverse kinematic model generated by the cerebellum. We tested this prediction by applying the experimental design of Sober and Sabes (2003) in patients with cerebellar degeneration and matched control subjects. Our findings confirmed our predictions and, furthermore, using voxel-based morphometry (VBM), showed that Crus I has a preferential role in this transformation. In Discussion, we suggest that this is consistent with the idea that increasingly posterior cerebellar lobules are involved in increasingly abstract and complex aspects of motor planning and control.
Materials and Methods
Participants
Twelve participants with cerebellar degeneration (7 males, 5 females; mean age, 52.3 ± 14.5 years; range, 27–73 years) and 12 age- and gender-matched controls without any known neurological diseases (7 males, 5 females; mean age, 52.6 ± 14.5 years; range, 28–74 years) were included in the study. All participants were right-handed. The severity of cerebellar symptoms for all participants was assessed by an experienced neurologist based on the International Cooperative Ataxia Rating Scale (ICARS) (maximum score, 100) (Trouillas et al., 1997) and the scale for assessment and rating of ataxia (SARA) (maximum score, 40) (Schmitz-Hübsch et al., 2006). In the cerebellar patients' group, mean total ICARS score was 24.9 ± 11.4 and mean total SARA score was 9.7 ± 4.6. In the control group, mean total ICARS score was 0.2 ± 0.4 and mean total SARA score was 0.1 ± 0.3.
None of the cerebellar patients or control participants revealed extracerebellar signs except mildly reduced malleolar vibration sensation in four cerebellar patients and three control participants. For a detailed overview of cerebellar patients' characteristics, see Table 1. All subjects gave written informed consent before participation in the study. The study was approved by the local Ethics Committee of the University Duisburg-Essen.
Experimental setup
Subjects held a two-joint robotic manipulandum in their dominant, right hand with a power grip (Fig. 2A). The hand was supported with a sling from the ceiling that maintained posture with the arm in a horizontal plane. The manipulandum moved underneath a projection screen located <1 cm above the subject's hand. The location of the manipulandum was measured using encoders on each joint with a resolution of 10^6 counts per revolution, which was then converted to an XY location using standard kinematic transformations. The encoder position and velocity were sampled at 500 Hz using the analog inputs of a motor controller card (Galil Motion Control DMC-1740). The subjects' vision of their arm and the manipulandum was blocked with a cloth stretched from the screen to the subject's neck. Visual feedback on the projection screen was provided by a projector with a resolution of 1024 × 768 (Samsung L220) positioned 120 cm above the projection screen. Thus, the projector showed images that appeared to lie in the plane of the arm. The distance to the manipulandum and the chair height were individually adjusted to ensure a comfortable position and good vision of the screen.
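The sketch below illustrates such a conversion for a generic serial two-joint linkage. It is not the code used in the experiment; the link lengths, encoder offset, and example counts are assumptions chosen only for illustration.

```python
# Illustrative conversion of joint-encoder counts to an XY handle position
# for a serial two-joint planar linkage. All geometric constants are assumed.
import numpy as np

COUNTS_PER_REV = 1_000_000      # encoder resolution in counts per revolution (assumed)
L1, L2 = 0.46, 0.34             # manipulandum link lengths in meters (assumed)

def counts_to_angle(counts, offset_counts=0):
    """Convert raw encoder counts to a joint angle in radians."""
    return 2.0 * np.pi * (counts - offset_counts) / COUNTS_PER_REV

def forward_kinematics(theta1, theta2):
    """XY handle position; theta2 is the second joint angle relative to the first link."""
    x = L1 * np.cos(theta1) + L2 * np.cos(theta1 + theta2)
    y = L1 * np.sin(theta1) + L2 * np.sin(theta1 + theta2)
    return np.array([x, y])

# Example: one 500 Hz sample of the two encoders (arbitrary counts)
handle_xy = forward_kinematics(counts_to_angle(262_144), counts_to_angle(-131_072))
```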
Feedback of arm position
The projection screen provided subjects with one of two different representations of the arm position. The first was a 1-cm-diameter circle placed immediately above the manipulandum handle and the subject's hand (Fig. 2A, hand feedback). The second was a simplified line drawing representing the position of the subject's upper and lower arm, with a 1-cm-diameter circle at the end point indicating the hand (Fig. 1B, arm feedback).
Task
We defined three starting points located 6 cm apart in the center of the workspace and six targets located 60° apart on a circle with a radius of 12 cm centered on the middle starting point (Fig. 2B). Each trial began when one of the starting points was cued using a green circle of 1.4 cm diameter. Subjects moved the cursor into the circle. After a 1 s hold in the starting circle, one of the targets was cued using a red circle of 1 cm diameter. Targets were presented in a pseudorandom order. The subjects were required to hold in the starting location for an additional 500–1500 ms. At this point, the target turned green, which was the go signal. After the go signal, the subjects moved the manipulandum quickly toward the target and continued through it without stopping. The cursor was erased once the hand had moved 5 mm from the starting point. The remainder of the reach was performed with no visual feedback of hand position. The movement was slowed and stopped by a virtual “cushion” generated by the robot motors that damped the movement after the subject passed the target. Movement onset was defined as the first time that hand velocity exceeded 0.01 m/s; movements were considered terminated when the cursor passed out of a 12 cm circle around the center starting point.
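A minimal sketch of these onset and termination criteria, applied offline to a 500 Hz position recording, is given below. The variable names and the numerical-differentiation step are ours, not the authors' implementation.

```python
# Movement onset: first sample with hand speed above 0.01 m/s.
# Movement end: first subsequent sample outside a 12 cm circle around the center start.
import numpy as np

FS = 500.0            # sampling rate (Hz)
ONSET_SPEED = 0.01    # onset threshold (m/s)
END_RADIUS = 0.12     # termination radius (m)

def segment_movement(xy, center):
    """xy: (N, 2) hand positions in meters; center: center starting point, shape (2,).
    Assumes both thresholds are actually crossed within the trial."""
    speed = np.linalg.norm(np.gradient(xy, 1.0 / FS, axis=0), axis=1)
    onset = int(np.argmax(speed > ONSET_SPEED))                 # first threshold crossing
    dist = np.linalg.norm(xy - center, axis=1)
    end = onset + int(np.argmax(dist[onset:] > END_RADIUS))     # first exit after onset
    return onset, end
```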
Trial type
Following the experimental design of Sober and Sabes (2003), there were five trial types, divided into three categories: one baseline type, two experimental types, and two control types.
Baseline.
(1) Center zero: reaches are made from the center starting point with veridical feedback.
Experimental trials.
(2) Center right: reaches are made with the cursor shifted 6 cm toward the right. The cursor appears in the right starting circle while the hand is actually in the center starting circle.
(3) Center left: The same as center right, but shifted in the opposite direction, with the cursor appearing in the left starting circle.
Control trials.
(4) Left zero: reaches are made from the left starting circle with veridical feedback.
(5) Right zero: reaches are made from the right starting circle with veridical feedback.
The experiment consisted of eight sets. Each set contained trials of more than one type that were combined in a pseudorandom order. The first two sets were familiarization sets and contained 60 trials, 20 each of center zero, left zero, and right zero. In the familiarization sets, the cursor was not erased during the movements, so subjects had visual feedback of arm position throughout the movement. The first set used hand feedback, and the second used arm feedback. The next six sets were experimental sets. Each set contained 30 center left and 30 center right trials. To address the influence of the order of the feedback types, subjects were divided into two groups. The first group did three experimental sets with arm feedback and the next three with hand feedback. The second group had the order reversed. We found no difference in performance between these two groups, so the groups were combined in all analyses.
Exit interview
After the experiment, subjects completed an exit interview designed to determine awareness of the shift in the visual feedback. Subjects were asked whether they felt differences between their hand position and the visual feedback provided. If so, subjects were asked to estimate the size of the difference in centimeters. Means and SDs were calculated for these estimates, and significance was determined using the Mann–Whitney U test.
Data analysis
Our procedures closely follow those of Sober and Sabes (2003). Movement duration was defined as the time period between on-line movement onset and the time the hand reached the target radius. We measured the initial direction by taking the direction from the actual starting point (the center starting point) to the location of the hand when the subject first reached 50% of the maximum speed for that movement. We computed the initial directional error by subtracting the initial direction in each reaching movement from the mean of the initial directions in baseline trials to the same target. Because there was no visual feedback during the actual movement, this measure probably reflects primarily the movement plan and not an on-line feedback correction.

Sober and Sabes' model of movement planning, described in Introduction, hypothesizes that the subject generates two, possibly different, estimates of arm position during movement planning. They provided a mathematical formulation for extracting the weight of vision and proprioception in each estimate from the errors generated by a subject. To avoid confusion, we will use the word estimator to describe values we calculate from the data in our statistical analyses, and leave the word estimate for the hypothesized values generated internally by the subject. Thus, the subject has an estimate of arm position used to calculate the target direction, x̂mv, and another used to calculate the appropriate motor commands, x̂inv. These are each generated from hand position sensed both visually, x̂vis, and proprioceptively, x̂prop. For the purposes of the model, we assume that the estimates are generated as weighted averages of the sensed positions, as follows (Eq. 1):

x̂mv = αmv · x̂vis + (1 − αmv) · x̂prop
x̂inv = αinv · x̂vis + (1 − αinv) · x̂prop

Perturbing the visual feedback changes x̂vis. The effect of this perturbation on the subject's estimates of hand position depends on the weighting constants, αmv and αinv. Different values of the estimates x̂mv and x̂inv will lead to different patterns of directional errors. This is demonstrated in Figure 1, C–F, and a more complete analysis is given in the study by Sober and Sabes (2003). The plots of MV error and INV error as a function of movement direction (Fig. 1D,F, insets) show that errors in x̂mv lead to a sinusoidal pattern in the directional errors, whereas errors in x̂inv lead to a consistent shift in the directional error. Thus, we can use the pattern of directional errors in the data to generate estimators for the values of x̂mv and x̂inv. These, in turn, can be used to generate estimators of the weighting constants, αmv and αinv. Increasing or decreasing the visual weight parameters changes the directional error that the model predicts will be induced by the visual perturbation (Fig. 3). Rightward and leftward perturbations will produce similar error patterns in opposite directions.
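To make the distinction between the two error types concrete, the following sketch simulates them under a simplified, linearized version of this planning scheme. The arm geometry (shoulder at the origin, assumed segment lengths), the hand's starting location, and the Jacobian-based conversion of the movement vector into a joint command are our assumptions for illustration, not the authors' implementation.

```python
# Illustrative simulation: how shifted position estimates produce the two error
# patterns of the Sober and Sabes (2003) model, using an assumed two-link arm.
import numpy as np

L1, L2 = 0.30, 0.33   # assumed upper arm and forearm lengths (m)

def inverse_kinematics(p):
    """Shoulder and elbow angles placing the hand at p (one of the two solutions)."""
    x, y = p
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2.0 * L1 * L2)
    th2 = np.arccos(np.clip(c2, -1.0, 1.0))
    th1 = np.arctan2(y, x) - np.arctan2(L2 * np.sin(th2), L1 + L2 * np.cos(th2))
    return np.array([th1, th2])

def jacobian(th):
    """Hand-velocity Jacobian of the two-link arm at joint angles th."""
    s1, s12 = np.sin(th[0]), np.sin(th[0] + th[1])
    c1, c12 = np.cos(th[0]), np.cos(th[0] + th[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def initial_direction(x_true, x_mv_hat, x_inv_hat, target):
    """Initial movement direction (deg) when the movement vector is planned from
    x_mv_hat and converted to a joint command using x_inv_hat."""
    move_vec = target - x_mv_hat                                                   # MV stage
    d_theta = np.linalg.solve(jacobian(inverse_kinematics(x_inv_hat)), move_vec)   # INV stage
    executed = jacobian(inverse_kinematics(x_true)) @ d_theta                      # actual initial displacement
    return np.degrees(np.arctan2(executed[1], executed[0]))

def wrap(deg):
    """Wrap an angular difference into (-180, 180] degrees."""
    return (deg + 180.0) % 360.0 - 180.0

x_true = np.array([0.0, 0.30])      # hand at the center start (assumed location)
shift = np.array([0.06, 0.0])       # 6 cm rightward shift of the visual feedback

for ang in np.radians(np.arange(30, 360, 60)):   # six targets, 60 deg apart (illustrative)
    target = x_true + 0.12 * np.array([np.cos(ang), np.sin(ang)])
    baseline = initial_direction(x_true, x_true, x_true, target)
    mv_err = wrap(initial_direction(x_true, x_true + shift, x_true, target) - baseline)
    inv_err = wrap(initial_direction(x_true, x_true, x_true + shift, target) - baseline)
    print(f"target {np.degrees(ang):5.0f} deg:  MV-only error {mv_err:6.1f},"
          f"  INV-only error {inv_err:6.1f}")
# The MV-only error varies roughly sinusoidally with target direction, while the
# INV-only error is closer to a uniform shift, qualitatively matching Figure 1.
```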
Estimators for αmv and αinv (or fitting model predictions to the data)
For each subject, the values of the weight coefficients (αmv and αinv in Eq. 1) were simultaneously determined to best fit the directional errors calculated from the center right, center left, and center zero trials. As in the study by Sober and Sabes (2003), the weighting parameters αmv and αinv were fit to minimize the squared error between the model predictions and the measured initial movement directions using a nonlinear regression algorithm. To assess whether the model captured the main features of the observed error pattern, we measured R2, the proportion of the total variance explained by the model. The weights were calculated separately for arm and hand feedback. x̂prop was set to the Cartesian location of the center starting point in all experimental trials, and x̂vis was set to the Cartesian location of the visual feedback seen by the subject (either the same as, or 6 cm to the left or right of, x̂prop).
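A sketch of such a fit is given below. The authors used MATLAB; here we use SciPy, and `predict_direction` stands for any forward model of the planning scheme (for example, the two-link sketch above). The trial dictionary fields are hypothetical names for the quantities defined in the text.

```python
# Fit alpha_mv and alpha_inv by nonlinear least squares on the initial directions.
import numpy as np
from scipy.optimize import least_squares

def fit_visual_weights(trials, predict_direction):
    """trials: list of dicts with keys x_prop, x_vis, target, baseline_direction,
    and direction_error (degrees). All key names are hypothetical."""
    def residuals(alpha):
        a_mv, a_inv = alpha
        res = []
        for t in trials:
            x_mv_hat  = a_mv  * t["x_vis"] + (1.0 - a_mv)  * t["x_prop"]   # Eq. 1
            x_inv_hat = a_inv * t["x_vis"] + (1.0 - a_inv) * t["x_prop"]
            pred = predict_direction(t["x_prop"], x_mv_hat, x_inv_hat, t["target"]) \
                   - t["baseline_direction"]
            # wrapped difference between predicted and measured directional error
            res.append(((pred - t["direction_error"]) + 180.0) % 360.0 - 180.0)
        return np.array(res)

    fit = least_squares(residuals, x0=[0.5, 0.5], bounds=([0.0, 0.0], [1.0, 1.0]))
    errs = np.array([t["direction_error"] for t in trials])
    r2 = 1.0 - np.sum(fit.fun ** 2) / np.sum((errs - errs.mean()) ** 2)   # variance explained
    return fit.x, r2    # (alpha_mv, alpha_inv), R^2
```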
Statistical analysis of behavioral results
Data analysis was performed using MATLAB with the Statistics Toolbox (Mathworks). Paired t tests were used to compare motor performance of patients and controls. Motor performance was characterized using movement duration, maximum velocity, and SD of the directional error. Cerebellar volumes and values for αmv and αinv were highly non-Gaussian, so the Mann–Whitney U test was used. Spearman's rank correlation coefficient was used to assess correlations between visual weight parameters and cerebellar degeneration. We used a significance threshold of p < 0.05.
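For reference, SciPy equivalents of these tests would look as follows; this is an illustrative sketch only (the analysis itself was run in MATLAB), and the function and argument names are ours.

```python
# Illustrative SciPy equivalents of the statistical tests listed above.
from scipy import stats

def compare_matched_groups(controls, patients, gaussian):
    """Paired t test for approximately Gaussian measures (matched pairs share an
    index); Mann-Whitney U test for the non-Gaussian measures (volumes, alpha weights)."""
    if gaussian:
        return stats.ttest_rel(controls, patients)
    return stats.mannwhitneyu(controls, patients)

def weight_degeneration_correlation(alpha_values, gray_matter_volumes):
    """Spearman's rank correlation between a visual weight and cerebellar gray matter."""
    return stats.spearmanr(alpha_values, gray_matter_volumes)
```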
Voxel-based morphometry
High-resolution three-dimensional T1-weighted MPRAGE scans were obtained for each participant (TR, 2400 ms; TE, 3.63 ms; FOV, 280 mm; matrix, 256 × 256; voxel size, 1.0 × 1.0 × 1.0 mm3) using a 1.5 T MRI scanner (Siemens). We implemented a version of the standard VBM method (Ashburner and Friston, 2000). More specifically, VBM of the cerebellar cortex was performed in the cerebellar patients using the SUIT toolbox in SPM5 (http://www.fil.ion.ucl.ac.uk/spm/software/spm5), developed by Jörn Diedrichsen (Institute of Cognitive Neuroscience, London, UK; http://www.icn.ucl.ac.uk/motorcontrol/imaging/suit_register.htm) (Diedrichsen et al., 2009). The program first assigns each voxel a probability of being gray matter, white matter, or CSF according to the voxel intensity. It then uses this information, together with prior knowledge about where the cerebellum ought to be, to isolate the cerebellum from surrounding tissue (Diedrichsen, 2006). Further analysis masks out the rest of the brain and relates only to the cerebellum. A nonlinear normalization that projects the MRI volumes of all individual subjects onto a probabilistic atlas of the cerebellum is then applied. We used the deformation map generated by the normalization step to map individual subject images onto the template SUIT image and the inverse deformation map to map the template SUIT image onto individual subject images (http://www.fil.ion.ucl.ac.uk/spm/software/spm5).
Degeneration of the cerebellum decreases the density of cerebellar gray matter. The SUIT toolbox uses a version of the segmentation algorithm of John Ashburner (Ashburner and Friston, 2005) to ascribe to each voxel a probability of being white matter, gray matter, or CSF. As argued by Ashburner and Friston, the probability of being gray matter can be interpreted as the concentration of gray matter in that voxel (Ashburner and Friston, 2000). The gray matter that an individual subject has associated with a particular voxel in the SUIT template is the volume mapped onto that voxel (under the deformation map described above) multiplied by the concentration of gray matter in that volume as determined by the segmentation algorithm (Donchin et al., 2012). Mapping each patient's gray matter concentration map onto the SUIT template gave us a voxel-by-voxel assessment of the gray matter associated with every voxel in the SUIT template. We smoothed this map using an 8 × 8 × 8 mm3 median filter.
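The per-voxel computation amounts to multiplying the mapped volume by the gray matter concentration and then median filtering. A minimal sketch is shown below, assuming the SUIT normalization has already produced, in template space, the resampled gray matter concentration and the Jacobian determinant of the deformation; the array and function names are hypothetical.

```python
# Gray-matter volume associated with each SUIT template voxel (illustrative).
import numpy as np
from scipy.ndimage import median_filter

def gray_matter_map(gm_concentration, jacobian_det, voxel_size_mm=1.0):
    """gm_concentration, jacobian_det: 3-D arrays in SUIT template space."""
    voxel_volume = voxel_size_mm ** 3
    gm_volume = gm_concentration * jacobian_det * voxel_volume   # mapped volume x concentration
    return median_filter(gm_volume, size=8)   # 8 x 8 x 8 mm^3 median filter at 1 mm voxels
```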
The degree of degeneration determined using the algorithm described above was compared with degeneration assessed with conventional volumetry using semiautomatic algorithms of the ECCET software (http://www.eccet.de). Correlation with conventional volumetric measures of the volume of gray matter in individual lobules was high (ρ = 0.89; p < 0.0004).
To estimate cerebellar atrophy independently of head size, we expressed all cerebellar volumes as a percentage of the subject's total intracranial volume (TICV) (Fig. 4D).
We next calculated the correlation of αmv and αinv (Eq. 1) with total normalized cerebellar gray matter volume and also with normalized gray matter volume of individual cerebellar voxels across patients. Significance of correlations with total cerebellar volume was determined using the standard significance test for Spearman's correlation. Significance of correlations with individual voxels was determined using a permutation test. To perform the permutation test, we created 500 permutations of the data in which, for each permutation, the matching of subject behavioral performance to MRI data was randomized. For each of these permuted data sets, we calculated the correlation across patients of αmv and αinv in Equation 1 with normalized gray matter volume in individual voxels and then took the maximum of these correlations over voxels. We set our 95% significance level such that 25 of the 500 permuted data sets had maximal correlation values outside of the significance level. In this way, our significance level corrects appropriately both for the test repetition caused by testing many voxels and for the reduction in effective test repetitions caused by correlations between voxels.
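A sketch of this maximum-statistic permutation test is shown below, assuming the data have been arranged as a length-N vector of per-patient α values and an N × V matrix of normalized gray matter volumes (one column per voxel); these names and the data layout are ours.

```python
# Maximum-statistic permutation threshold for voxelwise Spearman correlations.
import numpy as np
from scipy.stats import spearmanr

def voxelwise_correlation_threshold(alpha, gm, n_perm=500, level=0.05, seed=0):
    """alpha: (N,) behavioral weights; gm: (N, V) normalized gray matter volumes."""
    rng = np.random.default_rng(seed)
    max_corr = np.empty(n_perm)
    for i in range(n_perm):
        shuffled = rng.permutation(alpha)        # break the subject-to-MRI pairing
        corrs = []
        for v in range(gm.shape[1]):             # correlation with every voxel
            rho, _ = spearmanr(shuffled, gm[:, v])
            corrs.append(rho)
        max_corr[i] = np.nanmax(corrs)            # maximum over voxels
    # 95th percentile of the max distribution: exceeded by ~25 of 500 permutations
    return np.quantile(max_corr, 1.0 - level)

# A voxel's observed correlation is deemed significant if it exceeds this threshold.
```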
In addition, we estimated the slope of the relationship between normalized gray matter volume and αinv for each voxel using the Theil–Sen estimator (the median of the slopes between each pair of points).
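The Theil–Sen slope for one voxel can be computed directly from this definition, as in the sketch below (scipy.stats.theilslopes provides an equivalent estimate); the argument names are ours.

```python
# Theil-Sen estimator: median of the slopes between all pairs of points.
import numpy as np
from itertools import combinations

def theil_sen_slope(gray_matter, alpha_inv):
    """Slope of alpha_inv against one voxel's normalized gray matter values."""
    pairs = combinations(range(len(gray_matter)), 2)
    slopes = [(alpha_inv[j] - alpha_inv[i]) / (gray_matter[j] - gray_matter[i])
              for i, j in pairs if gray_matter[j] != gray_matter[i]]
    return np.median(slopes)
```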
Results
Motor performance
As expected, cerebellar patients and controls had significant differences in motor performance. Figure 4, A–C, shows measures characterizing motor performance for patients and controls: movement duration (158 ± 9.0 ms for controls, 262 ± 6.9 ms for patients), maximum velocity (46.9 ± 7.9 cm/s for controls, 18.4 ± 10.0 cm/s for patients), and the SD of the directional error calculated across all trials (5.8 ± 2.0° for controls, 12.0 ± 2.8° for patients). Each of these measures showed a significant difference between patients and controls (movement duration, F(1,24) = 11.5, p = 0.0026; maximum velocity, F(1,24) = 25.82, p < 0.001; SD of directional error, F(1,24) = 27.82, p < 0.001).
Trajectory analysis
For model analysis purposes, we characterized the subject's initial reach direction using the hand position along the trajectory at which the velocity first exceeded 50% of its maximal value. This occurred at 117 ± 19 ms after reach onset for controls and 142 ± 32 ms for cerebellar patients. Figure 5 shows reach trajectories to all six targets for a typical control subject and patient for center right trials (Fig. 5A,D), center zero trials (Fig. 5B,E), and center left trials (Fig. 5C,F) for both hand (red) and arm (blue) feedback. For all subjects, leftward perturbations induced rightward directional errors and rightward perturbations induced leftward directional errors. Directional errors were in the same direction for arm and hand feedback; however, the magnitudes of the errors differed. These tendencies are exemplified in Figure 6, A–D. Here, rightward directional errors are positive when moving away from the body (directions 45, 90, and 135°) and negative when moving toward the body (directions −45, −90, and −135°), and leftward directional errors have the opposite signs. The consistency across subjects is demonstrated in Figure 6, E and F. The effect of the visual feedback depended on the direction of movement. For controls, hand feedback induced smaller errors than arm feedback during movements away from the body, and the reverse was true during movements toward the body. This can be seen in Figure 6, A and C, and leads to a positive shift of the blue line in Figure 6E and a negative shift of the blue line in Figure 6F. This will cause αinvarm to be larger than αinvhand for control subjects, as explained in Figure 3. For patients, the difference is in the opposite direction, although not significantly so. We did find a significant difference in the change in directional error (arm − hand) between control subjects and patients for both right and left perturbations (p < 0.002). It is this difference that causes the difference between patients and controls in the estimators of αinv discussed below.
Visual weight parameters
We fit the model described by Sober and Sabes (2003) (see also Materials and Methods) to our data. The quality of fit was quite good. The median R2 value of the fit for the controls was 0.63 (range, 0.55–0.81) for the hand feedback data and 0.68 (range, 0.51–0.87) for the arm feedback data. For patients, the corresponding values were 0.80 (range, 0.65–0.96) and 0.78 (range, 0.56–0.93).
The estimators of the weighting parameters for the example control subject (Fig. 7A,C) were αmvhand = 0.64 and αinvhand = 0.22 for hand feedback, and αmvarm = 0.57 and αinvarm = 0.42 for arm feedback. The model captures the features of the observed error pattern discussed above (R2 of the fit is 0.71 for hand feedback and 0.73 for arm feedback). This suggests that, for movement vector planning (MV) using hand feedback, this control subject relied primarily on vision to estimate hand position (64% visual and only 36% proprioceptive). When translating the MV into a motor command (INV), this subject increased reliance on proprioceptive feedback (22% visual and 78% proprioceptive). The MV numbers did not change much when arm feedback was used (57% visual and 43% proprioceptive). However, the INV numbers did: arm feedback increased reliance on vision in translating the movement vector into a motor command (42% visual and 58% proprioceptive). These results reproduce findings reported by Sober and Sabes (2005).
Results were different for the example cerebellar subject. Our estimators of the weighting parameters (Fig. 7B,D) were αmvhand = 0.65 and αinvhand = 0.29 (R2 = 0.71), and αmvarm = 0.61 and αinvarm = 0.05 (R2 = 0.74). Thus, the cerebellar subject is similar to the control subject during hand feedback. However, this subject differs from the control in that, with arm feedback, reliance on vision for the INV estimate (αinv) decreased rather than increased: reliance on vision fell from 29 to 5% when more visual information was available.
To see whether this difference between patients and controls was consistent, we compared values of α across subject groups and across feedback conditions; see Figure 7A,B for all α weighting parameters for all subjects. The differences across subjects are summarized in Figure 7C. We found a significant difference in αinvhand between the patient group and the control group (p = 0.016), where controls had lower values of αinvhand than patients. In the control group, we found a significant difference between αinvhand and αinvarm (p = 0.036). Although the differences between the medians of αinvarm and αinvhand were of similar size and opposite direction in the two groups (0.25 for controls and −0.23 for patients), the difference in patients did not reach the threshold of significance (p = 0.067). We did find a significant difference in αinvarm − αinvhand between patients and controls (Mann–Whitney U, p = 0.0012), while there was no difference in αmvarm − αmvhand between patients and controls (p = 0.14; Fig. 8).
Conventional MRI volumetry
As expected, we found a significant difference in normalized cerebellar volume between patients and controls (Mann–Whitney U, p = 0.005; F(1,24) = 15.53). Cerebellar volumes are expressed as a percentage of TICV. Mean normalized cerebellar volume was 11.09% for controls and 8.95% for patients (Fig. 4D). We also found a significant correlation between the αinvarm − αinvhand parameter and normalized cerebellar volume across all subjects (ρ = 0.7; p = 0.0002; Fig. 8). To rule out the possibility that the correlation was driven only by differences between the two groups, we also tested the correlation in the patient group separately (ρ = 0.60; p = 0.041).
VBM
We next tried to localize areas of the cerebellum where degeneration was correlated with the visual weight parameters. On the whole, correlations of degeneration with MV visual weights were weaker than with INV visual weights in both feedback conditions. Likewise, correlations with visual weights in the arm feedback condition were stronger than in the hand feedback condition for both MV and INV.
Significant positive correlations between local cerebellar degeneration and αinvarm were detected primarily bilaterally within Crus I (Fig. 9), with direct extensions into neighboring lobules VI and Crus II on the right side. Further correlations were found within lobules V, VIII, and IX, and in vermal lobules IX and X. The volumes showing a significant correlation were by far the largest bilaterally in Crus I (2.85 cm3 on the right and 2.13 cm3 on the left). The second largest volume was within the neighboring parts of lobule VI on the right (1.48 cm3). All other volumes were considerably smaller (0.01–0.5 cm3; Table 2). Decreasing gray matter density (that is, increasing atrophy) in these cerebellar areas was thus associated with less use of visual input in generating motor commands when arm feedback was given.
αinvhand showed similar (but much more subtle; volumes, ≤0.42 cm3) correlations bilaterally with the gray matter density of voxels in lobules VI, Crus I, Crus II, VIII, IX, and the vermal lobule X.
Significant correlations between degeneration and αmvarm comprised small patches (volumes, ≤0.26 cm3) of gray matter located on the right side in lobules VI, Crus I, Crus II, VIII, and IX, and in the vermis (lobules VIII and X). Correlations with αmvhand were significant only in lobule IX and the vermis (lobules VIII and X).
Together, the VBM data show that the ability to use visual information of joint position in generating motor commands correlates primarily with degeneration of Crus I.
One concern that might be raised is that areas of the cerebellum where degeneration tends to be less pronounced might be overlooked by our analysis of correlations. To address this, we also examined a nonparametric estimate of the slope of αinvarm as a function of the gray matter in specific voxels. This estimate should be independent of the amount of variability in each voxel. There was essentially no difference between the areas identified using this measure and those identified using the standard Spearman correlation.
Exit interview
The results of the exit interview showed that 10 of 12 controls (83.3%) and the same proportion of patients (10 of 12; 83.3%) noticed the perturbations. Controls estimated that the perturbations had a magnitude of 3.8 ± 2.8 cm (mean ± SD), and patients estimated 4.1 ± 2.8 cm. The estimates of controls and patients did not differ significantly (Mann–Whitney U, p = 0.776).
Discussion
Our results show that the cerebellum plays a key role in the inverse kinematic mappings that allow us to use visual information about body position in the generation of motor commands. They further suggest that this inverse kinematic mapping depends predominantly on the integrity of Crus I, indicating a role for this area in the control of sensorimotor functions.
Our results support the framework first developed by Sober and Sabes (2003, 2005), which hypothesized separate estimates of arm position used in planning movement direction and in translating that movement direction into a motor command. However, we extend their findings by showing that the contribution of visual information to one of those estimates depends specifically on the cerebellum. When provided with visual information about joint positions, healthy subjects use that information when generating a motor command. Cerebellar patients did not increase their use of visual information, although in other parts of the computation they performed similarly to controls (Fig. 6). Indeed, some cerebellar patients even reduced their use of visual information when presented with the more complex visual input; although this decrease was not significant, its size was similar to the size of the increase in control subjects. While our experiment was not designed to test this question, one possibility is that this reflects some version of Bayesian integration across modalities, as has been shown previously for visual and haptic feedback (Ernst and Banks, 2002), with the motor system relying less on processes that are known to be unreliable when faced with more complex inputs. In any case, our findings support the Sober and Sabes framework, which posits that translating movement direction into a motor command depends on a neural substrate that is separate from that used to generate the desired movement direction, and they localize that substrate at least partially within the cerebellum.
However, our findings also address the more specific issue of the role the cerebellum plays in motor planning. The cerebellum is thought to participate in building internal models for motor control (Wolpert et al., 1998). “Internal models for motor control” is a general term that describes different types of models including forward and inverse models, and including models of kinematics and dynamics (Kawato, 1999). However, most of the research on the topic of cerebellar internal models has focused on dynamic models (for review, see Ebner and Pasalar, 2008). Indeed, much recent work has focused particularly on the idea that the cerebellum plays a role in generating a forward model of arm dynamics that can be used to estimate arm position during movements despite sensory noise and feedback delays (Shadmehr et al., 2010).
Our results suggest the cerebellum also plays a role in movement planning, during which the issues of sensory delays and control of dynamics have no role. Specifically, our results are consistent with the idea that the cerebellum plays a role in estimation of hand position during movement planning. In this sense, the role for the cerebellum supported by our findings—estimation of hand position during movement planning when the hand is not moving—is similar to the role of the cerebellum that is the focus of other recent work—estimation of hand position during actual movement. The specific internal model necessary to achieve each task, however, is different. Estimation of hand position during movement execution requires a good forward model of movement dynamics to overcome sensorimotor delays. Estimation of hand position during movement planning would require instead an inverse model of arm kinematics to allow visual information to be incorporated into the estimate.
In our subjects, the use of visual information for movement planning was most strongly related to the sparing of Crus I, although it was also partly associated with sparing in lobules V, VI, Crus II, VIII, IX, and X. Lobules V, VI, and Crus I have known connections to primary motor and premotor cortex (MI and PM, respectively). There have been a number of retrograde label tracing studies mapping out the connections of cerebral motor areas to the anterior lobules of cerebellar cortex (primarily lobules IV through Crus I, but also extending into lobule III rostrally and into Crus II caudally). The overall impression from these studies is that increasingly caudal cerebellar areas are connected to increasingly higher levels of the motor hierarchy. For instance: Kelly and Strick (2003) see MI being connected to lobules IV–VI; Hashimoto et al. (2010) find caudal PM connecting to these areas and also Crus I and rostral PM connecting to Crus I and Crus II; Prevosto et al. (2010) find posterior parietal cortex being associated primarily with Crus II. In a review of the literature, Glickstein et al. (2011) propose just such a gradient, at least for the cerebellar afferents. A similar picture emerges in a study of the corticocerebellar networks seen in functional connectivity maps found using resting state fMRI (Buckner et al., 2011).
It is true that Lu et al. (2007) report results that contradict the earlier Kelly and Strick (2003) finding because they see MI connections also to Crus I, but a likely explanation is that they injected slightly more rostrally than Kelly and Strick. It is widely believed that there is no clear border between MI and PM, so that it makes sense that there would be a gradient of connections from increasingly more posterior cerebellar cortex to increasingly more rostral cerebral cortex.
Another possible objection to the idea that Crus I is part of a rostrocaudal gradient of connection to the cerebral sensorimotor hierarchy is that there have been a number of studies suggesting that Crus I plays a role in cerebellar cognitive processing. In an extensive meta-analysis of fMRI studies, Stoodley and Schmahmann (2009) find consistent support for cognitive processing in Crus I, including language processing and executive functions. We see no contradiction between the two views. In the first place, a sharp division between cognitive and sensorimotor functions may not exist. Most cognitive tasks have motor components, and “complex” motor tasks are complex precisely because they involve cognitive components. Furthermore, Crus I is a large area and anatomical boundaries in the cerebellum have generally not been associated with functional divisions. Thus, it is possible that the cognitive parts of Crus I and the sensorimotor parts of Crus I exist side by side.
In short, Crus I, which we find plays a central role in the utilization of visual information for motor planning, seems to occupy a role midway up this gradient of connection to the cortical motor hierarchy. Specifically, it is associated with connection to rostral MI and PM. Such connectivity would be consistent with our results since PM is thought to play a central role in sensorimotor transformations (Kurata and Hoshi, 2002). This is consistent also with a general pattern in fMRI studies and lesion studies showing that Crus I plays a role in more complex motor tasks (Miall et al., 2001; Schlerf et al., 2010; Küper et al., 2012).
Notably, there are likely two major gradients of connection to the cerebral sensorimotor hierarchy in the cerebellum, with a reversed (mirror-imaged) order (Buckner et al., 2011): one in the anterior cerebellum extending into the posterior lobe, and one in the more posterior parts of the posterior lobe extending into its anterior parts. This is much in line with the two body representations within the cerebellum, one in the anterior lobe (lobules III–V, with some extensions into VI) and one in the posterior lobe (lobule VIII) (Grodd et al., 2001). These lobules are primarily connected with MI, but also with PM (Buckner et al., 2011; Glickstein et al., 2011). Our findings of additional minor contributions of lobules V, VI, and VIII therefore make good sense. Although we did not find a significant correlation between the use of visual information to generate motor commands and oculomotor disorders based on the clinical ataxia score, the additional but small contributions of lobules IX and X suggest some contribution of disordered oculomotor control (Baier et al., 2009).
In earlier work, we have found deficits in more anterior lobules to be associated with reaching adaptation tasks (Rabe et al., 2009; Donchin et al., 2012). In all of our studies, performance on motor tasks has been associated with more than one cerebellar area. However, a gradient is also clear. Specifically, the ability to adapt to force fields depends primarily on lobules IV and V. The ability to adapt to cursor rotation depends primarily on lobules V and VI. The ability to incorporate visual information in planning depends primarily on Crus I. The tasks we have explored can be arranged in a loose hierarchy of function, with increasing relationship to sensorimotor integration and planning. This is consistent with existing views of the division of function in the motor cortical hierarchy. For instance, Shadmehr and Wise (2005) describe the motor hierarchy from posterior parietal through premotor cortex and to primary motor cortex as a series of transformations from an abstract, task-oriented, and visual space to a low-level description focusing on forces and torques.
Thus, we see the current results, especially in the context of our earlier findings and the existing neuroanatomical literature, supporting the idea that the anterior lobules of the cerebellar cortex are arranged in a rostrocaudal gradient of connectivity with cerebral cortex and that this gradient reflects a preferential involvement of more caudal cerebellar lobules in more abstract and complex aspects of reaching planning and execution.
Footnotes
- Received March 5, 2012.
- Revision received August 31, 2012.
- Accepted September 4, 2012.
This work was supported by the Israeli Science Foundation.
- Correspondence should be addressed to Opher Donchin, Department of Biomedical Engineering and Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, P.O. Box 653, Beer Sheva 81405, Israel. donchin{at}bgu.ac.il
- Copyright © 2012 the authors 0270-6474/12/3216274-11$15.00/0