Abstract
Intended reaches triggered by exogenous targets often coexist with spontaneous, automated movements that are endogenously activated. It has been posited that Parkinson's disease (PD) primarily impairs automated movements, but it is unknown to what extent this may affect multijoint/limb control, particularly when patients are off their dopaminergic medications. Here we tested nine human patients with PD while off dopaminergic medication versus nine age-matched normal controls (NCs). Participants performed intentional reaches forward to a target in a dark room and then transitioned back to their initial posture. Upon target flash, three forms of guidance were used: (1) memory with eyes closed, (2) continuous target vision only, and (3) vision of their moving finger only. The trajectories of their arm joints were measured and their joint velocities decomposed into the (intended) task-relevant and the (spontaneous) task-incidental degrees of freedom (DOF). We also measured the balance between these two subsets of DOF as these movements unfolded. In PD patients we found that the incidental DOF values were abnormally variable during the retracting movements and prevailed over the task-relevant DOF values. By contrast, their forward intentional motions were abnormally dominated by the task-relevant components. Moreover, the patients abruptly transitioned between voluntary and automated modes of joint control, and, unlike NCs, the type of visual guidance differentially affected their postural trajectories. These findings lend support to an emerging view that there is a loss of automated control in PD patients that contributes to impairments in voluntary control, and that basal ganglia–cortical circuits are critical for the maintenance and balance of multijoint control.
Introduction
When performing instrumental activities of daily living, we may voluntarily monitor some portions of the movement while other portions spontaneously change, without much conscious awareness. For example, during a reach, once the goal is acquired, we often deploy a motor program that launches the hand forward and retracts the arm to the resting position without our consciously thinking about which joints are rotating, flexing, extending, adducting, or abducting. Often both our covert-spontaneous and overt-intentional modes of action work together, and we can easily multitask fluidly, without completely freezing one movement in favor of another. This is not the case in patients with Parkinson's disease (PD) (Isoda and Hikosaka, 2007; Hikosaka and Isoda, 2008, 2010; Heilman and Valenstein, 2011), who typically have more problems initiating internally evoked (endoevoked) than externally evoked (exoevoked) movements, a phenomenon called "paradoxical kinesia" (Heilman and Valenstein, 2011). Assessing the movements of participants with PD allows us to test hypotheses about the role of basal ganglia–cortical circuits in mediating intentional versus more automated controls of action.
The basal ganglia are critical not only for the initiation and maintenance of movements, but also for the learning and maintenance of procedural memories (Jog et al., 1999; Lehericy et al., 2005; Yin et al., 2009). The loss of dopamine in the posterior regions of the putamen—a region of the basal ganglia associated with the control of habitual behaviors and procedural memories—forces PD patients to more heavily rely on conscious planning and intentional guidance (Chevalier and Deniau, 1990; Albin et al., 1995; DeLong and Wichmann, 2009). In contrast, conscious-intentional guidance—thought to be mediated by the caudate and rostral putamen (Middleton and Strick, 2000; Yin et al., 2004, 2005a,b, 2006)—appears to be relatively spared from the degenerative process early in the course of PD (Kish et al., 1988). Because of this dichotomy, it has been suggested that many of the behavioral difficulties, including bradykinesia (slowness of movement), lack of flexibility in switching between tasks, and an inability to multitask, can be at least in part traced back to the deterioration of this endogenous/automated control system. A recent hypothesis suggests that abnormal function of the automated control system may also “impede the expression of goal-oriented actions” in PD patients (Redgrave et al., 2010, p 760).
This postulate has been challenged by studies of habitual learning (Frank, 2005; Shohamy et al., 2006), but the results have been confounded by medication status. Furthermore, using carefully controlled tasks, other studies have shown intact trial-and-error, habitual learning in PD patients (Swainson et al., 2006; de Wit et al., 2011). Thus, it is unclear whether the deficits were due to impaired procedural/habitual learning or to impaired memory or attentional processes (Swainson et al., 2006).
In the present study, we compared the performance of participants with PD, tested off dopaminergic medications, and age-matched normal controls (NCs) as they reached forward to an external target and retracted toward the initial posture. We specifically addressed the balance between voluntary and automated modes of controlling the degrees of freedom (DOF) of the arm.
Materials and Methods
Participants
We asked nine patients with PD (one female, eight males) and nine age-matched healthy subjects to perform 3D pointing motions (forward and back) in the dark. Clinical and demographic descriptions of the PD patients are presented in Table 1. Standard kinematic analyses of the forward hand motions of these subjects were presented by Adamovich et al. (2001).
Table 1. Demographic and clinical features of nine PD patients tested in the "off" state
Briefly, the PD patients were all mild to moderate in degree (Hoehn and Yahr Stages 2 and 3; Hoehn and Yahr, 1967) with mean (SD) Unified Parkinson's Disease Rating Scale motor scores of 25.96 (5.34). All patients had clinically typical PD, as reviewed by at least one movement disorder specialist, and their motor disabilities were responsive to antiparkinsonian medications. No patient had any off-state action tremor or dyskinesia of more than minimal amplitude. All subjects were right handed (Oldfield, 1971) and reached with their right arm. PD patients were studied in the "off" state (had not taken their antiparkinsonian medication for at least 12 h before testing) (Langston, 1991; Defer et al., 1999).
Rationale
There are different levels of intentionality and spontaneity in everything that we do. One way to examine such levels is by studying the influences of fluctuations in the dynamics on the variability of movement parameters at different levels (Torres, 2011). Fluctuations in dynamics can be affected by manipulations of the form of sensory guidance. Here the focus is on motor variability at the postural level. We study the recruitment, release, and balance between two complementary sets of DOF in the arm, which we have parameterized as rotational joint angles. We study the DOF behaviors under different forms of visual and kinesthetic sensory guidance, because we want to know how this sensory information is integrated during the reaches in patients with PD in relation to age-matched NCs. To this end, we examined reaching movements projected outwardly to an external target (or to the memorized visual target) and reaching movements projected inwardly toward a kinesthetically defined goal, a well-defined arm posture. In both cases, regardless of the level of intentionality or the type of goal (visually based or kinesthetically based), each segment has complementary, coexisting sets of joint angles in the proprioceptive domain. We studied the balance and interactions of these complementary degrees of freedom as the motion unfolded.
For each of the forward and retracting components of the reach, we separated the DOF of the arm into the dimensions relevant to the goals of the reach segment and the dimensions that were incidental to the segment's main goals. These supportive degrees of freedom are very unlikely to be under overt awareness. In that sense, we refer to them as more "automated" than their task-relevant counterparts. We refer to the task-relevant DOF values as those under more "voluntary" control. Among the purposes of this work is to help define objective criteria to establish such a dichotomy between these DOF values. To this end, we first investigate, in healthy control subjects, the effects that changes in sensory guidance may exert on the reach dynamics along the movement trajectory, specifically on the speed, and then we investigate the effects that these changes in speed, if any, exert on the variability of the two complementary sets of DOF values in the healthy system. Finally, we compare the postural performance of the control subjects with that of patients with PD.
We know from prior work that the task-relevant components in our approach contribute to maintaining low variability in the intended course of the reach despite fluctuations in the dynamics of the movement (Torres and Zipser, 2004; Torres, 2010). Their variability can be linked to the notion of motor equivalence (Scholz et al., 2000, 2007; Latash et al., 2002; Martin et al., 2009). The task-incidental degrees of freedom, in contrast, vary greatly with changes in dynamics (Scholz and Schoner, 1999; Latash et al., 2001, 2002; Schoner and Scholz, 2007) and are typically related to the updating and correction of errors from trial to trial. We test here whether these task-incidental components may relate to more automated aspects of reaches not explicitly under conscious control by examining the patterns of variability that the different forms of sensory guidance evoke. A simple criterion that we have followed in the past to determine different levels of intentionality is to examine the motor variability. The postural trajectories described by intended reaches are not only less variable than those described by more automated reaches; they are also conserved (along with various trajectory metrics). In contrast, the postural trajectories described by more automated reaches are not only more variable, they also change with fluctuations in the dynamics.
We hypothesized that if patients with PD have dysfunctional striatal/frontal networks that impair habitual/automatic actions, then patients who are off their dopaminergic therapy should show marked impairments in the automated components of actions and have an abnormal reliance on the conscious task-relevant components. If, on the other hand, habitual/automatic control of actions is relatively preserved in PD, we may see an opposite pattern.
Description of procedures or investigations undertaken
The subjects were seated with their right arm flexed at the elbow, with the forearm semipronated and vertical such that the hand was on a sagittal plane that was ∼10 cm to the right of the subject's ear (Fig. 1A). The subjects faced a programmable robot arm (Hudson Robotics; CRS 255A) that presented targets in 3D space. A small light-emitting diode was attached to the tip of the robot's arm and served as the target. Two optoelectronic cameras (Northern Digital) were used to record positions of five infrared emitting diodes (IREDs) that were affixed to the following segments of the subject's limb: the acromial process of the scapula (shoulder), the lateral epicondyle of the humerus (elbow), the ulnar styloid process (wrist), as well as on the nail of the index fingertip and on the robot arm tip. The subjects were asked to fully extend their right forefinger and to not move it with respect to the wrist. Two-dimensional coordinates of the IREDs were monitored by each camera. Data from both cameras were sampled at 100 Hz and stored as 2D binary files. Then they were low-pass filtered using a Butterworth filter with a cutoff frequency of 8 Hz, and three-dimensional coordinates were reconstructed.
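As a rough illustration of this preprocessing step, the sketch below applies a zero-phase 8 Hz Butterworth low-pass filter to marker coordinates sampled at 100 Hz. The filter order, the zero-phase (filtfilt) choice, and the array layout are assumptions for illustration, not the original pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass_markers(xyz, fs=100.0, cutoff=8.0, order=4):
    """Zero-phase Butterworth low-pass filter applied to each coordinate
    column of an (n_samples x 3) marker trajectory (hypothetical layout)."""
    b, a = butter(order, cutoff / (fs / 2.0))  # normalized cutoff frequency
    return filtfilt(b, a, xyz, axis=0)

# Example with a synthetic noisy fingertip trace sampled at 100 Hz.
t = np.arange(0.0, 1.0, 1.0 / 100.0)
raw = np.column_stack([np.sin(2 * np.pi * 1.5 * t),
                       np.cos(2 * np.pi * 1.5 * t),
                       0.1 * t]) + 0.002 * np.random.randn(t.size, 3)
smoothed = lowpass_markers(raw)
```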
Figure 1. Methods. A, Schematic diagram shows the participant seated and pointing at the central target. The projection of all five targets on the frontal plane is shown next to the arm in the initial position, in relation to the five targets, in a slightly rotated side view. B, The experimental epochs for the three sensory-guidance conditions. Each subject was seated in a relaxed position to bring the arm to the initial posture. The target flashed for 1.5 s, and then, depending on the block, the subject reached to the target from memory with eyes closed (No Vision), pointed with the target left on but without vision of the forelimb (Target Vision), or pointed with vision of the moving finger but not of the target (Finger Vision).
The robot randomly presented five targets in two planes (Fig. 1A). Four targets formed a diamond in a frontal (coronal) plane. The geometric center of this diamond was on a sagittal plane that was defined by the subjects' right shoulder but was ∼43–48 cm in front of the right shoulder. The two diagonals of the diamond were ∼50 cm long. The fifth target was located on a sagittal plane directly in front of the right shoulder, but ∼12 cm further from the shoulder than the four-target diamond. Exact distances from the shoulder were individualized for each subject by first positioning the furthermost (fifth) target at a distance approximately equal to the length of the subject's arm with the fingers clenched. This positioning of the subject relative to the target prevented the subject from having to fully extend the arm to reach any of the targets.
All subjects reached using their dominant right forelimb. Their initial limb position, as mentioned above, was with their right arm flexed at the elbow, with the forearm semipronated and vertical such that the hand was on a sagittal plane that was ∼10 cm to the right of the subject's ear. The subjects attempted to "touch" the target with their right forefinger and returned their arms to their initial positions in one smooth movement, without pausing at the target. Three conditions (Fig. 1B) were designed to examine the contributions of visual cues during the forward-reaching movement: extrapersonal vision (target cue), a body-centered visual cue (the moving forefinger), or no cues (darkness). In each of the three conditions (No Vision, Target Vision, Finger Vision), targets were initially transiently presented as points of light within 3D space in a completely darkened room.
In the body-centered visual cue condition, the moving hand's location was marked by a point of (light emitting diode) light placed on the tip of the forefinger. In the other conditions (target cue and total darkness), this body-centered cue was not present. In the target cue condition, the target light remained illuminated during movement, but the moving finger could not be seen. In the no cue condition (No Vision), neither the finger nor the target light was illuminated during the reaching movement.
In all three experimental conditions, the robot arm held the target position for 1.5 s, during which time the subject was able to view the target. Then a short auditory signal (tone) instructed the subjects to close their eyes in the No Vision condition, at which time the robot arm retracted. A second auditory tone 1 s later signaled subjects to “touch” the memorized target location with their forefinger and then to bring their arm back to the initial position in smooth continuous movements without “corrections” near the target. The subject's eyes were closed throughout the movement.
In the Finger Vision condition, the timing of target presentation and response initiation was the same as in the No Vision condition; however, the LED on the fingertip remained illuminated and visible throughout the movement. During the Target Vision condition, the LED on the forefinger was turned off, but the target light remained on, and the experimental condition was otherwise the same as the former conditions. The intensity of the target LED on the robot arm was adjusted to prevent the possibility of the subject seeing his/her forefinger at a distance >1–2 cm from the target. In addition, a strong overhead light was turned on between trials to prevent dark adaptation. Thus, the only available visual information throughout the movement in this condition was the point/light target.
Ethics
All procedures were undertaken with the understanding and written consent of each subject. The Institutional Review Boards of Rutgers University and the University of California, San Diego, approved the study. The study conforms to the Code of Ethics of the World Medical Association (Declaration of Helsinki).
Statistical and analytical measures
To address the hypothesis of voluntary versus automated control, we focus on the arm joints and the DOF decomposition. However, the degrees of freedom of the arm map to the position and orientation of the hand. Thus, we also examine two additional, related sets of parameters: the end point error around the target and a symmetry measure related to coordinate transformations between visual targets and arm joint configurations (Table 2).
Table 2. The table shows the slope, intercept, and regression values for the trajectory symmetry ratios in the forward (1) and backward (2) motions for each sensory condition and subject type
First, we focus on the end point errors at the target. We investigate how the manipulations of the source of sensory guidance affect the patterns of variability around the target in the participants with PD in relation to the NCs. The purpose of this measurement is twofold. We want to identify the most effective source of sensory guidance for reducing end point error in the patients, important both for understanding the nature of motor control in a system with specific dysfunction of basal ganglia–cortical loops and for therapeutic purposes (e.g., when pinching a piece of food on the plate and then bringing it to the mouth). We also want to know to what extent excess end point variability around the target affected the retracting postural path, that is, whether the variability was such as to disrupt the continuity of the arm postural history along the reach loop.
Second, we examine a trajectory metric that relates to the transformation from visual target to kinesthetic postural configurations when constructing a path from the initial posture of the arm to the final target, or when bringing the hand back from the visual target toward a final posture. To perform these actions one has to resolve both the forward and the inverse map between external goals and internally sensed configurations. The metric that we use yields a symmetry that speaks of a conservation of hand displacements Δx when converting to joint displacements Δq under coordinate transformations.
End point errors.
We separately analyzed the forward and the backward segments of a continuous, smooth, single reaching action performed without corrections at the visual target. In the forward segment for each trial, we computed the end point errors between the three-dimensional positions of the target presented by the robot and the end of the forward path of the forefinger. We used Euclidean distance (in Cartesian coordinates) to compute these errors. This measure can give us a sense of end point error variability. A significant reduction in variability can be interpreted as an increase in reach accuracy in visual space. We examined this increase in accuracy as a function of the form of sensory guidance to reveal the most effective form of guidance in the PD cases in relation to NCs.
For the retracting reaches, we measured instead the variability of the final posture in relation to the initial posture (as the instructed goal in this segment was to return the arm to the initial posture). This metric gives us a sense of consistency in the kinesthetic domain. The effects of the form of sensory guidance on this postural error in PD—in relation to the NCs—also provides us with a sense of the form of guidance most effective in these patients when designing therapies to help them in activities of daily living, e.g., coordinating the joints of the arm to correctly position the hand to grasp a cup of tea, or when folding the laundry, etc.
The two metrics of end point error provide a measure of what form of guidance permits better integration in PD of visual and kinesthetic cues as the forward and back reach loop unfolds.
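As a minimal sketch of these two error measures (the array names, shapes, and the joint-space distance used for the postural error are illustrative assumptions), the computation could look as follows:

```python
import numpy as np

def forward_endpoint_errors(finger_end_xyz, target_xyz):
    """Euclidean distance between the fingertip position at the end of each
    forward segment and the target presented on that trial.
    Both inputs are (n_trials x 3) arrays of Cartesian coordinates."""
    return np.linalg.norm(finger_end_xyz - target_xyz, axis=1)

def final_posture_errors(final_angles, initial_angles):
    """One possible postural error for the retracting segment: distance in
    joint-angle space between the posture at the end of the retraction and
    the initial posture of that trial ((n_trials x 7) arrays of angles)."""
    return np.linalg.norm(final_angles - initial_angles, axis=1)

# End point (or postural) variability for a given target and condition can
# then be summarized as the spread of these per-trial errors, e.g.,
# np.std(forward_endpoint_errors(ends, targets)).
```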
Hand-trajectory metric.
We use a geometric symmetry derived previously (Torres, 2010) to assess metric-distance conservation under transformation of coordinates. Metric-distance conservation under transformation of coordinates means that a displacement of the hand Δx is congruent with a body displacement at the joints Δq (despite the nonlinear, many-to-one map from body to hand configurations) (Torres and Zipser, 2002). These ratios have been used in other patient populations to help us identify the most effective form of sensory guidance that helps restore sensory-motor performance toward typical levels in the context of visually guided reaches (Torres et al., 2010).
When the system conserves the desired hand displacements in visual space under transformation to displacements in kinesthetic space, we expect a distribution tightly centered at ½ for each of the ratios. The two ratios are interrelated. Both denote geometric properties of the curve in relation to the Euclidean straight line. The curve described by the movement of the hand can be characterized as a "straight line," that is, the shortest-distance path with respect to a non-Euclidean distance metric. The two ratios in human data measure the departure from isometric transformation, i.e., nonconservation of the shortest-distance path (between hand and target) under transformation between visual and kinesthetic coordinates. A large spread in the scatter means that the integration of different forms of sensory information from different sensory spaces (e.g., visual and proprioceptive) is inadequate. It indicates that the desired (or planned) magnitude and direction of the hand displacements were not conserved under transformation into the arm's joint displacements.
The perimeter ratio relates to the bending of the curve and reflects the length properties of the hand paths. The area ratio relates to the twisting of the curve and reflects length properties of the arm postural (rotational) paths. Excess bending reflects nonconservation of the shortest-distance property mostly due to errors in the visual-space domain. Excess twisting reflects nonconservation of the shortest-distance property in the kinesthetic-space domain. Conservation of both symmetries and their covariation is registered by a tight linear regression fit through the scatter and reflects the isometric transformation from visual to kinesthetic coordinates. Possible scenarios in the patients with PD are as follows: (1) too much bending at the start of the reach in the first half of the motion, inducing perimeter ratios that are <½, or too much bending toward the end in the second half of the motion, inducing a >½ perimeter ratio; (2) analogous situations with the area, signaling excess rotation earlier or later in the path and lack of proper coarticulation of the joint angles; (3) abnormal spread, indicating poor integration of visual and kinesthetic information; and (4) lack of covariation of the ratios, indicating departure from an isometric (distance-metric-preserving) transformation. Notice here that intentionality is denoted by a very specific goal defined for task completion and that in the patient case some form of guidance will restore the symmetries and their covariation. In other words, when patients restore these metrics, we see that the manifestation of these symmetries and their covariation is not merely a byproduct of the biomechanics of the arm and of the physical laws of motion. Rather, they are under the control of the nervous system, and their restoration can help us identify which form of guidance is the most effective for an injured system.
In previous empirical work, the slope of the line remained conserved (despite changes in dynamics and level of skill at performing the task), unless there was unintentional curvature of the hand trajectory (Torres, 2010). Unintentional curvature (hand paths that are not the shortest for the task) emerges, for instance, during deadaptation “after effects.” Here, the shortest path should be straight to the target, but the hand instead follows a highly curved (longer than desirable) path in the absence of obstacles or of any other potential deterrents from the desired straight line. During reaches that are unintentionally curved, and thus longer than optimal in the spaces of interest (visual and proprioceptive), the slope of the regression tilts differently depending on the area–perimeter interplays. A possible outcome here in the patients with PD is that the scatter from the retracting reaches generates a regression fit with a different slope from those of the forward reaches. This would flag the failure to conserve the intended course of the action in both the visual and the kinesthetic domains. Motivated by this hypothesized outcome, we assess the extent to which both the symmetry and the covariation of the ratios fail in PD.
To obtain the ratios, each hand trajectory (forward or backward) was measured as a unit-speed curve with a fine partition. The bending of the curve relative to the straight line was obtained by computing the normal distance from each equally spaced point along the curved path to the corresponding point on the straight line. The point of maximum bending was obtained and used to compute two ratios. The area ratio was obtained as the quotient between the partial area enclosed between the line and the curve up to the point of maximum bending and the total area. The perimeter ratio was similarly obtained as the quotient between the partial and total perimeters (see Fig. 3A). We assessed the similarity of these ratios and the degree to which they violated the symmetry (departed from an isometric transformation of coordinates).
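To make the procedure concrete, here is a rough numerical sketch of one way to compute the two ratios, assuming the hand path has already been resampled at equally spaced arc-length steps and using point-to-point distances to the matched straight line as the bending measure; the exact implementation used for the analyses may differ.

```python
import numpy as np

def area_perimeter_ratios(path):
    """Area and perimeter ratios of a resampled hand path (n_points x 3)
    relative to the straight line joining its endpoints."""
    n = len(path)
    line = np.linspace(path[0], path[-1], n)          # matched points on the straight line
    bending = np.linalg.norm(path - line, axis=1)     # deviation at each matched point
    k = int(np.argmax(bending))                       # index of maximum bending

    seg_len = lambda p: np.linalg.norm(np.diff(p, axis=0), axis=1)
    curve_seg, line_seg = seg_len(path), seg_len(line)

    # Perimeter ratio: partial perimeter (curve + line up to max bending, plus
    # the max normal distance) over the total perimeter of curve and line.
    p_partial = curve_seg[:k].sum() + line_seg[:k].sum() + bending[k]
    p_total = curve_seg.sum() + line_seg.sum()

    # Area ratio: enclosed area up to max bending over the total enclosed area,
    # approximated by trapezoidal integration of the deviation along the line.
    trapz = lambda y, dx: np.sum(0.5 * (y[:-1] + y[1:]) * dx)
    a_partial = trapz(bending[:k + 1], line_seg[:k])
    a_total = trapz(bending, line_seg)

    return a_partial / a_total, p_partial / p_total
```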
Similarity of the area and perimeter ratios.
In the metric above, the difference in variance between the area and the perimeter ratios is tied to the evolution of the acceleration phase of the reach. This segment of the motion is used to compute the numerator quantity involving values up to the point of maximum trajectory bending. The Friedman test (Zar, 1996) was used to analyze the pairwise differences in the variance of the area and the perimeter ratios between the patients with PD and the NC participants, pooled across targets and trials.
Decomposition of the arm's DOF from the transformation (the linearization of the map) between joint-angle and hand configurations.
We recovered the joint angles that best reconstructed the arm positional trajectories recorded by the markers. Figure 2A shows the reconstructed trajectories (thin lines) connecting the real trajectories (dots) of the various markers. Figure 2B shows the joint-angle paths for seven of the joint angles of the arm. Notice that other parameterizations of arm posture can include scapular motions as well, thus comprising 10 DOF. With 7 DOF though, we can illustrate the general use of this methodology and the separation according to task goals.
Figure 2. Degrees of freedom decomposition steps. A, Reconstruction of the markers' trajectories in the three-dimensional physical space from the seven joint angles recovered from the sensor paths. The fine line is the reconstruction, and dots are the real marker outputs at the shoulder, elbow, wrist, and end point (shoulder paths are occluded by the arm schematics). For simplicity, only the initial and final postures are shown. B, Joint-angle paths (in degrees) to move the hand to one target and retract it as in A. Notice that each joint rotates back close to where it started. C, The DOF decomposition in each of the forward and retracting segments for one subject and one target (Fig. 1, Target 2, to the right of the midline of the subject). Values of the projection, spanning from −1 to 1, were normalized between 0 and 1 to later compute the average across each set of task-relevant and task-incidental degrees of freedom. D, Schematics of the elements used in the practical version of our theoretical model to study the joint velocities' variability. Seven-dimensional joint trajectories in Q posture space map through f to the hand–arm configurations in the space X of goals and configurations (goals relate to target positions and orientations of the hand and arm plane in this task). All elements in X are q dependent. The map r relates current hand configurations to physical distances (measured by the error or cost; see Materials and Methods). The composite (r ∘ f) tracks the errors due to both posture Δq and hand Δx physical displacements measured by the sensors until all goals are attained and r = 0.
The map from joint angles to hand configurations that are goal driven has no inverse in closed form. This is a nonlinear, many-to-one map that makes the inverse-kinematics transformation problem underdetermined. The inverse-kinematics transformation problem is as follows: given a desirable hand directional displacement (could be a rotation and/or translation evoked by external goals), determine an internal postural rotational displacement that will move the hand in the given external direction and extent specified by that desirable hand configuration.
We have resolved this problem with a locally linear isometric embedding that provides, in general, a linear correspondence between the external goals of a task and the adequate internal postural configurations to accomplish those goals (Torres and Zipser, 2002). The linearization of the original map according to this geometric prescription permits the decomposition of the joint velocities into the DOF that directly correspond to the goals of the task at hand (exoevoked task relevant) and those corresponding to the DOF that are redundant for that task (endoevoked task incidental). In linear algebra terms, they correspond to the null and the range subspaces of the linear transformation that our solution builds (Torres and Andersen, 2006, their appendix), but we will simply refer to them here as task-relevant and task-incidental DOF.
Critical to this solution is the introduction of an error (or cost) function that quantifies the physical distance traveled by the hand due to arm-joint physical displacements. This error function is composed with the function that maps configurations from posture to hand displacements, so that when the physical displacement of the arm joints, which affects the hand physical position, is quantified, we can also obtain the error in relation to the target in physical space (Fig. 2D). Movements directed to a visual target reduce this distance to 0 when and only when the hand is at the target. Because of this property and the fact that we measure physical distances in body space and in three-dimensional spaces, we can obtain the Jacobian matrix (the derivative of the map linking arm postures to hand displacements) from physical displacements alone, independent of the coordinate functions used to represent the points in posture space or in hand space. Using this theoretical idea, the equations necessary to quantify movement trial-to-trial variability in the actual joint velocity domain in this way are as follows:
f: Q ⊆ ℝⁿ → X ⊆ ℝᵐ (1)

is the map from arm postures to hand configurations; e.g., n = 7 (three rotations at the shoulder, two at the elbow, and two at the wrist) and m = 3 (positions in three dimensions, up–down, left–right, front–back).

r: X → ℝ⁺ (2)

where r is the map from hand configurations to the positive real numbers in the real line. This function is 0 if and only if the current position of the hand equals the target position. In general, m is the number of goals that define the task and n ≫ m.

(r ∘ f): Q → ℝ⁺ (3)

is the composite function that enables us to measure the error in both spaces as the arm moves and the hand moves. We minimize this composite map, and to update movements, we follow the negative gradient of this function:

∇q(r ∘ f) = ∇rx · J (4)

which is used to obtain the joint displacements that bring the physical distance to the target in three dimensions to 0.

In Equation (4), the gradient of r with respect to x, ∇rx, is a 1 × m vector of partial derivatives of r with respect to each of the m goal components; J is an m × n matrix whose coefficients are the partial derivatives of f with respect to each of the n joint components q.
In the theoretical model, we used the metric tensors appropriate to each space parameterization (the set of coordinate functions chosen to represent points in that particular space) and could estimate the similarity transformation matrix between our estimated model metric and the actual data displacements in each space (Torres and Zipser, 2002). The coefficients of this metric when we followed the “natural gradient” in the error minimization process drove the rate of change in error-based learning (Amari, 1999). Here we do not need to rely on tensor-based methodology, which under numerical approximations (Torres and Zipser, 2002) could be prone to introduce errors and biases in the computation of the proposed task relevant/task incidental decomposition. Instead, we can estimate the Jacobian matrix directly from the data using the physical displacements that our sensors are measuring in the hand and in the arm spaces.
The data provide us directly with what we need to solve for J in Equation (4), dq = ∇rx · J, where ∇rx is directly measured by the sensors; dq is also provided by our reconstruction algorithm from the sensors, with minimal error to reconstruct the three-dimensional displacement trajectories of the shoulder, elbow, wrist, and the tip of the hand (Fig. 2A,B). Thus we can obtain J = ∇rxᵀ · dq, where the transpose of the gradient in X is m × 1 and dq is 1 × n. In this way, we obtain directly from the physical positional displacements of the joint angles Δq and the hand Δx the relative contributions of each joint to the decrements in the physical three-dimensional distances from each component of the hand's current position to the target position.
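A literal transcription of this estimation step into code might look like the following sketch (array names are hypothetical; in practice the gradient and the joint displacement would come from the sensor measurements and the joint-angle reconstruction at each sample of the trajectory):

```python
import numpy as np

def estimate_jacobian(grad_r_x, dq):
    """Data-driven m x n Jacobian estimate at one sample of the reach,
    J = (grad_r_x)^T * dq, where grad_r_x (length m) is the measured gradient
    of the distance-to-goal function in hand space and dq (length n) is the
    measured joint-angle displacement over the same sample interval."""
    grad_r_x = np.asarray(grad_r_x, dtype=float).reshape(-1, 1)  # m x 1
    dq = np.asarray(dq, dtype=float).reshape(1, -1)              # 1 x n
    return grad_r_x @ dq                                         # m x n outer product

# Example with m = 3 goal dimensions and n = 7 joint angles.
J = estimate_jacobian([0.2, -0.1, 0.05], np.random.randn(7) * 0.01)
```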
The paths of relative joint displacements in time (to build the joint velocity vectors) Δq that best subserve the hand's displacements Δx over the course of the reach can provide a geometric description of how the variability of the movement dynamics unfolds. They can be helpful to distinguish aspects of the movement dynamics that remain conserved from aspects of the movement dynamics that change along the motion path in resonance with the natural trial-to-trial movement dynamic fluctuations. Critical to this decomposition step is the manipulation of sensory guidance and the effects that different sensory sources have on the redundant DOF. While the task-relevant degrees of freedom are known to remain with low variability, perhaps to keep the intended course of the action on track, the redundant degrees of freedom covary with the changes in sensory guidance (Torres et al., 2010). They serve to channel out through movement the most effective form of sensory guidance in a compromised system. In our experience with studies of redundancy in movements, these have been the most informative degrees of freedom with regard to the automated/spontaneous aspects of behavior.
In our formulation, the exoevoked range (denoting the task-relevant DOF) of the geometric transformation by construction corresponds to the intended-goal components (less affected by changes in the endogenously generated dynamics). Its complement, the endoevoked task-incidental component, relates to the automated mode of the motion (changing with fluctuations in the endogenously generated dynamics). Our introduction of the error function (Torres and Zipser, 2002, 2004) linking both posture and hand spaces to the notion of goals' completion permits the practical use of this theoretical model to quantify in the actual data the partition of relevant subspaces in a coordinate-free way. Since we rely only on the measurements of physical displacements directly from the sensors to obtain the Jacobian, we can avoid spurious errors due to the computation of derivatives from the sensor data, as well as the issue of having to choose one set of coordinate functions/metric over another to best represent the phenomena.
The physical distances that our limbs travel are invariant to the units that we choose to describe them. We do not know how the CNS parses movement information, as it could do so in multiple spatiotemporal scales. Quantifying movement in the proposed way can help us understand and measure natural movement variability in relation to intended-goal compliance, and also in relation to spontaneous transitions related to the self-motion components of the reach. This dichotomy will be critical to establish signatures of intentionality in segments from complex movements (Torres, 2010) and to contrast their conservation (low variability or endodynamic invariance) with the nonconservation of spontaneous segments (high variability and endodynamic dependencies).
Defining forward reach goals.
Each task spans different goal-related dimensions. The dimensions defining the pointing goals in this task were three, corresponding to the x, y, and z positional coordinates of the hand at the target. The decomposition of the 7 DOF joint-angular velocity vector in this case spans 3 DOF for the intended-goal (task-relevant) components and 4 DOF for the automated (task-incidental) components.
Defining goals for reaching back to a sensed posture.
The dimensions defining the goal in this portion of the task were five. They corresponded to the spatial position of the hand (three dimensions, all of which depend on the arm posture) near the face at ear height; the orientation of the palm of the hand (one dimension, the angle defined by the Euler–Rodrigues parameter; Altmann, 1986), with the palm of the hand facing the ear, which also depends on the arm posture, φ(q); and the orientation of the plane of the arm (one dimension, which also depends on the arm posture, θ(q); defined by Soechting et al., 1995). This is the angle that a vector normal to the plane spanned by the upper arm and the forearm makes with the horizontal plane. In this case, the vector normal to the plane of the arm was intended to be approximately parallel to the horizontal plane when the arm returned to its initial configuration. Whereas the forward motion decomposition was 3 DOF for the target and 4 DOF for the self-motion dimensions, in the backward segment the 7 DOF joint velocity vector spanned 5 DOF for the intended goals of pulling back to a certain position and inclination of the plane of the arm with the hand oriented toward the ear (task-relevant components), and 2 DOF for the remaining automated (task-incidental) components.
To obtain the orientation of the hand, we used the Euler–Rodrigues angle vector parameterization of rotations, φ = arccos[(A11 + A22 + A33 − 1)/2], from A, the rotation matrix at the hand, and the unit vector ê = [e1 e2 e3]ᵀ, defined as e1 = (A32 − A23)/(2sinφ), e2 = (A13 − A31)/(2sinφ), and e3 = (A21 − A12)/(2sinφ). The angle of the plane of the arm is given by cos(θ) = 〈n⃗, (0, 0, 1)〉, where n⃗ = u⃗ × v⃗ for u⃗ and v⃗ unit vectors from the shoulder to the elbow and from the shoulder to the wrist, respectively (Fig. 3B).
Figure 3. Methods: definition of area–perimeter ratio and inclination of the plane of the arm. A, Perimeter ratio. The curved hand movement trajectory was projected on the straight line, and a fine partition at equally spaced points was obtained. The maximum normal distance from the curve to the straight line (marked with a red star) was obtained. The partial perimeter enclosing the yellow area up to the point of maximum bending (max normal distance) was computed as Ppartial = Xpartial + Ypartial + Z, where Xpartial is the length of the curve from beginning to max bending, Ypartial is the length of the straight line from beginning to max bending, and Z is the max normal distance (max bending value). The perimeter ratio is the partial perimeter divided by the total perimeter (length) of the curve and the straight line. The area ratio was computed similarly but using the area enclosed between the line and the curve and the partial area (yellow) up to the point of maximum bending. B, Schematics of the inclination of the plane of the arm obtained from the unit vectors running along the red vectors spanning a plane defined by the positions of the upper arm and the forearm. Vectors run from shoulder to elbow and from shoulder to wrist. The angle θ between the unit vector n⃗1 normal to this plane and the unit vector n⃗2 perpendicular to the horizontal plane defines the inclination of the plane of the arm.
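The two orientation quantities defined above can be sketched numerically as follows (marker positions are 3-vectors; the degenerate case sin φ = 0 and exact sign conventions are glossed over in this illustration):

```python
import numpy as np

def hand_orientation_angle(A):
    """Euler-Rodrigues rotation angle phi (radians) and unit axis e of the
    3 x 3 rotation matrix A describing the orientation of the hand."""
    phi = np.arccos((np.trace(A) - 1.0) / 2.0)
    e = np.array([A[2, 1] - A[1, 2],
                  A[0, 2] - A[2, 0],
                  A[1, 0] - A[0, 1]]) / (2.0 * np.sin(phi))
    return phi, e

def arm_plane_inclination(shoulder, elbow, wrist):
    """Angle theta between the normal to the plane spanned by the upper arm
    and forearm and the vertical direction (0, 0, 1)."""
    u = (elbow - shoulder) / np.linalg.norm(elbow - shoulder)
    v = (wrist - shoulder) / np.linalg.norm(wrist - shoulder)
    n = np.cross(u, v)
    n /= np.linalg.norm(n)
    return np.arccos(np.clip(np.dot(n, np.array([0.0, 0.0, 1.0])), -1.0, 1.0))
```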
Data-based task-relevant vs task-incidental DOF decomposition.
In the theoretical approach to measuring the contributions of both the range and the null components of the joint-angle rotations as the movement unfolds, we projected each component on the unitary basis from the singular value decomposition of the metric in the tangent space to the joint-angle space (postural configuration manifold). The elements of the tangent space are the joint velocities. In our theoretical formulation of the inverse solution, we preserve the metric from the hand–goal configuration space (hand goal–configuration manifold) under coordinate transformation (Gray, 1998): Gq = JᵀGxJ, where Gx is 3 × 3 in the forward pointing case and 5 × 5 in the case of reaching back to the initial arm posture. The Jacobian matrix of partial derivatives is 3 × 7 in the former case and 5 × 7 in the latter. The resultant matrix, Gq, is 7 × 7 positive definite, and its coefficients, the gij values, define the new metric under change of coordinates (from hand to arm-posture configurations in this case). In the practical approach to the actual data, we use the Jacobian obtained from the physical displacements and set Gx to the identity matrix.
We use the singular value decomposition to factorize Gq = UΣV*, where U is an m × m unitary matrix in the real field K, the matrix Σ is an m × m diagonal matrix with nonnegative real numbers on the diagonal, and V* is an m × m unitary matrix over K (the asterisk denotes the conjugate transpose of V). A common convention, which MATLAB follows, is to order the diagonal entries Σi,i in descending order. In this case, the diagonal matrix Σ is uniquely determined by Gq (though the matrices U and V are not). The diagonal entries of Σ are the singular values of Gq.
In each version of the task we project the joint-angular velocity unit vector from the data trajectory onto U and extract the task-relevant and task-incidental components (three vs four or five vs two). In the theoretical model, the angular velocity vector is the natural gradient (length-minimizing) direction (Amari, 1999) with respect to the metric Gq in the tangent space to the manifold of postural configurations. Here, in the practical version of this model, implicit in the goals' contribution are the scaling factors related to the translational distance components and the rotational components, φ for the palm orientation and θ for the inclination of the plane of the arm. These scaling factors are important to characterize how the system drives the rates of change of rotations (e.g., degrees or radians) and linear translations (e.g., centimeters), whose interplay changes from task to task. In the model, these would correspond to relative changes in joint rotations and translations. Here we obtained the lengths of the projections properly normalized by the number of dimensions in each of the task-relevant and the task-incidental components and assessed the variability of the degrees of freedom across targets (1–5) and across all trials and subject groups (normal vs PD). Figure 2A–C shows the above described steps for one target (Fig. 1, Target 2, to the right of the body midline). In Figure 2A, the trajectories of the joints are shown along with the arm posture at the start of the reach and the final posture at the target. The reconstructed trajectories from the joint-angle paths are also superimposed in Figure 2A on the sensor trajectories. Figure 2B shows the joint-angle paths for the forward and retracting motions, whereas Figure 2C shows the decomposition of the task-relevant and task-incidental degrees of freedom in each segment.
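A condensed sketch of this decomposition, with the hand-space metric set to the identity as described above, might look like the following; the use of the first m singular directions for the task-relevant subset and the per-dimension normalization of the projection lengths are assumptions based on the description above.

```python
import numpy as np

def dof_decomposition(J, dq, n_goal_dims):
    """Split one joint-velocity sample dq (length n) into task-relevant and
    task-incidental per-dimension projection lengths, using the metric
    Gq = J^T J induced by the m x n Jacobian J (hand-space metric = identity)."""
    Gq = J.T @ J                              # n x n metric under change of coordinates
    U, s, Vt = np.linalg.svd(Gq)              # singular values in descending order
    q_hat = dq / np.linalg.norm(dq)           # unit joint-angular velocity vector
    coeffs = np.abs(U.T @ q_hat)              # projection lengths onto the singular directions
    relevant = coeffs[:n_goal_dims]           # task-relevant subset (m dimensions)
    incidental = coeffs[n_goal_dims:]         # task-incidental subset (n - m dimensions)
    return relevant.sum() / relevant.size, incidental.sum() / incidental.size

# Forward reach: n_goal_dims = 3 (target position); retraction: n_goal_dims = 5.
```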
ANOVA.
For each of the extracted task-relevant and task-incidental degrees of freedom (independent variables), we measured the significance of the effects of the form of sensory guidance (No Vision, Target Vision, Finger Vision) across target locations. We asked whether the form of sensory guidance exerted a significant effect on the interactions between these two subsets of the arm's degrees of freedom in these two types of reaches, in which the goals were different. First we assessed the normal performance, and then we compared this performance to that of the PD patients.
Automated vs intended prevalence.
Using ANOVAs, we also assessed the effects of the form of sensory guidance on the variability of the lengths of projections per dimension (for both the task-relevant and the task-incidental degrees of freedom). We examined whether their variability was larger when the motion was inwardly directed (no explicit visual goal) versus when the motion was outwardly directed (as defined by an external target).
We used a ratio of incidental/task-relevant degrees of freedom measured in each sensory condition to assess the relative contributions of the full movement's components: the automated transitions back and the intended-goal forward components. If the ratio was close to 0, the intended component dominated. If the ratio was >1, the automated component dominated the motion. A ratio close to 1 would reflect comparable contributions from both voluntary and automated modes. These analyses were performed for each of four segments of the reach trajectory: 1–25%, 26–50%, 51–75%, and 76–100%. They are reported as a function of the percentage of the path traveled. Similar ideas have been exploited previously with success to assess coordination in gait and other tasks (Latash et al., 2002; Dingwell et al., 2010).
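A small sketch of this ratio, computed per quarter of the normalized path from per-sample projection lengths such as those returned by the decomposition sketched earlier (sample-level averaging within quarters is an assumption):

```python
import numpy as np

def automaticity_ratio_by_quarter(incidental, relevant):
    """Incidental/task-relevant ratio averaged within each quarter of the
    normalized path (1-25%, 26-50%, 51-75%, 76-100%).  Inputs are per-sample
    arrays of equal length along one reach segment."""
    incidental = np.asarray(incidental, dtype=float)
    relevant = np.asarray(relevant, dtype=float)
    quarters = np.array_split(np.arange(incidental.size), 4)
    return [incidental[idx].mean() / relevant[idx].mean() for idx in quarters]

# Per the text: values near 0 -> intended (task-relevant) control dominates;
# values > 1 -> automated (task-incidental) control dominates; near 1 -> comparable.
```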
Results
Forward motions to an external target
End point errors
Hand trajectories from a typical NC and a typical PD patient are depicted in Figure 4 along with the speed profiles. Notice the significant effects of the form of sensory guidance on the peak velocity value in NCs (p < 10⁻¹⁰). These effects were not significant in the participants with PD (p > 0.8). The green segments mark the first pulse of the reach up to the velocity peak.
Figure 4. Hand trajectories from forward reaches and corresponding speed profiles. Green marks the paths traveled up to the peak velocity. Black dots mark the target locations presented by the robot. The black arrow marks the direction of the reach from the starting point to the external targets. Speed profiles show the bradykinesia of parkinsonian motions, with double the movement duration. Notice the different scales, with the NCs reaching average speeds between 100 and 160 cm/s and the patients with PD moving on average with speeds of 55–65 cm/s. Notice also that the finger vision case ended the motion earlier than the other conditions. Insets show the results of ANOVA on the effects of the form of sensory guidance on the peak velocity value. Effects were highly significant for typical controls (F(2, 90) = 27.78, p < 5.4 × 10⁻¹⁰) but nonsignificant for the patients (F(2, 90) = 0.17, p > 0.84) (scale in insets is similar to the speed profiles, centimeters per second).
The beneficial effects of finger vision can be best appreciated by assessing the end point errors (Fig. 5). With end point errors as the dependent variable, the effects of target location and stimulus (sensory-guidance) conditions were significant (two-way ANOVA) for both groups of participants with an α level of 0.01. Each column in Figure 5 shows, for a representative participant in each group, the end point error distribution at each target location. Notice that in each participant group, the type of sensory guidance had a significant effect on spatial accuracy. During finger vision, PD patients were at their best in these forward reaches. These differences, when compared pairwise between subjects with PD and NCs, were statistically significant across target location and the type of sensory guidance.
Figure 5. End point accuracy: end point errors based on the Euclidean norm from the forward reaches. Solid-line squares mark the reduction in end point variability for the PD patient during the Finger Vision condition. The dashed-line square shows that there was also a reduction in end point variability during the Target Vision condition, but not as much as with visual feedback from the moving finger. Black dots mark the target position presented by the robot.
Trajectory ratios
Recall from the methods that there are two important aspects to these ratios. (1) They are ½ in the ideal case (theoretical) scenario. The ratios express a symmetry that indicates conservation of displacements in one set of coordinates under transformation to another set of coordinates. In the arm model, the coordinates represent goal-directed extrinsic hand displacements at a one-to-one correspondence with goal-directed intrinsic joint rotational displacements. Ideally such displacements are modeled as geodesic (length-minimizing) directions with respect to a distance metric that represents the goals of the task. We have previously shown (Torres, 2010) how this metric in the extrinsic space X can be conserved under transformation of coordinates to the joint-angles configuration space Q, so that the length-minimizing displacement Δx transforms to a length-minimizing displacement Δq. In real movements, there is noise and natural variability so we can measure the departure from the ideal case in the scatter of points from repeats of the movement trajectories to targets across space. (2) The area and perimeter ratios ideally covary with a linear relation: across positions in space they maintain the symmetry property despite changes in dynamics, and this covariation can be well characterized with a linear regression fit (Torres, 2010; Torres et al., 2010). This covariation across space is indicative of a continuous integration of the displacements from both sets of coordinates to result in a length-minimizing path with respect to the task goal-dependent distance metric.
The questions here are to what extent the typical controls deviate from the ideal theoretical scenario and which form of sensory guidance in the patients with PD brings their performance closer to that of the typical controls. The geometric interpretation in the worst-performing cases is a lack of conservation of the intended or desired hand displacements under coordinate transformation to joint-angle displacements and the improper continuous integration of visual and proprioceptive inputs to produce a path that is the shortest in length with respect to the metric that the task goals define. The former manifests through the violation of the symmetry. The latter manifests through a change in the linear relationship that typically characterizes covariation of these ratios in the primate arm system (Torres, 2010; Torres et al., 2010). For example, if the perimeter ratio is predominantly <½, with the corresponding tilt in the slope of the regression line, this signals that the partial perimeter (the length of the curve and the Euclidean straight line up to the point of maximum bending) is less than half of the total perimeter. This in turn indicates excess bending in the second half of the forward movement. As the hand approaches the target, there is improper use and integration of the visual and kinesthetic feedback accompanied by poor control of the arm postural path. In other words, the desired trajectory in visual space might have been straight, but its actual implementation yielded excess bending as the hand approached the target. In the data from the NCs, we can see a tendency to conserve the symmetries and their linear covariation across all sensory-feedback conditions. By marked contrast, the participants with PD violate all aspects of this theoretical metric. Yet, the condition in which they received continuous visual feedback from their moving finger brought their performance significantly closer to typical patterns. We quantified these differences and similarities below using the Friedman test.
Normal controls vs participants with PD
In all three conditions, the scatter and the linear regression fit were significantly different between NCs and participants with PD, indicating atypical transformation of coordinates and atypical integration of visual and proprioceptive inputs across the different spatial target positions under examination.
The Friedman test revealed that in the No Vision condition, the area and perimeter ratios for the forward motions in the participants with PD were not significantly different from those of the NC subjects (p = 0.67; χ²(df = 1, 62) ≤ 0.18; mean ranks, [6.70, 6.30]; unlike the ratios for the backward movement trajectories, described below). The distributions and regression lines are depicted in Figure 6 for each condition. Notice that the ratios were more variable in the PD case (broader scatter at ∼½, but still the two ratios significantly covaried linearly in the forward case). The linear regression lines in the PD case had different slopes than those of NCs, indicating that, across trials and target locations, the intended hand displacement was not conserved when such displacement transformed into a postural displacement.
Figure 6. Area–perimeter ratios. Normal controls conserved the distributions of the area and perimeter ratios around ½, as well as their covariations, despite changes in the form of sensory guidance, which induced scaling in the reaching speed. This conservation was manifested in normal controls for both the forward (red) and the back (blue) reaches. Parkinson's patients broke the symmetry but restored the covariation of the ratios for both the forward and backward reaches when continuous visual feedback of the moving finger was provided. (For statistics, see Results, Trajectory ratios; for regression information, see Table 3).
With vision of the target, the ratios of the participants with PD were significantly different from those of the NCs in both the forward and backward motions (p ≈ 0; χ²(df = 1, 62) ≤ 43.5; mean ranks, [9.07, 3.93]). The patients with PD shifted the perimeter ratio distribution to a value significantly lower than ½ (t test, p < 0.01) in the forward case, implying poor integration of the visual cues and the kinesthetic feedback when prompted to rely more on the external target for guidance.
In the Finger Vision condition, there were no significant differences in the ratios between the NCs and the PD patients (p = 0.29; χ²(df = 1, 62) ≤ 1.15; mean ranks, [5.96, 5.04]), suggesting stronger similarity between the forward and the retracting strokes than in the other cases. Notice also that, as in the normal controls, the patients' trajectory ratio value was close to ½ (two-tailed t test, p < 0.01). This condition improved the performance of the patients in the retracting segment, yet the scatter from the retracting trajectories was more variable. This implies that in this stroke, the transformation from visual to kinesthetic coordinates did not conserve the desired hand displacement under coordinate change to joint angles. As we will see later, the joint angles in this condition were more variable and had a prevalence of the task-incidental DOF that contributed to the violation of the symmetry in several trials.
Table 3 summarizes the relevant parameters for the ratio measure in the forward case with subscript 1.
Statistics of DOF-decomposition from forward and retracting motions
Retracting motions
Trajectories
The NCs improved the retracting trajectories both with vision of the target and with finger vision, but in the finger vision case, the trajectories were less variable and straighter overall. This can be seen in the first column of Figure 7. Figure 7 shows that the PD participant, who is representative of this group, had highly variable and curved trajectories when retracting the hand toward the body. This participant, however, demonstrated that both target vision and finger vision improved performance on average for the retracting stroke. In addition, in the PD group, vision of the finger throughout the movement shortened the movement latency compared to continuously viewing the final target. Although the peak velocity during vision of the target was on average higher in this typical PD participant, he actually slowed down the ending of his backward motions toward the final posture with a long tail in the speed profiles. This contributed to the increase in variability of the retracting postural path in this condition. This can be appreciated in the last column of Figure 7, where we show in the insets that in both the NCs and the patients there was a significant effect of the form of sensory guidance on the values of the peak velocity (NC, p < 10⁻⁵; PD, p < 0.004). We later see that these speed effects in the NCs did not change the retracting postural path and the final posture. By marked contrast, the speed effect on the retracting trajectories altered the postural paths and the final posture of the patients with PD.
Hand trajectories from the reaches back to the initial posture and the corresponding speed profiles. As before, the green segments mark the path traveled up to the point of maximum velocity. The black arrow marks the retracting direction of the motion from the targets back to the initial posture. Corresponding speed profiles from reaching back to the initial posture are shown. In both the NCs and the patients with PD, the form of sensory guidance exerted a significant effect on the values of the peak velocity, as shown in the insets from the ANOVA (F(2, 90) = 12.21, p < 2.3 × 10⁻⁵ and F(2, 90) = 5.79, p < 0.004, respectively; the scale in the insets matches that of the speed profiles, in centimeters per second).
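As a minimal sketch of how such a condition effect on peak speed can be evaluated, the following fragment runs a one-way ANOVA across the three guidance conditions; the peak-speed arrays are hypothetical placeholders, not the study's data.

```python
# Minimal sketch (hypothetical data): one-way ANOVA testing whether the form of
# sensory guidance (No Vision, Target Vision, Finger Vision) affects peak speed.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(3)
peak_speed_no_vision = 80 + 10 * rng.standard_normal(31)   # cm/s, one value per trial
peak_speed_target = 95 + 10 * rng.standard_normal(31)
peak_speed_finger = 88 + 10 * rng.standard_normal(31)

F, p = f_oneway(peak_speed_no_vision, peak_speed_target, peak_speed_finger)
print(f"F(2, {3 * 31 - 3}) = {F:.2f}, p = {p:.4g}")
```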
End position postural error
In the retracting trajectories of the arm, we examined the end postural error instead of the end point variability in visual space, because the target of the retracting reach was proprioceptive: the goal was to reproduce the initial posture. We also wanted to know the extent to which the patients with PD failed to keep the retracting postural path congruent with the history of the forward postural path when changes in sensory guidance altered the speed. Since the final posture was instructed as a goal of the task, the retracting path was highly constrained by the desired final orientation of the arm (the plane of the arm) and by the final orientation of the hand. Thus, despite the expected higher variability in the retracting path with changes in sensory guidance, typical performance had to lead the arm along similar retracting postural paths across sensory-guidance conditions. As a result of the instructions, the retracting path also had to end in a similar final posture, provided the participant complied with the task demands. The NCs indeed maintained consistent postural paths back to the instructed final posture, whereas the PD patients altered both the retracting postural path and the final posture as the speed changed.
Between-subject comparisons
Across subject groups, a two-way ANOVA was performed with, as the dependent variable, the difference Δθ = θinit − θfinal between the inclination of the plane of the arm at the start of the forward stroke (θinit) and at the end of the retracting stroke (θfinal). This difference would be zero if the subject perfectly reproduced the initial posture, as instructed.
The two factors in the ANOVA were target location and type of visual guidance (finger, target, or none). Between the two subject groups, we examined the influence of the form of visual guidance on the final arm posture and on the postural path. We found that the type of sensory feedback had a significant effect on the variability of Δθ, with significant interactions between the two factors (p < 0.01).
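As a minimal sketch of how the dependent variable could be obtained from motion-capture data, the fragment below computes an arm-plane inclination and the difference Δθ; the marker arguments and the choice of the vertical axis as the reference for the inclination are assumptions for illustration, not necessarily the definitions used in the experimental methods.

```python
# Minimal sketch (assumed definitions): the plane of the arm is spanned by the
# upper-arm and forearm segments, and its "inclination" is taken here as the angle
# between the plane's unit normal and the vertical axis.
import numpy as np

def arm_plane_inclination(shoulder, elbow, wrist, vertical=np.array([0.0, 0.0, 1.0])):
    """Angle (deg) between the arm-plane normal and the vertical axis."""
    upper_arm = elbow - shoulder
    forearm = wrist - elbow
    normal = np.cross(upper_arm, forearm)
    normal /= np.linalg.norm(normal)
    cos_theta = np.clip(np.dot(normal, vertical), -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

def end_posture_error(markers_init, markers_final):
    """Delta-theta: initial minus final arm-plane inclination (deg).
    Each argument is a (shoulder, elbow, wrist) tuple of 3D positions."""
    return arm_plane_inclination(*markers_init) - arm_plane_inclination(*markers_final)
```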
The ANOVA showed significant effects of the type of sensory guidance in the PD patients: their final arm posture changed significantly from one form of sensory guidance to another. Likewise, the postural paths in the retracting movements were not only more variable in PD than in NC; they also changed significantly in each pairwise comparison with the NC (multivariate Wilks' lambda test, Λ = 0.13 ≤ Λ*α,d,vH,vE = 0.717, rejecting the null hypothesis of mean postural-path equality).
The abnormal variability in the incidental DOF of the arm in the PD cases resulted in different final arm postures for each sensory condition. This was not the case in the NCs, who maintained the retracting postural paths and consistently ended in the instructed posture. Recall that the Wilks' lambda rule rejects the null hypothesis of mean equality for Λ ≤ Λ*α,d,vH,vE, where α = 0.05, d = 7 from the seven recovered joint angles, and vH = 2 − 1 = 1 and vE = 2(7 − 1) = 12 are the degrees of freedom for the hypothesis and error terms, respectively, for the hand paths. In our case, the number of samples was k = 2 (pairwise comparison), and each block had seven trials; we performed the test at each of the 100 points along the joint-angle path from the reconstruction in Figure 2, A and B. From Rencher (1995, his Appendix B), Λ*α=0.05, d=7, vH=1, vE=12 = 0.717. Values of Λ that cannot reject the null hypothesis of mean equality are such that Λ > Λ*α,d,vH,vE.
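The following is a minimal sketch of the pairwise Wilks' lambda computation described above, applied to two sets of trials at a single point along the joint-angle path; the synthetic data and equal group sizes are assumptions for illustration, not the study's recordings.

```python
# Minimal sketch (not the authors' code): pairwise Wilks' lambda comparing the mean
# joint-angle vectors of two sets of trials at one point along the path.
# Assumption: each row of a and b holds one trial's 7 joint angles at that point.
import numpy as np

def wilks_lambda(a, b):
    """Wilks' lambda for a two-sample (k = 2) comparison of multivariate means."""
    grand = np.vstack([a, b]).mean(axis=0)
    # Hypothesis (between-group) SSCP matrix H and error (within-group) SSCP matrix E.
    H = sum(len(g) * np.outer(g.mean(axis=0) - grand, g.mean(axis=0) - grand) for g in (a, b))
    E = sum(np.einsum('ti,tj->ij', g - g.mean(axis=0), g - g.mean(axis=0)) for g in (a, b))
    return np.linalg.det(E) / np.linalg.det(E + H)

# Reject the null of equal means when the observed lambda <= the tabled critical value
# (small Wilks' lambda means the groups differ).
rng = np.random.default_rng(0)
a = rng.normal(size=(7, 7))            # 7 trials x 7 joint angles (illustrative data)
b = rng.normal(loc=0.5, size=(7, 7))
print(wilks_lambda(a, b))
```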
Trajectory ratios
As in the forward reaches, here we examine the trajectory ratios in the retracting reaches to try to understand how the system implements the coordinate transformation and integrates visual and kinesthetic cues. Recall that a scatter of the ratios tightly centered at ½ has a geometric interpretation: it indicates conservation of the hand displacements under the coordinate transformation to the postural displacements produced by the joint rotations. A large scatter is suggestive of nonconservation, thus indicating problems in arm postural control possibly linked to faulty integration of different sensory inputs. In the cases where the ratios covary linearly, the geometric interpretation is that the rate of change of one set of coordinates (e.g., the hand displacements) smoothly and continuously corresponds to the rate of change of the other coordinates (e.g., the rotational displacements), and this correspondence leads to consistently minimal path lengths across repeats in both the hand and the postural spaces. The sum of the displacements connecting the start and the target yields the shortest-distance path with respect to the task-dependent metric. The course of action intended or planned toward the completion of a set of goals is then successfully implemented and maintained throughout the action. Failure to implement such plans is reflected in inconsistent slopes or in the absence of a linear relation between the ratios.
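As a minimal sketch, and assuming the perimeter ratio is the path length traveled up to the peak-speed sample divided by the total path length (so that a symmetric movement yields a ratio near ½), the fragment below computes this quantity from a sampled hand trajectory; this assumed definition may differ in detail from the one used in the study.

```python
# Minimal sketch (assumed definition): perimeter ratio = path length up to the
# peak-speed sample divided by the total path length (~1/2 for symmetric reaches).
import numpy as np

def perimeter_ratio(hand_xyz, speed):
    """hand_xyz: (T, 3) hand positions; speed: (T,) tangential speed profile."""
    step_lengths = np.linalg.norm(np.diff(hand_xyz, axis=0), axis=1)
    i_peak = int(np.argmax(speed))
    return step_lengths[:i_peak].sum() / step_lengths.sum()
```

Plotting such a ratio against the analogous area ratio across trials, and regressing one on the other, would yield slopes of the kind summarized in Table 3.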
Normal controls vs patients with PD
Friedman's test revealed that in the patients with PD the ratios in the three conditions were significantly different during the backward movement. In the Finger Vision condition, however, the PD participants and the NCs had similar ratios (p = 0.12; χ²(1,62) ≤ 5.1; mean ranks, [6.23, 4.89]). The effect of finger vision on the PD case can be seen in Figure 6, second column and third row, where both distributions showed a similar slope and intercept. Note that this was not the case for the patients with PD in the No Vision and Target Vision conditions. Table 3 summarizes the relevant parameters for the ratio measure in the backward case (denoted with subscript 2).
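A minimal sketch of such a repeated-measures rank test is shown below, using SciPy's Friedman test on hypothetical per-trial ratios from the three guidance conditions; the arrays are placeholders, not the values reported above.

```python
# Minimal sketch (hypothetical data): Friedman test comparing the trajectory ratios
# obtained under the three guidance conditions within one group, where each row index
# corresponds to the same trial/target across conditions.
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(1)
no_vision = 0.50 + 0.05 * rng.standard_normal(21)
target_vision = 0.45 + 0.05 * rng.standard_normal(21)
finger_vision = 0.50 + 0.05 * rng.standard_normal(21)

stat, p = friedmanchisquare(no_vision, target_vision, finger_vision)
print(f"chi-square = {stat:.2f}, p = {p:.3f}")
```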
DOF decomposition analyses for forward and back movements
Variability of the incidental DOF was significantly larger than the variability of the task-relevant DOF when the speed scaled with manipulation of sensory guidance
The form of sensory guidance, which normally scaled the speed of the reach, had a larger effect on the variability of the DOF components incidental to the task than on the variability of the task-relevant DOF. This increase in the incidental variability was abnormally high in the patients with PD, particularly during the spontaneous transitional movements back toward the body. Unlike in the NCs, in this return segment of the task the two types of DOF, incidental and relevant to the task, were not balanced toward the end of the trajectory when the hand approached the face. The interactions and variability of the automated components incidental to the task were particularly atypical in the patients with PD in the movements back during the Finger Vision condition and in the forward movements with Target Vision. This is depicted in Figures 8 and 9, where we contrast PD and NC trends in the Finger Vision condition. Notice in Figure 8 that in the forward reach the smoother (task-relevant) component dominated the reach from 20% of the path onward, whereas in the retracting reach the task-incidental DOF dominated throughout the movement. The patients with PD stuck either to the voluntary mode in the forward reach or to the automated mode in the retracting reach, with no smooth modulation in between. Once in a given mode, they did not modulate the transition between moving away from the body and moving toward the body. In contrast, the NCs modulated these transitions, both when about to reach the target on the forward path and when approaching the face on the retracting path. Their incidental DOF were normally more affected than their task-relevant DOF by the fluctuations in temporal dynamics linked to the changes in sensory guidance. Yet the NCs smoothly modulated the interplay between these two complementary sets of joint angles as the hand was projected outward or inward. The differences between task-relevant and incidental DOF were significant between subject groups (p < 0.01) and depended on the target location (p < 0.01) at 75% of each path, the critical point at which to begin “braking” on the forward path and to avoid hitting the face on the way back. These effects are shown in Table 3 both for the forward motions and for the reaches back.
Incidental/task-relevant DOF ratios
Figure 9A shows the results of the ANOVA on the incidental/task-relevant DOF ratios for the forward stroke of the reaching movement. Recall that ratios <1 indicate a predominance of task-relevant DOF, suggesting voluntary control, whereas ratios >1 indicate prevalence of task-incidental DOF, suggesting more automated motions.
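As a minimal sketch of how such a ratio could be computed, and assuming a simplified Jacobian-based split rather than the coordinate-free decomposition of Torres and Zipser (2002, 2004), the fragment below projects a joint-velocity sample onto the row space of the posture-to-hand map (task-relevant) and its null space (task-incidental) and forms the norm ratio; the Jacobian and joint-velocity vector are random placeholders.

```python
# Minimal sketch (simplified, not the authors' coordinate-free method): split a
# joint-velocity sample into the component that produces hand displacement
# (row/range space of the posture-to-hand Jacobian J -> "task-relevant") and the
# component in the null space of J ("task-incidental"), then form their norm ratio.
import numpy as np

def dof_ratio(J, dq):
    """J: (3, n) Jacobian from n joint velocities to hand velocity; dq: (n,) joint velocities."""
    J_pinv = np.linalg.pinv(J)
    task_relevant = J_pinv @ (J @ dq)       # projection onto the row space of J
    task_incidental = dq - task_relevant    # remainder lies in the null space of J
    return np.linalg.norm(task_incidental) / np.linalg.norm(task_relevant)

# Illustrative call with a random 3x7 Jacobian and joint-velocity vector.
rng = np.random.default_rng(2)
J = rng.standard_normal((3, 7))
dq = rng.standard_normal(7)
print(dof_ratio(J, dq))   # >1: incidental DOF prevail; <1: task-relevant DOF prevail
```

Averaging such ratios within 25% segments of the path, per condition and group, would produce quantities comparable in spirit to those plotted in Figure 9.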
We found significant task performance differences between the NCs and the participants with PD. In their forward reaches, the patients with PD had a ratio <1 across all conditions, indicating abnormal (excess) voluntary monitoring of the goals. In contrast, the NCs modulated this ratio as a function of the form of sensory guidance. The forward reach with vision of the target was the condition that showed the most marked differences between the patients with PD and the NCs. In this condition, the ratios for the NCs were approximately 1 in the first 25% of the path and >1 for the rest of the path. This result suggests that the NCs initially balanced the DOF, whereas the rest of the path was automated, with a prevalence of the task-incidental DOF (ratio >1). In contrast, the patients with PD in the Target Vision condition showed a ratio <1 throughout the path, suggesting that the voluntary DOF dominated all throughout. Likewise, in the No Vision condition, the NCs showed a balanced ratio of approximately 1 over much of the path (from 25 to 75% of the path) that was absent in the patients with PD. In the condition where the finger guided the movement, both the NCs and the patients with PD had ratios <1, suggesting that voluntary monitoring dominated the guidance in both groups. This was the forward reach condition in which the performance of the NCs and the patients was qualitatively closest, albeit quantitatively (statistically) different. The qualitative similarity between the NCs and the patients with PD in the pattern of joint-angle recruitment/release and in the balance between voluntary and automatic control during finger guidance is congruent with the changes seen in hand-level control, namely the area–perimeter ratios of the hand trajectories and the end point accuracy in hand space. These changes in the PD hand trajectories and end point accuracy with finger guidance were significant and shifted toward normal levels. Yet we note that no full recovery was quantified at any level. In particular, joint-angle-level control in the patients with PD suffered more than hand-level control.
The aberrant interactions in PD extended to their movements back toward the body, with a ratio ≫1 indicating abnormal reliance on the task-incidental DOF. This is depicted in Figure 9B, where significant differences in the variability of the NC ratios were found between the last two segments of the path (Kruskal–Wallis test, p < 0.01) but were not present in the participants with PD. The NCs and the participants with PD showed the largest differences in the No Vision condition, whereas both target and finger visual guidance gradually moved the ratio toward 1 in both groups. Quantitatively, the two groups still differed significantly in the second half of the path. However, some improvement toward typical patterns was captured, coinciding with the improvements observed in the end point errors at the hand-trajectory level (the point in space from which the retracting reaches started) in both conditions.
In summary, the NCs demonstrated marked differences in the DOF recruitment, release, and balance ratio between the forward and retracting paths. In the forward reaches, the condition in which the performance of the participants with PD was closest to that of the NCs was finger guidance. In this condition, both the NCs and the participants with PD showed a prevalence of task-relevant DOF (voluntary monitoring of the reach). In the retracting segment of the reach, both groups showed ratios >1 (prevalence of automated, task-incidental DOF), with the No Vision condition showing the largest difference between the PD patients and the NCs. This largest between-group difference in joint space is congruent with the pattern of end point errors, for which this condition also showed the largest between-group difference in accuracy, and with the area–perimeter ratios, whose slopes also differed most between groups in this condition (Table 3).
We note here that the marked improvements in end point error and trajectory ratios observed with finger visual guidance were not as prominent in the DOF performance. This apparent discrepancy is not unexpected, as the map from joint angles to hand displacements is complex, and the DOF decomposition may be mediated by multiple structures involved in conscious and unconscious proprioception that draw on different modes of kinesthetic input. Yet the congruence in behavior at both the hand and the joint-angle levels is best appreciated during the worst performance, in the No Vision condition, where all the metrics in the participants with PD showed a disintegration of sensory-motor performance. Improvements were observed both with vision of the target (in end point errors) and with vision of the finger (in both the trajectory ratios and the end point errors). Likewise, the DOF ratio showed the most similar pattern between groups in the forward path during finger vision guidance, yet the improvements at the hand level were far more noticeable than those at the joint level. This finding suggests that proprioceptive control may be severely impaired in PD, particularly the balancing of conscious proprioception from voluntary monitoring of the movements and unconscious proprioception from automated monitoring, which could be mapped to the task-relevant and the task-incidental DOF analyzed here, respectively.
Discussion
This work examined the interplay between voluntary and automated control in patients with PD as they experienced different forms of sensory guidance during reaches. These reaches consisted of forward and retracting segments performed in a closed loop, without pauses. We aimed to understand the unfolding of the voluntary and automated modes of control of the arm's DOF and to identify the most effective form of sensory guidance for these patients. To examine the voluntary versus automated degrees of freedom, we decomposed the arm's joint angles using a coordinate-free methodology (Torres and Zipser, 2002, 2004) that relies on the physical displacements of the arm joints and on those of the hand in three dimensions as the hand–target distance decreases. We separated the joint angles of the arm into task-relevant and task-incidental degrees of freedom to track the recruitment, release, and balance of complementary degrees of freedom as the hand transitioned between forward and retracting reaches. We also quantified the predominance of one set of degrees of freedom over the other as the movements unfolded and the task's main goals switched from visual to kinesthetic.
In this postural context, we examined the hypothesis that in patients with PD off their antiparkinsonian medication, automated control would be severely impaired and there would be excess reliance on voluntary control over the arm's DOF. Such impairment of automated control, and the resulting excess reliance on the voluntary mode, would thus impede the normal expression of voluntary control. The analyses of how these complementary DOF unfolded over the course of the reach supported these hypotheses. Furthermore, our findings at the kinesthetic level revealed that, in the forward segment of the reach, anchoring visual feedback to the moving hand reduced the end point error toward normal levels of variability. These results agreed in turn with those from a measure of hand-trajectory symmetry, which expressed the conservation of the physical hand displacements under coordinate transformation from the visual to the kinesthetic domains. Finally, the finding that egocentric visual guidance helped the patients with PD to better integrate visual and kinesthetic information complements our previous, related work in a patient with a left parietal lobe lesion, in whom vision of the external target, rather than vision of the moving hand, was the most effective form of visual guidance (Torres et al., 2010). This methodology and experimental paradigm can thus be used to track performance gains in different patient groups and to uncover appropriate forms of guidance for therapy.
Additional motivation for these analyses in PD came from previously known classifications of failures to initiate movements (akinesias) or to properly unfold movements once initiated (Heilman and Valenstein, 2011). Since movements can be produced in response to external stimuli (exoevoked) or occur independently of a stimulus (endoevoked), the akinesias have been categorized accordingly. Patients with PD are typically more impaired when initiating endoevoked than exoevoked movements; however, while the failure to self-initiate movements is well known, this endoevoked versus exoevoked dichotomy had not been explored in the unfolding of movements after initiation. To investigate this, it was important to induce spontaneous transitions between movements that were triggered by specific external visual cues and movements that were endogenously guided. The results suggest an extension of the classification of the akinesias to modes of movement control, specifically in the postural domain.
Influence of the vision of the target and vision of the hand on postural control
In NCs, the manipulations of the source of sensory guidance modulated the reach tempo but did not significantly change the hand trajectories. The NCs modulated the interactions between the complementary degrees of freedom and kept them balanced when transitioning from the forward to the backward motions. In the NCs, the task-relevant degrees of freedom were less affected by the evoked speed changes than the task-incidental degrees of freedom, a result consistent with previously proposed schemes to attain motor equivalence (Scholz and Schoner, 1999; Latash et al., 2001, 2002). Pushing variability toward the task-incidental DOF is also consistent with the proposed “minimal intervention principle” (Todorov and Jordan, 2002).
In marked contrast, the patients with PD showed abnormally high variability in the DOF incidental to the task, which did change their physical trajectories in posture space, particularly in the retracting movements. The terminal joint-angle errors were abnormally large in the PD patients. Also unlike the NCs, the PD patients did not show the dependency of the final posture on the initial arm position. This typical continuity feature of postural trajectories (Soechting et al., 1995; Desmurget et al., 1998; Torres and Zipser, 2002) was violated by the PD patients: their final arm posture had little to do with the preceding history and was noncongruent with the initial arm posture. The conservation of this continuity feature in posture space is typically robust whether changes in speed are explicitly instructed (Nishikawa et al., 1999; Torres and Zipser, 2004; Guigon et al., 2007) or implicitly evoked with manipulations of sensory guidance (Adamovich et al., 1994, 1998, 1999; Torres et al., 2010) such as those used here. Here, the speed of the reach changed implicitly with the source of sensory guidance. In the PD patients, unlike in the NCs, the retracting postural path and the final posture lost this continuity. Such abnormal postural effects in PD patients are consistent with reported deficits in multijoint proprioception (Konczak et al., 2008, 2009), arm–trunk coordination (Poizner et al., 2000), control of shoulder and elbow interaction torques (Seidler et al., 2001), and reaching to body targets (Tunik et al., 2007).
Balance and transitions between modes
The patients with PD also showed a lack of balance between the voluntary and automated modes of control along both the forward and the backward trajectories. The excess variability in the task-incidental DOF paralleled the patients' abnormal reliance on the voluntary mode and may have impeded voluntary modulation. In this regard, according to the ratio of incidental to task-relevant degrees of freedom, the form of sensory guidance most effective for the PD patients in the forward reaches was vision of their moving finger. According to this ratio, the NCs moved quite automatically after the first quarter of the path, but the PD patients maintained higher values of the exoevoked task-relevant components. This suggests that participants with PD, compared to NCs, exhibit abnormally heightened voluntary monitoring of the goal components. During the second half of the forward movement, the NCs balanced and modulated the voluntary and the automated modes of postural control, a trend that was absent from the performance of the patients with PD.
The benefits of egocentric visual guidance in the forward reaches of the PD patients came at the expense of excess automated control in the retracting reach. We found that their incidental/task-relevant ratio in the forward motions was below 1, indicating a prevalence of the task-relevant DOF. In marked contrast, in the NCs the incidental degrees of freedom were modulated by the task-relevant degrees of freedom near the end of the reach (Fig. 9A, finger vision, where the incidental/task-relevant ratio approached 1). On the retracting reach, such modulation coincided with the point along the path toward the final posture at which the hand approached the face (around 75% of trajectory completion; Fig. 8B, arrow). The patients with PD had a very different evolution of this ratio in the backward movement, in which the degrees of freedom incidental to the task abnormally dominated and remained more variable than the task-relevant degrees of freedom. In the patients, this abrupt switch to the task-incidental DOF and their prevalence in the retracting reach was generally present when, in the forward stroke, they had relied too heavily on the task-relevant DOF. It was as though the PD patients turned “on” the self-monitoring mechanism in the forward reach and then, unlike the NCs, turned this self-monitoring completely “off,” or else could not modulate it when retracting the arm. These new methods extend previous results on two-stroke or more complex handwriting motions requiring control over sequential movements (Van Gemmert et al., 2001; Rand et al., 2002; Park and Stelmach, 2009). Our results also served to quantify a form of “sticky control” in the postural domain in PD.
DOF decomposition in PD patients and normal controls during finger vision. A, Forward reaches. The exoevoked task-relevant components (corresponding to the range of the linear transformation from posture to hand displacements) dominated the endoevoked task-incidental DOF (corresponding to the null subspace of the transformation) in the nine PD patients (red), yet the nine NCs (blue) modulated the interplay between the task-relevant and the task-incidental degrees of freedom in the second half of the motion, as the hand approached the target and the motion slowed down (marked with an arrow). The mean ± SD of the task-relevant and task-incidental degrees of freedom are plotted from all trials and targets, averaged across subjects. B, Reaching back toward the initial arm posture normally switched the patterns, with a balanced interplay (marked by an arrow) at the point in the trajectory at which the hand approached the face toward the end (85%) of the reach. Notice the abnormally high variability in the PD patients and their lack of modulation of the task-relevant DOF intended toward the goals.
Automated-to-voluntary (task-incidental/task-relevant) ratios across 25% segments of the forward movement trajectories. The line of unity indicates equally balanced task-incidental and task-relevant degrees of freedom. A, Forward motions compared between PD patients and normal controls for each type of visual guidance. Notice that in each condition the PD patients had a ratio <1 across the trajectory, indicating abnormal (excess) voluntary monitoring [more task-relevant (voluntary) components engaged than incidental (automated or self-motion) components]. NCs had significantly different ratios, particularly with continuous vision of the target, where the ratios were >1 across the hand trajectory. Vision of the moving finger produced a trend from <1 (higher task-goal component) toward 1 (balanced task-incidental and task-relevant interactions), particularly in the last 25% of the trajectory, when the finger approached the remembered target. B, Retracting movements show a prevalence of the automated task-incidental DOF in both the NCs and the patients (ratio >1); yet notice that in the No Vision and Finger Vision cases, the PD patients' automated control was significantly higher than that of the NCs, particularly toward the end.
The results presented here provide evidence in support of a current hypothesis suggesting that lack of automated control impedes the expression of voluntary control in goal-intended behaviors (Redgrave et al., 2010). They also suggest that the basal ganglia participate in controlling this balance in the postural domain. This is, to our knowledge, the first such demonstration in the context of postural control, but it is not entirely surprising, since there are known closed loops between the structures of the basal ganglia and various cortical areas involved in posture control (Middleton and Strick, 1996, 1997, 2002). The means by which basal ganglia–cortical loops control arm posture are at present poorly understood, so other interpretations of our results are also possible. It is possible that other basal ganglia loops covertly modulating behavioral changes are impaired in PD, thus preventing smooth, involuntary transitions between approach and avoidance behaviors. Further investigation along these lines of inquiry will be necessary. The present results suggest that the intactness of sensorimotor loops between the striatum and cortex partly underlies habitual control of action and critically contributes to mediating the balance between voluntary and automated modes of control.
Footnotes
This work was supported by NIH Grant 2 R01 NS036449 and Office of Naval Research Multidisciplinary University Initiative Grant N00014-10-1-0072 to H.P., NSF Grant SBE-0542013 to the Temporal Dynamics of Learning Center, an NSF Science of Learning Center Grant to University of California, San Diego, and NSF Cyber Enabled Discovery and Innovation Type I (Idea) Grant 0941587 to E.B.T.
Correspondence should be addressed to Elizabeth B. Torres, Rutgers University, Psychology Department, Busch Campus, 152 Frelinghuysen Road, Piscataway, NJ 08854. E-mail: ebtorres@rci.rutgers.edu