Abstract
A novel obstacle avoidance paradigm was used to investigate the planning of human reaching movements. We explored whether the CNS plans arm movements based entirely on the visual space kinematics of the movements, or whether the planning process incorporates specific details of the biomechanical plant to optimize the trajectory plan. Participants reached around an obstacle, the tip of which remained fixed in space throughout the experiment. When the obstacle and the start and target locations were rotated about the tip of the obstacle, the visually specified task constraints retained a rotational symmetry. If movements are planned in visual space, as suggested by a variety of studies on planar point-to-point movements, the resulting trajectories should also be rotationally symmetric across trials. However, systematic variations in movement path were observed as the orientation of the obstacle was changed. These path asymmetries can be accounted for by a class of models in which the planner reduces the likelihood of collision with the obstacle by taking into account the anisotropic sensitivity of the arm to external perturbations or uncertainty in joint level control or proprioception. The model that best matches the experimental results uses planning criteria based on the inertial properties of the arm.
- human psychophysics
- visuomotor control
- motor planning
- reaching
- obstacle avoidance
- optimal control
- theoretical model
Many of the motor tasks facing the CNS are characterized by extrinsic (visual space) constraints, which are insufficient to identify a motor response uniquely. When reaching to a visual target, for example, the CNS must convert visual information such as the target position and the location of obstacles in the workspace into one of the infinite possible motor sequences that would attain the goal. One solution, the visual planning model, is to begin by treating the arm as a single point in space, i.e., an end point such as the index finger, and specifying an extrinsic trajectory for that point. This strategy is attractive, because it allows the CNS to plan movement in the space where tasks are typically defined, leaving the kinematic and dynamic details of the biomechanical plant to a subordinate controller. On the other hand, these details could prove useful to the planner, providing information to satisfy the task constraints in a more optimal manner.
The visual planning model derived from observations of invariances in the extrinsic kinematics of pointing movements (Bernstein, 1967; Morasso, 1981; Abend et al., 1982) and from computational models that accounted for those invariances (Hogan, 1984; Flash and Hogan, 1985). A recent wave of evidence for visual planning has come from experiments showing that these same invariances re-emerge after adaptation to altered dynamic environments (Flash and Gurevich, 1991; Lackner and DiZio, 1994; Shadmehr and Mussa-Ivaldi, 1994) or perturbed visual feedback (Flanagan and Rao, 1995; Wolpert et al., 1995; Sabes, 1996). However, other researchers have argued that the kinematics of arm movements can be better explained by models of intrinsic (e.g., joint level) planning (Soechting and Lacquaniti, 1981; Kaminsky and Gentile, 1986; Flanagan and Ostry, 1990; Desmurget et al., 1995). Further evidence suggests that arm dynamics can play a role in the planning process. For example, Uno et al. (1989) showed that when movements are made through one of two via points located symmetrically about the line from initial position to target, the resulting paths are not symmetric, contrary to the predictions of the visual planning model.
One reason for the inconclusive nature of these studies is that they have mostly been based on an overly restrictive set of tasks: simple point-to-point reaching movements. In this paper, we explore a more complex task, that of reaching around a visually displayed obstacle. From trial to trial the obstacle tip remained fixed in space, whereas the obstacle, the initial position, and the target were all rotated around the fixed obstacle tip (Fig. 1). As a result, the task constraints were rotationally symmetric across trials.
This obstacle rotation design has a dual purpose. First, we want to determine whether this more complex set of movements displays extrinsic kinematic invariances, i.e., whether these obstacle avoidance trajectories obey the rotational symmetry of their task constraints. Systematic differences in the trajectory as a function of obstacle orientation would suggest that movement planning is not based entirely on the extrinsic coordinate frame but, rather, takes information such as the kinematic or dynamic properties of the arm into account. Second, the obstacle rotation design allows a quantitative analysis of any such trajectory variation, from which we can hope to identify the operative planning criteria.
We show that participants did exhibit systematic variations in movement trajectories as a function of obstacle orientation. To account for these variations, we propose a class of models based on minimizing the sensitivity of the arm, with respect to the obstacle, to position uncertainty or force perturbations. Finally, we show that one of those models, that based on the inertia of the arm, accounts best for the observed data.
MATERIALS AND METHODS
Apparatus. Participants were seated at the virtual visual feedback system shown in Figure 2. Participants wore a strap to ensure that their right shoulder was fixed in space and rested their right arm on a table at shoulder height. Also, the right wrist and index finger were fixed in a fully extended posture. Movements made with that arm were thus constrained to two-degree-of-freedom (shoulder and elbow rotation) planar motions. Fingertip location and joint angles were recorded with a Northern Digital (Waterloo, Ontario, Canada) Optotrak infrared position-monitoring system. Participants wore an infrared marker on the tip of the index finger and a rigid body containing six markers on the upper arm. Experiments began with a calibration procedure in which the positions of the shoulder and elbow with respect to the rigid body were measured, allowing for on-line determination of the shoulder and elbow angles. Participants’ view of their arm was blocked by a mirror reflecting a projection screen. A 72 Hz 640 × 480 VGA projector (MediaShow, Sayett Technology) provided visual feedback in the form of a 1-cm-diameter white-filled circle, the virtual image of which followed the position of the index fingertip. Obstacles, starting locations, and targets were similarly displayed.
Procedure. Each trial began with a white (start) circle, a blue (target) circle, and a yellow triangular obstacle appearing in the workspace (see Fig. 1). Participants were instructed to move their finger into the start circle and wait for a tone, at which point they were to reach around the obstacle tip to the target circle, making sure to avoid hitting the obstacle with their finger. If the fingertip collided with the obstacle, a low tone was sounded, and the trial was restarted. Otherwise, when the participant’s fingertip came to rest in the target circle, a high tone was sounded, and the screen went blank until the next trial. Participants were given no further instructions, except to move naturally and comfortably.
For each experimental session, the location of the obstacle tip was prespecified as a point in joint space (θ1,θ2) (see Fig. 1). The layout of each trial was determined by a presentation angle φ, corresponding to the orientation of the obstacle with respect to the positive x axis (rightward). If the presentation angle was φ = 90°, for example, the obstacle pointed away from the participant. Trials occurred in “there-and-back” pairs; identities of the start and target circles were switched within a pair, but the presentation angle was held fixed. A session consisted of 150 trial pairs with presentation angles randomly chosen from a uniform distribution over the circle. In addition, at the beginning of each session, participants were given a short warmup set of about 10 trial pairs to familiarize them with the task. Participants were five right-handed males, aged 18–28 years, who had normal or corrected to normal vision and were naive as to the purpose of the experiment. All five participated in two sessions, one at each of two obstacle tip locations: position 1, θ = (30°,110°); and position 2, θ = (75°,75°). Three subjects were tested at position 1 first, two at position 2 first.
Trajectory analysis. Velocities were calculated by simple first differencing of positions. For higher derivatives, the planar positions of the fingertip were fit with cubic smoothing splines, and derivatives were taken analytically from the spline fit. Curvature of movements was calculated using the equation:

$$\kappa = \frac{v_x a_y - v_y a_x}{\left(v_x^2 + v_y^2\right)^{3/2}}$$

where $v_x$, $v_y$ and $a_x$, $a_y$ are the velocity and acceleration, respectively, in the subscripted direction.
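As a minimal illustration of this analysis, the following Python sketch (NumPy/SciPy) fits a cubic smoothing spline to each coordinate and evaluates the curvature formula above; the smoothing factor and array layout are assumptions for illustration, not values taken from the experiment.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def path_curvature(t, x, y, smooth=1e-6):
    """Curvature of a planar fingertip path from smoothed position samples.

    t, x, y are 1-D arrays of time (s) and position (m); `smooth` is an assumed
    spline smoothing factor, not a value reported in the paper.
    """
    sx = UnivariateSpline(t, x, k=3, s=smooth)          # cubic smoothing spline per coordinate
    sy = UnivariateSpline(t, y, k=3, s=smooth)
    vx, vy = sx.derivative(1)(t), sy.derivative(1)(t)   # analytic velocities from the fit
    ax, ay = sx.derivative(2)(t), sy.derivative(2)(t)   # analytic accelerations from the fit
    return (vx * ay - vy * ax) / (vx**2 + vy**2) ** 1.5
```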
Four trajectory landmarks were defined: the near point (NP) or point of closest approach to the obstacle tip; the apex or point of maximal deviation from the straight line path (AP); the location of the local minimum of velocity (VM), if there was one; and the location of the peak of curvature (CP). For each landmark a corresponding angle, δ, is defined as the difference between the presentation angle and the angle of the landmark from the obstacle tip. Figure 3 illustrates the case of the near point angle, δNP.
A sensitivity model
Later we will show that the trajectory near points tended to cluster at opposite poles of the obstacle tip, roughly aligned with the orientation of the forearm. This observation suggests that the planner chose the near point location based, indirectly at least, on the configuration of the arm. What properties of the arm would make one location more desirable for the near point than others? We suggest that the key to answering this question is the notion of the anisotropic sensitivity of the arm. Because the only constraint on the movement, other than the start and target points, is to avoid colliding with the obstacle, it would be desirable to choose a path which minimizes the sensitivity of the arm to uncertainty or perturbations in the direction of the obstacle. We next introduce three definitions of sensitivity: one purely kinematic, one based on the inertial properties of the arm, and one based on its elastic properties. We then show why these directional sensitivities are relevant to trajectory planning.
Kinematic sensitivity: manipulability. The first definition of sensitivity is based solely on the kinematics of the arm. We are interested in how uncertainty in joint angle control or proprioception propagates to uncertainty in the end point position. Assume that the joint controllers (or sensors) are noisy, with independent noise at each joint having variance σ². Then the covariance of the resulting uncertainty in achieved (sensed) Cartesian end point position can be derived as follows:

$$E(dx\,dx') \approx E\!\left(J(\theta)\,d\theta\,d\theta'\,J(\theta)'\right) = \sigma^2\,J(\theta)\,J(\theta)' \equiv \sigma^2\,M(\theta) \qquad \text{(Equation 1)}$$

where dx and dθ are the end point and joint uncertainty, respectively, J(θ) is the Jacobian of the arm at the specified joint configuration, E(·) is the expected value of the argument, and ′ indicates the matrix transpose. The approximation at the second step follows from the definition of the Jacobian and is valid as long as the uncertainty is sufficiently small. From Equation 1 we see that the matrix M(θ) = J(θ)J(θ)′ shapes the independent joint noise into anisotropic end point noise. M can thus be thought of as a measure of directional sensitivity; it is more difficult to position or sense accurately along the major eigenvector of M than along its minor eigenvector.
Yoshikawa (1990) calls the matrix M manipulability, because the eigenvalues of the matrix correspond to the maximum end point velocities achievable along the respective eigenvectors for a given magnitude of joint velocity. Increased manipulability leads to greater end point velocity for the same angular velocity, but it also requires finer joint control or sensing to achieve the same accuracy at the end point. Here, we focus on how the CNS might use the information represented by M to best take advantage of (or cope with) anisotropic manipulability.
In the present experiment, the arm is constrained to planar two-joint movements, so the Jacobian is well approximated by:

$$J(\theta) = \begin{bmatrix} -l_1 \sin\theta_1 - l_2 \sin(\theta_1 + \theta_2) & -l_2 \sin(\theta_1 + \theta_2) \\ l_1 \cos\theta_1 + l_2 \cos(\theta_1 + \theta_2) & l_2 \cos(\theta_1 + \theta_2) \end{bmatrix}$$

where l1 and l2 are the lengths of the upper arm and forearm, respectively. We can thus compute the manipulability matrices from experimental data. These matrices can be displayed as ellipses representing 1 SD of end point noise. Examples are shown as the solid ellipses in Figure 4.
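A short numerical sketch of these two quantities; the segment lengths below are illustrative placeholders, since per-participant values are not reported here.

```python
import numpy as np

def jacobian(theta, l1=0.30, l2=0.33):
    """Planar two-link Jacobian; theta = (shoulder, elbow) angles in radians.
    Segment lengths (meters) are illustrative placeholders, not measured values."""
    t1, t12 = theta[0], theta[0] + theta[1]
    return np.array([[-l1 * np.sin(t1) - l2 * np.sin(t12), -l2 * np.sin(t12)],
                     [ l1 * np.cos(t1) + l2 * np.cos(t12),  l2 * np.cos(t12)]])

def manipulability(theta):
    """Manipulability matrix M = J J'; by Equation 1 the end point noise covariance is sigma^2 * M."""
    J = jacobian(theta)
    return J @ J.T

# 1-SD end point noise ellipse at obstacle position 1, theta = (30 deg, 110 deg):
M = manipulability(np.deg2rad([30.0, 110.0]))
evals, evecs = np.linalg.eigh(M)   # columns of evecs are the minor/major axes; semi-axes scale with sqrt(evals)
```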
We note that the assumption of independent and equal magnitude measurement or control uncertainty at the two joints is simplistic. The existence of muscles (and hence spindle receptors) such as the biceps, which span both the shoulder and elbow, makes it clear that uncertainty at the two joints should not be independent. Scott and Loeb (1994) analyzed the proprioceptive uncertainty that results from the distribution of spindles across the musculature of the arm and found neither independent nor equal magnitude uncertainty at the two joints. We applied their estimates of joint covariance to Equation 1, but the refinement did not greatly change the quantities of interest here. Thus, for the sequel we will use the simplified model shown above.
Inertial sensitivity: mobility. A second measure of sensitivity is based on the instantaneous response of the arm to dynamic perturbations. Following the definition of Hogan (1985), we define the end point mobility matrix:

$$W(\theta) = J(\theta)\,I(\theta)^{-1}\,J(\theta)'$$

where I(θ) is the inertia matrix of the arm. W is the inverse of the joint inertia matrix transformed into Cartesian space, and it relates a force f at the end point to the resulting acceleration: a = Wf. As in the case of manipulability, the eigenvectors are easily interpreted; the major (minor) eigenvector is the direction along which force perturbations have the largest (smallest) effect.
Direct measurements of the inertia of the arm are not available. Instead, we used a simple model, which treats each segment of the arm as a point mass, m1 and m2, located a fraction a or b along the respective segment length. The resulting inertia matrix is:

$$I(\theta) = \begin{bmatrix} m_1 a^2 l_1^2 + m_2\left(l_1^2 + b^2 l_2^2 + 2\,l_1 b\,l_2\,c_{\theta_2}\right) & m_2\left(b^2 l_2^2 + l_1 b\,l_2\,c_{\theta_2}\right) \\ m_2\left(b^2 l_2^2 + l_1 b\,l_2\,c_{\theta_2}\right) & m_2 b^2 l_2^2 \end{bmatrix}$$

where $c_{\theta_2}$ denotes the cosine of the elbow angle θ2. The values of the masses and the fractions a and b were taken from LeVeau (1992). A variety of reasonable values were tried, having little differential effect on the quantities of interest here. In the sequel, we used values m1 = 1.76 kg; m2 = 1.65 kg; a = 0.475; and b = 0.42. The point mass approximation is reasonable in this case, because shoulder and elbow movements in the horizontal plane of the shoulder involve very little rotation outside the plane. The resulting mobility matrix estimates can be displayed in a manner analogous to that used for manipulability matrices; examples are shown as the dashed ellipses in Figure 4.
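Under the same assumptions, and reusing the hypothetical jacobian helper from the sketch above, the point-mass inertia and mobility matrices can be computed as follows; only the masses and mass-location fractions come from the text, the segment lengths remain illustrative.

```python
import numpy as np

M1, M2 = 1.76, 1.65          # point masses (kg), values quoted in the text
A_FRAC, B_FRAC = 0.475, 0.42 # mass locations as fractions of segment length, from the text
L1, L2 = 0.30, 0.33          # segment lengths (m) -- illustrative, not reported per participant

def inertia(theta):
    """Point-mass inertia matrix of the planar two-link arm (a reconstruction of the text's model)."""
    r1, r2 = A_FRAC * L1, B_FRAC * L2
    c2 = np.cos(theta[1])
    i11 = M1 * r1**2 + M2 * (L1**2 + r2**2 + 2.0 * L1 * r2 * c2)
    i12 = M2 * (r2**2 + L1 * r2 * c2)
    return np.array([[i11, i12], [i12, M2 * r2**2]])

def mobility(theta):
    """End point mobility W = J I(theta)^{-1} J'; relates an end point force to acceleration, a = W f."""
    J = jacobian(theta)      # planar Jacobian from the sketch above
    return J @ np.linalg.inv(inertia(theta)) @ J.T

W = mobility(np.deg2rad([75.0, 75.0]))   # mobility ellipse at obstacle position 2
```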
Elastic sensitivity: admittance. Finally, the importance of the elastic stiffness, or mechanical impedance, of the arm for movement control has long been discussed in the literature (Bizzi et al., 1976, 1982). The inverse of the stiffness, the mechanical admittance, can be thought of as a measure of sensitivity; a small static force perturbation will result in a displacement proportional to the admittance.
We first define the stiffness of the arm in joint space, R(θ). Given R and a joint displacement dθ about the current equilibrium θ, the resulting joint torque is given by τ = Rdθ. When R is inverted and transformed into Cartesian space, we obtain the end point admittance:

$$Z(\theta) = J(\theta)\,R(\theta)^{-1}\,J(\theta)' \qquad \text{(Equation 2)}$$

Z determines the Cartesian displacement from the current equilibrium, which will result from a static force perturbation: dx = Zf. Again the eigenvectors are easily interpretable: forces of a given magnitude applied at the end point will result in maximal (minimal) displacement when the force is oriented along the major (minor) eigenvector of Z.
Although we did not measure the admittance of participants’ arms, we were able to estimate rough values for the joint stiffnesses from data presented by Mussa-Ivaldi et al. (1985). In particular, that paper listed the values of R at five locations in joint space. Using those data, we fit a linear predictor to the components of R as a function of θ and then used this model to estimate R for the arm configurations in the current experiment. Given these estimated values of R at the two obstacle tip locations, the end point admittance Z could be computed for a given participant by Equation 2. This procedure makes a number of simplistic assumptions, such as a linear dependence of R on θ and the invariance of R(θ) across participants. However, these simplifications are not unreasonable for our purposes, because the orientation of the eigenvectors of Z changed by no more than about 10° for a given participant and workspace location when the values for R were varied within the range seen by Mussa-Ivaldi et al. (1985) across the whole workspace. Estimates of the admittance are displayed as the dotted ellipses in Figure 4. The reader should keep in mind that the admittance ellipse is the inverse of the impedance ellipse more commonly encountered in the literature.
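A sketch of Equation 2 in the same vein; the joint stiffness matrix below is only a placeholder of plausible magnitude, not the estimate derived from Mussa-Ivaldi et al. (1985).

```python
import numpy as np

def admittance(theta, R):
    """End point admittance Z = J R^{-1} J' (Equation 2); a static force f displaces the hand by dx = Z f."""
    J = jacobian(theta)      # planar Jacobian from the manipulability sketch above
    return J @ np.linalg.inv(R) @ J.T

# Placeholder joint stiffness matrix (N*m/rad) of plausible magnitude -- NOT the estimate the authors
# obtained from Mussa-Ivaldi et al. (1985), which is not reproduced here.
R_example = np.array([[15.0,  6.0],
                      [ 6.0, 10.0]])
Z = admittance(np.deg2rad([75.0, 75.0]), R_example)
```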
Sensitivity and obstacle avoidance. For all three matrices introduced above, the minor eigenvector represents the least sensitive direction, i.e., the one in which the least response to position uncertainty or force perturbations is expected. This relationship is the key to the following discussion. Because the comments apply equally well to all three matrices, they will be referred to collectively as sensitivity matrices.
To understand how a sensitivity matrix relates to movement planning, consider the examples of Figure 5. The top panel shows a possible path around an obstacle, the presentation angle of which is along the x-axis. The sensitivity matrix for that location in the workspace is shown as an ellipse centered at the obstacle tip. Would sensitivity considerations deem this a good path? The region around the obstacle tip is expanded in the right panel, which shows that the line from the obstacle to the near point lies along the minor eigenvector of the sensitivity matrix. Because the arm is most vulnerable to collisions when it is closest to the obstacle, it is desirable for the arm to be relatively insensitive to uncertainty or perturbations along the perpendicular to the path when passing the near point. In this example, that criterion is maximally satisfied, because the path perpendicular is the direction in which the arm is least sensitive. Note that we have drawn the sensitivity matrix for the obstacle tip, not for the actual location of the finger. This simplification is justifiable, because all three sensitivity matrices vary slowly over the workspace.
Figure 5, bottom panel, shows an obstacle centered at the same location but rotated 90°. The path displayed here is the same as above but rotated to connect the new start and target points. The near point now lies along the major eigenvector of the sensitivity matrix. This means that when the fingertip comes closest to the obstacle, the arm is maximally susceptible to uncertainty or perturbations along the direction that will lead to a collision. For this presentation angle, then, the same near point angle is a poor choice.
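The criterion illustrated in Figure 5 can be summarized numerically: the sensitivity of the arm along the obstacle-to-near-point direction is the quadratic form n′Sn, which is smallest when that direction lies along the minor eigenvector. A brief sketch, reusing the hypothetical helpers above (the matrix and angles are illustrative, not the values behind the published figure):

```python
import numpy as np

def directional_sensitivity(S, angle_deg):
    """Quadratic form n' S n along the unit direction at angle_deg, for any sensitivity matrix S.
    A candidate obstacle-to-near-point direction is favorable when this value is small."""
    n = np.array([np.cos(np.deg2rad(angle_deg)), np.sin(np.deg2rad(angle_deg))])
    return float(n @ S @ n)

# Compare two candidate near point directions 90 degrees apart, as in the two panels of Figure 5.
S = manipulability(np.deg2rad([30.0, 110.0]))
print(directional_sensitivity(S, 0.0), directional_sensitivity(S, 90.0))
```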
These considerations can be turned into a simple model of near point placement: the minor eigenvector of the sensitivity matrix represents a preferred axis for the near point. To minimize the risk of collision, the planner chooses the path of the arm to bring the near point closer to this minimally sensitive axis. This idea can be captured formally with the following statistical model of the dependence of the near point angle δNP on the presentation angle φ:

$$\delta_{NP} = b\,\left[(\omega - \varphi)\,\%\,180°\right] + \varepsilon \qquad \text{(Equation 3)}$$

where ε is zero mean, normally distributed noise with SD σε, and y = x % 180°, the “signed modulus,” is defined as the y in the interval [−90°, 90°] such that x = y + n·180° for some integer n. The two parameters of the model are the preferred axis ω and the slope b. The latter is a measure of the strength of the dependence of δNP on φ.
To see how the model of Equation 3 relates to the idea of a preferred axis for near point placement, consider the hypothetical data in Figure 6. The top row shows data generated from Equation 3 with the parameters ω = 160°, b = 0.5, and σε = 25°. Note that the plot of δNP versus φ (Figure 6B) has two zero crossings at φ = ω and φ = ω + 180°. These angles constitute the near point preferred axis; as the presentation angle decreases from the zero crossing value, δNP becomes positive, bringing the near point back toward the zero crossing direction, and similarly for larger presentation angles. Figure 6A shows the location of the near points relative to the obstacle tip. They cluster toward the preferred axis.
Figure 6, bottom row, shows a second data set generated from Equation 3, this time with b = 0. Here, there is no dependence of δNP on φ, and the near points are uniformly distributed about the obstacle tip. Data such as these are consistent with the visual planning model.
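A minimal sketch of the signed modulus and of data generation from Equation 3; the parameter values echo the two rows of Figure 6, and the random seed is arbitrary.

```python
import numpy as np

def signed_mod180(x):
    """The 'signed modulus' of Equation 3: map an angle in degrees to its equivalent in [-90, 90)."""
    return (np.asarray(x, dtype=float) + 90.0) % 180.0 - 90.0

def simulate_near_points(phi, omega, b, sigma_eps, rng=None):
    """Draw near point angles from Equation 3: delta_NP = b * [(omega - phi) % 180] + eps (degrees)."""
    rng = np.random.default_rng() if rng is None else rng
    return b * signed_mod180(omega - phi) + rng.normal(0.0, sigma_eps, size=np.shape(phi))

phi = np.random.default_rng(1).uniform(0.0, 360.0, size=300)            # presentation angles on the circle
top = simulate_near_points(phi, omega=160.0, b=0.5, sigma_eps=25.0)     # parameters of Figure 6, top row
bottom = simulate_near_points(phi, omega=160.0, b=0.0, sigma_eps=25.0)  # Figure 6, bottom row (b = 0)
```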
Given a set of experimental data, we wish to find the values of the preferred axis ω and the slope b that best account for the data. We take a maximum likelihood approach. Equation 3 defines the probability, or likelihood, of seeing a particular δNP given some φ. We want to find the parameters ω and b that maximize the likelihood of the observed data. We solve this nonlinear regression problem by iteratively maximizing the likelihood with respect to each parameter, holding the other constant. Given ω, b is easily calculated as the correlation between δNP and (ω − φ) % 180°, and standard one-dimensional optimization techniques can be used to optimize ω given b. Confidence intervals for the preferred axis are derived using the fact that twice the difference in log likelihood between the optimal ω and some other value is approximately distributed as χ² (McCullagh and Nelder, 1989). Confidence intervals for b can be computed as in simple linear regression. Finally, we observe that b plays the same role regarding hypothesis testing here as in linear regression; if b is significantly different from zero, the model is supported by the data, and the null hypothesis that δNP does not depend on φ (i.e., the visual planning model) is rejected.
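One way to implement this fitting procedure is sketched below, continuing the Equation 3 sketch above. The slope step here uses least squares through the origin rather than the correlation mentioned in the text, and the preferred axis is found by a bounded one-dimensional search that profiles out b; the authors' implementation may differ in these details.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_preferred_axis(phi, delta_np):
    """Maximum likelihood fit of Equation 3 (a sketch; estimator details may differ from the paper's).
    Angles are in degrees; returns (omega_hat, b_hat)."""
    def slope(omega):
        x = signed_mod180(omega - phi)          # helper from the Equation 3 sketch above
        return float(np.sum(x * delta_np) / np.sum(x * x))

    def neg_log_lik(omega):
        x = signed_mod180(omega - phi)
        resid = delta_np - slope(omega) * x
        return 0.5 * resid.size * np.log(np.mean(resid ** 2))   # Gaussian profile likelihood, up to constants

    res = minimize_scalar(neg_log_lik, bounds=(0.0, 180.0), method="bounded")
    return res.x, slope(res.x)

omega_hat, b_hat = fit_preferred_axis(phi, top)   # recovers roughly (160, 0.5) for the simulated data above
```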
RESULTS
Subjects were able to perform the task easily, never colliding with the obstacle on more than one or two trials per session. The mean (SD) movement time across subjects was 736 (118) msec.
If the visual planning model were correct, there should be no systematic variation in trajectory shape as the presentation angle changes. However, participants’ paths did not display this rotational symmetry. Figure 7 shows two sets of paths from one participant, rotated into a canonical orientation. The presentation angles for the trials in the two panels were ∼90° apart, and there are marked differences between these two sets of movements. Those in the left panel are fairly symmetric, with near points clustering along the line of the obstacle. But when the presentation angle was shifted 90°, the paths became much less symmetric. In particular, near points tended to cluster away from the obstacle tip. Such differences in movement path were characteristic of all participants in the experiment; at some presentation angles paths tended to be symmetric, and at other angles they were more skewed.
The lack of rotational symmetry in obstacle avoidance paths can be seen more clearly by looking at all the landmark locations in an experiment. If planning is performed in Cartesian space, the position of the landmarks relative to the obstacle should be independent of the presentation angle, φ. Because φ was chosen uniformly throughout the circle, the landmark locations would then be uniformly distributed as well. Figures 8 and 9 show the location of all four landmarks for two participants, one at each joint space location. The left columns show landmark locations relative to the obstacle tip. Note that in each case landmark density varies around the circle. This effect is seen more clearly in the δ versus φ plots, shown in the right columns. There is a dependence of landmark angle on presentation angle in every case, but there is a particularly simple and suggestive order to the near point angles, which appear to be piece-wise linear with negative slope. Comparing these plots with the model-generated data in Figure 6, we see that the experimental data qualitatively match the model prediction with slope b > 0.
We can quantify this agreement by fitting the piece-wise linear model of Equation 3 to each data set, yielding estimates of the preferred axis ω and the slope b. Figure 10 summarizes the results. The main point is that for every participant, in both locations, the regression has a significantly positive slope, with a mean (SD) of 0.17 (0.05). Furthermore, the preferred axis is roughly constant across participants for a given position in joint space but differs significantly with arm configuration (one-way ANOVA, p = 0.001). These findings indicate the existence of a joint space-dependent preferred axis for near point placement and cast serious doubt on the viability of the strict visual planning model for obstacle avoidance movements.
Although the piece-wise linear model does account for a significant amount of the variance in near point angles, there are some aspects of the data it does not capture. In particular, the direction of movement has a significant effect on the shape of the path. Consider again Figure 7, right panel, which shows movements with presentation angles near the antipreferred axis. The paths are skewed toward the movement origin, and the near points for the two directions of movement lie in separate clusters, each closer to the respective target. This example is a special case, because the model is discontinuous at φ = ω ± 90°, and there is no reason to prefer one side of the obstacle over the other for near point placement. However, this same direction-dependent bias in near point angle exists across presentation angles. Figure 11 shows near point angles from all 10 experiments, with presentation angles aligned to the respective preferred axis. Figure 11, A and B, shows the clockwise (CW) and counterclockwise (CCW) near point angles separately, each with its overall mean: −10.9° for CW movements and 9.4° for CCW movements. Figure 11C overlays the two groups of near points with their biases removed. The two data sets now largely overlap. Finally, Figure 11D shows the difference between δNPCCW and δNPCW for each trial pair; the positive bias persists for all presentation angles. The difference between the two directions of movements can thus be described as a bias in near point placement toward the movement target.
Returning now to the preferred axis regression, we can compare the results with the predictions of the sensitivity models introduced above. A summary of the comparison is shown in Figure 12, in which the preferred axes for the two experimental positions are plotted against each other for each subject. Note that the area of the plot represents the space of possible model predictions. If the preferred axis were independent of location in the workspace, the data would lie on the dashed line representing identity. In fact, the data lie significantly above this line, as do the predictions from all three sensitivity models. This illustrates that the sensitivity model in general is able to capture the dependence of the preferred axis on workspace position qualitatively. The mobility model in particular exhibits good quantitative agreement with the data, especially considering the range of possible predictions.
DISCUSSION
There are three main points of this paper: (1) we introduced the obstacle rotation paradigm, which provides a means for the systematic study of trajectory planning of obstacle avoidance movements; (2) the experiment revealed a dependence of movement path on presentation angle, ruling out a strict visual planning model; and (3) the nature of the variation is consistent with a sensitivity model of path planning. We discuss each of these points in turn.
Obstacle avoidance
One reason researchers are still largely divided over the validity of the visual-planning model is the fact that most relevant studies have been limited to simple point-to-point reaching movements. The class of models that has proven most successful in capturing these experimental data is based on the principle of optimal control. These are essentially models of smoothness or efficiency, defined either extrinsically (Nelson, 1983; Flash and Hogan, 1985) or intrinsically (Hasan, 1986; Uno et al., 1989). Point-to-point movement tasks impose no external constraints; therefore, smoothness criteria may be all that the CNS can use to choose between possible trajectories. Nelson (1983) argues for a combination of optimization criteria, weighted according to the task at hand. We subscribe to this point of view but suggest that when extra task constraints are added, the CNS will incorporate new planning criteria aimed at optimally satisfying those constraints. Obstacle avoidance movements provide an experimental paradigm for exploring this hypothesis, because they involve a clear criterion by which the CNS could weigh potential trajectories: the likelihood of colliding with the obstacle.
Other researchers have investigated reaching under similar conditions. Abend et al. (1982) asked participants to reach around a linear obstacle protruding into the straight line path. They found that the resulting trajectories displayed high-curvature, low-velocity regions near the tip of the obstacle, as if participants had segmented the task into two parts, getting past the obstacle and then getting to the target. Flash and Hogan (1985) showed that this behavior could be captured by the minimum jerk model if a via point constraint was introduced, i.e., a location in space through which the trajectory is constrained to pass. Because this model leaves open the question of how the via point would be chosen, it makes no predictions regarding the movement asymmetries seen here. Dean and Brüwer (1994) conducted a more comprehensive study along the same lines. In agreement with our results, they found that obstacle avoidance paths vary over the location and orientation of the movement in the workspace. Although they argued that this result was inconsistent with a strict visual planning model, there was no systematic variation of the task constraints. The obstacle rotation paradigm provides such a systematic approach, allowing us to investigate the principles underlying observed variations in the movement plan.
Systematic path variations
We have argued that the dependence of the landmark locations on the presentation angle rules out a strict visual planning model. However, there are alternate explanations for these data. Let us begin by assuming that the movement variations are truly planned, i.e., they are represented in the central neural command. In this case, there must be some criteria by which the CNS varies the path according to presentation angle. Those criteria could be based on any combination of visual cues or distortion, kinematics of the arm, or dynamics of the arm. The sensitivity models presented in this paper fall into the two latter categories, which are both inconsistent with a visual planning model. What about the possibility that the path asymmetries have a purely perceptual origin? It is known that visual distortions of the workspace can be associated with corresponding distortions in movement path in the case of point-to-point reaching (Wolpert et al., 1994). The data presented in this paper cannot rule out a perceptual genesis of the movement asymmetries described above. However, this possibility has been excluded by comparing the near point distributions from two experiments centered at the same point along the participant’s midline but performed with opposite hands. The preferred axes of the two experiments turn out to be reflections of each other about the sagittal plane, as would be expected if the asymmetries were attributable to the details of the biomechanical plant. This mirror symmetry would not result if the effects described in this paper were attributable to perceptual distortions (Sabes, 1996).
Finally, it is possible that the results of the obstacle rotation experiment are attributable entirely to low-level dynamic factors and not to a central planning mechanism. This concern is also not addressed by the results of this paper, but see the work of Sabes (1996).
The sensitivity model
The three sensitivity models qualitatively capture the main features of interest in the experimental data: the clustering of near points about a preferred axis and the dependence of that axis on workspace location. The mobility model provides the best quantitative match to the experimental data, suggesting that the CNS uses information about the inertia of the arm in planning obstacle avoidance movements. It has been shown that when participants are asked to estimate the location of the tip of a visually occluded object that they are allowed to wield freely, their responses are quite accurate and are predictable given only the eigenstructure of the inertia of the object (Fitzpatrick et al., 1994). Because the CNS is good at estimating the inertia of objects with which it interacts dynamically, it is reasonable that it would have access to information regarding the inertia of the arm itself.
The interesting difference between the three sensitivity models is the nature of the information they embody—purely kinematic or both kinematic and dynamic, for example—and not the specific analytic details. Although the superior quantitative fit of the mobility model suggests that the inertia of the arm may be of primary importance, the three sensitivity matrices are similar both in their analytic form and in the orientation of their eigenvectors. Analytically, they are the transformation of an intrinsic matrix (the joint space uncertainty, inertia, or admittance, respectively) into Cartesian space. The first of these was assumed to be a scalar multiple of the identity matrix, and the latter two have diagonal entries of comparable magnitude, which are larger than the off-diagonals, meaning that they induce little rotation. Thus, the orientation of the eigenvectors of these matrices is dominated by the Jacobian. Furthermore, the models we have considered are somewhat simplistic. For example, our admittance matrices were based on the quasistatic measurements of Mussa-Ivaldi et al. (1985), yet it has been shown that the stiffness of the arm changes during the course of a movement (Bennet, 1990; Gomi and Kawato, 1996). And we have not considered other aspects of the dynamics of the arm, which could represent measures of sensitivity, most notably the viscosity. In part, this omission is attributable to the difficulty in making precise measurements or model-based estimates of the relevant quantities. But more importantly, the ability to distinguish between quantitatively similar sensitivity models (or combinations of them) will not depend on collecting more precise estimates of a greater number of relevant kinematic and dynamic quantities but, rather, requires a method that separates out the influences of these various types of information. For example, one could repeat our experiment after altering the effective inertia of the arm, leaving the rest of its kinematics and dynamics unchanged (Sainburg and Ghez, 1995).
Whatever its exact definition, the sensitivity constraint, in isolation, would be maximally satisfied if the near point always lay at the preferred axis, i.e., if the slope of Equation 3 were unity. Why then do we find a relatively small value of 0.17 for the mean estimated slope in our experiments? First, at presentation angles 90° away from the preferred axis, the model has no preference for direction of the near point angle. This ambivalence is seen in the data as well. Figure 11, A and B, shows large near point angles of both signs at the antipreferred axes, resulting in a downward bias in the estimated slopes. Nonetheless, the same figure shows that the “true” slope is certainly smaller than unity. Why is the sensitivity criterion only partially satisfied? We argue that this is the result of a tradeoff between a set of planning criteria, of which the sensitivity-based collision avoidance scheme is only one. For example, some notion of smoothness is almost surely a consideration in the planning process, and larger near point angles result in less symmetric paths, decreasing the overall smoothness of the movement. The difference between clockwise and counterclockwise movements may reflect another such criterion, one in which there is a preference for paths skewed toward the movement origin (perhaps to allow time for feedback to influence the movement before crossing close to the obstacle).
And finally, we note that these results are not necessarily inconsistent with recent perturbation studies supporting the visual planning model for point-to-point reaching (Flanagan and Rao, 1995;Wolpert et al., 1995; Sabes, 1996). A Cartesian planner could have at its disposal information about the inertial properties of the arm in extrinsic space, i.e., about the mobility of the arm, and it could use this information to plan obstacle avoidance trajectories in that space. This may be just one example of a general strategy in which the planner incorporates various additional criteria from a repertoire designed to deal with the wide range of kinematic and dynamic constraints encountered in daily activities.
Footnotes
This project was supported by a grant from the United States Office of Naval Research. P.N.S. was supported by a training grant from the National Institute of General Medical Sciences. We thank D. M. Wolpert and N. Hogan for many helpful discussions and an anonymous reviewer for helpful suggestions on an earlier draft of this paper.
Correspondence should be addressed to Philip N. Sabes, Salk Institute, Computational Neurobiology Lab, 10010 North Torrey Pines Road, La Jolla, CA 92037.