Abstract
An experiment investigated in human adults the sensorimotor transformation involved in pointing to a spatial target identified previously by kinesthetic cues. In the “locating phase,” a computer-controlled mechanical arm guided the left [condition LR (left–right)] or right [condition RR (right–right)] finger of the blindfolded participant to one of 27 target positions. In the subsequent “pointing phase,” the participant tried to reach the same position with the right finger. The final finger position and the posture of the arm were measured in both conditions. Constant errors were large but consistent and remarkably similar across conditions, suggesting that, whatever the locating hand, target position is coded in an extrinsic frame of reference (target position hypothesis). The main difference between the same-hand (RR) and different-hand (LR) conditions was a symmetric shift of the pattern of endpoints with respect to the midsagittal plane. This effect was modeled accurately by assuming a systematic bias in the perception of the postural angles of the locating arm. The analysis of the variable errors indicated that target position is represented internally in a spherical coordinate system centered on the shoulder of the pointing arm and that the main source of variability is within the planning stage of the pointing movement. Locating and pointing postures depended systematically on target position. We tested qualitatively the hypothesis that the selection of both postures (inverse kinematic problem) is constrained by a minimum-distance principle. In condition RR, pointing posture depended also on the locating posture, implying the presence of a memory trace of the previous movement. A scheme is suggested to accommodate the results within an extended version of the target position hypothesis.
 kinesthetic pointing
 frames of reference
 arm movements
 arm posture
 sensorimotor transformations
 inverse kinematics
 position sense
Reaching out, without looking, for an object that we had placed nearby is a common action. Thus, a motor plan can be set up on the basis of postural information acquired during the previous placing action. However, aside from the earlier (and somewhat inconclusive) literature on motor memory (Posner, 1967; see also Laszlo, 1992), relatively few studies have considered realistic instances in which kinesthetic cues are used to specify the target of a hand movement (Paillard and Brouchon, 1968; Wallace, 1977; Larish and Stelmach, 1982; Helms Tillery et al., 1991, 1994). In particular, it is not known yet whether the neural mechanisms for reaching a position defined by kinesthetic cues share a common spatial frame of reference with the mechanisms for reaching memorized visual targets.
When one arm is used both for locating a point in space and for reaching that point again, the problem of sensorimotor coordination seems to be easier than in the visual case because, in principle, accurate reaching may be achieved by reproducing the same postural angles of the locating phase. This matching strategy, however, is ineffectual when different arms are involved in locating and reaching points that do not belong to the midsagittal plane. In this case, a highly nonlinear (even though well defined) four-dimensional (4D → 4D) mapping would be necessary to transform the locating posture into the appropriate pointing posture. If so, the computational complexity in the two situations would be significantly different, and so should be, one might argue, the resulting accuracy.
The so-called target position hypothesis (MacNeilage, 1970; Russel, 1976) provides a different solution to the problem of sensorimotor coordination. According to this hypothesis, postural information from the locating arm is recoded into position information in three-dimensional (3D) extrinsic space. Thus, the subsequent reaching would involve a 3D → 4D mapping similar to the one contemplated in the case of visual targets (Soechting and Flanders, 1989a,b; Flanders and Soechting, 1990; Flanders et al., 1992). The target position hypothesis applies equally well to the same- and different-hand cases, the computations involved being similar. In fact, Larish and Stelmach (1982) argued that if the hypothesis holds true, the pattern of reaching errors should be fairly equivalent whether the same or different hands are used.
Recent investigations of kinesthetic pointing in the same-hand (Helms Tillery et al., 1991) and different-hand (Helms Tillery et al., 1994) conditions reported that (1) accuracy was much poorer than in the visual case; (2) the pattern of errors, as well as the relation between target position and posture, differed from those reported previously by the same group for visual targets; and (3) in the same-hand condition, posture matching was the preferred strategy.
We report a further study of kinesthetic pointing in which controlled arm movements brought the participant’s finger to one spatial location in the absence of vision (locating phase). Then, the participant was asked to point again to that location, either with the same or with the opposite arm (pointing phase). With this switched-limb paradigm, we addressed the following questions: (1) Is hand position coded in a 3D spatial system of reference analogous to that postulated for visually directed movements? (2) What is the origin of reaching errors? (3) Which factors determine arm posture?
MATERIALS AND METHODS
Participants. Four righthanded male adults (S1–S4) volunteered for the study (ages, 29, 31, 34, and 28 years; heights, 175, 168, 176, and 185 cm). Participants were naive about the purpose of the experiment. They gave their informed consent and were paid 15 Swiss Francs. The experimental protocol was approved by the Ethical Committee of the University of Geneva.
Target positions. Targets (N = 27) were arranged at the nodes of a 3 × 3 × 3 orthogonal lattice, each layer of the lattice including nine targets (Fig. 1). The spacing between horizontal, frontal, and sagittal layers was 150, 100, and 170 mm, respectively, so that the vertical, frontal, and transversal dimensions of the workspace were 300, 200, and 340 mm, respectively. The center of the array was 350 mm away from and at the same height as the participant’s nose. Targets were numbered consecutively, starting from the furthest frontal layer and from the lower left to the upper right corner of each layer. The global frame of reference was provided by a right-handed orthogonal coordinate system, with the x-axis pointing toward the right of the participant, the y-axis pointing forward, and the z-axis pointing upward.
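As an illustration, the target array can be generated in a few lines. The sketch below is our reconstruction, not the authors’ code: the lattice center is placed at [0, 350, 0] mm (taking the nose as the coordinate origin), and the loop order is one plausible reading of the numbering described above (layers from furthest to nearest; within a layer, columns from left to right, each column from bottom to top, so that targets 1, 2, and 3 form a vertical column).

```python
import numpy as np

def target_lattice(center=np.array([0.0, 350.0, 0.0])):
    """Return the 27 target positions (mm) in the global frame.

    x: transversal (sagittal layers, 170 mm apart)
    y: frontal (frontal layers, 100 mm apart)
    z: vertical (horizontal layers, 150 mm apart)
    """
    xs = np.array([-170.0, 0.0, 170.0])   # left to right
    ys = np.array([100.0, 0.0, -100.0])   # furthest layer first
    zs = np.array([-150.0, 0.0, 150.0])   # bottom to top
    targets = []
    for y in ys:
        for x in xs:
            for z in zs:
                targets.append(center + np.array([x, y, z]))
    return np.array(targets)
```

With this ordering, targets 1–3 share the same [x, y] coordinates and targets 1, 4, and 7 share the same [y, z] coordinates, consistent with the pooling used later for the dispersion ellipsoids (Fig. 7).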
Experimental procedure. Participants rested in a tightly fitting racing-car seat in front of a five-degrees-of-freedom computer-controlled mechanical arm (the “robot”; SCORBOT-ER VII; Eshed Robotec Ltd.). Two adjustable armrests defined a fixed reference posture for the upper limbs. Except during rest periods, participants were kept blindfolded. Each trial consisted of two phases. In the “locating phase,” the distal segment of the robot gently approached either the right [condition RR (right–right)] or the left [condition LR (left–right)] index finger of the participant. A switch mounted on the distal segment checked that contact had been established. Thereafter, the robot moved to a selected target position. Participants were instructed to follow the movement, remaining in touch with the switch. The locating movement lasted between 1.3 and 4.5 sec, depending on the target. One second after the end of the movement, a tone instructed the participant to move his arm back to the reference posture. Three seconds after the first tone, a second tone with a different pitch signaled the beginning of the subsequent “pointing phase” (the interval between tones was largely sufficient to reposition the arm). In this phase, the participant had to try to reach again, with his right index finger, the target position reached in the previous phase. When he judged the finger position to be accurate, he signaled the end of the phase by closing a switch with the right foot. Then, the arm was brought back to the armrest. Using proximity sensors mounted on the armrests, the computer checked that, before each phase, both arms were in the reference posture. In the pointing phase, accuracy was stressed without explicit time constraints. On every trial, the hand to be used for locating was indicated by the computer via a voice synthesizer.
The synthesizer also warned the participant when the initial posture was not correct and whenever contact with the robot was accidentally interrupted during the locating movement. In this (very rare) event, the robot stopped until contact was established again.
Each target was selected 10 times in each condition. The resulting 540 trials [2 (conditions) × 10 (repetitions) × 27 (targets)] were divided into 20 equal blocks that were administered sequentially, alternating conditions RR and LR. The order of selection of the targets within each block was randomized under the constraint that no two consecutive targets belong to the same horizontal, frontal, or sagittal layer. The presentation of one block required ∼12 min, and one complete experiment (including two calibration phases, see below) required between 8 and 9 hr divided evenly over three or four sessions. Short rest periods interrupted the experiment every 25 min.
Data acquisition and processing. Movements were recorded with a three-camera ELITE system (BTS Technology). The system measures the 3D coordinates of passive markers (diameter, 4 mm) reflecting in the infrared band (accuracy, 1 mm; sampling rate, 100 Hz). The number of degrees of freedom of the hand–arm complex was reduced to four by blocking the wrist with a cast. Movements of the forearm were described by four markers. Three were mounted on an orthogonal frame fixed to the cast; one was placed at the tip of the outstretched index finger. Movements of the arm were described by three markers mounted on a second orthogonal frame strapped around the biceps. Trunk position was measured by an additional three-marker frame firmly strapped to the chest of the participant in correspondence with the sternum. For certain postures, not all markers remained visible simultaneously to all three cameras. However, in most cases the tracking software (ELIPLUS; BTS Technology) filled in the missing information. In a few instances, the tracking procedure failed, and it was impossible to describe fully the posture of the arm.
In both the locating and pointing phases, data acquisition was triggered by the opening of the proximity switch on the armrest and was stopped when the arm came back to rest, closing the switch again. The end of the forward movement was timed by recording either the control signal to the robot (in the locating phase) or the foot-operated switch (in the pointing phase). Shoulder, elbow, and finger positions at this point were computed by averaging the coordinates of the markers over a 50 msec period.
Calibration. Defining an arm posture from the instantaneous coordinates of the shoulder and elbow joints in the global coordinate system requires the knowledge of the (invariant) joint positions with respect to arm- and forearm-based references, respectively. These anthropometric data were estimated by the following calibration procedure. Let m_{1}, m_{2}, and m_{3} be the instantaneous positions of the markers strapped around the arm in the global coordinate system and [i = m_{2} − m_{1}, j = m_{3} − m_{1}, and k = (m_{2} − m_{1}) × (m_{3} − m_{1})] be the associated (nonorthogonal) moving reference. The participant was asked to perform for 5 sec random 3D rotations of the extended arm, trying to keep the shoulder as still as possible. Any triple of coordinates [x, y, z] in the moving reference corresponds to a point (p = x i + y j + z k + m_{1}) in the global coordinate system. The position p_{s} of the shoulder joint was estimated by finding with a standard minimization algorithm the triple [x_{s}, y_{s}, z_{s}] such that p_{s} has the least variance during the calibration movements. A similar method was used to compute the position of the elbow joint. In this case, the calibration movements involved only the forearm, the elbow being immobilized.
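The least-variance estimate of a joint position can be sketched as follows; the array shapes and the use of scipy.optimize.minimize are our assumptions, not the authors’ implementation.

```python
import numpy as np
from scipy.optimize import minimize

def estimate_joint(m1, m2, m3):
    """Estimate a fixed joint position from three markers on a rotating segment.

    m1, m2, m3: (T, 3) arrays of marker positions in the global frame,
    recorded while the segment rotates about the (still) joint.
    """
    # non-orthogonal moving reference attached to the segment
    i = m2 - m1
    j = m3 - m1
    k = np.cross(i, j)

    def total_variance(xyz):
        x, y, z = xyz
        # candidate joint position in the global frame at every sample
        p = x * i + y * j + z * k + m1
        return np.var(p, axis=0).sum()

    res = minimize(total_variance, x0=np.zeros(3), method="BFGS",
                   options={"gtol": 1e-12})
    x, y, z = res.x
    p_joint = (x * i + y * j + z * k + m1).mean(axis=0)
    return p_joint, res.fun
```

Because p is linear in [x, y, z], the cost is quadratic, so the minimum is unique whenever the calibration rotations span all three dimensions.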
The reliability of the procedure was tested by repeating the calibration at the end of each experimental session. The data from a trial were discarded if either of two events occurred: (1) the distance between any two markers on the same frame differed by >2 mm from the corresponding session average, or (2) the estimated distance between shoulder and elbow joints differed by >3 mm from the session average.
RESULTS
Constant errors
The pointing error in a trial is the 3D vector e joining the true target position and the endpoint (the estimated target position). Overall pointing accuracy was measured by the average absolute error, i.e., by the average of the length (‖e‖) of the error vector over all targets and repetitions. By this criterion, subjects were fairly inaccurate in both conditions, the average absolute error ranging from 72.6 to 156.0 mm (Table 1).
In a further analysis, we decomposed each error vector into two components: the constant error across repetitions (e_{c}), joining the target to the center of gravity of all endpoints, and the variable error (e_{v}), joining the center of gravity to each endpoint. The first component measures the systematic pointing bias. The second component is a random variable describing the dispersion of the data with respect to this bias.
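The decomposition amounts to splitting each error about the center of gravity of the endpoints; a minimal sketch (array shapes assumed):

```python
import numpy as np

def decompose_errors(target, endpoints):
    """Split pointing errors into constant and variable components.

    target: (3,) true target position; endpoints: (n, 3) repeated endpoints.
    """
    center = endpoints.mean(axis=0)    # center of gravity of the endpoints
    e_c = center - target              # constant error (systematic bias)
    e_v = endpoints - center           # variable errors (one per repetition)
    return e_c, e_v
```

By construction, the variable errors sum to zero, so the two components separate bias from dispersion exactly.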
Figure 2 contrasts the deformation of the array of the targets in conditions RR and LR (pooling individual data). Average endpoints exhibited a rather regular pattern, the orthogonality and orientation of the array being well preserved. Distortions concerned mostly the lateral and vertical dimensions of the array. The main effect associated with the locating hand was a leftward shift in condition RR and a rightward shift in condition LR. In condition RR, the constant error was larger for targets in the left sagittal plane than for those in the right sagittal plane. The opposite was true in condition LR.
The pattern of deformation was consistent across participants (Fig. 3), the main difference being the position of the array of endpoints with respect to the body. In all cases, the array of the targets was stretched along the lateral direction (top view) and compressed along the sagittal and vertical directions (front and side views). Table 1 reports the amplitude (‖e_{c}‖) and the directional biases (e_{cx}, e_{cy}, e_{cz}) averaged over all targets for each subject and condition separately. In both conditions, participants S1, S2, and S3 underestimated the distance of the center of the array from the body (e_{cy} < 0) and overestimated its elevation (e_{cz} > 0). Individual data confirmed that the main effect of the experimental manipulation was to shift the center of gravity toward the left (e_{cx} < 0) in condition RR and toward the right (e_{cx} > 0) in condition LR. Similar shifts have been reported by Wallace (1977) and Larish and Stelmach (1982). The (signed) difference across conditions between the average directional biases was similar for all participants. Along the lateral direction, the differences were S1, 60.2 mm; S2, 76.0 mm; S3, 45.3 mm; and S4, 48.5 mm. Along both the sagittal (S1, 11.4 mm; S2, −3.9 mm; S3, −8.8 mm; and S4, 5.9 mm) and the vertical (S1, 16.7 mm; S2, 31 mm; S3, 13.6 mm; and S4, −16.8 mm) directions, differences were less dependent on the arm used for locating the targets. The systematic error increased only slightly (S1, 1%; S2, 20%; S3, 24%; and S4, 7%) when different arms were used for locating and pointing (condition LR) with respect to condition RR, in which the same (right) hand was used.
Variable errors
In this section, we argue that the most significant aspects of the variable errors are independent of the experimental condition. We assumed that endpoints are multinormal (3D) Gaussian variates. Thus, the spatial distribution of the endpoints is characterized by the confidence ellipsoid defined by the Hotelling T^{2} statistic (Morrison, 1976):

e_{v}^{T} S^{−1} e_{v} ≤ [3(n − 1)/(n − 3)] F_{3,n−3}(α),

where S is the sample covariance matrix of the endpoints, n is the number of repetitions, and F_{3,n−3}(α) is the critical value of the F distribution at confidence level α.
From the definition of the confidence ellipsoids, it follows that comparing the endpoint distributions is equivalent to comparing statistically the underlying covariance matrices. Using Box’s (1949) test of the equality of covariance matrices (see Morrison, 1976, page 252), we compared for each target the endpoint distributions in conditions RR and LR. None of the 27 pairwise comparisons revealed a significant difference (p > 0.05). This test, which compounds all aspects of the variable errors, indicated that indeed the endpoint distributions were independent of the arm used for locating. To test more specific differences between the two conditions, we proceeded to analyze independently the volume, shape, and orientation of the ellipsoids.
The volume of the ellipsoid (V = 4/3 πABC) affords a measure of the dispersion of the endpoints around their center of gravity. The volume was significantly larger when different hands were used for locating and pointing (condition LR) than when the same hand was used (condition RR) [one-way ANOVA; F_{(1,52)} = 43.46; p < 0.001]. The volume varied across participants (Table 1), but for all of them, it was significantly larger in condition LR than in condition RR. The increase (S1, 125%; S2, 87%; S3, 71%; and S4, 44%) was larger than the increase for systematic errors (see above). A principled relationship between dispersion and position emerged by averaging the volume of the ellipsoids across layers. Figure 5 shows the result of collapsing the data across horizontal (top view) and sagittal (side view) layers. A four-way ANOVA [2 (conditions) × 3 (sagittal layers) × 3 (horizontal layers) × 3 (frontal layers)], with all interactions except the four-way one, confirmed the condition effect mentioned above [F_{(1,8)} = 100.7; p < 0.001] and revealed a significant increase of the variability from the leftmost to the rightmost sagittal layer [F_{(2,8)} = 20.5; p < 0.001]. All other main effects, as well as all interactions, failed to reach significance (p > 0.05). The sagittal layer effect was confirmed, for each condition separately, by regressing the volume against the coordinates of the center of gravity: V = a_{0} + a_{1}x + a_{2}y + a_{3}z. In both conditions, the only significant contribution was that of the x-axis coordinate (RR, a_{1} = 106.3; t_{23} = 4.16; p = 0.0004; LR, a_{1} = 149.2; t_{23} = 3.64; p = 0.0014; two-tailed). The positive values of the coefficient a_{1} indicate that the volume was larger for ellipsoids belonging to the right sagittal plane, which was close to the starting position of the hand. This result did not confirm the frequently reported Weber-like increase of variability with movement extent (however, see Stelmach and Wilson, 1970).
Ellipsoids are cigar-shaped when one semiaxis is significantly longer than the other two (A > B ≃ C) and lens-shaped when one semiaxis is significantly shorter than the other two (A ≃ B > C). Figure 6 is a plot of the ratio A/B against the ratio B/C for all 54 ellipsoids. In this plot, quasi-spherical ellipsoids (A/B ≃ 1.0; B/C ≃ 1.0) are close to the origin, whereas lens-shaped and cigar-shaped ellipsoids are close to the upper left (A/B ≃ 1.0; B/C > 1.0) and lower right (A/B > 1.0; B/C ≃ 1.0) corners, respectively. Using this criterion and Anderson’s (1963) test of the equality of eigenvalues (see Morrison, 1976, page 294), we found that in 30 cases at least one of the ratios was significantly different from 1 (p < 0.05). By applying the same criterion to this group of nonspherical shapes, we identified 21 lens-shaped and 7 cigar-shaped ellipsoids (these figures correspond to the number of dots within the indicated regions in Fig. 6). Therefore, there was a definite tendency for the ellipsoids to be lens-shaped. Notice that the distributions of data points in conditions RR (solid symbols) and LR (empty symbols) overlapped completely, implying that the shape of the ellipsoid did not depend on the locating hand.
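The shape classification can be sketched from the eigenvalues of the endpoint covariance matrix. In the sketch below, a simple ratio threshold stands in for Anderson’s (1963) test of the equality of eigenvalues; the threshold value is our assumption, chosen only for illustration.

```python
import numpy as np

def ellipsoid_shape(endpoints, ratio=1.5):
    """Classify the dispersion ellipsoid of a set of 3D endpoints."""
    S = np.cov(endpoints, rowvar=False)            # sample covariance
    w = np.sort(np.linalg.eigvalsh(S))[::-1]       # eigenvalues, descending
    A, B, C = np.sqrt(w)                           # semi-axis lengths
    if A / B > ratio and B / C <= ratio:
        return "cigar"                             # A > B = C
    if A / B <= ratio and B / C > ratio:
        return "lens"                              # A = B > C
    return "quasi-spherical"
```

The semi-axes are the square roots of the covariance eigenvalues, so the ratios A/B and B/C are exactly the quantities plotted in Figure 6.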
The dominant orientation of cigar-shaped ellipsoids is that of their major axis (first eigenvector). Instead, the orientation of lens-shaped ellipsoids is best characterized by the direction of their smallest semiaxis (third eigenvector), which is orthogonal to their flattest surface. The relationship between the center of the ellipsoids and their orientation was investigated by regressing independently the azimuth and the elevation of the third eigenvector against the spherical coordinates of the center of gravity. When all ellipsoids were included in the analysis, only the azimuth was significantly dependent on the center of gravity [F_{(3,50)} = 10.65; p < 0.001], the highest correlation being the one with the azimuth of the center of gravity (t_{50} = 5.08; p < 0.001; two-tailed). Similar results were obtained when each condition was analyzed separately [RR, F_{(3,23)} = 7.25; p < 0.002; LR, F_{(3,23)} = 4.89; p < 0.01]. Moreover, all significant coefficients (p < 0.05) had the same sign and were of the same order of magnitude, implying that the relationship between position and dominant direction of the ellipsoids was similar in both conditions. In condition RR, the elevation of the third eigenvector also depended on the center of gravity [F_{(3,23)} = 3.80; p < 0.05]. However, this relationship was not confirmed in condition LR.
To identify the dominant direction of the ellipsoids, we collapsed all individual data for both conditions along the vertical and transversal directions (Fig. 7). In the top view, each ellipsoid pools data corresponding to three targets with the same [x, y] coordinates (e.g., the top left ellipsoid refers to targets 1, 2, and 3). In the side view, each ellipsoid pools data corresponding to three targets with the same [y, z] coordinates (e.g., the bottom right ellipsoid refers to targets 1, 4, and 7). All the resulting ellipsoids exhibited the tendency to be flat that was detected before pooling. For both views, the dominant orientation is indicated by lines starting at the center of gravity and stopping at the intersection with the frontal plane passing through the shoulders of the participant. The pattern emerging from the top view shows a clear tendency for the orientations to depend on the azimuth and distance of the targets, all lines converging approximately toward a region in front of the shoulder. As seen from the side view, the orientations were less dependent on elevation or distance. All lines intersected the frontoparallel plane at or below the right shoulder.
In summary, detailed statistical analysis of the endpoint distribution showed that all aspects of the variable error but the volume of the ellipsoids were independent of the locating arm. Moreover, even differences in volume were not large enough to be detected by the global comparison between distributions.
Relationship between arm posture and finger position
Target position (three degrees of freedom) does not determine uniquely the posture of the arm (four degrees of freedom). Thus, any principled relationship between target position and arm posture indicates the presence of constraints in the execution of the movement. To verify whether such a relationship did in fact exist, we took advantage of the fact that all postures compatible with a given target position are obtained by a rotation ψ of the elbow around the fixed shoulder–hand axis (see Appendix). Thus, the angle ψ (the posture angle) concentrates the intrinsic indeterminacy of the target–posture relationship. In the following, first we use this representation of the extra degree of freedom to demonstrate a relationship between average arm posture and finger position. Then, we show that this relationship is largely independent of the arm used for locating.
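A sketch of the posture angle as the swivel of the elbow about the shoulder–hand axis follows; measuring ψ from the “elbow straight down” direction, and the resulting sign convention, are our choices for illustration, not necessarily those of the paper.

```python
import numpy as np

def posture_angle(shoulder, elbow, hand, ref=np.array([0.0, 0.0, -1.0])):
    """Swivel angle (degrees) of the elbow about the shoulder-hand axis.

    ref is the direction defining psi = 0 (here: straight down).
    """
    n = hand - shoulder
    n = n / np.linalg.norm(n)          # unit shoulder-hand axis
    u = elbow - shoulder
    u = u - (u @ n) * n                # elbow direction, axis component removed
    v = ref - (ref @ n) * n            # reference direction in the same plane
    return np.degrees(np.arctan2(n @ np.cross(v, u), v @ u))
```

All postures reaching the same hand position differ only by this angle, which is why ψ concentrates the extra degree of freedom of the arm.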
For the left and right arm, the range of posture angles (in degrees) was [−90, −20] and [20, 90], respectively. In each phase (locating and pointing) and each condition, the posture angle ψ was strongly dependent on the target (one-way ANOVA, target treated as a nominal variable; p < 0.001 in each phase and condition, for all participants). For pointing movements, we summarized this dependency by expressing the posture angle as a linear combination of the spherical coordinates of the finger: ψ = a_{0} + a_{η}η + a_{θ}θ + a_{R}R (Table 2). Across participants and conditions, the amount of variance accounted for by the regression was quite high (r_{1}^{2} in Table 2). Arm posture depended most significantly on the azimuth η of the finger position (p < 0.001), the elbow rotating in the counterclockwise direction as the final position moved toward the left. The correlation with the elevation θ was weaker (t values were 10 times smaller than were those for the azimuth), the elbow rotating in the clockwise direction as the final position moved upward. Finally, the correlation with the distance R was highly significant (the value of the coefficient a_{R} cannot be compared to the other two because different scales are involved). The dependency of the posture angle on finger position is consistent with the observation that arm orientation is a function of azimuth and elevation angles in straight-arm pointing (Straumann et al., 1991; Miller et al., 1992) and ball-throwing (Hore et al., 1992, 1994) movements.
Next, we tested whether the relationship between the arm posture and finger position in the pointing phase depends on the locating arm. Because the finger position at the end of the pointing phase was not the same in both conditions and because these variations affected the posture angle ψ, the test required a preliminary processing. For each participant, condition, and phase separately, the posture angle ψ was fitted by third-order polynomial functions of the polar coordinates of the finger position (accuracy improved only marginally with higher-degree polynomials). Across subjects and conditions, the amount of variance accounted for by these fits varied between 72 and 93% (r_{2}^{2} in Table 2). The regression plots in Figure 8 demonstrate the accuracy of the fitting in one typical subject. The polynomials were then used to interpolate the posture angles at the target positions. Figure 9 shows the relationship between the interpolated posture angles at the end of the pointing phase in conditions LR and RR. In participant S4, the average posture of the right arm was virtually the same, irrespective of the arm used for locating. The correlation between posture angles was also very high in participants S2 and S3. In both cases, however, there was a significant, target-independent tendency to rotate the right elbow counterclockwise in condition LR with respect to condition RR. Participant S1 introduced postural variations across targets, resulting in a weaker correlation. For all participants, the slope of the normal regression line through the data points was close to 1.
Postural variability across phases and conditions
The amount of postural variability across repetitions (i.e., unaccounted for by target position) was not negligible. This is illustrated by the plots of the residuals in Figure 8, showing the difference between the actual posture angle and the value predicted by the polynomial model for the corresponding endpoint. Here we describe how the correlation between the residuals was used to gauge the extent to which arm posture in the pointing phase could be explained by the corresponding posture in the locating phase. Each panel in Figure 10 plots, for each target, the residual for the locating phase (x-axis) against the residual for the corresponding pointing (data for all subjects). Because of the large number of data points, the correlation was significant in both conditions. However, the strength of the association was much higher in condition RR [F_{(1,815)} = 517.03; p ≃ 0; r^{2} = 0.39] than in condition LR [F_{(1,770)} = 32.96; p ≃ 0; r^{2} = 0.04].
The contrast between conditions was confirmed by performing a correlation analysis for each target and subject separately (Table 3; values based on fewer than five measures were omitted). The number of significant (p < 0.05) correlations was 39 out of 85 (45%) in condition RR and 13 out of 87 (15%) in condition LR, a significant difference (binomial test; p < 0.001). The same pattern was present in each participant (4 vs 64%, 29 vs 64%, 5 vs 17%, and 30 vs 44% for S1, S2, S3, and S4, respectively). When the data for all targets were pooled, the correlation for each participant was significant in both conditions, but again, the strength of the association was clearly different [for S1: RR, F_{(1,221)} = 278.15; LR, F_{(1,204)} = 4.14; for S2: RR, F_{(1,148)} = 35.31; LR, F_{(1,150)} = 28.51; for S3: RR, F_{(1,194)} = 47.41; LR, F_{(1,181)} = 5.39; and for S4: RR, F_{(1,246)} = 112.45; LR, F_{(1,229)} = 18.08; p < 0.05 in all cases]. Whenever the correlation differed significantly from 0, the slope of the regression line was positive in condition RR and negative in condition LR (a negative slope indicates that the two elbows rotated in opposite directions). Thus, all participants tended to adopt a pointing posture similar to that adopted for locating. Again, this tendency was much stronger when the same arm was used in both phases.
In the midsagittal plane, pointing in condition LR could be achieved accurately by a posture-matching strategy (see the introductory remarks). To ascertain whether these targets had a special status, we considered the strength of the correlation between locating and pointing postures for targets belonging to different planes. Table 4 reports for both conditions the proportion of significant correlations for each group of nine targets lying in one plane (data from all participants). In condition RR, the proportion increased from the distal to the proximal plane and from the left to the right sagittal plane. There was no significant difference among the three horizontal planes. In short, the correlation was highest for targets close to the starting position of the right hand. In condition LR, the proportion was largest (22%) for targets in the midsagittal plane, for which the pointing posture could be the mirror image of the locating posture.
Modeling average posture
Arm posture varied somewhat from trial to trial. Yet it depended strongly on target position. In this section, we argue that the systematic component of the posture selection process can be accounted for by a simple kinematic hypothesis. Specifically, we posit that the selection of one posture (i.e., the solution of the inverse kinematic problem) complies with an optimum principle. A posture can be described by a 4D vector P = [η, θ, α, β], where (η, θ) and (α, β) are the yaw and elevation angles of the arm and forearm, respectively (i.e., the so-called orientation angles; Soechting and Ross, 1984). Suppose that there exists one posture P^{*} that the motor system construes as a fixed reference, much like the primary gaze position in oculomotor behavior. Suppose also that the distance of any one posture P from this reference is estimated with the Pythagorean metric d(P, P^{*}) = ‖P − P^{*}‖ (scale factors in the computation of distances may be different for each component). The motor-planning hypothesis we are entertaining is that the average posture adopted both in locating and pointing is the only posture that, at the same time, is compatible with the observed endpoint and minimizes the distance from the reference (minimum-distance principle).
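The minimum-distance principle can be sketched as a constrained optimization. The forward-kinematics convention for the yaw/elevation angles and the segment lengths below are our assumptions for illustration (the paper uses the orientation angles of Soechting and Ross, 1984), as is the use of SLSQP:

```python
import numpy as np
from scipy.optimize import minimize

def seg_dir(yaw, elev):
    # unit vector for a segment given yaw and elevation angles
    # (one plausible convention, assumed here for illustration)
    return np.array([np.sin(yaw) * np.cos(elev),
                     np.cos(yaw) * np.cos(elev),
                     -np.sin(elev)])

def hand_position(P, L1=0.30, L2=0.35):
    # forward kinematics: shoulder at the origin, arm then forearm
    eta, theta, alpha, beta = P
    return L1 * seg_dir(eta, theta) + L2 * seg_dir(alpha, beta)

def min_distance_posture(target, p_star, scales):
    """Posture reaching `target` that is closest to the reference posture."""
    cost = lambda P: np.sum((scales * (P - p_star)) ** 2)
    cons = {"type": "eq", "fun": lambda P: hand_position(P) - target}
    res = minimize(cost, x0=p_star, constraints=[cons], method="SLSQP")
    return res.x
```

Given an observed endpoint, the returned posture is the one closest (in the scaled Pythagorean metric) to the reference P^{*} among all postures whose forward kinematics reach that endpoint.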
Testing the hypothesis involved the specification of the reference posture and of the scale factors. P^{*} was estimated by the (componentwise) average of the locating postures over all targets. The scale factors [c_{η}, c_{θ}, c_{α}, c_{β}] were selected to minimize the difference between actual and predicted posture angles across all targets. The average scale values across subjects were c_{η} = 0.289, c_{θ} = 0.158, c_{α} = 0.221, and c_{β} = 0.332. The Appendix provides the details of the fitting strategy. We tested the hypothesis for each subject and each condition separately. Numerical analysis confirmed that the minimum-distance posture for each endpoint could be identified unambiguously. Figure 11 summarizes the results for one participant by plotting for all conditions and each target the predicted posture angle ψ versus its observed average. Table 5 reports the correlations between actual and predicted values of the angles η, θ, α, β, and ψ in all conditions and all participants (intercept and slope parameters were close to 0 and 1, respectively). The high percentage of variance accounted for in every case shows the excellent agreement between the data and the suggested solution to the inverse kinematic problem.
Modeling systematic errors
Systematic errors were large, yet regular. The pattern of errors was similar in conditions LR and RR, being the result of quasi-symmetrical shifts with respect to the midsagittal plane. Because only the arm used for locating changed across conditions, this symmetric arrangement must reflect some aspect of the locating process. In fact, it is possible to predict the pattern of errors in both conditions by assuming that the orientation angles of the locating arm are systematically biased and generate inaccurate target representations in a 3D spatial reference. By contrast, the execution of the pointing movement toward the targets is not supposed to introduce further systematic distortions. Such a difference between the locating and pointing processes is supported by two observations. First, in the locating phase the target position was not known until the end of the movement, whereas the participant had a representation of the desired endpoint even before initiating the pointing movement. Second, inspection of the trajectories demonstrated the ballistic nature of the pointing movements. Only very rarely did we observe corrective movements at the end of this phase. Together, these two observations point to the conclusion that pointing movements, unlike locating movements, were controlled in the feedforward mode and involved very little processing of sensory cues.
Because the minimum-distance principle translates any configuration of endpoints into the corresponding configuration of arm postures, we can estimate the (mis)perceived orientation angles by computing the average arm posture that the left or right arm would take at the spatial location reached by the right hand at the end of the pointing movement [note that the predictions of the minimum-distance model for condition RR are highly correlated with their actual values (see Table 5)].
Figure 12 summarizes the results for the same participant of Figure 11. Each panel is a plot of the perceived versus the actual orientation angles measured in the locating phase. Collectively, these psychophysical functions convey all the information concerning the systematic pointing errors. Linear regressions provided an excellent fit to the data points in all cases. To predict the systematic errors, we expressed each perceived angle as a linear combination of the four real angles. Using again data from this same subject (Fig. 13), we illustrated the adequacy of the model by comparing the actual and predicted average endpoints with the same format described in Figure 3.
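Such a linear model can be sketched as an ordinary least-squares fit of one perceived angle on the four real angles (a sketch only; function and variable names are ours, not the authors'):

```python
import numpy as np

def fit_perceived_angle(real_angles, perceived):
    """Least-squares fit of perceived = b0 + b . [eta, theta, alpha, beta].

    real_angles : (n, 4) array of actual locating-arm angles;
    perceived   : (n,) array of one perceived angle.
    Returns [intercept, b_eta, b_theta, b_alpha, b_beta].
    """
    # Prepend a column of ones for the intercept term.
    A = np.column_stack([np.ones(len(perceived)), real_angles])
    coef, *_ = np.linalg.lstsq(A, perceived, rcond=None)
    return coef
```

Fitting one such model per perceived angle yields the 4 × 4 matrix of coefficients (plus intercepts) needed to map any real locating posture onto its perceived counterpart.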
Modeling variable errors
By necessity, the diverse postures observed in the locating phase for any one target identified the same endpoint (i.e., the one set by the robot). However, because we concluded above that perceived angles were biased, this many-to-one correspondence between angles and endpoints was no longer guaranteed at the level of the internal representation. The idiosyncratic variations of the locating posture for each target are bound to introduce some variability also in the perceived hand position and, ultimately, in the planning of the pointing. We tried to estimate this perceptual contribution to the pointing variability. First, using the psychophysical functions described above, we computed the perceived hand position corresponding to the actual posture in each trial. Then, we computed the associated confidence ellipsoids with the same procedure that was used for the pointing data. Shape and orientation of these ellipsoids differed across conditions, whereas shape and orientation of the ellipsoids for pointings were similar (see above). However, this apparent contradiction disappears when one considers the size of the perceptual contribution. In fact, the volume of the ellipsoids describing the variability of the perceived hand position was found to be much smaller (<20%) than that of the pointings. Thus, the perceptual contribution to the pointing variability was masked by a larger additional source of noise. The similarity of shape and of orientation of the actual ellipsoids in both conditions strongly suggests that this additional contribution pertains to the planning and/or the execution phases.
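The volume comparison can be sketched as follows. The formula for the volume of a confidence ellipsoid of a 3D Gaussian is standard, but the 0.95 confidence level and the function name are our choices for illustration:

```python
import numpy as np
from scipy.stats import chi2

def confidence_ellipsoid_volume(S, q=0.95):
    """Volume of the q-level confidence ellipsoid of a 3D Gaussian
    with covariance S: (4/3) * pi * k**1.5 * sqrt(det S),
    where k is the q-quantile of a chi-square with 3 df."""
    k = chi2.ppf(q, df=3)
    return (4.0 / 3.0) * np.pi * k ** 1.5 * np.sqrt(np.linalg.det(S))
```

Because the volume scales with the 3/2 power of the covariance, even a moderate reduction of the variances shrinks the volume sharply; for example, halving every standard deviation divides the volume by 8.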
We reasoned that if planning is the stage in which noise intervenes most, then it should affect the 3D spatial representation of the target generated by the locating arm. Alternatively, if noise intervenes mostly in the execution phase, it should emerge at the level of the intrinsic coordinates of the pointing arm. The problem of deciding between the two possibilities was addressed by finding the system of reference in which the coordinates of the endpoints are most uncorrelated (Gordon et al., 1994; McIntyre et al., 1997; Vindras and Viviani, 1998). Five systems of coordinates were considered. For the extrinsic (3D) reference, these were (1) spherical, centered on the right shoulder, (2) spherical, centered on the hand initial position, and (3) orthogonal cartesian. For the intrinsic (4D) reference, these were (4) orientation angles and (5) joint angles. The amount of coupling for each coordinate system was gauged using Bartlett’s (1954) statistic (see Morrison, 1976, page 118), which permits one to test whether a correlation matrix is significantly different from the identity matrix. For each system, target, and condition separately, we computed the correlation matrix R = Diag(1/√s_ii)^T S Diag(1/√s_ii), where the s_ii values are the diagonal terms of the covariance matrix S. R and S are either 3 × 3 matrices (for the extrinsic systems) or 4 × 4 matrices (for the intrinsic systems). Finally, the degree of coupling was defined as the number N_c of correlation matrices (out of 54) for which the null hypothesis was rejected at the 0.05 significance level. In order of increasing coupling, the five systems were ranked as follows: (1) spherical, shoulder-centered (N_c = 6); (2) spherical, hand-centered (N_c = 18); (3) orthogonal cartesian (N_c = 26); (4) orientation angles (N_c = 41); and (5) joint angles (N_c = 53). Note that the same ranking was obtained by taking into account separately the 27 ellipsoids for either condition.
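Bartlett's test of whether a correlation matrix differs from the identity can be sketched as follows (the chi-square form of the statistic follows Bartlett's sphericity test as given in standard multivariate texts; the data layout and function name are our assumptions):

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(X):
    """Test H0: the population correlation matrix is the identity.

    X : (n, p) array, one row per repeated pointing at one target,
    columns = coordinates in the chosen reference system.
    Returns (statistic, p_value); under H0 the statistic is
    approximately chi-square with p*(p-1)/2 degrees of freedom.
    """
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    stat = -(n - 1 - (2 * p + 5) / 6.0) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    return stat, float(chi2.sf(stat, df))
```

Counting, over the 54 target-condition cells, how often the p value falls below 0.05 gives a coupling index of the kind denoted N_c above.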
In conclusion, the analysis above strongly suggests that most of the endpoint variability originates within the 3D extrinsic representation of the target position on which motor planning is based.
Based on this clear result, we performed a numerical simulation of the distribution of the variable error. For each condition, uncorrelated Gaussian noise was added to the coordinates of the individual average endpoints expressed in a shoulder-centered spherical system. The variance of the coordinates was estimated from the data for each condition and participant separately, pooling all targets. The simulated ellipsoids are shown in Figure 14 with the same format as Figure 7. In keeping with the correlation analysis above, both the shape and the direction of these ellipsoids were similar to the experimental ones. Moreover, the average volume compared favorably with the data, the only discrepancy being a slight increase in the volume with movement extent that was not present in the experimental results.
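The simulation step can be sketched as follows (a sketch under assumptions: the axis convention for azimuth and elevation, the function name, and the noise levels in the example are ours, not taken from the study):

```python
import numpy as np

def simulate_endpoints(mean_xyz, sd, n=20, seed=0):
    """Add uncorrelated Gaussian noise in shoulder-centered spherical
    coordinates (radial distance, azimuth, elevation), then convert
    back to cartesian shoulder-centered coordinates.

    mean_xyz : average endpoint (x, y, z) in meters;
    sd       : (sd_radius_m, sd_azimuth_rad, sd_elevation_rad).
    """
    rng = np.random.default_rng(seed)
    x, y, z = mean_xyz
    r = np.sqrt(x * x + y * y + z * z)
    az = np.arctan2(x, y)        # yaw around the vertical axis (assumed)
    el = np.arcsin(z / r)        # elevation above the horizontal (assumed)
    rs = r + rng.normal(0.0, sd[0], n)
    azs = az + rng.normal(0.0, sd[1], n)
    els = el + rng.normal(0.0, sd[2], n)
    return np.column_stack([rs * np.cos(els) * np.sin(azs),
                            rs * np.cos(els) * np.cos(azs),
                            rs * np.sin(els)])
```

Fitting a confidence ellipsoid to the simulated cloud, per target and condition, yields the simulated counterparts of the experimental ellipsoids.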
DISCUSSION
We investigated the mechanisms that permit one to point to a position in space identified previously by kinesthetic cues. Absolute errors (average across participants and conditions, 119.75 mm) were larger than those typically observed in experiments on motor memory. The difference, however, is congruent with the observation that both constant and variable errors increase with the dimensionality of the movement. For instance, in the one-dimensional lever-positioning task, absolute error typically ranged between 30 and 50 mm (Laszlo, 1992). Comparable values (18.5–40.2 mm) were observed in an experiment using the same switched-limb technique of this study (Larish and Stelmach, 1982), as well as in a similar study by Wallace (1977). By contrast, for two-dimensional (2D) movements, Larish and Stelmach (1982) found that absolute radial errors ranged from 34.9 to 70.0 mm. Also, with a hand apposition technique, Helms Tillery et al. (1994) found that the constant error ranged between 36.0 and 67.0 mm for 2D movements and between 73.0 and 101.0 mm for 3D movements. Finally, errors in our study were of the same order of magnitude as those reported by Soechting and Flanders (1989a, page 587) for pointing to (3D, remembered) visual targets (average absolute error, 116.6 mm). It should be stressed that the pattern of constant errors (Fig. 3) was quite similar to that reported for the visual case (see Soechting and Flanders, 1989b, their Fig. 4).
Direct mapping versus target position hypothesis
Initially, target position must be coded as a set of (at least four) articular variables. In the introductory remarks, we contrasted two alternative accounts of how this information is then transformed between the locating and pointing phases. One possible strategy could be to establish a direct mapping from the 4D manifold in which the posture of the locating arm is represented into the 4D manifold in which the posture of the pointing arm is represented (Fig. 15A). Two arguments can be cited against the 4D → 4D mapping hypothesis.
First, in condition RR, direct mapping would reduce simply to reproducing the same postural angles adopted in the locating phase. Instead, the mapping would be much more complicated when a left arm posture has to be translated into a right arm posture with the same endpoint. This asymmetry is difficult to reconcile with the following facts: (1) errors in both the single- and switched-limb cases were quite similar, the only distinguishing feature between conditions being the opposite shifts of the average endpoints; (2) variable errors were smaller when the right hand was used both for locating and pointing (Fig. 4), but shape (Fig. 6) and orientation (Fig. 7) of the confidence ellipsoids were instead similar; and (3) average pointing posture was fairly independent of which arm had been used for locating (Fig. 9). Thus, one would have to admit that, although radically different computations were involved in the two conditions, they nevertheless resulted in remarkably similar postures and patterns of error.
Second, postural variability (summarized by the angle ψ) was considerable, on the order of 40° in both phases (Fig. 8). However, the degree of correlation between left and right postural variability was weaker in condition LR than in condition RR. If target identification were indeed mediated directly by postural information, one would expect a much tighter coupling between the arms in condition LR as well. A discussion of the coupling in condition RR is deferred until later.
The fact that kinesthetic information from the left arm could drive the right arm as reliably as information from the right arm itself is much more in keeping with the alternative account mentioned in the introductory remarks, namely the target position hypothesis, according to which the early intrinsic coding is followed by a stage in which target position is translated into an extrinsic 3D coordinate system (MacNeilage, 1970; Wallace, 1977; Larish and Stelmach, 1982). In fact, as shown schematically in Figure 15B, within this conceptual framework, the sensorimotor transformation leading to the pointing movement is similar in both conditions. Moreover, the scheme is fully compatible with the fact that the average posture of the right arm was independent of the experimental condition.
The target position hypothesis was tested by Helms Tillery et al. (1991) by comparing pointing accuracy in two conditions. One was similar to our RR condition; in the other, the target had to be identified with a stick held by the right hand. Errors with the stick were larger than those with the hand. Moreover, they were unsystematic, subject-specific, and dependent on the location of the target in the workspace, leading the authors to the conclusion that “subjects were unable to synthesize a reliable estimate of the location of their hands in space using only kinesthetic cues” (Helms Tillery et al., 1991, page 771). In a later study (Helms Tillery et al., 1994) based on two switched-limb experiments, the same authors concluded that targeted hand movements subserved by visual and somesthetic inputs are organized in fundamentally different frames of reference. The reason for the discrepancy between these results and ours is not clear. However, two methodological improvements in our experiment may be cited: the much higher density of points sampled in the workspace and the availability of repeated measures. Moreover, in both previous studies, individual performances were highly variable, making it difficult to compare the results of different participants tested in the single- and two-hand conditions.
The analysis of the distribution of the variable errors provided useful clues about the nature of the coordinate system in which target position is coded. The fact that shape and orientation of the ellipsoids were similar in both experimental conditions is congruent with the assumption that one frame of reference is involved irrespective of the locating hand. Moreover, the analysis of the coupling among coordinates confirmed that, as postulated by the target position hypothesis, this frame of reference is extrinsic. It also suggested that the variable error originates within the early stages of planning of the pointing rather than within the execution stage. Finally, correlation analysis permitted us to identify the spherical, shouldercentered system of coordinates as the most likely candidate for the extrinsic reference.
Solving the inverse kinematic problem
Pursuing the hypothesis that target position is eventually coded in an extrinsic 3D system of reference, one must address the question of how pointing posture is selected on the basis of this spatial information. Because >70% of the total variance of the posture angle ψ was explained by the endpoint position (see Results), we framed the selection problem as an inverse kinematic problem for the average posture. Several attempts have been made to account for the dissipation of the extra degrees of freedom involved in the solution to the inverse kinematic problem in terms of neural constraints. In particular, it was suggested that Donders’ law, which holds for eye movements, can be generalized to arm movements (Hore et al., 1992, 1994; Crawford and Vilis, 1995). One consequence of this assumption is that arm orientation should depend only on the azimuth and elevation of the target. By varying systematically the initial hand position, Soechting and Flanders (1995) showed that this constraint is not fully respected. Our results are also at variance with the proposed generalization of Donders’ law. First, we found that arm orientation, as described by the posture angle ψ, depended strongly on the target radial distance. Second, the orientation of the pointing arm for any given target was not constant, being related to the arm orientation during the previous locating movement. Our solution to the problem of dissipating the extra degrees of freedom takes into account the dependency on the target radial distance by adopting a biologically plausible cost function based only on differences between orientation angles. Moreover, by framing the problem in terms of average posture, this solution leaves room for accommodating the observed postural variability (see below).
Admittedly, the minimum-distance principle cannot be construed as a fully satisfactory solution to the problem, insofar as it does not take into account the initial position of the hand and the dynamics of the movement (Soechting and Flanders, 1995). It should also be noted that, although the principle accounts for the selection of one pointing posture among the infinite candidates compatible with one final finger position, it does not address the question of how this set of candidates is specified by the perceived position of the target. Despite these limitations, the very accurate predictions of average posture (Fig. 13) demonstrated that the general notion of optimal planning, originally developed for point-to-point movements (Hogan, 1984; Flash and Hogan, 1985; Hasan, 1986; Uno et al., 1989; Viviani and Flash, 1995), leads to sensible predictions also in the case of a rather complex posture selection problem.
Although the target position hypothesis proved adequate for interpreting the similarity between the patterns of errors in both conditions, it seems too simple to accommodate also the results of the partial correlation analysis (Fig. 10). Indeed, in condition RR, a significant component of the postural variability around the mean in the pointing phase was accounted for by the corresponding variability of the locating posture. Although much weaker, a similar dependency was present also in condition LR. Across subjects and targets, all significant correlations between residuals were negative, indicating a tendency to raise and lower the right and left elbows symmetrically (this was true also for most of the nonsignificant correlations; see Table 3). Moreover, correlations were stronger for midsagittal targets (Table 4). Because the locating posture cannot be recovered from the (misperceived) target position, these findings imply that a memory trace of this postural information is still available during the pointing phase.
The spontaneous tendency to match postural angles, even in condition LR in which this strategy was not a viable solution for reaching the desired position, suggests a functional dissociation between the processes that control the endpoint and specify the average pointing posture and those responsible for the trial-by-trial deviations from this average. The former implement the minimum-distance algorithm, taking into account only the desired endpoint. The latter, which do not contribute to the specification of the endpoint, are influenced by the memory trace of the locating posture. A more complex cost function may take this trace into account, leading to a model for individual trials.
A functional scheme
Figure 15C summarizes our view of the sequence of steps and factors involved in pointing to kinesthetic targets.
(1) The average posture adopted in the locating phase is selected by the minimum-distance principle. The variations around the average (all compatible with the finger position imposed by the robot) are primarily idiosyncratic. They do not have a significant influence on the accuracy of the subsequent pointing.
(2) Systematic biases in the perception of postural angles generate an erroneous representation of the target. Framing the description of the constant errors in psychophysical rather than motor terms afforded a natural explanation of the symmetric lateral shifts of the endpoints in conditions RR and LR. In fact, the shift would simply be a consequence of the fact that the locating movement is made by two arms arranged symmetrically with respect to the midsagittal plane.
(3) Two processes are active during the pointing phase. The first one translates back (through the minimum-distance principle) the memorized (shoulder-centered) 3D representation of the target into an average 4D postural configuration. The second process is a memory trace responsible for a portion of the trial-by-trial postural variations around the average. The extent to which these variations reflect the locating posture depends on the task condition (RR vs LR). In either condition, however, the average pointing position is unaffected by the memory trace.
(4) An additional source of noise blurs the 3D representation of the target position subserving the planning of the pointing movement. Ultimately, pointing variability reflects mostly this source of noise.
In this scheme, the coding of the target position in an extrinsic frame of reference has the same central role that it has in current theorizing on visuomanual reaching. The fact that the scheme accounted fairly accurately for the experimental results lends further credence to the notion of an amodal system of spatial representation shared by visual and kinesthetic inputs. So far there is no direct neurophysiological evidence of such a system because the task of pointing to kinesthetic targets has not yet been investigated in monkeys. However, the presence in area 5 of cells tuned to the location of visual targets in a shoulder-centered frame of reference (Lacquaniti et al., 1995) suggests that the spatial coding postulated by the target position hypothesis may be implemented in the parietal cortex. Future work involving a wider range of operating conditions should clarify the extent and nature of the factors specific to each perceptuomotor channel.
Appendix
We describe the characterization of the posture by the Cardan angles and the application of the minimumdistance principle to the experimental data. The following notation is adopted:
[x, y, z]: finger cartesian coordinates with respect to the shoulder.
[x_e, y_e, z_e]: elbow cartesian coordinates with respect to the shoulder.
L_a: arm length.
L_f: forearm length.
R =
Posture angle
The Cardan system of reference is illustrated in Figure 16. For any given hand (finger) and elbow position, an initial arm posture (S, E_0, H_0) is defined in which the finger and the elbow belong to the midsagittal plane, the finger points forward along the y-axis, and the elbow flexion is equal to that of the final posture. Then, a unique sequence of two rotations, φ and ζ, moves the finger from the initial to the final (H_2 ≡ H_3) position through the intermediate posture (S, E_1, H_1). The same rotations move the elbow to a position (E_2) that corresponds, by definition, to a zero amount of rotation around the shoulder–finger axis. Finally, a third rotation, ψ, around this axis brings the elbow to its final (E_3) position. The triple [φ, ζ, ψ] are the Cardan angles in the zxy convention.
The Cardan angles can be expressed as a function of the finger and elbow coordinates by the following formulae:
Conversely, finger and elbow coordinates are expressed as a function of the Cardan angles by the following formulae:
Applying the minimumdistance principle
To any finger position (endpoint) corresponds an infinite set of postures that are compatible with that endpoint. According to the minimum-distance principle, the indeterminacy of the inverse kinematic problem is eliminated by selecting, within this infinite set, the (unique) quadruple of orientation angles [η, θ, α, β] that minimizes the following cost function (a weighted euclidean distance):
Computationally, the most convenient choice is to express the angles η, α, and β as functions of the coordinates [x, y, z] and of the arm elevation angle θ, chosen as the parameter. The angle β is given directly by Equation A.4c:
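As a numerical sketch of the whole procedure, the one-parameter search over θ can be implemented as follows. Everything below is illustrative: the segment lengths, the angle conventions (elevations measured from the downward vertical, yaws about the vertical axis), the unit scale factors in the example, and the grid search stand in for the authors' closed-form equations, which are not reproduced here.

```python
import numpy as np

LA, LF = 0.30, 0.35  # assumed arm and forearm lengths (m)

def finger_position(P):
    """Forward kinematics for a posture P = [eta, theta, alpha, beta]
    (assumed convention: elevations theta, beta from the downward
    vertical; yaws eta, alpha about the vertical axis)."""
    eta, theta, alpha, beta = P
    elbow = LA * np.array([np.sin(theta) * np.sin(eta),
                           np.sin(theta) * np.cos(eta),
                           -np.cos(theta)])
    fore = LF * np.array([np.sin(beta) * np.sin(alpha),
                          np.sin(beta) * np.cos(alpha),
                          -np.cos(beta)])
    return elbow + fore

def min_distance_posture(target, P_ref, scales=(1, 1, 1, 1), n_grid=4000):
    """Scan the arm elevation theta; for each feasible theta, the
    endpoint constraint fixes eta (two branches) and, once the elbow
    is known, the forearm angles. Return the posture minimizing the
    weighted distance to the reference posture P_ref."""
    x, y, z = target
    r2 = x * x + y * y + z * z
    rho, phi = np.hypot(x, y), np.arctan2(y, x)
    w, P_ref = np.asarray(scales, float), np.asarray(P_ref, float)
    best, best_cost = None, np.inf
    for theta in np.linspace(1e-3, np.pi - 1e-3, n_grid):
        # ||target - elbow||^2 = LF^2  <=>  x sin(eta) + y cos(eta) = C
        C = (r2 + LA * LA - LF * LF + 2 * LA * z * np.cos(theta)) \
            / (2 * LA * np.sin(theta))
        if abs(C) > rho:
            continue  # no elbow on this elevation circle reaches the target
        s = np.arcsin(C / rho)
        for eta in (s - phi, np.pi - s - phi):
            elbow = LA * np.array([np.sin(theta) * np.sin(eta),
                                   np.sin(theta) * np.cos(eta),
                                   -np.cos(theta)])
            f = (np.asarray(target) - elbow) / LF  # unit forearm vector
            beta = np.arccos(np.clip(-f[2], -1.0, 1.0))
            alpha = np.arctan2(f[0], f[1])
            P = np.array([eta, theta, alpha, beta])
            cost = float(np.linalg.norm(w * (P - P_ref)))
            if cost < best_cost:
                best, best_cost = P, cost
    return best, best_cost
```

With the reference posture and scale factors described in the main text, the same routine would return the predicted average posture for any endpoint, which is the quantity compared with the observed averages in Figure 11 and Table 5.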
Footnotes

This research was supported in part by a grant from the Pôle Rhône-Alpes de Sciences Cognitives and by Vita-Salute University HSR Research Grant A2876. We thank the two anonymous reviewers for their critical reading of this article and their suggestions.
Correspondence should be addressed to Dr. Paolo Viviani, Department of Psychobiology, Faculty of Psychology and Educational Sciences, University of Geneva, 9, Route de Drize, 1227 Carouge, Switzerland.