Abstract
The CNS may use multimodal reference frames to combine proprioceptive, visual, and gravitational information. Indeed, spatial information could be encoded simultaneously with respect to egocentric and allocentric references such as the body axis and gravity, respectively. It has further been proposed that gravity might serve to align reference frames between different sensory modalities. We performed a series of experiments in which human subjects matched the orientation of a visual stimulus to a visual reference (visual–visual), a haptic stimulus to a haptic reference (haptic–haptic), or a visual stimulus to a haptic reference (visual–haptic). These tests were performed in a normal upright posture, with the body tilted with respect to gravity, and in the weightless environment of Earth orbit. We found systematic patterns of errors in the matching of stimulus orientations. For an upright posture on Earth, a classic oblique effect appeared in the visual–visual comparison, which was then amplified in the haptic–visual task. Leftward or rightward whole-body tilt on Earth abolished both of these effects, yet each persisted in the absence of gravity. Leftward and rightward tilt also produced asymmetric biases in the visual–haptic but not in the visual–visual or haptic–haptic responses. These results illustrate how spatial anisotropy can be molded by sensorimotor transformations in the CNS. Furthermore, the results indicate that gravity plays a significant, but nonessential role in defining the reference frames for these tasks. These results provide insight into how the nervous system processes spatial information between different sensory modalities.
Introduction
To understand how spatial information is shared between the different sensory modalities, one can examine patterns of error in the performance of a sensorimotor task. For instance, when asked to reproduce the orientation of a visual stimulus, the variability of responses to vertical or horizontal stimuli is lower than for any other orientation (the so-called “oblique effect”) (Appelle, 1972). Responses may also be systematically biased toward or away from the canonical axes, in processes known as “tilt normalization” or “tilt contrast” (Howard, 1982). But what defines “vertical” and “horizontal” for these phenomena? Does orientation anisotropy arise from intrinsic properties of the sensory organs or does it emerge from higher-order processing (Essock, 1980)? In the latter case, might the processes depend on an allocentric, possibly gravitational, reference frame?
Oblique effects for response variability have been observed in visual (Bauer et al., 1979; Essock, 1980; Heeley et al., 1997; Gentaz et al., 2001), motor (Baud-Bovy and Viviani, 2004; Smyrnis et al., 2007), and haptic (Lechelt et al., 1976; Baud-Bovy and Gentaz, 2006) perceptual tasks. The role of gravity in defining the vertical axes for oblique effects has been studied by tilting the head and body with respect to gravity (Buchanan-Smith and Heeley, 1993; Lipshits and McIntyre, 1999; Luyat et al., 2001; Luyat and Gentaz, 2002) and by performing these experiments in weightlessness (Lipshits and McIntyre, 1999, 2007; Lipshits et al., 2005). Tilting the subjects can abolish the preference for either the gravitationally defined or body-defined vertical, suggesting a multimodal interaction. In weightlessness, however, a body-centered visual oblique effect persists, indicating that gravity is not essential to the process.
Accuracy measurements in visual or motor-matching tasks typically reveal biases in responses away from the vertical and horizontal axes (tilt contrast) in both visual (Reese, 1953; Dick and Hochstein, 1989; Yakimoff et al., 1989) and motor (Gordon et al., 1994; Baud-Bovy and Viviani, 2004; Smyrnis et al., 2007) tasks. These effects have variously been ascribed to (1) an exaggeration of deviations away from the meridians (Gibson, 1966; Howard, 1982), (2) attraction of responses toward the diagonal (Yakimoff et al., 1989), or (3) artifacts related to an implicit projection of stimuli in depth (Dick and Hochstein, 1989).
The analysis of directional errors has been used to deduce properties of the neural resources underlying visuomotor coordination, and, conversely, physiological properties of neural systems have been invoked to explain the emergence of perceptual anisotropies such as those described above (Orban and Vandenbussche, 1979; Bonds, 1982; Sokol et al., 1987; Essock et al., 1992; Furmanski and Engel, 2000; Li et al., 2003; Westheimer, 2003; Huang et al., 2006). In the present investigation, we build on our previous results from unimodal comparisons of orientation (haptic and visual) to study the coordination of visual and haptic information. We used similar paradigms to study each modality separately and combined. Furthermore, we used body tilt and the weightless environment of Earth orbit to test for effects of graviceptor cues on the responses. These experiments were designed to tease apart effects related to modality-specific reference frames from those arising through the integration of multisensory information in the CNS.
Materials and Methods
During all experiments, the subject was secured in a seated position in a specially designed chair and looked at a video monitor through a form-fitting facemask (Fig. 1). The video monitor was attached to the chair and centered on the line of gaze at a distance of 60 cm from the eyes. The monitor was viewed through a cylindrical optical tunnel, thus removing any external visual reference. To the right of the subject was attached a two-dimensional force-feedback joystick, the end of which could be displaced in a range of ±16 cm in the fronto-parallel plane. The joystick was positioned in such a manner that when the subject placed the hand on the end of the joystick at the center position, the forearm was oriented horizontally (perpendicular to the fronto-parallel plane) and the angle in the elbow joint was close to 90°.
Experimental apparatus.
In the main task (the visual–haptic task), the joystick was programmed to allow movement only along a narrow slot having a prescribed orientation (Fig. 2). Movements of the hand perpendicular to this line were resisted by the joystick with very high stiffness (1500 N·m⁻¹). Simultaneously, a line oriented at a different angle appeared on the video monitor. This line (33 mm long) was drawn from the center of an 18-cm-diameter circle. Viewed from a distance of 60 cm, the line subtended 3.15° of visual arc. The subject was instructed to move the joystick freely inside the slot with the right hand and, using a knob with the left hand, to adjust the line on the screen to the same orientation as the slot. When the subject perceived that both stimuli (haptic and visual) were at the same orientation, he or she pressed a button to end the trial and initiate the next.
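As a quick check of the stated value (our own arithmetic, not part of the original report), the visual angle θ subtended by a line of length L viewed at distance d is:

θ = 2·arctan[L / (2d)] = 2·arctan[33 mm / (2 × 600 mm)] ≈ 3.15°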
Haptic and visual stimuli. A, Haptic stimulus: the hand moves the joystick back and forth inside an oriented slot. B, Visual stimulus: the subject adjusts the orientation of a line on a video monitor viewed through a circular tunnel.
In two control tasks, subjects compared two visual stimuli or two haptic stimuli presented sequentially. In the visual–visual task, a 33 mm line was presented on the screen at a given orientation, as in the visual–haptic task. The subject was instructed to observe and remember the orientation of this reference line. On the push of a button, the reference stimulus disappeared and a second line was presented at a different orientation. By turning the knob with the left hand, the subject rotated the variable response stimulus to match the remembered orientation of the reference stimulus. The subject had the option to switch back and forth between the reference and response stimuli. When satisfied that the two stimuli were aligned, the subject pressed a second button to record the response and continue to the next trial. Within a given trial, the subject could switch back and forth at will between the reference and response stimuli, but the time to record the final response was limited to 60 s. To erase after-images of either stimulus from the retina, a distractor screen composed of many crossed lines at different orientations was presented for 1 s at each transition in either direction between the reference and the response stimuli.
At the beginning of a haptic–haptic trial, the joystick was programmed to allow movement only within a narrow slot in the fronto-parallel plane having a prescribed orientation (the reference stimulus, as in the visual–haptic task above). The subject was instructed to move the joystick freely inside the slot with the right hand and to remember the direction of this movement. When the subject pressed a button with the left hand, the joystick moved automatically to the center position and then the orientation of the slot was changed (response stimulus). Using a knob with the left hand, the subject could adjust the orientation of the response stimulus to the remembered orientation of the reference stimulus. By repeated pushes of the button, the subject had the option of switching back and forth between the reference and response stimuli. The number of transitions was left up to the subject, but the duration of each trial was limited to 60 s. When the subject was satisfied that the two slots, reference and response, were at the same orientation, they pressed a second button to indicate the end of one trial and to initiate the next.
In both the visual–haptic and the haptic–haptic tasks, subjects were instructed to perceive the orientation of the haptic stimulus by sliding along the constraint imposed by the joystick. Theoretically, one can imagine that the subjects use kinesthetic information and tactile information, as well as information about the motor command, to identify the slant of the virtual surface. We cannot rule out the possibility, however, that subjects based their judgment of haptic orientation on a comparison of the hand position at the two extreme limits of the virtual constraint.
For all three tasks, the reference stimulus was placed at one of seven different directions on each trial (−22.5, 0, 22.5, 45, 67.5, 90, or 112.5°, where 90° is aligned with the subject's head/body axis, and 0° points to the right). The sequence of reference stimuli was quasi-random, and each orientation was presented six times. Subjects were not aware of the limited number of possible reference orientations. The adjustable response stimulus was initially oriented at one of two possible initial positions (−40 or +130°) at the start of each trial (three times each for each orientation of the reference stimulus). For each trial, we recorded the final orientation of the response stimulus and the time required to perform the task.
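As an illustration of this design, the sketch below builds one balanced, quasi-random trial list (7 reference orientations × 6 repetitions, with the two starting positions of the response stimulus counterbalanced). It is a reconstruction for clarity only; the function and variable names are ours and not those of the original experiment software.

```python
import random

REFERENCE_ANGLES = [-22.5, 0.0, 22.5, 45.0, 67.5, 90.0, 112.5]  # deg; 90 = body axis
INITIAL_RESPONSE = [-40.0, +130.0]                               # starting orientation of response stimulus

def build_trial_list(seed=None):
    """Return 42 (reference, initial_response) pairs in quasi-random order:
    each reference orientation appears 6 times, 3 times per starting position."""
    rng = random.Random(seed)
    trials = [(ref, init)
              for ref in REFERENCE_ANGLES
              for init in INITIAL_RESPONSE
              for _ in range(3)]
    rng.shuffle(trials)
    return trials

if __name__ == "__main__":
    for ref, init in build_trial_list(seed=1)[:5]:
        print(f"reference {ref:+6.1f} deg, response starts at {init:+6.1f} deg")
```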
Data analysis.
We analyzed errors in alignment between reference and response as a means of studying the sensorimotor transformations in these tasks. We computed the constant error, the variable error, and average response time as a function of reference stimulus orientation for each subject. Constant error, computed as the mean difference between the orientation of the reference and response stimuli, is a measure of the absolute accuracy of the responses relative to the reference angle. Variable error, measured as the SD about the mean error for multiple trials to the same reference stimulus, is a measure of the consistency of responses for repeated trials to the same reference angle. These two measures provide different information because it is possible to incorrectly match the true orientation of the stimulus in a consistent manner. Conversely, one might, on average, correctly match the orientation of the stimulus but with considerable variation from trial to trial. We then looked for particular directional anisotropies within each of these measures of error. In patterns of constant error, we looked for evidence of expansion or contraction of responses around particular directions. We measured the “local distortion” (McIntyre et al., 2000) to quantify these phenomena by computing the slope of the best-fit line relating constant error to reference angle around specific axes of interest (0°, 45°, 90°). This method can detect local compression or expansion of responses with respect to the range of reference values independent of any global rotational error common to all directions. A more detailed explanation of this method can be found in the Appendix. For the measures of variable error and for response time, we looked for reliable evidence of oblique effects in which one finds better performance (lower variability or shorter response times) for vertical and horizontal stimuli, compared with the other oblique angles.
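The two error measures can be summarized in a few lines of code. The sketch below is illustrative only (the names and data layout are ours, not the authors'): for each reference orientation it computes the mean signed error (constant error, positive = counterclockwise) and the SD of that error over repeated trials (variable error).

```python
import numpy as np

def error_measures(reference_deg, response_deg):
    """Constant and variable error per reference orientation (illustrative sketch).

    reference_deg, response_deg: equal-length sequences of angles in degrees,
    one entry per trial. Returns {reference_angle: (constant_error, variable_error)}.
    """
    reference_deg = np.asarray(reference_deg, dtype=float)
    signed_error = np.asarray(response_deg, dtype=float) - reference_deg  # + = counterclockwise
    measures = {}
    for ref in np.unique(reference_deg):
        err = signed_error[reference_deg == ref]
        measures[ref] = (err.mean(),          # constant error: accuracy
                         err.std(ddof=1))     # variable error: trial-to-trial consistency
    return measures
```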
In all experiments, the subjects gave informed consent before commencing the experiment and were free to terminate their participation at any time. Experiments were performed in accordance with international and local regulations concerning the conduct of scientific research with human subjects.
Experiment 1: upright posture.
In the initial experiment, we searched for anisotropy in the transfer of orientation information between the haptic and visual perceptual systems in a normal upright position on Earth. We measured patterns of errors and response times for the matching of visual to haptic stimulus orientations and compared these results to matching tasks performed between stimuli within each modality alone (visual–visual and haptic–haptic comparisons). We looked at the stability of the error patterns over time by testing the same subjects three times over a 4 week period.
Ten men and one woman (S1–S11) participated in this experiment. Subjects had previously been trained on the task and so had already performed a number of trials on each of the paradigms before the testing sessions reported here. Each experimental session was separated into two parts. In an initial sitting the subject performed the visual–haptic task. After a break of several hours, or on the next day, he or she performed the visual–visual and haptic–haptic tasks. Participants performed three such sessions at approximately 2 week intervals.
A two-factor ANOVA was performed on each of the three dependent variables (constant error, variable error, and response time) with reference stimulus orientation (seven levels) and experimental session (three levels) as independent factors. We further tested for the stability of constant error patterns by performing correlation analyses between pairs of sessions for a given sensory test (visual–haptic, visual–visual, and haptic–haptic), and we looked for commonality of error patterns by performing correlation analysis of haptic–visual versus visual–visual and haptic–visual versus haptic–haptic comparisons. These experiments were designed to test for the so-called oblique effect for measures of variable error and response time, based on both our previous experiments on visual–visual and haptic–haptic comparisons and on other works in the literature (see Introduction). To test this specific a priori hypothesis, we performed planned comparisons: responses to vertical and horizontal reference stimuli were pooled and compared with the pooled set of responses to the oblique reference orientations. We further tested for reliable differences between individual values of reference orientation by performing Newman–Keuls post hoc tests, where appropriate.
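For readers who wish to reproduce the cardinal-versus-oblique contrast, the following sketch shows one simple way to form it as a paired contrast on per-subject means. This is a simplification of the planned comparison used in the study, which was carried out within the ANOVA framework; all names and the data layout are illustrative assumptions.

```python
import numpy as np
from scipy import stats

CARDINAL = (0.0, 90.0)  # body-referenced horizontal and vertical

def cardinal_vs_oblique(variable_error_by_subject):
    """Paired contrast of cardinal vs oblique reference orientations.

    variable_error_by_subject: {subject: {reference_angle_deg: variable_error_deg}}.
    Pools each subject's values into a cardinal mean and an oblique mean and
    returns the paired t statistic and p value for the difference.
    """
    cardinal_means, oblique_means = [], []
    for per_angle in variable_error_by_subject.values():
        cardinal_means.append(np.mean([v for a, v in per_angle.items() if a in CARDINAL]))
        oblique_means.append(np.mean([v for a, v in per_angle.items() if a not in CARDINAL]))
    return stats.ttest_rel(cardinal_means, oblique_means)
```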
Experiment 2: whole-body tilt.
In the second experiment, we asked whether visual and haptic information might be encoded in a gravitational reference frame. We tested this hypothesis by tilting subjects with respect to gravity during the execution of the three experimental tasks.
In this experiment, subjects performed the same three protocols (haptic–visual, visual–visual, and haptic–haptic) using the same equipment (joystick, video monitor, tunnel, and face mask) as in the previous experiment, while seated in a specially constructed chair that allowed the body and head to be rotated in the fronto-parallel plane. The video screen and the joystick were attached to the chair such that the axes of the screen and the axes of the joystick remained at a constant orientation with respect to the subject's body whatever the orientation of the chair. Figure 3 shows the chair used to tilt the subject (A) and the orientation of each stimulus (B) relative to the head and screen (top row) or relative to gravity (bottom rows).
A, Chair used to tilt the body of the subject in experiment 2. The joystick (not shown) was attached to the chair to the right of the subject. B, Stimulus orientations as viewed by the subject with respect to the head/body axis and to gravity for experiments performed with whole-body tilt.
Subjects always took their place in the chair when it was in an upright position and held their face against the mask with the help of an elastic headband. The inclination of the chair was changed manually to the desired angle by the experimenter with the subject seated in place. The rotation was therefore performed at a variable speed but typically took between 5 and 15 s. While the chair was being moved, subjects could see only the video screen through the cylindrical tunnel and thus had no visual information about the angle of inclination. After reaching the new inclination, subjects started the experiment when they were ready, typically within 30 s.
Performance of one set of 42 trials took ∼15 min for the haptic–visual task. After the performance of this task, subjects were asked to indicate two perceived axes by orienting a visual line on the screen. The subject aligned the visual stimulus with the perceived axis of gravity, in the first case, or with the perceived axis of their body, in the second case. Subjects performed this task 10 times for each axis, starting from different initial positions of the indicator line for each trial. In most cases, subjects performed this task immediately after the haptic–visual coordination task, without moving the chair and without removing the head from its position in the facemask and tunnel. Several subjects performed this task in a separate experimental session. In the latter cases, the subjects waited 15 min after the chair was rotated to the desired orientation before starting the trials.
Twelve subjects (nine men and three women) performed the visual–haptic task once for each of three positions of the chair: upright (0°), inclined to the left (+22.5°), and inclined to the right (−22.5°). Eleven of these subjects performed the visual–visual task (one subject was eliminated because of missing data), and six of these subjects also performed the haptic–haptic task. All three chair inclinations were tested within a single experimental session, but the order of the chair positions within the session was different for different subjects. Only one task was performed in a given session. Subjects who performed more than one of the three tasks performed each task on a different day.
A two-factor ANOVA was performed for each task with each of the three measurements (constant error, variable error, and response time) as dependent variables. Chair inclination (three levels) and reference stimulus orientation (seven levels) were varied as independent variables. Correlation analyses were used to test the constancy of patterns of constant error across chair inclinations. For each chair inclination, planned comparisons were performed on variable error and response times to test for a body-referenced oblique effect, as in experiment 1. We also tested for a gravity-centered oblique effect in the left and right tilted position, and we used Newman–Keuls post hoc analysis as appropriate to further examine positive responses in the ANOVA tests that were not predicted a priori. Comparison of the indications of gravitational and body axes were performed with single-factor ANOVAs, with chair inclination (three levels) as the only independent factor.
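To make the gravity-centered version of the planned comparison concrete, the sketch below converts body-referenced stimulus orientations into gravity-referenced ones for a given chair tilt and identifies which reference angles align with the gravitational vertical and horizontal. This is an illustrative reconstruction of the angle bookkeeping (the sign convention and function names are our assumptions), not the original analysis code.

```python
REFERENCE_ANGLES = [-22.5, 0.0, 22.5, 45.0, 67.5, 90.0, 112.5]  # body frame, deg

def to_gravity_frame(body_angle_deg, chair_tilt_deg):
    # A stimulus fixed to the chair rotates with the body, so its orientation
    # relative to gravity is shifted by the tilt angle (positive = leftward tilt).
    return (body_angle_deg + chair_tilt_deg) % 180.0

def gravity_cardinal(chair_tilt_deg, tol=1e-6):
    """Body-frame reference angles aligned with the gravitational vertical/horizontal."""
    def off_cardinal(angle):
        r = to_gravity_frame(angle, chair_tilt_deg) % 90.0
        return min(r, 90.0 - r)
    return [a for a in REFERENCE_ANGLES if off_cardinal(a) < tol]

print(gravity_cardinal(+22.5))  # [-22.5, 67.5]: gravity horizontal and vertical for leftward tilt
print(gravity_cardinal(-22.5))  # [22.5, 112.5]: same for rightward tilt
```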
Experiment 3: haptic–visual coordination in weightlessness.
The results from the chair-tilt experiments suggested that the encoding of sensory information with respect to the gravitational vertical may indeed play a role in the coordination of visual and haptic sensory modalities. To test whether gravity provides an essential reference frame, all three visual and haptic matching tasks were performed in the weightless environment of Earth orbit. This environment provides a unique opportunity to test the specific effect of gravity (or lack thereof) on the sensorimotor task; the subject can maintain a representation of “up” and “down” through haptic cues, body posture, and cognitive information about the surroundings while the direct action of gravity on the limb and on the vestibular system is removed. Experiments were performed as part of the NASA-MIR phase 1B missions NASA4 and NASA5, the Russian mission MIR23, and the Franco-Russian mission PERSEUS to the MIR space station.
Five of the subjects included in experiment 1 (S7–S11) continued this experiment in the weightless environment of Earth orbit and on the ground after spaceflight. These subjects performed the haptic–visual, haptic–haptic, and visual–visual tasks in the upright position both in normal Earth gravity and in the Russian MIR space station. When performing the experiments on orbit, subjects were secured by belts to a chair that was in turn fixed to the floor of the station (Fig. 1). Thus, subjects maintained the same posture in flight as on the ground and were held “upright” with respect to the stable reference provided by the station floor.
All cosmonauts spent ∼6 months onboard the station, and each cosmonaut was tested before, during, and after their space flights. Before flight, all cosmonaut subjects underwent training during which they learned to work with the equipment before the first session of data collection. All were then tested three times before flight (∼60, 45, and 15 d before launch). Subjects had performed the perceptual motor tasks during training and so were already practiced on these tasks before the first ground session reported here. In flight, operational constraints prevented us from testing all cosmonauts on the same flight day of each mission. Subject S7 performed the experiment two times on orbit, on flight days 21 and 58; subjects S8 and S9 performed the experiment one time, on flight day 24; and cosmonauts S10 and S11 performed the experiment eight times each, on flight days 5, 12, 27, 60, 90, 116, 147, and 175. After their return to Earth, subject S7 was tested three times (7, 20, and 40 d after landing), subjects S8 and S9 were tested two times (on postflight days 5 and 10), and subjects S10 and S11 were tested three times (2, 5, and 10 d after landing). In all cases, the experiment was separated into two parts. Subjects performed the visual–haptic task in the initial part. After several hours of rest or on the next day, they performed the visual–visual and haptic–haptic tasks.
Statistical analyses were similar to those performed in the previous two experiments. A two-factor ANOVA was performed for each task with each of the three measurements (constant error, variable error, and response time) as dependent variables. We considered three test sessions before flight and two sessions after flight for each cosmonaut. To compare data across subjects on orbit, we considered a single in-flight session for each cosmonaut closely grouped in time with respect to the start of the mission (flight day 21 for S7, flight day 24 for S8 and S9, and flight day 27 for S10 and S11). Because we were able to test only two subjects later in flight, these data will not be presented here. Test session (six levels) and reference stimulus orientation (seven levels) were varied as independent variables. Because only one of the six sessions included in the ANOVA was performed on orbit, the factor test session provides the effective test for differences attributable to the gravitational environment. To test whether the oblique effects that were observed in the upright position on the ground were maintained in weightlessness, the planned comparison of cardinal versus oblique angles (see experiment 1) was performed on the last preflight session, the in-flight session, and the first postflight session in experiment 3. Only one preflight and one postflight session were analyzed in this way to have the same number of trials as in the in-flight session. In addition, single-factor (reference orientation) ANOVA and accompanying post hoc analysis were performed on the 0 g data alone to test for any additional effects, aside from the a priori test of oblique versus cardinal orientations.
Results
The main results from all three experiments are presented in a common format in Figures 4, 7, and 10, showing from left to right the average constant error, the average variable error, and average response time for each of the seven possible reference orientations. Results are reported separately for the visual–haptic (top row), visual–visual (middle row), and haptic–haptic tasks (bottom row) in each figure. A positive constant error means that the response was oriented erroneously in the counterclockwise direction with respect to the reference stimulus. Results of the statistical analyses are reported in Tables 1–3.
Measurements of constant error (left), variable error (middle), and response time (right) as a function of reference stimulus orientation for visual–haptic (top), visual–visual (middle), and haptic–haptic (bottom). Comparisons for three different experimental sessions are shown.
Note that data with inclined body orientation for the visual–visual task only and data for the visual–visual and haptic–haptic tasks performed in weightlessness were published in preliminary reports (Lipshits and McIntyre, 1999, 2007). We report here for the first time the results of the visual–haptic task, both with inclined body positions and during space flight, and its comparison with the visual–visual and haptic–haptic tasks.
Experiment 1: upright posture
The results of experiment 1 are shown in Figure 4, and corresponding statistical analyses are presented in Table 1. Consider first the results of the main task, that of visual–haptic coordination (Fig. 4, top row). From the plot of constant error (Fig. 4, left), one can see that for all orientations of the reference haptic stimulus there was, on average, a positive (counterclockwise) error in the orientation of the visual response stimulus. In other words, the visual line was always perceived to be oriented in a more clockwise direction compared with the joystick movement. The mean error, averaged across all orientations of the haptic stimulus, was 6.1 ± 5.0°. The error depended on the orientation of this stimulus (p < 0.001) but did not depend significantly on the experimental session, nor was there a significant cross-effect. For haptic reference stimuli at 45 and 67.5°, the mean visual error was >10°; for other orientations, it was several times less. Although this pattern of errors does not indicate an oblique effect (there was no clear preference for vertical and horizontal stimuli), this reproducible pattern of error (i.e., stable across all three experimental sessions) indicates a directional anisotropy in the comparison of visual and haptic stimuli in which all directions are not equal.
Summary of statistical results for experiment 1
Pronounced oblique effects were, however, observed for measurements of variable error (Fig. 4, middle) and response time (Fig. 4, right). The variable error was in the range of 1.8–8.4° and strongly depended on the orientation of the haptic stimulus (p < 0.01), whereas there was no significant main effect of experimental session and no significant cross-effect. Two notable minima occur on this curve: 3.4 ± 2.7° for horizontal stimuli (0°) and 2.3 ± 2.0° for vertical stimuli (90°). The planned comparison of vertical and horizontal stimuli versus oblique angles confirmed the oblique effect for vertical and horizontal reference stimuli (p < 0.001). In fact, the Newman–Keuls post hoc analysis revealed a very clear difference between the cardinal versus oblique angles: variability for the vertical and horizontal reference stimuli was in each case lower than the variability for any of the other reference orientations (p < 0.01 for most comparisons, except for 67.5° vs 0°, where p = 0.025), and there was no significant difference between any pair of oblique angles (p > 0.1).
The average response time was 15.4 ± 3.5 s (Fig. 4, right). There was a main effect of stimulus orientation (p < 0.001), and the relationship also showed two minima, one for vertical (11.8 ± 2.6 s) and the other for horizontal (13.7 ± 3.4 s). The planned comparison of vertical and horizontal versus oblique angles confirmed the existence of a highly significant oblique effect (p < 0.001). A Newman–Keuls post hoc test showed that response times for the vertical stimuli were lower than for any of the oblique angles (p < 0.001). The response times for horizontal stimuli were significantly different from the adjacent obliques (±22.5°) and from 45° (p < 0.05). Response times for the vertical stimuli were significantly lower than for horizontal stimuli (p < 0.05), and there was no significant difference between response times for horizontal stimuli (0°) and obliques near vertical (67.5 and 112.5°). These results indicate that vertical stimuli are even more salient than horizontal stimuli with regard to response time. One should emphasize that during our experiment subjects were not instructed to work quickly; the main task for them was to be as accurate as possible, and they were free to take the time they wished to perform each trial. Despite this, a remarkable oblique effect appeared for both response time and variable error.
Inspection of the results from the three different recording sessions made at 2 week intervals revealed no evidence of learning or adaptation for these participants; the results from each session were nearly the same. This was confirmed in the ANOVAs for which there were no main effects of the experimental session factor and no cross-effects between this factor and the reference stimulus orientation. Furthermore, correlation analyses of constant errors for all three pairings of experimental sessions (session 1 vs session 2, session 2 vs session 3, and session 3 vs session 1) were all significant (p < 0.01).
The lower panels of Figure 4 show the results of the visual–visual (middle row) and haptic–haptic (bottom row) tasks. Although certain similarities appear between the results from these unimodal matching tasks and the cross-modal visual–haptic task, we will show that the patterns of responses observed for visual–haptic comparisons cannot be entirely explained by either perceptual modality acting alone.
First, consider the average constant errors for the visual–visual and haptic–haptic tasks. The range of errors was much smaller across the different stimulus angles for the unimodal comparisons (±1° for visual, ±2° for haptic) compared with errors of up to 15° for the visual–haptic task. A single-factor ANOVA on the range of constant error values (maximum to minimum) showed a main effect of task (F(2,24) = 62.01; p < 0.001). Post hoc analysis showed that the range of errors was greater for the visual–haptic comparison, compared with either the visual–visual or haptic–haptic comparisons (p < 0.001). There was no significant difference in the range of errors between the visual–visual and haptic–haptic tasks (p > 0.3). Furthermore, whereas the errors fluctuated around zero for the visual–visual and haptic–haptic tasks, all errors were positive in the visual–haptic transfer task. The average constant error across subjects for all orientations was 6.1 ± 5.0° in the visual–haptic task versus 0.1 ± 0.4° for the visual–visual task and −0.0 ± 0.7° for the haptic–haptic task. Again, an ANOVA confirmed that average constant error differed as a function of task (F(2,24) = 23.6; p < 0.001); post hoc analyses showed that the haptic–visual task produced more positive errors overall, compared with either the visual–visual or the haptic–haptic task (p < 0.001), with no difference between the unimodal tasks (p > 0.9). Thus, although variations of constant error with respect to the reference stimulus orientation can be seen in all cases, patterns of constant error differed depending on the perceptual modality. An additional global rotation of all responses in the counterclockwise directions appeared only when subjects matched the orientation of stimuli across the two perceptual modalities. The global rotation cannot be attributed to biases in the haptic or visual perceptual systems alone.
We hypothesized that the global rotation of responses in the visual–haptic comparison might be attributable to a change of reference frame stemming from the delocalization of the visual and haptic stimuli. The visual stimulus was presented directly in front of the subject on the midline, whereas the haptic stimulus was located to the right of the seated subject. To test this hypothesis, we conducted a control experiment in which 10 subjects performed the visual–haptic comparison with the joystick located on the midline in the fronto-parallel plane, just below the visual display. Even in this location, the average constant error for each of the seven different reference orientations was positive. The average constant error across all reference orientations when the joystick was in the midline location (4.66 ± 4.72°) was not significantly different from the value obtained with the joystick located to the right of the subject (p > 0.3, unpaired t test).
In the visual–haptic task tested here, one cannot claim a bias toward or away from any of the three dominant axes (horizontal, vertical, diagonal), because constant errors were positive for all angles (the global rotation described above). If, for instance, the diagonal indeed acts as an attractor, one would predict positive constant error at 22.5° and negative constant error at 67.5°. It is, nevertheless, possible that responses are biased toward or away from the subjective horizontal, vertical, or diagonal axis (e.g., the responses for 22.5 and 67.5° could be biased toward or away from the response to a 45° reference stimulus). To test for such effects, we computed a measure of local distortion (McIntyre et al., 2000) that effectively compares the local spacing between responses to the spacing between the reference stimuli. A negative value of local distortion indicates that responses are closer together than the target angles (contraction). Conversely, a positive value indicates that responses are more spread out than the corresponding reference values (expansion). See the Appendix for details on the computation of local distortion used here.
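Because the local distortion measure carries much of the interpretation that follows, a minimal sketch of its computation is given below, following the description above (the slope of the best-fit line relating constant error to reference angle around an axis of interest). Details such as the exact neighborhood and normalization are given in the Appendix; the function and parameter names here are our assumptions.

```python
import numpy as np

def local_distortion(reference_deg, constant_error_deg, center_deg, half_width_deg=22.5):
    """Local distortion around a central axis (0, 45, or 90 deg in this study).

    Fits a line to constant error versus reference angle for the reference
    orientations within +/- half_width_deg of the axis. A negative slope means
    responses are compressed toward the axis (contraction); a positive slope
    means they are spread apart (expansion). A global rotation common to all
    directions shifts the intercept but leaves the slope unchanged.
    """
    reference_deg = np.asarray(reference_deg, dtype=float)
    constant_error_deg = np.asarray(constant_error_deg, dtype=float)
    near = np.abs(reference_deg - center_deg) <= half_width_deg
    slope, _intercept = np.polyfit(reference_deg[near], constant_error_deg[near], 1)
    return slope
```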
Figure 5 shows the measures of local distortion computed for horizontal, vertical, and diagonal reference orientations. For the visual–visual matching task, the results indicate a local spreading of response (tilt contrast) around horizontal and vertical and local contraction (tilt normalization) around the diagonal, as has been reported in a number of other studies (Reese, 1953; Dick and Hochstein, 1989; Yakimoff et al., 1989; Baud-Bovy and Viviani, 2004). A 3 × 3 ANOVA (three sessions × three central axes) showed a significant difference (F(2,18) = 4.26; p < 0.05) in local distortions across the three tested axes and no main effect of session or cross-effect. Post hoc analysis indicated that the local distortion for 45° was significantly different from that at 0 and 90°. A similar trend appears for local distortion measurements for the haptic–haptic task, but the ANOVA did not show a significant effect for this task. Remarkably, however, the measures of local distortion for the visual–haptic task show a pattern opposite to that seen in the visual–visual comparisons: negative values at 0 and 90° and positive values at 45° indicate that the responses were closer together around the vertical and horizontal and more spread out around the diagonal. Local distortions were significantly different between the three main axes for this test (F(2,18) = 7.07; p < 0.01), and again the local distortion at 45° differed from that measured at 0 and 90°. A three-factor ANOVA comparison of results from the visual–haptic and visual–visual tasks (two tasks × three sessions × three axes) showed a very strong interaction between the factors task and axis (F(4,36) = 8.71; p < 0.001), indicating a clear difference in the patterns of local distortion for the two tasks. A comparison of the absolute values of local distortion revealed that these effects are much stronger in the bimodal visual–haptic comparison than in the unimodal visual–visual task.
Distortion of responses around the vertical, horizontal, and diagonal axes. Distortion is a dimensionless quantity relating the spread of responses compared with the spread of reference stimuli. Negative values mean that adjacent responses are attracted toward the response for the center axis, whereas positive distortion indicates repulsion away from the central value.
Some care must be taken when interpreting the results of the local distortion calculation. For instance, the pattern of constant errors in the visual–haptic task is not consistent with a symmetric attraction of responses around 0 or 90°; one would have expected lower values of constant error at 22.5 and 112.5°. A more symmetrical response will be seen in experiment 2. The maximum constant error at 45° suggests that responses to this reference angle are attracted toward the vertical as well. In fact, we did not test enough angles to ascertain whether 45° is the pertinent dividing line. Furthermore, our tests were limited to stimulus orientations within one quadrant of the circle, which might have added range or edge effects to the overall phenomena. Nevertheless, Figure 5 shows a clear contrast between the visual–visual and haptic–visual tasks: expansion around horizontal and vertical for visual–visual and contraction for visual–haptic.
A pronounced oblique effect is apparent in the pattern of variable errors for the visual–haptic task because response variability was lower for vertical and horizontal stimuli than for any of the oblique angles (Fig. 4, top row, middle). Such an effect can also be seen in the visual–visual task (Fig. 4, middle row). There was a significant effect of reference orientation (p < 0.001), with no main effect of session and no cross-effect. Post hoc analysis showed that variability was lower for vertical and horizontal stimuli compared with each of the other oblique angles (p < 0.01). Although the visual–visual oblique effect appears small in Figure 4 because it is plotted on the same scale as the visual–haptic data, the magnitude of the effect is similar to what we have observed in our previous studies on visual–visual orientation comparisons (Lipshits and McIntyre, 1999; McIntyre et al., 2001; Lipshits et al., 2005). Thus, although the visual–visual and visual–haptic tasks seem to share a common oblique effect, the effect is much stronger in the visual–haptic condition. For instance, the difference measured in degrees between the variable errors for vertical stimuli versus 45° was much greater for the visual–haptic task (3.9°) than for the visual–visual comparison (0.7°). When expressed as a ratio, however, the magnitude of the oblique effect was similar between the two tasks (2.71 vs 1.95). In a previous experiment (Lipshits et al., 2005), the increase in variability for oblique angles also proved to be more consistent when expressed in relative rather than absolute terms, suggesting that the oblique effect results from a multiplicative, rather than additive, process.
Note that no oblique effect for variable errors can be observed for the haptic–haptic comparison (no main effect of reference orientation or session and no cross-effect). In contrast, all three tasks demonstrated an oblique effect in terms of response times. In all three cases, there was a significant effect of reference angle on response time (p < 0.001), with no significant effect of session and no cross-effects. Planned comparisons of vertical and horizontal versus oblique angles were significant for all three tasks (p < 0.001), and Newman–Keuls post hoc tests showed that response times for vertical and horizontal stimuli were each significantly different from two or more of the five other oblique angles (p < 0.02). Note that for the haptic–haptic task, not all oblique angles produced the same response time; there were significant differences (p < 0.05) between two of the central orientations (45.0 and 67.5°) compared with each of the extreme orientations (−22.5 and 112.5°). Because the presentation of stimuli was sequential in two cases (visual–visual and haptic–haptic) and simultaneous in the other (visual–haptic), and because visual inspection intrinsically takes less time than haptic exploration, it does not make sense to compare the three tasks in terms of the overall time it takes to perform each comparison. However, all three tasks show systematic variations in the relative time it takes to perform the task as a function of the reference stimulus orientation. In all three tasks, subjects responded more quickly for vertically and horizontally oriented stimuli than for any of the other oblique angles.
Our primary interest in this experiment was to study how orientation information is coordinated between visual and haptic modalities. To this end, we allowed subjects to explore the haptic reference stimulus while simultaneously observing and adjusting the visual response stimulus. For this task, we thus avoided any influence that memory might have on patterns of responses. It was not possible, however, to allow simultaneous presentation of the reference and response stimuli for the visual–visual or haptic–haptic tasks. To mitigate the effects of the short memory delay in these two unimodal tasks, we allowed subjects to switch back and forth between reference and response as many times as they liked when comparing the two stimuli. Nevertheless, memory delay could conceivably still play a role. To test for such effects, we conducted a control experiment in which four subjects passed sequentially from exploration of the haptic stimulus to observation and adjustment of the response and back again. Figure 6 shows the results of this experiment in terms of constant and variable errors. Patterns of error for these two measures were similar in the sequential task to those observed for the simultaneous comparison of haptic and visual stimuli. In particular, constant errors were all positive and varied according to reference orientation angle. The oblique effect for variable error was also similar for sequential and simultaneous comparisons. Thus, comparing results from the simultaneous presentation in the visual–haptic task with results from the sequential presentations in the visual–visual and haptic–haptic tasks does not appear to be problematic, at least for the principal phenomena discussed here. Note that Lechelt and Verenka (1980) found similar patterns of error for simultaneous versus sequential presentation of stimuli in visual–visual, haptic–haptic, and visual–haptic orientation tasks. Whereas errors increased for sequential versus simultaneous presentations in their visual–visual task, as might be expected, reproduction errors were actually lower for sequential presentations in the haptic–haptic and visual–haptic comparisons. Nevertheless, the similarity of the oblique effect in all cases indicates that the two presentation methods are comparable. Unfortunately, these authors reported only the unsigned absolute error in their study, making a direct comparison with our results difficult.
Constant error and variable error for sequential presentation of the haptic reference and visual response stimulus. Results are similar to those observed for the simultaneous presentation of the bimodal stimuli.
Experiment 2: whole-body tilt
In experiment 2, we looked for evidence of a gravitational reference frame by asking subjects to perform the three matching tasks in a tilted chair. Results are presented in Figure 7 for each position of the chair (upright, tilted to the left, tilted to the right) and for each of the seven possible orientations of the haptic reference stimulus. Statistical analyses are reported in Table 2.
Measurements of constant error (left), variable error (middle), and response time (right) as a function of reference stimulus orientation for visual–haptic (top), visual–visual (middle), and haptic–haptic (bottom) comparisons for three values of whole-body tilt.
Summary of statistical results for experiment 2 (whole-body tilt)
As for the subjects in the first experiment, all constant errors in the visual–haptic task (Fig. 7, top row, left) had a directional character: constant errors varied systematically for different stimulus orientations, and the orientation of the visual response line was always positioned in a more counterclockwise (positive) orientation compared with the haptic reference stimulus. The most surprising observation from this experiment was the asymmetrical effect of leftward versus rightward whole-body tilt on measurements of constant error. The average constant error for all orientations across all subjects was 5.6 ± 1.9° for the upright position, 5.5 ± 2.3° for rightward inclination, and 11.4 ± 1.3° for leftward inclination. Whereas subjects produced constant errors that were practically the same, on average, when upright or tilted to the right, leftward inclination induced a global rotational error that was approximately two times greater. This extra rotation of responses for leftward tilt was confirmed in the ANOVA by a main effect of chair inclination, followed by a Scheffé post hoc test that showed that constant error for leftward chair tilt differed significantly from both rightward tilt (p < 0.001) and upright (p < 0.001) positions; there was no significant difference between the latter two conditions (p > 0.99).
It is interesting to note the almost parallel shift in the plots of constant error versus stimulus orientation. Although responses for chair inclination to the left were shifted 6° counterclockwise with respect to the other two body postures, the way that constant error varied around the mean value as a function of stimulus orientation was the same for all chair inclinations. The consistency of this pattern across chair inclinations was confirmed by the lack of a cross-effect between the chair inclination and stimulus orientation factors within the ANOVA (p > 0.05). Furthermore, there is a highly significant correlation between responses from different chair tilts (p < 0.001). The invariance of this pattern across different chair inclinations and the stability across separate experimental sessions (see experiment 1) indicates that this is a stable characteristic of responses on this task. This fact gives increased confidence that the additional bias of constant error observed for leftward inclination of the subject was indeed a veritable effect of the whole-body tilt and thus reflects a specific property of the perceptual system.
Constant error depended on the orientation of the reference stimulus for both unimodal tasks in this experiment, with changes of sign depending on the reference stimulus. Inclining the subject had no effect on overall average constant error for either the visual–visual or the haptic–haptic tasks (no main effect of chair tilt). There was a statistically significant cross-effect between reference orientation and chair tilt for the visual–visual task (p < 0.05), because of the greater similarity between the upright and rightward chair inclinations than between either upright and left tilt or left and right tilt. No such cross-effect was seen for the haptic–haptic comparison.
In this experiment, we observed more clear-cut examples of responses being attracted or repelled by dominant orientations. The patterns of constant error around 0 and 90° were nearly linear, indicating a symmetric attraction for responses around these orientations. The measures of local distortion at reference orientations of 0, 45, and 90° are shown in Figure 8. ANOVA of each task showed significant differences in distortion values across the three reference orientations (visual–haptic: F(2,22) = 9.82, p < 0.001; visual–visual: F(2,20) = 20, p < 0.05; haptic–haptic: F(2,12) = 8.17, p < 0.01) but no differences between the three possible chair tilts [no main effect (p > 0.1) of tilt and no cross-effects for any of the three ANOVAs]. Post hoc analyses for each task showed that distortion values for the 45° axes were significantly different from the values at 0 and 90° (p < 0.01 for the visual–haptic task, p < 0.05 for the others). The patterns of distortion were not the same for each of the three tasks, however. Both the visual–haptic and haptic–haptic tasks showed deviations toward the vertical and horizontal axes and away from the diagonal. This is in contrast to most studies of motor or visuomotor pointing tasks (Baud-Bovy and Viviani, 2004; Baud-Bovy and Gentaz, 2006; Smyrnis et al., 2007), but consistent with what has been found in two studies of haptic orientation reproduction performed in the frontoparallel plane (Gentaz and Hatwell, 1996; Baud-Bovy and Gentaz, 2006). The visual–visual task produced the opposite effect (i.e., contraction at 45° and expansion at 0 and 90°, consistent with other reports in the literature) (Reese, 1953; Dick and Hochstein, 1989; Yakimoff et al., 1989; Baud-Bovy and Viviani, 2004). These observations are confirmed by correlation analyses showing that responses in the visual–haptic task are positively correlated with responses in the haptic–haptic task (r = 0.52; p < 0.02) and negatively correlated with responses in the visual–visual comparison (r = −0.52; p < 0.02).
Distortion of responses around the vertical, horizontal, and diagonal axes for experiments in the tilting chair.
In the visual–haptic task, we tested for an effect of the initial position of the visual stimulus at the start of each trial on response biases. When the reference haptic stimulus was oriented at +22.5°, there was a significant difference (p < 0.001) in the visual response orientation depending on the starting position; trials that started from −40° produced errors that were more negative than trials starting from +130°. All other reference angles showed the same constant error independent of the initial position of the visual line. This means that responses for 22.5° reference orientations were biased toward the horizontal when approached from a horizontal direction and biased toward the vertical when approached from the vertical side. The fact that responses for a 45° reference were always biased toward the vertical, and more so than for any other angle, suggests that the dividing point between deviations toward the horizontal and deviations toward the vertical lies somewhere between 0 and 45°, rather than being midway between the two attractors.
One might surmise from the similarity of patterns between the visual–haptic and haptic–haptic tasks that the effects in the former are attributable to the haptic component of the visual–haptic comparison. It should be noted, however, that in experiment 1, a clear pattern of contraction around the vertical and horizontal appeared in the visual–haptic task, whereas no such effect was detected for the haptic–haptic comparison (see also experiment 3). Furthermore, the magnitudes of the distortion were much stronger in the visual–haptic comparison, indicating that this is an effect brought on by the need to compare stimulus orientations across sensory modalities. The contrast between effects in the visual–visual and the visual–haptic comparison may reflect an intrinsic difference in the nature of these two tasks. Whereas bias away from the vertical are typically seen in orientation matching tasks (Reese, 1953; Dick and Hochstein, 1989; Yakimoff et al., 1989; Baud-Bovy and Viviani, 2004), the opposite effect can be observed in tasks of a more cognitive nature, such as the verbal estimation of stimulus orientation (Dick and Hochstein, 1989; Garrod et al., 2002). In any case, the patterns reported here were independent of chair tilt, indicating that these anisotropies are primarily egocentric in nature (i.e., relative to the body axis).
Figure 7 also presents the average variable error (top row, middle) and average response time (top row, right) for the haptic–visual task for each of the seven possible haptic stimulus orientations and for each position of the chair. In the upright position, variable error was in the range of 2.9–7.8° (mean value, 5.7 ± 0.7°) and depended significantly on the orientation of the reference haptic stimulus (p < 0.001). As was the case for experiment 1, there were two minima, 4.5 ± 3.5° for horizontal and 2.9 ± 2.3° for vertical, demonstrating a classic oblique effect as evidenced by the planned comparison (p < 0.001). Note that the minimum for horizontal stimuli is less pronounced than that for vertical stimuli. A similar effect can be seen in the data from the first session in experiment 1. Thus, the saliency of horizontal stimuli may be reinforced with practice on the task. For inclined subjects, the mean value of variable errors was nearly the same for the leftward and rightward chair inclinations (6.7 ± 0.6 and 6.6 ± 0.7°, respectively). These values were slightly larger than for the upright position, reflecting an increase in variable error for vertical and horizontal stimuli. Note that there was a significant cross-effect between the reference orientation and chair inclination (p < 0.01) (i.e., the dependency of variable error on stimulus orientation changes with whole-body tilt). Indeed, there is no significant difference between cardinal versus oblique reference orientations for tilted subjects, as measured by the planned comparison (Table 2). To test whether the oblique effect could, in fact, be linked to the gravitationally defined vertical and horizontal, we computed separate planned comparisons for the left-tilted and right-tilted body orientations: we compared +67.5 and −22.5° to the other angles when tilted to the left and +112.5 and +22.5° versus the other angles when tilted to the right. In neither case was there a statistically significant difference between the variable errors for the canonical gravitational axes and the other oblique angles. To further examine the patterns of error in the inclined position, we performed ANOVAs separately on the left- and right-tilt data. In both cases, although there was a significant effect of reference angle (p < 0.05), Newman–Keuls post hoc tests showed that there was not a preference for vertical or horizontal axes. Instead, the positive main effect of reference orientation could be attributed to increased variability at 22.5° for leftward tilt and at 45° for rightward tilt. There were no statistically significant differences in variable error between any of the other reference orientations. In summary, the clear oblique effect that existed for the upright position of the subjects disappeared when we inclined subjects to the left or to the right.
The variable error oblique effect was not observed for any chair orientation in the haptic–haptic task (Fig. 7, bottom row). Although the minimal variable error for the upright condition was, in fact, observed for the 0° (horizontal) stimulus orientation, the concomitant effect on stimuli at 90° was not observed. There was no significant effect of the planned comparison of horizontal and vertical versus oblique and no main effect of reference stimulus orientation. In this experiment, haptic perception of orientation appears to be isotropic with respect to stimulus orientation and cannot explain the oblique effect observed for the haptic–visual task.
There is a striking similarity in the behavior of the variable error oblique effect between the haptic–visual (Fig. 7, top row) and the visual–visual (Fig. 7, middle row) tasks in the face of whole-body tilt. In the visual–visual comparison, as in the visual–haptic task, tilting the subject with respect to gravity was sufficient to suppress the oblique effect that was evident in the upright position. This was confirmed with the same statistical analyses that were performed for the visual–haptic data above: no significant differences in a planned comparison expressed in a gravitational reference frame and no significant effect of reference angle in ANOVAs performed separately on data from leftward or rightward body tilt.
The disappearance of the oblique effect with body tilt is consistent with the findings of Buchanan-Smith and Heeley (1993), in which the just-noticeable difference for stimulus orientation in a forced-choice task also showed a distinct oblique effect in the upright posture that was abolished when the head was tilted (see footnote a). The results of the visual–visual task contrast, however, with those reported by Orban et al. (1984), who found that orientation acuity showed an oblique effect that followed the retinal axis in the case of head tilt. The differences between studies may be explained by the length of the visual stimuli; the retinally anchored oblique effect reported by Orban et al. (1984) was present for long stimulus lines (15° of visual arc), but not for short lines (0.5°). The relatively short visual stimuli used here (3.15°) and those used by Buchanan-Smith and Heeley (1993) (4°) may have decreased the effects of retinally defined orientation anisotropy.
The tendency for the eyes to rotate contrary to the direction of head or body tilt (ocular counter-rolling) could conceivably explain why one does not observe an oblique effect aligned with the body in the tilted position. In a previous study, however, we tested subjects with reference stimuli spaced every 2.5° (McIntyre et al., 2001) and did not find an oblique effect at any other reference angle. More recently, it has been proposed that a subject's "perceived vertical" could serve as the dominant orientation (Luyat and Gentaz, 2002), but it is not clear why we would not have found the same effect in our previous study. Regardless, these studies argue for a multimodal process underlying the visual oblique effects. The disappearance of the oblique effect with whole-body tilt in both the visual–visual and the visual–haptic tasks described here suggests a common process involving a multimodal reference frame linked to the perception of both the body axis and the gravitational vertical.
Mean response times for the visual–haptic task (Fig. 7, top row, right) were similar for all positions of the chair: 15.5 ± 1.7 s for the upright position, 14.7 ± 0.8 s for leftward inclination, and 15.4 ± 0.7 s for rightward inclination. As was the case for variable error, we observed an oblique effect in the upright position: response times for vertical (13.4 ± 2.9 s) and horizontal (13.9 ± 3.8 s) orientations of the haptic stimulus were shorter than for the oblique angles (planned comparison, p < 0.05). The minimum for vertical stimuli was slightly lower than for horizontal stimuli, but this difference was not statistically significant. In both inclined positions (to the left and to the right), the oblique effect disappeared [i.e., the planned comparison in these conditions was not significant (p > 0.2), and no statistically significant differences appeared in the post hoc analyses]. Again, the results for the haptic–visual task more closely paralleled those of the visual–visual task (Fig. 7, middle row), in which a clear oblique effect for response time was observed in the upright (p < 0.01) but not the inclined positions. In contrast, a clear oblique effect on response time was not found in the haptic–haptic task (Fig. 7, bottom row) for any inclination of the subject. There was a significant effect in the planned comparison for the left chair inclination (p < 0.05), but the effect was not significant in the upright posture. There was a nonsignificant tendency, however, for response times to be lower for 90° stimuli than for the other angles in all chair positions.
The haptic–visual oblique effect closely followed the characteristics of the visual–visual oblique effect for variable error, in that both effects disappeared when the subject was inclined with respect to gravity. Lechelt and Verenka (1980) also found similar oblique effects in haptic–haptic, visual–visual, and haptic–visual comparisons. Note that although Gentaz et al. (2001) found that both haptic–haptic and visual–visual comparisons exhibited oblique effects, they found no correlation in errors between the two tasks; they did not, however, test cross-modal comparisons in their study. The patterns of constant error reported here, as measured by local distortion, were in contrast more similar between the haptic–haptic and the visual–haptic tasks. It is conceivable, therefore, that some patterns of error in the visual–haptic comparison are manifestations, albeit amplified, of effects inherent to each of the component sensory modalities.
As noted above, however, there is an additional directional anisotropy of constant errors in the haptic–visual task that is not evident in either the visual–visual or the haptic–haptic task. First, all constant errors were positive for the haptic–visual task, whereas constant error fluctuated around zero in the visual–visual and haptic–haptic tasks. Furthermore, when subjects were tilted 22.5° to the left, there was an additional +6° bias in the responses overall compared with the upright and right-tilted conditions. This asymmetrical effect of chair tilt is particularly interesting when compared with the subjects' perceived orientation of the gravitational and body axes (Fig. 9). Subjects correctly perceived the gravitational vertical for all chair inclinations. Chair inclination had, however, a significant effect on indications of the body axis (F(2,18) = 14.95; p < 0.001): whereas inclination to the right produced no significant error in the subjective perception of the body axis, the error in perception of the body axis with the chair inclined to the left (8.8° on average) was significant (p < 0.01). To the best of our knowledge, no data on a subject's ability to align a visual stimulus with his or her body axis have been reported in the literature, although studies have reported asymmetrical biases in tests of the subjective visual vertical (Bergenius et al., 1996; Tribukait et al., 1996). It is worth emphasizing, therefore, the consistency of this observation across subjects in our experiment: all 11 subjects had positive errors, on average, when indicating their body axis during leftward tilt, whereas positive and negative errors were equally likely for the upright and rightward-inclined positions.
Error in perceived body axis and vertical axis for whole-body tilts. The error in the indication of body axis for left body tilt is significantly different from the other values.
Experiment 3: weightlessness
To test for a critical role of gravity in defining the reference frames for orientation perception, astronaut subjects performed the three perception-matching tasks on the ground and during space flight. Figure 10 shows the main results, and the corresponding statistical results are reported in Table 3. Descriptive averages in the following were calculated by combining sessions into three groups: preflight (three sessions), in-flight (one session), and postflight (two sessions).
Measurements of constant error (left), variable error (middle), and response time (right) as a function of reference stimulus orientation for visual–haptic (top), visual–visual (middle), and haptic–haptic (bottom) comparisons for cosmonaut participants measured for different sessions on the ground (open symbols) and in weightlessness (filled circles).
Summary of statistical results for experiment 3 (weightlessness)
The top row of Figure 10 shows the average constant error, the average variable error, and the average response time (from left to right) for the haptic–visual task performed by this group of subjects, as a function of mission phase (three preflight, one in-flight, and two postflight sessions). From the plot of constant error, it is possible to see that, as was the case for the two previous groups of subjects, the cosmonaut subjects produced positive errors for all orientations of the haptic reference. The mean constant error across all orientations of the haptic stimulus was approximately the same for all conditions (preflight, 6.7 ± 3.6°; in-flight, 7.1 ± 4.0°; postflight, 6.8 ± 3.9°), and there was no significant main effect across sessions. The magnitude of the error depended on the orientation of the reference stimulus (p < 0.001) in a manner similar to what was seen for subjects in the first two experiments. The pattern was the same across all gravity conditions (no cross-effect between the factors mission phase and stimulus orientation in the ANOVA). Response patterns were significantly correlated between all ground sessions and between ground and flight data (p < 0.01). The stable pattern of errors indicates a directional anisotropy in the comparison of haptic and visual stimulus orientations that persists even in the absence of gravity.
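The between-session correlations reported here compare the profile of constant error across reference orientations from one session with that from another. As a rough illustration only, the sketch below computes such a profile correlation with a Pearson test; the function name, the seven-orientation profiles, and the fabricated numbers are our own assumptions, not the study's data or code.

```python
import numpy as np
from scipy import stats

def profile_correlation(profile_a, profile_b):
    """Pearson correlation between two constant-error profiles, each a vector
    of mean constant error (deg) at the same set of reference orientations."""
    return stats.pearsonr(profile_a, profile_b)

# Fabricated constant-error profiles over seven reference orientations (deg).
preflight = np.array([6.0, 8.5, 7.0, 5.5, 4.0, 7.5, 9.0])
inflight = preflight + np.random.default_rng(1).normal(0.0, 1.0, size=7)
r, p = profile_correlation(preflight, inflight)
print(f"r = {r:.2f}, p = {p:.3f}")
```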
Patterns of variable errors were also very similar between preflight, in-flight, and postflight sessions. The lack of gravity had no effect on the average variable error across all stimulus orientations: in-flight, 5.4 ± 2.2° versus preflight, 5.3 ± 1.8°, and postflight, 5.5 ± 2.8°. The effect of reference angle on variable error that was obvious in preflight recordings persisted both in-flight and postflight (reference orientation main effect, p < 0.01; no significant cross-effect). Planned comparisons applied to the last preflight session, the single in-flight session, and the first postflight session were all significant (p < 0.05), confirming that the oblique effect was present both on the ground and in 0 g. Post hoc tests showed no other significant differences between reference orientations. For this group of subjects, there was no apparent difference between vertical and horizontal stimuli in any phase of the mission.
The analysis of response time showed that for preflight sessions, cosmonauts responded in approximately the same amount of time, on average (15.1 ± 5.4 s), as subjects in the previous two experiments. In weightlessness, the mean response time was somewhat lower at 10.9 ± 4.3 s. Postflight, the mean response time (13.7 ± 4.8 s) was a little greater than in-flight but lower than preflight. There was a strong effect of stimulus orientation on response time (p < 0.001). In weightlessness and on the ground postflight, the oblique effect was present as evidenced by the planned comparisons (p < 0.05) and was more pronounced for vertical stimuli (8.8 ± 2.5 s) than for horizontal stimuli (10.4 ± 2.6 s). Preflight response times were also shorter for vertically than for horizontally oriented stimuli (11.9 ± 3.6 vs 13.3 ± 3.9 s), as was the case for postflight tests (10.4 ± 3.7 and 12.9 ± 4.4 s, respectively).
Figure 10 also shows the results of the visual–visual (middle row) and haptic–haptic (bottom row) tasks, performed both in normal gravity and in weightlessness. Essentially no differences can be observed between the different gravitational conditions for constant or variable errors. As reported previously (Lipshits and McIntyre, 1999; Lipshits et al., 2005), the oblique effect for variable error in the visual–visual task persisted in the absence of gravity, as measured by the planned comparison (p < 0.05), although the effect for 90° stimuli is less obvious for the data reported here (Fig. 10). The oblique effects observed for response times in the visual–visual task in the upright position on the ground (experiments 1 and 2) were not statistically significant in weightlessness, but we cannot rule out that this is caused by the smaller number of subjects or by greater time pressure during activities performed on orbit. Thus, although the disappearance of the oblique effect in the chair-tilt experiments suggests that gravity plays a role in defining the perceived orientation of a visual stimulus, gravity is not essential to the encoding process. As observed in experiments 1 and 2 on the ground, there was no oblique effect evident for the haptic–haptic task, and thus no particular reference frame could be identified for this task, either on the ground or in weightlessness.
Figure 11 summarizes the results of the measures of local distortion for experiment 3, showing the measured value for one preflight, one in-flight, and one postflight session for each task. As has been seen in the two previous experiments, the visual–haptic task showed a contraction of response for reference stimuli at 0 and 90° and an expansion at 45°. An ANOVA shows a significant effect of axis (F(2,8) = 9.71; p < 0.01) but no main effect or cross-effect of the test session. The axis factor also showed a significant effect on local distortion for the visual–visual task (F(2,8) = 4.62; p < 0.05) with no effect of test session and no cross-effect. No significant effect of axis or test session could be detected for the haptic–haptic comparison.
Distortion of responses around the vertical, horizontal, and diagonal axes for experiments performed on the ground and in weightlessness. Filled symbols indicate tests performed on orbit. Open symbols represent data taken on the ground before and after the spaceflight.
Overall, no effect of weightlessness can be observed on any of the orientation matching tasks described here. Subjects produced similar patterns of constant error, variable error, and response times both on the ground and in 0 g. This is in contrast to what might have been expected from the results of experiment 2, in which tilting the subject to the left caused an additional bias in constant error and caused the oblique effect for variable error and response time to disappear. Whereas gravity appears to play a role in the encoding of visual and haptic orientations (experiment 2), it seems that the presence of gravity is not essential (experiment 3).
Summary
When performing tasks in which subjects are required to match the orientations of two stimuli, patterns of errors and response times varied as a function of the stimulus angle, whether both stimuli were perceived haptically, both were perceived visually, or the orientation of a visual stimulus was to be aligned with a haptic reference. In most cases, the patterns of responses were highly reproducible, reflecting intrinsic properties of the human perceptual system. The most pertinent observations about these response patterns were as follows:
- Constant errors varied as a function of stimulus orientation for all three tasks. Variations of constant error around the mean were stable across repeated sessions and invariant with respect to whole-body tilt or the presence or absence of gravity.
- In the visual–haptic task, responses tended to be grouped closer together for stimuli in the neighborhood of the vertical and horizontal axes, whereas they were further apart around the diagonal. A similar pattern was observed for the haptic–haptic task, but only in experiment 2. The visual–visual task showed the opposite effect, with an apparent attraction of responses toward the diagonals and repulsion from vertical and horizontal.
- Constant errors for the haptic–visual task were strictly positive for all stimulus orientations, whereas constant errors were close to zero, on average, for the visual–visual and haptic–haptic tasks.
- Also for the haptic–visual task, an asymmetrical effect of whole-body tilt on constant error was observed in which leftward whole-body tilt induced an additional counterclockwise bias of 6° in all responses compared with rightward tilt and the upright position. No such asymmetry was observed in the visual–visual or haptic–haptic tasks.
- The asymmetry in constant error for the haptic–visual matching task was paralleled by an erroneous alignment of a visual line with the perceived body axis. Subjects correctly aligned the visual stimulus with the body axis when upright and when tilted to the right, but produced an 8° counterclockwise error when tilted to the left. Subjects correctly aligned the visual stimulus with the gravitational vertical in all three tilt positions.
- Haptic–visual and visual–visual comparisons produced oblique effects in terms of lower variable error and shorter response times for vertical and horizontal stimuli. These oblique effects were observed for the upright position and in weightlessness but disappeared when subjects were tilted to the right or to the left.
- The magnitude of the oblique effect for variable error was much larger in the haptic–visual task than in the visual–visual comparison. Similarly, the magnitude of local distortions was much higher in the visual–haptic task than in either of the two unimodal tasks.
Discussion
We performed these experiments to better understand how haptic and visual information are encoded and compared. The fact that both visual–visual and visual–haptic comparisons produced oblique effects with similar characteristics (disappearance with whole-body tilt, persistence in 0 g) suggests that these are one and the same effect, stemming from the common visual modality in both tasks. Patterns of local distortion in the transformation from reference stimulus angle to the response were, in some cases, similar between the haptic–haptic and the haptic–visual comparisons but opposite to those observed for the visual–visual comparisons. Nevertheless, both the local distortion and the oblique effects were magnified in the haptic–visual comparison, compared with the visual–visual and haptic–haptic tasks, indicating an effect of central processing in the bimodal comparison. What cannot be explained by either visual or haptic modalities alone is the overall counterclockwise rotation of visual responses compared with haptic stimuli. Such a global bias of responses was not seen for either visual–visual or haptic–haptic comparisons and thus reflects a property of the intermodal transfer.
A gravitational reference?
A motivating hypothesis of this study was that the direction of gravity could serve as a common reference to align visual and haptic perception (Howard, 1982). In the strictest sense, this would imply that both visual and haptic stimuli are encoded with respect to the gravitational vertical. Clearly, this was not the case for any of the three tasks performed here. In the cases in which an oblique effect was observed, minimal variable errors and response times were observed at 90° (aligned with the body axis) and 0° (perpendicular to the body axis). If orientations were encoded strictly in a gravitational reference frame, the same directional anisotropy would have appeared at +22.5 and +112.5° for −22.5° (rightward) tilt and at −22.5 and +67.5° for +22.5° (leftward) tilt. Instead, body tilt abolished the oblique effects, as has been observed for other visual biases (Nicholls et al., 2006). Similarly, patterns of constant error should have been shifted ±22.5° when the body was tilted left or right. No such shift was apparent for any of the three tasks. Finally, if gravity alone defined the salient axes, the oblique effects should have disappeared when gravity was removed.
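The arithmetic behind this prediction can be written compactly. Taking leftward tilt as +22.5° (the sign convention used for the planned comparisons in experiment 2), a gravitationally defined cardinal axis at angle θ_g would appear in the body/stimulus frame at

```latex
\theta_{\text{stim}} = \theta_{g} - \theta_{\text{tilt}}, \qquad \theta_{g} \in \{0^{\circ}, 90^{\circ}\},
```

which yields +22.5 and +112.5° for a rightward tilt of −22.5°, and −22.5 and +67.5° for a leftward tilt of +22.5°.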
The data are consistent with a hypothesized "multimodal" reference frame for the encoding of orientations (Luyat et al., 2005) and in good agreement with theories of the "body scheme" (Gurfinkel' and Levik, 1979). The spatial orientation of specific sensory stimuli is interpreted in the context of an internal representation of the body's configuration, the latter being derived from the ensemble of available sensory cues. In the absence of gravity, the CNS readily reorganizes the coordination of the haptic and visual systems based on the internal representation of the body segments that are interposed between the hand and the eyes.
The asymmetric effects of whole-body tilt on the haptic–visual task indicate, nevertheless, that gravity is an important factor for aligning visual and haptic information. The fact that subjects correctly aligned the visual stimulus with the gravitational vertical means that, in these experiments, the visual reference frame was correctly aligned with the gravitational field (i.e., there was no rotational bias in the subjective vertical that could explain the results in the haptic–visual task). However, when subjects misidentified the orientation of their bodies within the visual reference frame, they also tended to deviate their responses in the haptic–visual task in the same direction. This makes sense if one considers the kinematic chain between the arm and the eyes. The arm movements used to haptically explore the stimulus with the hand are initially perceived through proprioceptors that measure relative positions of the different limb segments, and through knowledge of the motor commands (efference copy) that cause movements at the joints. One might reasonably conclude that haptic sensation could be encoded initially in an egocentric reference frame before being transformed to a more general, multimodal reference frame for comparison with the visual stimulus. A misrepresentation of the body axis within the multimodal reference frame would then naturally lead to a misalignment of the perceived orientation of the haptic stimulus as well. This interpretation of the experimental results provides a more concrete proposal for how the constant direction of gravity might be used to align haptic and visual reference frames (Howard, 1982).
Central or peripheral?
We return now to the question, Are the errors observed in the visual–haptic coordination task reflections of anisotropy inherent to the sensory apparatus itself or a product of central processing of this information? The observed oblique effects for vision were robust and present in the absence of gravity. Their link to an egocentric reference frame thus suggests an intrinsic relationship to the visual apparatus (Essock's type 1 effects). Nevertheless, the visual oblique effects were sensitive to reorientation of the body with respect to gravity (type 2 effect). In the haptic–haptic experiments reported here, no oblique effect was observed in patterns of constant or variable error. This is congruent with studies of haptic perception (Gentaz and Hatwell, 1996) and force production (Toffin et al., 2003) in the horizontal plane but in contrast to studies in the fronto-parallel plane that showed oblique effects for haptic comparisons (Lechelt and Verenka, 1980; Gentaz et al., 2001). These studies may be differentiated by the degree of central processing involved. Tasks showing oblique effects rely more on memory storage, bimanual transfer, or visuomotor coordination, whereas the haptic–haptic comparison in our study involved a direct comparison with minimal involvement of memory. Thus, haptic oblique effects, when they do occur, appear to be the result of central processing (type 2). The haptic–visual task evoked all of these effects, and more. Patterns of variable error from the visual–visual task appeared also in the haptic–visual task. But the amplification of the variable error oblique effect, the increased magnitude of local distortions, and the overall counterclockwise bias in constant errors indicate additional effects that can only be attributed to the sensorimotor transformations between the visual and haptic systems. Note that during the haptic–haptic and visual–visual experiments, both stimuli were presented at the same location, making it easy to use the same simple reference frame. In the visual–haptic comparison, the stimuli were spatially displaced in the vertical, horizontal, and depth directions, requiring a higher degree of processing.
The summation and amplification of spatial anisotropy through the different stages of the visuomotor pathway fit well with the five-tiered hierarchy proposed by Bernstein (1947) for sensorimotor function: (1) involuntary proprioceptive reflexes based on muscle and joint proprioceptors; (2) synergies and proprioceptive corrections in a reference frame at the level of the limbs and joints; (3) simple and complex synthesis of the visual–spatial field; (4) body scheme and goal-directed perception–action coupling; and (5) higher-level symbolic functions such as writing and talking. Expressed in this framework, haptic–haptic comparisons would fall in category 2, visual–visual comparisons in category 3, and the haptic–visual comparison in category 4. Bernstein (1947) recognized the need for greater and greater precision and more stable reference frames as information moves up the hierarchy. The lack of oblique effects at the lowest level (haptic–haptic), compared with the prominent oblique effects at the intermediate level (visual–visual) and the amplification of the visual oblique effect at the higher level (haptic–visual), provides experimental confirmation of this concept.
The tracking of errors through the various stages of visual and motor tasks has implications for the neural circuits underlying these behaviors. The visual oblique effect has been attributed to early stages in the visual pathways based on, for instance, electrophysiological evidence of the width of tuning fields (Li et al., 2003) or the distribution of preferred orientations of cells in visual cortex (Mansfield and Ronner, 1978; Chapman and Bonhoeffer, 1998; Li et al., 2003). The disappearance of the visual–visual and visual–haptic oblique effects with whole-body tilt implies a modulation of early visual mechanisms by multisensory areas of the brain. The amplification of the oblique effect in visual–haptic comparisons is not an additive process and has yet to be explained; it could be a byproduct of a multiplicative process such as the vestibular and proprioceptive gain fields proposed for the lateral intraparietal area and area 7a (Andersen, 1997). The bias of perceived body axis for leftward tilt observed here may be linked to asymmetries in vestibular function (Bergenius et al., 1996; Tribukait et al., 1996). In this case, the vestibular cortex (parieto-insular cortex) is a prime candidate for the locus of visual–vestibular coordination, given the convergence of visual, vestibular, and somatosensory information in this region (Brandt and Dieterich, 1999). In contrast, the posterior parietal sulcus (Kitada et al., 2006) and the claustrum (Hadjikhani and Roland, 1998) have been identified as areas selectively activated in the comparison of visual and tactile stimuli. Is the visual–haptic comparison performed in a multisensory region such as the vestibular cortex, or might such an area simply establish the reference frame for the comparison to be performed elsewhere? Although the precise neural mechanisms underlying these effects remain to be elucidated, the psychophysical results reported here provide further knowledge about the sensory signals and reference frames used by these neuronal structures to mediate visuomotor coordination.
Appendix: local distortion
Here we quantified local distortion by computing the slope of the best-fit line relating constant error to reference orientation for three adjacent reference angles. This calculation can be used to characterize the expansion or contraction of responses compared with the range of reference orientations. The logic of this method is shown in Figure 12.
Simulated data illustrating the logic behind the measurement of local distortion.
In Figure 12A, simulated responses to oblique stimuli are biased away from the horizontal and vertical axes and toward the diagonals (solid lines). Figure 12B shows a plot of response versus reference angle for these data, and Figure 12C shows the resulting constant errors. One can see that when responses are deviated toward a central axis, the curve passing through the neighboring points on either side of the target direction has a negative slope (i.e., a negative slope around 45, 135, 225, and 315°). Conversely, the slope of the relationship is positive when neighboring responses are deviated away from the central target (0°, 90°, 180°, 270°). We therefore computed the measure of local distortion as the slope of the best-fit line through three neighboring points in a plot of constant error versus reference angle. A negative slope (e.g., at 225°) indicates that responses were locally attracted by the central axis, whereas a positive slope (e.g., at 90°) means they were effectively repelled.
Figure 12D–F shows how this measure of local distortion is independent of any constant bias (global rotation) common to all stimulus orientations. In Figure 12D, responses to all reference orientations are rotated 10° counterclockwise with respect to the reference value. Superimposed on this, the responses to oblique reference stimuli are biased toward the response for horizontal and vertical stimuli and away from the responses to the diagonals, as before. Figure 12E shows a plot of response versus reference angle, and Figure 12F shows the resulting pattern of constant errors. Despite the fact that all constant errors are positive in these hypothetical situations, the measurement of local distortion still reflects the relative spacing of response orientations compared with the reference values.
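The slope-based measure described above can be stated in a few lines of code. The following is a minimal sketch of our reading of this appendix, not the published analysis code; the angle set, the fabricated constant errors, and the function name are illustrative assumptions.

```python
import numpy as np

def local_distortion(ref_angles_deg, constant_errors_deg, center_index):
    """Slope of the least-squares line through the constant errors at the
    reference orientation center_index and its two immediate neighbors.
    Negative slope: responses locally attracted toward the central orientation.
    Positive slope: responses locally repelled from it."""
    idx = [center_index - 1, center_index, center_index + 1]
    x = np.asarray(ref_angles_deg, dtype=float)[idx]
    y = np.asarray(constant_errors_deg, dtype=float)[idx]
    slope, _intercept = np.polyfit(x, y, deg=1)
    return slope

# Fabricated example following the logic of Figure 12A-C: responses repelled
# from 0 and 90 deg and attracted toward 45 deg.
angles = np.array([0.0, 22.5, 45.0, 67.5, 90.0, 112.5, 135.0])
errors = np.array([0.0, 5.0, 0.0, -5.0, 0.0, 5.0, 0.0])  # constant error (deg)
print(local_distortion(angles, errors, center_index=2))  # around 45 deg -> negative
print(local_distortion(angles, errors, center_index=4))  # around 90 deg -> positive
```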
Footnotes
- This work was supported by the French space agency (CNES), Russian Fund for Fundamental Research Grant 05-04-49401, the Gagarin Cosmonaut Training Center (Star City), and the NASA Johnson Space Center. We thank all our subjects, especially the participating cosmonauts: V. Afanasiev, N. Boudarin, M. Foale, C. Haignere, J. P. Haignere, A. Lazoutkin, J. Linenger, T. Mousabaev, S. Sharipov, V. Tsibliev, and J. Voss. We thank D. Chaput, M. Ehrette, V. Gratchev, D. McMahon, M. Pias, A. Polyakov, A. Shulenin, and J. Zilli for their substantial contributions to this experiment. We thank G. Leone, A. Berthoz, and V. Gurfinkel for fruitful discussions that inspired this work, and A. Bengoetxea and I. Viaud-Delmon for comments on the results. Line drawings were provided by F. Maloumian.
- a. Note that these results refer to the "two-interval" forced-choice task described by Buchanan-Smith and Heeley (1993), in which subjects indicated how the orientation of one stimulus line compared with the orientation of another. In the same publication, these authors reported a gravitationally aligned oblique effect for a "single-interval" test in which subjects indicated whether a single stimulus line was oriented clockwise or counterclockwise with respect to the "vertical," "horizontal," or "diagonal" axis. The stimulus reproduction tasks reported here are comparable with the two-interval task of Buchanan-Smith and Heeley (1993).
- Correspondence should be addressed to Dr. Joseph McIntyre, Laboratoire de Neurobiologie des Réseaux Sensorimoteur, Centre National de la Recherche Scientifique–Université Paris Descartes, 45 rue des Saints Pères, 75006 Paris, France. joe.mcintyre@univ-paris5.fr