Abstract
We have examined the underlying coordinate frame for pursuit learning by testing how broadly learning generalizes to different retinal loci and directions of target motion. Learned changes in pursuit were induced using double steps of target speed. Monkeys tracked a target that stepped obliquely away from the point of fixation, then moved smoothly either leftward or rightward. In each experimental session, we adapted the response to targets moving in one direction across one locus of the visual field by changing target speed during the initial catch-up saccade. Learning occurred in both presaccadic and postsaccadic eye velocity. The changes were specific to the adapted direction and did not generalize to the opposite direction of pursuit. To test the spatial scale of learning, we examined the responses to targets that moved across different parts of the visual field at the same velocity as the learning targets. Learning generalized partially to motion presented at untrained locations in the visual field, even those across the vertical meridian. Experiments with two sets of learning trials showed interference between learning at different sites in the visual field, suggesting that pursuit learning is not capable of strict spatial specificity. Our findings are consistent with previous suggestions that pursuit learning is encoded in an intermediate representation that is neither strictly sensory nor strictly motor. Our data add the constraint that the site or sites of pursuit learning must process visual information on a fairly large spatial scale that extends across the horizontal and vertical meridians.
Motor responses must adapt to changing sensory conditions on a daily basis. One form of motor learning can be observed in smooth pursuit eye movements, which are used by primates to track moving targets (for review, see Lisberger et al., 1987; Keller and Heinen, 1991). Normally, the initial 100 msec of target motion causes brisk eye acceleration in a direction and at a rate determined precisely by the target motion (Lisberger et al., 1981). Learning occurs if targets move at an initial velocity for 100 msec and then step to a different velocity. The visual input from the initial target motion is unchanged, yet the eye acceleration response to this input gradually becomes more appropriate for the final, rather than the initial, target velocity (Carl and Gellman, 1987; Kahlon and Lisberger, 1996, 1999; Ogawa and Fujita, 1997).
We have been narrowing possible physiological sites for learning within the known circuitry for pursuit. Signals that guide pursuit eye movements arise in the middle temporal (MT) visual area and pass through a number of cortical and subcortical areas to the cerebellum, which relays them to the final oculomotor pathways in the brainstem. Our previous papers (Kahlon and Lisberger, 1996, 1999) implied that learning is in a coordinate system that is neither purely visual nor purely eye movement. They suggested a locus of learning downstream from MT and upstream from cerebellar outputs. Possible sites include the medial superior temporal area (MST), frontal pursuit area (FPA), dorsolateral and dorsomedial pontine nuclei (DLPN), the nucleus reticularis tegmenti pontis (NRTP), and the cerebellar cortex.
In the current study, we investigate the spatial scale of information processing at the site or sites of learning by asking how broadly pursuit learning generalizes across the visual field. Our experimental design was based on the fact that neurons in MT have small receptive fields primarily confined to the contralateral visual field (Van Essen et al., 1981; Desimone and Ungerleider, 1986), whereas MST neurons have a range of receptive field sizes, including large receptive fields that can extend far into the ipsilateral visual field (Komatsu and Wurtz, 1988). If learning were specific to a narrow region of the visual field around the site of the adapting stimulus, for example, we would conclude that learning is induced at a site where visual space is represented in small receptive fields, such as the synapses from MT onto targets such as MST or FPA. Such a finding would call into serious doubt the common belief that motor learning for pursuit occurs in the cerebellum, as it seems to for other motor systems (Ito, 1976; Raymond et al., 1996).
Our approach was to induce learning with target motion across a given position in the visual field and test for generalization to targets that moved across other regions of the visual field. The data revealed that generalization was incomplete, but often extended to positions in the visual field across the vertical meridian. Our data support the conclusion that the site or sites of learning are downstream from MT, in areas that process information on a spatial scale including both visual hemifields.
Part of this work has been presented in a preliminary report (Chou and Lisberger, 2000).
MATERIALS AND METHODS
Five rhesus monkeys (Macaca mulatta) served as subjects. Three of the monkeys had participated in previous studies of pursuit learning. The remaining two were naive to pursuit learning paradigms. Monkeys were first trained to sit in a primate chair and to attend to spots of light. After training, head restraints and scleral search coils were implanted surgically (cf. Judge et al., 1980). All surgeries were performed using sterile procedure with isoflurane anesthesia. Appropriate analgesic and antibiotic treatments were administered postoperatively. After recovery from surgery, the monkeys were trained to sit with their heads restrained facing a display screen and to fixate and track spots of light that stepped away from the point of fixation and/or moved across the visual field. Each experimental session lasted ∼2 hr, during which the animals worked for fluid reinforcement to satiation. All procedures conformed to the National Institutes of Health Guide for the Care and Use of Laboratory Animals and had been approved in advance by the Institutional Animal Care and Use Committee at the University of California, San Francisco.
Moving and stationary visual targets were presented either on an analog oscilloscope or using a mirror-galvanometer projection system, with the same results. Targets presented on an analog oscilloscope (Hewlett Packard 1304a) appeared as bright 0.4° squares on a dark background. The nominal spatial resolution of the display, defined by the resolution of the 16-bit digital-to-analog converters that drove it, was 65,536 × 65,536 pixels. The display was positioned 35 cm in front of the monkey and subtended 36 × 30° of visual angle. Because the display was rectangular and the number of pixels used to create the output signals was the same in each dimension, the spatial increment of each pixel was slightly different in the horizontal and vertical dimensions. Targets presented with the mirror galvanometer system subtended ∼0.5°. They were created by imaging the light beam from a fiber optic light source onto a pair of mirrors and projecting the beam onto the back of a large tangent screen placed 114 cm from the monkeys' eyes, subtending ∼50 × 40°. Target position was controlled by setting the position of the mirrors with a pair of galvanometers (General Scanning). The movement of one eye was monitored using a scleral search coil system from CNC Engineering. All experiments were performed in dim ambient lighting.
Data acquisition and sequences of target motion were controlled by software running on a combination of a DEC Alpha UNIX workstation and a 500 MHz Pentium-based PC running Windows NT and VenturCom RTX. The PC performed all real-time operations and controlled the visual displays, whereas the UNIX workstation provided a user interface for easy programming and modification of the experiment. Analog signals proportional to horizontal and vertical eye position were differentiated by an analog circuit that provided differentiation for signals up to 25 Hz and rejected signals of higher frequencies (−20 dB/decade). Position and velocity signals and other codes related to the timing of trial events were digitized at 1000 samples/sec on each channel and stored for later analysis.
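The same operation can be approximated offline in software. The sketch below, in MATLAB (the analysis language named under Data analysis), is only a rough digital stand-in for the analog circuit described above: the first-order low-pass filter gives the stated −20 dB/decade rolloff above 25 Hz, the position trace is synthetic, and the filter order and the use of the Signal Processing Toolbox are assumptions.

```matlab
% Rough digital equivalent of the analog differentiator: differentiate the 1 kHz
% eye position signal, then low-pass it at 25 Hz (zero-phase). Not the hardware
% actually used; for illustration only.
fs = 1000;                          % sampling rate, samples/sec
fc = 25;                            % corner frequency, Hz
t  = (0:1999) / fs;                 % 2 sec of samples, about one trial
eyePos = 10 * t;                    % synthetic eye position: tracking at 10 deg/sec

eyeVel = gradient(eyePos) * fs;     % numerical derivative, deg/sec
[b, a] = butter(1, fc / (fs / 2));  % first-order low-pass, -20 dB/decade rolloff
eyeVel = filtfilt(b, a, eyeVel);    % reject frequencies above 25 Hz
```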
Experimental design
Previous studies in monkeys have examined learning in the initiation phase of pursuit and used targets that appeared a few degrees eccentric to the position of fixation and moved toward the position of fixation (Kahlon and Lisberger, 1996, 2000). Such target configurations are not optimal for studying spatial generalization of learning because they require a strict relationship between the initial location of the target and the direction of target motion. Therefore, the first goal of the current study was to use targets that moved in a variety of directions relative to the position of fixation and to characterize the effects of learning on presaccadic and postsaccadic eye velocity. Experiments consisted of a series of trials, each of which lasted ∼2 sec. At the start of each trial, a stationary target appeared on the display, and monkeys were required to fixate within a 2 × 2° window for an interval that was randomized between 800 and 1000 msec. The target then underwent an oblique position step and began moving horizontally either toward or away from the vertical meridian. These step-ramp stimuli always required both saccadic and smooth tracking. The monkeys were allowed 350 msec to acquire the target and then had to maintain gaze within a 2 × 2° window centered on the target. If the monkeys kept their gaze within the window around the target throughout the duration of target motion, they received a fluid reinforcement.
Each experiment comprised three blocks of trials (Fig. 1). The initial, “baseline” block delivered ∼200 trials designed to provide a prelearning assessment of pursuit by having the target move in the basic step-ramp manner described above for both rightward and leftward target motion. The second “learning” block consisted of 600–800 trials. For each experiment, 50% of the trials were “learning trials” that provided double steps of target velocity in a single direction chosen as the “learning direction.” We controlled the timing of the second step of target velocity by having the computer sense the rapid deflection of eye velocity associated with the saccade, defined as the time when eye velocity exceeded 50°/sec, and invoke the change in target velocity at that time. The remaining trials in the learning block were “control trials” that provided single steps of target velocity in the nonadapted direction (25% of trials), and “probe trials” in which the target moved in the learning direction (25% of trials) without the intrasaccadic velocity step. The third and final “recovery” block provided control and probe trials to record the recovery from any learning. Experiments were designed to either increase or decrease the eye velocity at the initiation of pursuit. For increase-velocity experiments, the target had an initial velocity of 10°/sec, and the intrasaccadic step increased velocity to 30°/sec. For decrease-velocity experiments, the target had an initial velocity of 25°/sec, and the intrasaccadic step decreased velocity to 5°/sec.
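To make the timing of the intrasaccadic step concrete, the sketch below shows one way to implement the trigger in MATLAB. It is illustrative only, not the laboratory's real-time code: the variable names, the synthetic eye-speed trace, and the assumption of a 1 kHz control loop are ours.

```matlab
% Sketch of the intrasaccadic velocity-step trigger for an increase-velocity
% learning trial: the target moves at the initial velocity until eye speed first
% exceeds 50 deg/sec (the catch-up saccade), then moves at the final velocity.
vInitial = 10;      % deg/sec, initial target velocity
vFinal   = 30;      % deg/sec, target velocity after the intrasaccadic step
thresh   = 50;      % deg/sec, eye-speed criterion for sensing the saccade

% Synthetic eye speed at 1 kHz: slow pursuit, a brief saccade, then faster pursuit.
eyeSpeed = [5 * ones(1, 200), linspace(5, 300, 20), 300 * ones(1, 20), ...
            linspace(300, 25, 20), 25 * ones(1, 740)];

targetVel = vInitial * ones(size(eyeSpeed));
iSaccade  = find(eyeSpeed > thresh, 1, 'first');  % first sample above threshold
if ~isempty(iSaccade)
    targetVel(iSaccade:end) = vFinal;             % step target speed during the saccade
end
```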
Each daily experiment used one of several different learning paradigms, each customized to answer a specific question about the spatial generalization of learning.
Paradigm 1. The fixation target appeared at straight-ahead gaze. The target step was always 5° right and 5° up, and the target could then move to either the left or the right, toward or away from the vertical meridian. Learning trials provided an intrasaccadic change in target velocity for only one of the two directions. The configuration of trials for this paradigm is shown in Figure 1.
Paradigm 2. The fixation target appeared in one of four possible positions, located at the corners of a 10 × 10° square that was centered in front of the monkey. The target then underwent a position step to the center of the screen and began to move to the right or left. As a result, targets moved to the right or left across positions that were just over 7° eccentric in the four quadrants of the visual field. In the learning block, intrasaccadic changes in target velocity were provided for only one combination of position in the visual field and direction of target motion. Over the course of experiments on each monkey, however, all combinations of initial target position and direction of target motion were tested for effects of learning.
Paradigm 3. The fixation target appeared in one of 10 possible positions, located along a diagonal line passing through the center of the screen. As in paradigm 2, the target stepped to the center and began to move to the left or right. Fixation positions were chosen so that pursuit targets moved across positions that were 3–11° eccentric in the visual field. In one set of experiments, there was only one learning block, in which an intrasaccadic change in target velocity occurred for targets at one position that was 5 or 7° eccentric in the visual field. In another set of experiments, two learning blocks were run. In the first, learning trials were presented at a single location 5° eccentric. In the second learning block, learning trials presented targets at two different locations: the moving stimulus started 5° eccentric in either the right/up or the left/down quadrant of the visual field.
Data analysis
For each successfully completed trial, eye position and velocity traces were displayed on a computer screen, and the start and ending times of the first saccade after target motion onset were marked using a combination of software and visual inspection. We used a computer algorithm to detect two time points: the first where eye velocity rose above 50°/sec and the second where it subsequently fell below that value. We then defined the saccade onset and offset times as 15 msec before the first and 15 msec after the second time point. To confirm that the saccade onset and offset times determined by this automated algorithm corresponded closely to those defined by human users, the traces for each trial were checked individually by visual inspection and corrected if necessary. Trials were discarded if the saccade was initiated with a latency of <80 msec after the onset of target motion, because these were deemed to be anticipatory rather than responses to the motion of the tracking target. Trials containing anticipatory saccades were rare, constituting <1% of the sample. All analyses of eye velocity and statistics were performed using Matlab (The MathWorks, Inc.).
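A minimal sketch of this marking rule is given below in MATLAB. The function name and arguments are assumptions for illustration; this is not the authors' analysis code, and error handling for trials with no detectable saccade is omitted.

```matlab
function [onsetIdx, offsetIdx] = markSaccade(hVel, vVel)
% Mark the first saccade in one trial from the 50 deg/sec crossings of radial eye
% speed, padded by 15 msec. hVel and vVel are horizontal and vertical eye velocity
% in deg/sec, sampled at 1 kHz, so one sample corresponds to 1 msec.
thresh = 50;                                                   % deg/sec
speed  = hypot(hVel, vVel);                                    % radial eye speed
iUp    = find(speed > thresh, 1, 'first');                     % first rise above threshold
iDown  = iUp - 1 + find(speed(iUp:end) < thresh, 1, 'first');  % first return below
onsetIdx  = iUp   - 15;                                        % 15 msec before the rise
offsetIdx = iDown + 15;                                        % 15 msec after the fall
end
```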
To analyze learning, presaccadic and postsaccadic horizontal eye velocity were estimated by calculating the average eye velocity over the 10 msec immediately before and after the saccade, respectively. Lisberger (1998) provided a detailed analysis of the filtering issues associated with making a measurement in the immediate wake of a saccade and verified that the techniques we used are appropriate and would be difficult to improve on. For each experimental session, both the significance and magnitude of learning were assessed in presaccadic and postsaccadic eye velocity. Two-tailed Student's t tests were used to compare the responses in probe trials in the baseline block and near the end of the learning blocks (criterion, p < 0.05). To assess the magnitude of changes in each experiment, we computed the “learning ratio,” defined as mean eye velocity in the last 20 probe trials of the learning block divided by the mean eye velocity in the baseline block. In analyses in which we compared the learning ratios for different conditions, we report geometric means and performed statistics on the logarithms of the learning ratios.
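The sketch below expresses these measurements in MATLAB. The function and variable names are ours, not the authors'; it assumes the saccade has already been marked as described above.

```matlab
function [ratio, preVel, postVel] = learningMetrics(hVel, onsetIdx, offsetIdx, ...
                                                    probePostVel, baselinePostVel)
% hVel is one trial's horizontal eye velocity at 1 kHz (deg/sec); onsetIdx and
% offsetIdx are the marked saccade onset and offset (sample indices). probePostVel
% and baselinePostVel are vectors of postsaccadic velocities from the last 20
% probe trials of the learning block and from the baseline block, respectively.
preVel  = mean(hVel(onsetIdx - 10 : onsetIdx - 1));    % 10 msec before the saccade
postVel = mean(hVel(offsetIdx + 1 : offsetIdx + 10));  % 10 msec after the saccade
ratio   = mean(probePostVel) / mean(baselinePostVel);  % learning ratio
end
```

When ratios are combined across experiments, the geometric mean can be computed as exp(mean(log(ratios))), with statistics performed on the logarithms, as stated above.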
To analyze the latency of pursuit, we computed average eye velocity traces for responses to identical target motions, aligned on the onset of target motion. Data from intervals that contained a saccade were omitted from the average. Thus, there could be a different number of samples in each bin of the average, and some bins might contain very few samples: we did not compute an average eye velocity unless a bin included data from at least eight trials. We then used the technique of Carl and Gellman (1987) to estimate the latency of pursuit. Regression lines were fit to two segments taken from the mean eye velocity traces. The first segment comprised the 40 msec surrounding the onset of target motion, when the eyes are essentially stationary; the second segment comprised the 40 msec after eye velocity rose 3 SD above the mean velocity in the first segment. The time of pursuit onset was taken to be the point where the two regression lines intersected.
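A compact MATLAB sketch of this latency estimate is shown below. It is an illustration of the Carl and Gellman (1987) procedure as described above, not the authors' code; the names are assumptions and the trace is assumed to be a saccade-free row vector.

```matlab
function tOnset = pursuitLatency(avgVel, motionOnsetIdx)
% avgVel: average eye velocity trace at 1 kHz (deg/sec, row vector);
% motionOnsetIdx: sample index of target motion onset. Returns the sample index
% at which the baseline and rising-phase regression lines intersect.
seg1 = motionOnsetIdx - 20 : motionOnsetIdx + 19;         % 40 msec around motion onset
mu   = mean(avgVel(seg1));
sd   = std(avgVel(seg1));

iRise = motionOnsetIdx - 1 + ...
        find(avgVel(motionOnsetIdx:end) > mu + 3 * sd, 1, 'first');
seg2  = iRise : iRise + 39;                               % 40 msec after the 3 SD rise

p1 = polyfit(seg1, avgVel(seg1), 1);                      % baseline regression line
p2 = polyfit(seg2, avgVel(seg2), 1);                      % rising-phase regression line
tOnset = (p2(2) - p1(2)) / (p1(1) - p2(1));               % intersection = pursuit onset
end
```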
RESULTS
Postsaccadic pursuit learning in a task designed for studying spatial generalization
Earlier work from our laboratory has studied pursuit learning only for targets that moved from eccentric positions toward the position of fixation (Kahlon and Lisberger, 1996). This condition produces saccade-free initial pursuit and allows easy analysis of the presaccadic initiation of pursuit. However, it has the drawbacks that it requires an unnatural combination of initial target position and motion and that it precludes analysis of the spatial generalization of learning for targets moving with a given speed and direction across different parts of the visual field. Therefore, we start by analyzing pursuit learning for targets with initial positions and motions that required combinations of saccades and smooth pursuit to achieve accurate tracking. These experiments were done with paradigm 1, as described in Materials and Methods.
Figure 2 shows example eye position and velocity traces from single trials recorded early (black traces) and late (gray traces) in the learning block of an increase-velocity experiment: target velocity increased from 10 to 30°/sec during the first tracking saccade for rightward target motion. During the first few learning trials (Fig. 2A, black trace), postsaccadic eye velocity matched or only slightly exceeded the target velocity present before the saccade, although target velocity had stepped from 10 to 30°/sec during the saccade. At the time indicated by the vertical arrow on the eye velocity trace, ∼60 msec after the end of the first saccade, a rapid eye acceleration corrected the mismatch between eye and target velocity introduced by the velocity step. The visibility of this transition provides an example of a point that will be addressed quantitatively below: the first 60 msec of the postsaccadic response is driven by the target motion present before the saccade. In spite of the brisk correction of smooth eye velocity, there was still a residual position error that was then corrected by a small saccade (oblique arrow on position traces). After >100 repetitions of the learning stimulus (gray trace), postsaccadic eye velocity had grown and was closer to the final target velocity than in the first few learning trials. Because the smooth eye movements were larger than at the outset of learning, the catch-up saccade in the eye position record was later and smaller than in the earlier trial. We did not observe the appearance of anticipatory pursuit, probably because both the direction and onset time of the target were randomized.
Adaptive changes generalized to both presaccadic and postsaccadic eye velocity in the probe trials (Fig. 2B), in which the target did not undergo an intrasaccadic step of velocity and continued to move at 10°/sec after the first tracking saccade. Postsaccadic velocity matched target velocity almost perfectly in the first few probe trials in the learning block (black trace). At the end of the learning block, however, postsaccadic eye velocity had increased and was almost twice target velocity. As a result of the large smooth eye velocity, eye position passed the target, and a backwards corrective saccade was needed to achieve accurate tracking (Fig. 2B, downward arrow on position traces).
Throughout the paper, we examine how learning affects eye velocity in the 10 msec immediately after the end of the catch-up saccade. The analyses we have used are based on the assumption that postsaccadic eye velocity is driven by presaccadic visual signals. For each of 16 experiments, we tested this assumption by asking when postsaccadic visual stimuli have their first effect on eye velocity. This time was assessed by comparing the average eye velocity from probe trials with that from interleaved learning trials. These trials provide identical stimuli until the target changes velocity during the saccade; the time when the responses separate should indicate when postsaccadic visual stimuli first affect eye velocity. Figure 3A shows an example from a single experiment. Data were analyzed by making separate averages of the last 200 msec of eye velocity before saccade onset and the first 200 msec of eye velocity after saccade end, for all trials in the learning block of each experiment. Inspection of Figure 3A gives the impression that the averages for trials with and without an intrasaccadic change in target velocity (gray vs black traces) diverged at the time shown by the upward arrow, 56 msec after the end of the saccades. We verified this estimate by performing a running t test on each pair of averaged traces to determine the time point at which they diverged significantly. Across all 16 experiments, the mean time of divergence was 55 ± 9 msec. This confirms that the eye velocity in the first 10 msec after the end of the saccade is appropriate for assaying the pursuit response to visual inputs present before the saccade.
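One way to implement such a running t test is sketched below in MATLAB, operating on the single-trial responses that underlie the two averages. The function name, the trials-by-time layout, and the use of ttest2 from the Statistics Toolbox (at its default criterion of p < 0.05) are assumptions for illustration.

```matlab
function tDiverge = divergenceTime(learnTrials, probeTrials)
% learnTrials and probeTrials are trials-by-time matrices of eye velocity
% (deg/sec) aligned on saccade end and sampled at 1 kHz. Returns the first
% millisecond at which the two sets of trials differ significantly, or NaN.
tDiverge = NaN;
nTime = min(size(learnTrials, 2), size(probeTrials, 2));
for t = 1:nTime
    if ttest2(learnTrials(:, t), probeTrials(:, t)) == 1   % reject at p < 0.05
        tDiverge = t;
        break
    end
end
end
```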
We studied learning in 18 experiments designed to increase eye velocity (Fig. 3B, squares) and 14 designed to decrease eye velocity (Fig. 3B, triangles) in five animals. Plotting mean postsaccadic eye velocity for the last 20 probe trials of the learning block as a function of that in the baseline block revealed that all experiments caused learning in the appropriate direction. Eye velocity was larger after than before the learning for increase-velocity learning trials (squares) and was smaller after the learning for decrease-velocity learning trials (triangles). The changes in postsaccadic eye velocity were statistically significant in almost all experiments (30 of 32) (Fig. 3B, filled symbols). In different experiments in this series, learning was induced for horizontal target motion that started either above or below the horizontal meridian and moved either toward or away from the vertical meridian. The magnitude and statistical significance of learning in postsaccadic eye velocity were not dependent on these parameters of the learning target motion.
Spatial generalization of learning
We next tested the generalization of learning to target motion in the same direction as the adapting stimulus, but starting in different visual quadrants. Paradigm 2, described in Materials and Methods, was used. As shown by the insets in the four panels of Figure4, the tracking target moved across the same eccentricity in the four quadrants of the visual field. Learning trials were presented for only one direction of target motion (rightward or leftward) at only one visual field position. Probe and control trials were presented in all four quadrants of the visual field and provided motion in either the learning or control direction, respectively.
The traces in Figure 4 are examples of responses in single trials, chosen to illustrate the results of an increase-velocity experiment for targets starting in the top right quadrant of the visual field. The eye velocity traces in Figure 4A show examples of the response in probe trials presenting rightward target motion from the adapting position in the visual field before (black traces) and after (gray traces) learning. Figure 4B–D shows examples of the eye velocity responses to rightward probe trials for the three other target positions in the visual field, where the learning stimulus was not shown. For these examples, the effect of learning on the postsaccadic eye velocity was greatest when the probe trial provided target motion in the visual field position of the learning stimulus (Fig. 4A), but was also present when the probe trial provided target motion in the other three quadrants of the visual field (Fig. 4B–D).
Learning generalized to targets presented in all of the test quadrants, but generalization was more nearly complete for increase-velocity than for decrease-velocity experiments. For each experiment, we quantified the generalization of learning across the visual field by computing the learning ratio for probe trials with targets starting in each of the four visual quadrants. In Figure 5, each graph shows the distribution of learning ratios for probe trials in each quadrant, where the quadrants of the visual field from each experiment have been rearranged so that results from the adapting quadrant are shown in the top left graph (Fig. 5A). Each histogram shows the results from increase-velocity and decrease-velocity experiments as upward and downward histogram bars, respectively. Of the 23 experiments, two have been omitted from this analysis because they did not provide statistically significant changes in postsaccadic eye velocity even with the target in the adapting quadrant.
For increase-velocity experiments, changes in postsaccadic velocity occurred in all quadrants, as can be seen from the distributions of learning ratios. The upward histograms in Figure 5 show geometric-mean learning ratios of 1.78 in the adapting quadrant (Fig. 5A) and of 1.52, 1.58, and 1.48 in the other three quadrants. All of these values were significantly >1.0 (t test; p < 0.01). ANOVA revealed that there was no statistically significant effect of quadrant on the magnitude of learning.
For decrease-velocity experiments, learning did not generalize completely to all test quadrants. The downward histograms in Figure 5 reveal a learning ratio that averaged 0.65 for the adapting quadrant (Fig. 5A), but ratios of 0.88, 0.87, and 0.95 in the test quadrants. In the three test quadrants, the learning was significant in Figure 5, B and C (t test; p < 0.01), but not in Figure 5D, when the probe targets appeared in the opposite vertical and horizontal visual hemifield relative to the learning target. ANOVA revealed a significant effect (p < 0.01) of quadrant on the learning ratio when all quadrants were examined (adapting plus all test), because learning was attenuated in all of the test quadrants, relative to the adapting quadrant. When the analysis was repeated on just the test quadrants (i.e., compare Fig. 5B–D), there was no significant effect (p > 0.05).
Figure 6 provides a quantitative summary of the generalization of learning across quadrants within each experiment. Because the analyses described above showed no significant differences between the three individual test quadrants, the data from the three test quadrants were averaged for each daily experiment. Each point plots data from a single experiment and shows the arithmetic-mean learning ratio in the test quadrants as a function of the learning ratio in the adapting quadrant. If learning were independent of the quadrant in which the target was presented, then the learning ratios should be the same for all target motions in the learning direction, and the data from all our experiments should fall on the line of slope 1 (Fig. 6, “complete generalization”). If, on the other hand, learning were specific to the adapting quadrant, then learning should have been present only when the probe targets were presented in that quadrant and the learning ratio should be one in the test quadrants: all the data should fall along the horizontal line (Fig. 6, “no generalization”).
The actual data are not consistent with the predictions of either “complete generalization” or “no generalization.” For decrease-velocity experiments (gray triangles), the data fall somewhere between these two predictions. For increase-velocity experiments (dark squares), most of the data fall between the two lines, but in two experiments, the learning ratios were higher in the test quadrants than in the adapting quadrant, suggesting that in those experiments, generalization of learning was complete. Separate regression analyses were performed to find the best-fitting slope for increase-velocity and decrease-velocity experiments. For decrease experiments, the slope of the regression was 0.41, which was significantly different from both 0 and 1. For increase experiments, the data were not well described by a linear relationship, so the regression did not yield a meaningful description of the data.
Probe of spatial generalization on a finer scale
To examine whether learning declined smoothly or abruptly as a function of the distance between the starting retinal positions of the learning and test motions, we used paradigm 3 (see Materials and Methods) to induce learning at a single location in the visual field and probe learning with target motions at multiple locations. The trial configuration for these experiments is shown in Figure7A. As before, the target moved either leftward or rightward from the center of the screen, and its location in the visual field was varied by using different positions for the fixation spot (Fig. 7A, open circles).
The results from 10 velocity-increase and 10 velocity-decrease experiments were more consistent with a smooth change in spatial generalization as a function of the position of the target in the visual field. In Figure 7B, the abscissa represents the spatial separation between the location of the test stimulus and the learning stimulus, in degrees of visual angle. The ordinate plots the learning ratio for probe trials presented at each location. Unconnected points summarize individual experiments, whereas symbols connected by lines present grand averages accumulated across experiments that used identical stimuli. For both increase-velocity (black symbols) and decrease-velocity (gray symbols) experiments, the largest learning effects were observed at the location of the learning target motion (zero on the x-axis). There was considerable variability in the exact values of the learning ratios from day to day, but the averages show that learning declined gradually as a function of the separation of learning and test targets. As we also demonstrated in Figure 5, learning generalized more completely to the opposite quadrant (points plotting to the left of the vertical dashed line) for increase-velocity learning than for decrease-velocity learning. Nonetheless, across all experiments, we observed some generalization to targets as far as 18° from the adapting location. The reliability of the impressions gained from the averaged data is supported by the separation in the distributions of learning ratios for increase-velocity and decrease-velocity learning at every location.
Interference of two learning stimuli at different visual field locations
To ask whether pursuit learning could be more specific spatially when given appropriate stimulus conditions, we next created a set of experiments that mixed learning trials designed to cause increase-velocity learning for targets at one location and decrease-velocity learning for targets at another location. We probed the generalization of learning with paradigm 3 (see Materials and Methods), which presented test target motions at 2° spatial intervals in the vicinity of the learning stimuli. To compare directly the spatial extent of generalization for one versus two learning stimuli, each experiment consisted of two learning blocks. As illustrated in the top diagram in Figure 8A, the first learning block contained a single learning stimulus at a location 5° eccentric in one visual quadrant, as in previous experiments. The second block added a second learning stimulus at a location that was 5° eccentric in the opposite visual quadrant. Thus, this block contained trials that presented learning stimuli at two target locations separated by 10° (Fig. 8A, bottom diagram). Trials containing the two learning stimuli were presented with equal frequency during the second learning block, and the second learning block was twice as long as the first, so that the number of presentations of the second learning stimulus would match that of the first stimulus in the first block. The second learning stimulus always required the opposite change in velocity from the first. For example, if we provided an increase-velocity learning stimulus in the first learning block, then we added a decrease-velocity learning stimulus in the second learning block. We termed each experiment “increase-first” or “decrease-first” depending on the direction of the change in eye velocity produced by the first learning stimulus.
We observed interference between learning at visual field locations separated by 10°, at least when the two locations were in opposite quadrants of the visual field. Consider the “increase-first” experiment summarized in Figure 8B. The results of the first learning block, with increase-velocity learning trials (black symbols), were compatible with earlier graphs (Fig. 7B, black symbols). Learning was excellent. It caused a large increase in postsaccadic eye velocity when tested at the location of the learning stimulus (downward arrow labeled “increase”), generalized well to nearby locations, and generalized more weakly when tested in the opposite visual quadrant (points plotted to the left of the vertical dashed line). In the second learning block, the competition caused by a decrease-velocity learning stimulus in the opposite visual quadrant caused a decrease in postsaccadic eye velocity in that quadrant (gray symbols to the left of the vertical dashed line). It reduced, but did not eliminate the pre-existing learning in the quadrant that was the site of the increase-velocity learning stimuli in both learning blocks (symbols plotted to the right of the vertical dashed line).
Comparison of probe trials from learning blocks with two opposing stimuli versus learning blocks with just one learning stimulus reveals that learning was stronger when learning stimuli were applied at only one location. For example, the mean learning ratio for decrease-velocity learning applied after increase learning was 0.9 (Fig. 8B, gray symbol at upward vertical arrow labeled “decrease”), compared with 0.72 when decrease-velocity trials were presented without increase-velocity trials (Fig. 8C, black symbol at upward vertical arrow labeled “decrease”). In the companion decrease-first experiments (Fig. 8C), the mean learning ratio was 1.40 after the second learning block (Fig. 8C, gray symbol at downward arrow labeled “increase”) compared with 1.56 after increase-velocity learning trials alone in the same quadrant of the visual field (Fig. 8B, black symbol at downward arrow labeled “increase”). Furthermore, for both increase-first and decrease-first experiments, the competition from the second set of learning trials attenuated the learning at the location of the first set of learning trials. The latter attenuation can be appreciated by comparing the gray and black symbols at the downward arrow labeled “increase” in Figure 8B and the upward arrow labeled “decrease” in Figure 8C.
These experiments provide several general observations. (1) For both types of experiment, the largest effect of the second learning block was at or near the location of the second learning stimulus (Fig. 8B,C, vertical arrows on the left-hand side). (2) The second learning block affected the pre-existing learning at the visual field location of the first learning stimulus (positive values on the x-axis) in the appropriate direction. (3) The effect was consistently smaller at the location of the first learning stimulus, because each learning stimulus predominated in the visual quadrant where it was presented. Thus, the spatial extent of the competition between two learning stimuli shows broad agreement with the spatial extent of generalization to the opposite quadrant, which is incomplete when learning stimuli are presented at a single location in the visual field (Fig. 5).
Generalization of learning to presaccadic eye velocity
Our experiments were designed to cause learning in postsaccadic pursuit eye velocity by providing a visual stimulus indicating the need for learning only after the saccade. We found that the learning also generalized to presaccadic eye velocity in many experiments. We studied learning in 18 experiments designed to increase eye velocity (Fig. 9A, squares) and 14 designed to decrease eye velocity (Fig. 9A, triangles) in five animals. Plotting mean presaccadic eye velocity for the last 20 probe trials of the learning block as a function of that in the baseline block revealed that many experiments caused learning in the appropriate direction. For 20 of 32 (63%) experiments, there were statistically significant changes in presaccadic eye velocity (p < 0.05; filled symbols). All but one of the significant changes were in the direction expected for the learning conditions. Significant presaccadic changes were observed more often in experiments that provided learning trials with target motion toward (12 of 15; 80%) versus away from (8 of 17; 47%) the vertical meridian. Although presaccadic learning was less likely to be statistically significant, statistical comparison with t tests of the learning ratios for presaccadic and postsaccadic eye velocity failed to reveal significant differences (increase-velocity experiments: 1.37 and 1.66 for presaccadic and postsaccadic eye velocity, p > 0.05; decrease-velocity experiments: 0.72 and 0.68 for presaccadic and postsaccadic eye velocity, p > 0.05).
There were differences in the time course of presaccadic and postsaccadic learning. We assessed the time course by making graphs like Figure 9B, which shows presaccadic and postsaccadic eye velocity as a function of trial number for each learning trial delivered in the experiment shown in Figure 1. We quantified the time course of learning separately for presaccadic and postsaccadic eye velocity by using a least-squares procedure to fit a function that contained weighted exponential and linear components. As shown in the example in Figure 9, postsaccadic eye velocity tended to be described by exponential functions and presaccadic eye velocity by linear functions. Furthermore, learning occurred more rapidly in postsaccadic than in presaccadic eye velocity.
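One possible form for such a fit is sketched below in MATLAB. The exact parameterization used for the weighted exponential and linear components is not given in the text, so this model, the initial guesses, the synthetic data, and the use of lsqcurvefit (Optimization Toolbox) are all assumptions.

```matlab
% Fit a weighted sum of exponential and linear components to the trial-by-trial
% postsaccadic eye velocity by least squares (synthetic data for illustration).
nTrials  = 300;
trialNum = (1:nTrials)';
postVel  = 10 + 8 * (1 - exp(-trialNum / 60)) + 0.5 * randn(nTrials, 1);  % synthetic

% p(1): exponential amplitude, p(2): trial constant, p(3): linear slope, p(4): offset
model = @(p, x) p(1) * (1 - exp(-x / p(2))) + p(3) * x + p(4);
p0    = [5, 100, 0.01, postVel(1)];         % rough initial guess
pFit  = lsqcurvefit(model, p0, trialNum, postVel);
```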
Absence of directional generalization of pursuit learning
Previous studies have shown that learning of presaccadic pursuit in one direction did not generalize to pursuit in the opposite direction (Kahlon and Lisberger, 1996, 1999). In those studies, however, target motion was always toward the position of fixation so that changes in direction required changes in the initial position of the tracking target in the visual field. Because we have shown that pursuit learning generalizes incompletely to different locations in the visual field, it was important to retest direction generalization with our paradigm, which allowed us to use target motion in all directions from a single initial position in the visual field.
We tested direction generalization by comparing the effects of pursuit learning in one direction on pursuit of control target motions in the opposite direction, where the two target motions started from the same location in the visual field. Each symbol in Figure 10 shows the results of a single experiment and plots learning ratio in the control direction as a function of that in the learning direction. Each point was obtained by measuring the mean postsaccadic eye velocity from probe trials in the learning and baseline blocks in both the adapting and control directions for each experiment. Along the abscissa, which shows data for the adapting direction, the symbols form a distinct bimodal distribution, with learning ratios centered above and below one for increase-velocity and decrease-velocity experiments, respectively. The geometric means of the learning ratios in increase-velocity and decrease-velocity experiments were 1.66 and 0.68, and both were significantly different from 1 (one-sided t test; p < 0.01). Along the ordinate, which shows the learning ratios in the control direction, the symbols for different experiments are tightly distributed around 1 for both increase-velocity and decrease-velocity experiments. The geometric means of the learning ratios in the control direction were 1.05 for increase experiments and 1.06 for decrease experiments, and neither was significantly different from 1 (one-sided t test; p > 0.05).
We also conducted two experiments in each of two animals to assess the bandwidth of the directional generalization. Learning procedures were the same as in the previous experiments, but now the probe trials started from the same position in the visual field and moved in 12 different directions at 30° intervals relative to the adapted direction. We found that postsaccadic learning generalized only partially to nearby directions. To quantify the direction generalization, we fit a Gaussian function using the Levenberg–Marquardt method (Press et al., 1988) to the learning ratios for all 12 directions. Across all experiments, the mean bandwidth at half-height of the Gaussian fits was 62°. Similar results were obtained in two additional experiments on a third monkey; however, we doubted the accuracy of the vertical eye velocity records in this animal, so the data were not included in the mean. These findings are consistent with previous findings showing that learned changes generalized to motions within 60° of the trained motion (Kahlon and Lisberger, 1999), but add the important feature that all directions of target motion were delivered at the same location in the visual field.
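The sketch below illustrates this fit in MATLAB with synthetic learning ratios. The Gaussian parameterization and starting values are assumptions; lsqcurvefit (Optimization Toolbox) is asked to use its Levenberg–Marquardt algorithm to match the method named above, and the bandwidth at half-height is taken as the full width at half of the Gaussian's amplitude above its baseline.

```matlab
% Gaussian fit to learning ratios measured at 12 probe directions (synthetic data).
dirs   = -180:30:150;                                    % probe directions, deg
ratios = 1 + 0.6 * exp(-dirs.^2 / (2 * 30^2)) + 0.05 * randn(size(dirs));

% p(1): baseline, p(2): amplitude, p(3): center (deg), p(4): sigma (deg)
gaussFn = @(p, x) p(1) + p(2) * exp(-(x - p(3)).^2 / (2 * p(4)^2));
p0      = [1, 0.5, 0, 40];
opts    = optimoptions('lsqcurvefit', 'Algorithm', 'levenberg-marquardt');
pFit    = lsqcurvefit(gaussFn, p0, dirs, ratios, [], [], opts);

bandwidth = 2 * sqrt(2 * log(2)) * pFit(4);              % full width at half-height, deg
```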
Absence of generalization of pursuit learning to saccades
Comparison of prelearning and postlearning saccades during probe trials did not reveal any significant changes in mean saccade amplitude, direction, or latency in our experiments (t test; p > 0.05). This is consistent with one (Ogawa and Fujita, 1997) but not another (Nagao and Kitazawa, 1998) previous study. Those two studies used different target conditions from each other and from the current study, suggesting that saccadic adaptation depends on the exact target configuration selected for the pursuit learning. The lack of saccadic adaptation in our paradigm may seem surprising, given that the saccadic system takes target velocity into account when programming a saccade to a moving target (Keller and Heinen, 1991). However, the drive for saccadic learning is a position error (Wallman and Fuchs, 1998). For our learning paradigm, we calculated the position error resulting from the change in target velocity as the difference between the distance the target would have traveled during the saccade at the first target velocity and the distance it actually traveled at the second velocity. The position errors present during learning trials averaged 1°, and therefore were small in comparison with target step sizes typically used to evoke saccadic adaptation (Straube et al., 1997). Thus, our paradigm may not be effective in evoking saccadic adaptation or the changes may be too small to detect given the variability of saccades evoked to moving targets.
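For concreteness, this position error is approximately the velocity difference multiplied by the saccade duration. Assuming a catch-up saccade lasting roughly 50 msec (a typical value for saccades of the size used here, not a figure stated in the text), an increase-velocity trial gives (30 − 10)°/sec × 0.05 sec = 1°, consistent with the average position error reported above.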
DISCUSSION
The central goal of our study was to constrain the possible sites of learning in pursuit eye movements by understanding how pursuit learning generalizes for the visual field location and direction of moving targets. Generalization for the direction of target motion at a given retinal locus was quite restricted and did not extend to the opposite direction. Generalization for initial target position was partial, but often included targets that were in the opposite visual hemifield and as far as 18° of visual angle from the position of the adapting stimulus. The large spatial spread of learning effects seems to be an obligate feature of the neural circuits that produce pursuit learning, because greater spatial specificity did not emerge when two opposing learning stimuli were presented in different retinal locations.
Relationship to previous pursuit learning data
Our data provide an important extension to previous studies on pursuit learning (Nagao and Kitazawa, 1998; Ogawa and Fujita, 1997), which showed that learning for a particular direction of target motion can generalize to nearby target positions to some degree. We have shown that learning can generalize to spatial locations as far as 18° away, including sites in the opposite hemifield, and that learning cannot be forced to be more spatially specific. This means that the expression of learning for motion in a particular direction must be described as somewhere between position-specific and position-independent. Thus, our data support and extend the previous conclusion of Kahlon and Lisberger (1996): learning occurs in an intermediate reference frame that integrates signals related to multiple aspects of both the visual stimulus and the evoked eye motion, and may involve multiple sites.
Comparison of learning for presaccadic and postsaccadic eye velocity
Our learning paradigm was carefully contrived so that the signals that guide learning were available only after the saccade. Yet, learning caused changes in the component of postsaccadic eye velocity that is driven by visual inputs present before the saccade. For learning to work correctly, the system must work this way. Visual inputs present after the saccade constitute an error signal that reports the failure to pursue correctly based on visual inputs present 100 msec earlier. The error signals presented near the fovea after the saccade therefore cause learning in the response to earlier visual signals presented in peripheral vision. This idea has been treated in the analysis of learning in the vestibulo-ocular reflex (Raymond and Lisberger, 1996), and appears as well in other examples of visual-motor learning, such as in saccades (Shafer et al., 2000) or post-saccadic fixations (Optican and Miles, 1985).
In our paradigm, learning was induced with complete spatial and temporal separation between the presaccadic inputs that were subject to learning and the postsaccadic signals that guided learning. In spite of this separation, learning could generalize to a large region of the visual field surrounding the presaccadic visual stimulus. Learning was also able to generalize to pursuit that preceded the saccade in approximately two-thirds of our experiments, but the time course of learning was slower for presaccadic than postsaccadic pursuit. This could imply that the systems responsible for these two components of pursuit are heavily shared and are both subject to a single learning mechanism. Perhaps the neural systems that guide presaccadic and postsaccadic pursuit overlap only partially, but both are subject to learning. Alternatively, the interactions that drive learning with presaccadic and postsaccadic visual signals may access the same mechanisms with different efficacies. In this regard, it is interesting that the postsaccadic learning is faster, because this is the situation that would obtain most often with natural tracking using a combination of saccades and smooth pursuit.
Constraints on the neural site of pursuit learning
To shed light on the neural sites of pursuit learning, we now relate the properties of the generalization of learning to the properties of signals processed at different stages of the neural circuit for pursuit. Suppose, for example, we had found that learning for a target motion to the right at one position in the visual field caused changes in pursuit only for probe target motions that took the target to the right starting within a few degrees of the position of the adapting stimulus. Then, we would conclude that the site of learning was early in the visual pathways, at a site where image motion was represented by cells with small receptive fields. Suppose, at the other extreme, that learning for rightward target motion at one position in the visual field caused the same changes in rightward pursuit to targets that appeared anywhere in the visual field. Then we would conclude that the site of learning was probably deep in the motor system, at a site where neural signals represented pursuit eye motion along the horizontal axis.
Our data fall intermediate between the predictions at the motor and sensory extremes, and so does our conclusion. When we tested the generalization of pursuit to target motion in the learning direction across different parts of the visual field, we found generalization that was incomplete, but significant, extending in many cases to sites across the horizontal or vertical meridian from the visual field location of the targets used for learning. The fact that generalization occurs to retinal locations as much as 18° away in the opposite visual quadrant makes it unlikely that learning occurs at the level of areas V1 or MT, where the receptive fields are small and are confined primarily to the contralateral visual field. At eccentricities of <10°, where we placed our learning stimuli, V1 neurons have receptive fields ≤1° in diameter (Gattass et al., 1981; Van Essen et al., 1984). MT neurons at these eccentricities have been reported to have receptive fields ranging from 5 to 10° in diameter (Gattass and Gross, 1981; Van Essen et al., 1984; Desimone and Ungerleider, 1986; Komatsu and Wurtz, 1988; Ferrera and Lisberger, 1997). In MT, although there is some ipsilateral visual representation, the majority of the neurons have receptive fields confined to the contralateral hemifield (Van Essen et al., 1981; Desimone and Ungerleider, 1986). For both V1 and MT, the size of the receptive fields is smaller than the spatial scale of the generalization.
Two cortical areas in the pursuit pathway operate on spatial scales that would make them good candidates for sites of learning. MST contains cells with large receptive fields, some extending well into the ipsilateral hemifield (Komatsu and Wurtz, 1988). At 7° eccentricity, some MST neurons have receptive fields >15° in diameter (Desimone and Ungerleider, 1986; Komatsu and Wurtz, 1988; Ferrera and Lisberger, 1997). Furthermore, neurons in the dorsal region of MST integrate both retinal image motion and extraretinal information that may signal the direction of eye motion (Newsome et al., 1988) and would satisfy the criterion previously established by Kahlon and Lisberger (1996) for a site of learning where there is an interaction of signals related to image motion and eye motion. The arcuate FPA, which receives visual motion signals from both MT and MST (Tian and Lynch, 1996), is also a plausible site for learning. FPA has a spatial scale and an interaction of image motion and eye motion signals similar to area MST (MacAvoy et al., 1991; Gottlieb et al., 1993, 1994). Furthermore, FPA has the capacity to modulate the gain of the pursuit response to target motion (Tanaka and Lisberger, 2001), and could use this capacity during pursuit learning. Another area that contains an appropriate mix of visual signals, large receptive fields, and eye motion signals is the DLPN (Suzuki and Keller, 1984; Mustari et al., 1988).
Finally, the fact that learning generalizes incompletely for target position in the visual field raises an obstacle to the conclusion that pursuit learning occurs in the cerebellum. Several forms of motor learning are thought to reside in the cerebellum: for example, learning the metrics of arm movements (Gilbert and Thach, 1977; Ojakangas and Ebner, 1992), changing the gain of the vestibulo-ocular reflex (Ito, 1982; Raymond et al., 1996), and changing the gain of saccades (Optican and Robinson, 1980). For pursuit, it is tempting to come to the same conclusion: lesions of the vermis (Takagi et al., 2000) or the entire cerebellum (Nagao and Kitazawa, 2000) may affect pursuit learning. Recordings from Purkinje cells during pursuit learning are consistent with learning either within or upstream from the floccular complex of the cerebellum (Kahlon and Lisberger, 2000). However, floccular firing during pursuit generalizes fully across the visual field: Purkinje cell responses show a single, unifying relationship to eye acceleration for targets presented up to 20° eccentric in either visual hemifield (Krauzlis and Lisberger, 1991). Thus, the floccular complex of the cerebellum does not have discharge properties that would make it appropriate as a sole site for pursuit learning. Further work will be needed to determine whether the smooth eye movement parts of the cerebellar vermis play a special role in pursuit learning, or if pursuit learning resides in the pontine nuclei, the cerebral cortex, or hitherto unexplored regions such as the basal ganglia. Finally, the complex dependence of pursuit learning on many features of image, target, and eye motion raises the possibility that learning results from multiple components of the pursuit circuit, each with different signaling properties, rather than from any single area encoding all the signals.
Footnotes
This work was supported by National Institutes of Health Grant NS34835 and the Howard Hughes Medical Institute. We thank members of the Lisberger laboratory for helpful comments on this manuscript, Karen MacLeod, Elizabeth Montgomery, and Stefanie Tokiyama for excellent technical assistance, and Scott Ruffner for computer programming.
Correspondence should be addressed to Dr. I-han Chou, Department of Physiology, Box 0444, University of California, San Francisco, 513 Parnassus Avenue, Room 762-S, San Francisco, CA 94143-0444. E-mail: ihan@phy.ucsf.edu.