## Abstract

Sensory inputs control motor behavior with a strength, or gain, that can be modulated according to the movement conditions. In smooth pursuit eye movements, the response to a brief perturbation of target motion is larger during pursuit of a moving target than during fixation of a stationary target. As a step toward identifying the locus and mechanism of gain modulation, we test whether it acts on signals that are in visual or motor coordinates. Monkeys tracked targets that moved at 15°/s in one of eight directions, including left, right, up, down, and the four oblique directions. In eight-ninths of the trials, the target underwent a brief perturbation that consisted of a single cycle of a 10 Hz sine wave of amplitude ±5°/s in one of the same eight directions. Even for oblique directions of baseline target motion, the magnitude of the eye velocity response to the perturbation was largest for a perturbation near the axis of target motion and smallest for a perturbation along the orthogonal axis. Computational modeling reveals that our data are reproduced when the strength of visual–motor transmission is modulated in sensory coordinates, and there is a static motor bias that favors horizontal eye movements. A network model shows how the output from the smooth eye movement region of the frontal eye fields (FEF_{SEM}) could implement gain control by shifting the peak of a visual population response along the axes of preferred image speed and direction.

## Introduction

To function as sentient creatures, we cannot respond in exactly the same way for each occurrence of a given sensory event. We must modulate the impact of sensory stimuli, sometimes responding strongly and sometimes ignoring them altogether. In motor control, for example, the strength, and even the sign, of sensory–motor transmission is subject to modulation in many systems in the spinal cord (Hultborn, 2001) and the cerebral cortex (Chapin and Woodward, 1982; Seki and Fetz, 2012). In the model system we study, smooth pursuit eye movements, motor commands are derived from visual signals created by image motion (Rashbass, 1961; Lisberger and Westbrook, 1985). Yet, visual sensory signals do not enjoy unfettered access to the oculomotor pathways. Instead, the strength, or gain, of visual–motor transmission is subject to modulation depending on the behavioral paradigm and experimental conditions (Luebke and Robinson, 1988; Schwartz and Lisberger, 1994; Lisberger, 1998; Schoppik and Lisberger, 2006).

Our prior work (Schwartz and Lisberger, 1994; Carey and Lisberger, 2004) used behavioral methods to demonstrate the existence and some properties of gain control. We found that pursuit generated larger eye velocity responses when a given brief target motion occurred during pursuit versus during fixation. The modulation was direction selective: when the eye was tracking horizontal target motion, a horizontal perturbation evoked a larger eye velocity response than did a vertical perturbation, and vice versa. Previous work also suggested a causal role for the smooth eye movement region of the frontal eye fields, or FEF_{SEM}, in control of visual–motor gain for pursuit (Missal and Heinen, 2001; Tanaka and Lisberger, 2001; Nuding et al., 2008, 2009). Stimulation of the saccadic region of the frontal eye fields induces similar modulation of sensory transmission to the visual cortex (Moore and Armstrong, 2003).

We know little about the location or mechanisms of modulation of the strength of sensory–motor transmission. Does, for example, modulation occur at the level of sensory representations, motor commands, or somewhere in between? One approach to this question is to ask about the coordinate system for gain control. If gain control for pursuit occurs in horizontal and vertical coordinates aligned with the approximate pulling direction of the eye muscles, then we would conclude that it is operating on commands for horizontal and vertical eye velocity and that the mechanism resides in the motor system (Fig. 1*A*). If gain control is organized according to any arbitrary direction of pursuit, then we would conclude that it is operating on visual signals, before population decoding has created commands for horizontal and vertical eye movement (Fig. 1*B*). Our prior analysis (Schwartz and Lisberger, 1994) used target motion only along the horizontal and vertical axes, and the results were equally compatible with the alternative hypotheses that gain control occurs in visual versus motor coordinates.

The present study provides evidence that gain control occurs while signals are still encoded in a visual coordinate system. We present a network model that shows how gain control could work by shifting the peak of a neural population response in sensory coordinates, rather than by changing the size of motor commands. We suggest that a similar mechanism might explain other examples of shifting population responses, e.g., spatial remapping in the parietal cortex in relation to saccadic eye movements (Duhamel et al., 1992).

## Materials and Methods

We performed experiments on three male monkeys that had been trained to fixate and track spots that appeared on a video screen. Before experiments, we had performed two surgeries to implant (1) a socket on the skull for head restraint and (2) a coil of wire on one eye to allow recordings of eye position with the scleral search coil system (Fuchs and Robinson, 1966). We sutured the coil to the sclera to promote longevity, as well as precision and accuracy of the measurements (Ramachandran and Lisberger, 2005). The eye coil provided voltages related to eye position, low-pass filtered at 330 Hz. We obtained voltages related to eye velocity with an analog differentiator that included a filter with a low-pass cutoff at 25 Hz. Horizontal and vertical eye position and eye velocity traces were sampled during the experiment at 1 kHz on each channel and saved for later data analysis.

Visual targets consisted of 0.5° spots that appeared on a video screen with a refresh rate of 80 Hz and a spatial resolution of 0.02°. The luminances of the fixation spot and the tracking target were 0.7 and 6 cd/m^{2}, respectively. The background was essentially black and did not register a measurable luminance on the photometer. The monitor was placed 60 cm from the monkey and subtended a visual angle of 44 × 29°. Monkeys received droplets of fluid in exchange for keeping their eyes within a window around target position for the duration of a behavioral trial. Experiments were performed at the University of California, San Francisco (UCSF). All methods had been approved in advance by the Institutional Animal Care and Use Committees at UCSF or Duke University and were in full compliance with the NIH *Guide for the Care and Use of Laboratory Animals*.

##### Experimental design and data analysis.

Each experiment comprised a sequence of trials, each with the structure illustrated in Figure 2*A*. Each trial began with the appearance of a fixation spot for a random interval from 400 to 800 ms. At the time defined as "zero," the target (Fig. 2, dashed traces) stepped in one direction and then moved at a constant speed of 15°/s in the opposite direction. In eight of every nine trials, the target underwent a small, brief perturbation that began 500 ms after the onset of target motion. The perturbation was a single cycle of a 10 Hz sine wave with a peak-to-peak amplitude of 10°/s. Monkeys initiated pursuit when the target started to move, matched target velocity quickly, and showed a brief response to the perturbation of target motion when it was present (Fig. 2*A*, red vs black traces). In Figure 2*A*, for example, the horizontal eye velocity component of the response to the perturbation had a peak-to-peak amplitude of 5.3°/s; the vertical response was a much smaller 0.9°/s. Although monkeys could predict when the perturbation would occur, the randomness in the direction of the perturbation prevented them from making anticipatory smooth eye movements.

The overall experimental design appears in Figure 2*B*. Each daily session contained 64 different combinations of the direction of target and perturbation motion in randomly interleaved trials, plus eight additional directions of target motion without perturbations. Target motion was along the four cardinal directions and the four oblique directions. Baseline pursuit directions are indicated by the eight arrows in Figure 2*B*, which are centered on points that represent the speed and direction of baseline pursuit in polar coordinates. Perturbations were in one of the eight directions used for the continuous target motion. In the example in Figure 2*A*, for example, target motion was rightward, and the perturbation caused deflection of target velocity that was first up and right and then down and left. Such a perturbation caused a shift in target position of 0.5° to the right and up, small enough so that it did not push the monkey's eye out of the window used as a criterion for reward delivery.

We analyzed the data by dividing the full set of trials into groups according to the direction of target motion and perturbation, eliminating trials that had saccades in the interval from 50 ms before to 250 ms after the onset of the perturbation, and averaging horizontal and vertical eye velocity as a function of time. To isolate the responses to the perturbation from the baseline pursuit eye velocity, we subtracted the average eye velocity traces for target motions without perturbations (“control”) from those with perturbations for each of the 64 target motion conditions. Depending on the purpose of the analysis, we subtracted the control traces from average traces with perturbations or from individual traces with perturbations.

As a prelude to statistical analysis, we applied boxcar smoothing with a 5 ms time window to the control-corrected velocity traces of individual trials. We then conducted principal component analysis (PCA) on eye velocity in the interval from 100 to 200 ms after the onset of the perturbation. We used the peak-to-peak excursion of the projection of the first principal component onto each individual response to estimate the magnitude of the response; the first principal component captures most of the variance of the eye velocity. We determined the direction of the response to the perturbation using the principal component coefficients and the sign of the first peak of the projection of the first principal component, unless the amplitude was <1.5°/s, in which case we assumed that the response was in the direction of the perturbation. The use of PCA facilitated statistical analysis, but the same results appeared when we simply measured response amplitudes and directions from averaged traces.
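The PCA-based magnitude measurement can be sketched in a few lines. This is a minimal illustration with synthetic control-corrected traces; the array shapes, variable names, and noise model are ours, not the original analysis code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic control-corrected eye velocity: 40 trials x 100 samples
# (100-200 ms after perturbation onset, sampled at 1 kHz).
t = np.arange(100) / 1000.0
template = np.sin(2 * np.pi * 10 * t)            # idealized perturbation response
trials = np.outer(2.0 + rng.normal(0, 0.2, 40), template)
trials += rng.normal(0, 0.05, trials.shape)      # measurement noise

# PCA via SVD of the mean-subtracted trial matrix.
centered = trials - trials.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
pc1 = vt[0]                                      # first principal component (time course)

# Project each trial onto PC1, reconstruct its PC1 contribution, and take
# the peak-to-peak excursion as the response magnitude for that trial.
proj = trials @ pc1                              # scalar weight per trial
recon = np.outer(proj, pc1)                      # PC1 contribution to each trial
magnitude = recon.max(axis=1) - recon.min(axis=1)

print(round(magnitude.mean(), 2))
```

Because the magnitude is taken from the PC1 reconstruction, the arbitrary sign of the SVD-derived component cancels out.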

As shown by vertices of the solid octagons in Figure 2*B* surrounding the arrows used to represent the direction of each baseline target motion, we would have seen a symmetric response pattern if the monkey had produced equal-magnitude responses exactly in the direction of the perturbation for each of the eight directions of perturbation. In fact, the perturbation response functions were clearly elliptical (Fig. 2*C*, solid curves) and were aligned more or less with the direction of baseline target motion and pursuit. For each direction of baseline pursuit, we fitted an ellipse to the eight points that described the direction and magnitude of the responses to the eight perturbations. We used a conic equation and converted the fitted parameters into the length of the major axis, the length of the minor axis, the angle of the major axis, and the center of the fitted ellipse. We also fitted a circle to the eight points to determine whether an ellipse provided a better account of the data than did a circle. For obtaining fits for a circle (Pratt, 1987) and an ellipse (Fitzgibbon et al., 1999), we used algebraic methods that minimize errors in the least-square sense.
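The conversion from a fitted conic to axis lengths can be sketched as follows. This uses a plain unconstrained least-squares conic (smallest singular vector of the design matrix) rather than the constrained direct ellipse fit of Fitzgibbon et al. (1999) or the Pratt (1987) circle fit used in the paper, but the conic-to-axes conversion is the same idea:

```python
import numpy as np

def fit_conic(x, y):
    """Algebraic least-squares fit of a general conic
    a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0, taken as the right singular
    vector with the smallest singular value (illustrative, unconstrained)."""
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    return np.linalg.svd(D)[2][-1]

def ellipse_axes(p):
    """Convert conic coefficients to (major, minor) semi-axis lengths."""
    a, b, c, d, e, f = p
    A = np.array([[a, b / 2], [b / 2, c]])
    q = f - np.array([d / 2, e / 2]) @ np.linalg.solve(A, [d / 2, e / 2])
    lam = np.linalg.eigvalsh(A)          # curvatures along the principal axes
    axes = np.sqrt(-q / lam)             # semi-axis lengths (scale-invariant)
    return axes.max(), axes.min()

# Eight noiseless points on an ellipse with axis ratio 3, rotated 30 deg,
# standing in for the eight perturbation responses.
t = np.linspace(0, 2 * np.pi, 8, endpoint=False)
pts = np.column_stack([3 * np.cos(t), np.sin(t)])
rot = np.radians(30)
R = np.array([[np.cos(rot), -np.sin(rot)], [np.sin(rot), np.cos(rot)]])
x, y = (pts @ R.T).T

major, minor = ellipse_axes(fit_conic(x, y))
print(round(major / minor, 3))
```

With noiseless points the algebraic fit recovers the generating ellipse exactly; with noisy perturbation responses the constrained methods are preferred because an unconstrained conic fit can return a hyperbola.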

We used a bootstrap procedure to generate empirical distributions for the shape of the perturbation response functions. For each direction of baseline pursuit, we drew 1000 samples of eight perturbation responses, one sample from each direction of perturbation, with replacement. For each draw, we fitted the eight points with an ellipse and a circle. We then performed an *F* test (Motulsky and Christopoulos, 2003) on the summed squared error for each draw to test whether an ellipse is a significantly better description of the data than is a circle:

$$F = \frac{\left(SS_c - SS_e\right) / \left(DF_c - DF_e\right)}{SS_e / DF_e} \tag{1}$$

where *SS*_{c,e} are the sums of the squared errors for the circle and ellipse fits and *DF*_{c,e} are the degrees of freedom for each fit. The equation for a circle has three free parameters, and the conic equation for an ellipse has six free parameters, so *DF*_{c} is 8 − 3 = 5 and *DF*_{e} is 8 − 6 = 2. If the *p* value of the *F* test was <0.05, we judged that the ellipse was a better description of the data, and we computed the ratio of the major and minor axes, or *b*/*a*, for future analysis. If the *p* value was >0.05, then we assigned *b*/*a* the value of 1. We used the values of *b*/*a* from the 1000 draws for each pursuit direction to create distributions of *b*/*a*, and then we obtained 95% confidence intervals from the empirical distributions.
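The decision rule (fit both shapes, keep the fitted axis ratio only when the ellipse wins the nested-model *F* test) can be sketched as follows; the sum-of-squares values here are made up for illustration:

```python
import numpy as np
from scipy.stats import f as f_dist

def f_test_ellipse_vs_circle(ss_c, ss_e, df_c=5, df_e=2):
    """Nested-model F test on fits to the 8 perturbation responses:
    circle (3 parameters, DF = 8 - 3 = 5) vs conic ellipse (6 parameters,
    DF = 8 - 6 = 2)."""
    F = ((ss_c - ss_e) / (df_c - df_e)) / (ss_e / df_e)
    p = f_dist.sf(F, df_c - df_e, df_e)    # right-tail probability
    return F, p

def axis_ratio(ss_c, ss_e, b_over_a, alpha=0.05):
    """Keep the fitted b/a only when the ellipse is significantly better;
    otherwise assign the value 1 (a circle)."""
    _, p = f_test_ellipse_vs_circle(ss_c, ss_e)
    return b_over_a if p < alpha else 1.0

# Example: the ellipse reduces the residual a lot -> small p, keep b/a.
F, p = f_test_ellipse_vs_circle(ss_c=10.0, ss_e=0.1)
print(F, p < 0.05, axis_ratio(10.0, 0.1, 1.8))
```

In the full procedure this rule is applied to each of the 1000 bootstrap draws, and the resulting *b*/*a* values form the empirical distribution from which the confidence intervals are read.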

##### Tests of coordinate system hypotheses.

We tested the data against the two main hypotheses for the coordinate system of gain control by comparing the data to the predictions of models of gain control in the two coordinate systems.

The motor coordinate model assumes that gain modulation occurs in Cartesian coordinates after the representation of target motion in area MT has been decoded to create commands for horizontal and vertical eye velocity. We chose to model Cartesian coordinates as an approximation to the horizontal and vertical pulling directions of the eye muscles and to approximate the preferred directions of the two main groups of Purkinje cells that control smooth eye velocity in the floccular complex of the cerebellum (Krauzlis and Lisberger, 1996). In rough compliance with the data in the study by Schwartz and Lisberger (1994), we assumed that the gains applied to the horizontal and vertical components of the response vector for the perturbation *r⃗*_{i,j} will be proportional to the eye velocity in the horizontal and vertical components of the ongoing pursuit vector (*p⃗*_{i}):

$$r_{i,j}^{H} = \left(g_i \left|p_i^{H}\right| + k\right) v_j^{H} \tag{2}$$

$$r_{i,j}^{V} = \left(g_i \left|p_i^{V}\right| + k\right) v_j^{V} \tag{3}$$

$$\vec{r}_{i,j} = \left(r_{i,j}^{H},\; r_{i,j}^{V}\right) \tag{4}$$

where *i* and *j* represent the directions of pursuit and perturbation, respectively; *p⃗*_{i} is a unit vector in pursuit direction *i*; *v⃗*_{j} is a unit vector in perturbation direction *j*; *g*_{i} is the gain of visual–motor transmission during pursuit in direction *i*; and *k* is an offset that prevents *r⃗*_{i,j} from being zero. We used the parameter *b* to create a static motor bias toward horizontal or vertical eye movement downstream from the location of gain modulation as follows:

$$\hat{r}_{i,j} = \left(b\, r_{i,j}^{H},\; (1-b)\, r_{i,j}^{V}\right) \tag{5}$$

The retinal coordinate model assumes that gain modulation occurs before the visual inputs have been decoded into horizontal and vertical components to drive the response to a perturbation. In the retinal coordinate model, gain control should be strongest for perturbations in the direction of target motion, and the axis of the strongest gain modulation should be able to rotate freely in polar coordinates that represent the direction of the image motion from the perturbation. To implement gain modulation that is stronger along the axis parallel versus perpendicular to ongoing pursuit, we used a matrix *R*_{i} to rotate the axes of the unit vectors that represent the direction of pursuit and the perturbation based on the angle θ_{i} of the pursuit direction *i*:

$$R_i = \begin{bmatrix} \cos\theta_i & \sin\theta_i \\ -\sin\theta_i & \cos\theta_i \end{bmatrix} \tag{6}$$

$$\vec{p}^{\,R}_{i} = R_i\, \vec{p}_i, \qquad \vec{v}^{\,R}_{j} = R_i\, \vec{v}_j \tag{7}$$

where the superscript *R* denotes rotated vectors and the subscripts *p* and *o* indicate the vector components parallel and orthogonal to the direction of baseline pursuit, respectively. We then computed the response vector in the coordinate system defined by the direction of pursuit:

$$r_{i,j}^{R,p} = \left(g_i \left|p_i^{R,p}\right| + k\right) v_j^{R,p} \tag{8}$$

$$r_{i,j}^{R,o} = \left(g_i \left|p_i^{R,o}\right| + k\right) v_j^{R,o} \tag{9}$$

Because the rotated pursuit vector is *p⃗*^{R}_{i} = (1, 0) by construction, the gain applied along the axis of pursuit is *g*_{i} + *k*, whereas the gain applied along the orthogonal axis is only *k*. We then used the inverse of the rotation matrix to return the result to the standard Cartesian coordinate system used to measure pursuit eye movements:

$$\vec{r}_{i,j} = R_i^{-1}\, \vec{r}^{\,R}_{i,j} \tag{10}$$

Finally, we reused Equation 5 to apply a static horizontal–vertical bias and obtain a prediction of the eye velocity in response to each perturbation.
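Both hypotheses can be written as a few lines of numpy. The generative sketch below is ours: the gain, offset, and the bias form (*b* on horizontal, 1 − *b* on vertical, with b = 0.5 giving no bias) are illustrative choices. Without a bias, the motor model predicts a circular response function for oblique pursuit, whereas the retinal model predicts an ellipse elongated along the pursuit axis:

```python
import numpy as np

def unit(theta_deg):
    th = np.radians(theta_deg)
    return np.array([np.cos(th), np.sin(th)])

def motor_model(pursuit_deg, pert_deg, g, k, b):
    """Gain applied separately to horizontal and vertical components
    (gain control after decoding into H/V commands)."""
    p, v = unit(pursuit_deg), unit(pert_deg)
    r = (g * np.abs(p) + k) * v                  # componentwise H, V
    return np.array([b * r[0], (1 - b) * r[1]])  # static H/V bias

def retinal_model(pursuit_deg, pert_deg, g, k, b):
    """Gain applied along axes parallel/orthogonal to pursuit
    (gain control while signals are still in visual coordinates)."""
    th = np.radians(pursuit_deg)
    R = np.array([[np.cos(th), np.sin(th)],
                  [-np.sin(th), np.cos(th)]])    # rotate into the pursuit frame
    pR, vR = R @ unit(pursuit_deg), R @ unit(pert_deg)  # pR = (1, 0)
    rR = (g * np.abs(pR) + k) * vR               # strong gain on the parallel axis
    r = R.T @ rR                                 # back to screen coordinates
    return np.array([b * r[0], (1 - b) * r[1]])

# Response functions for oblique (45 deg) pursuit, no bias (b = 0.5):
dirs = np.arange(0, 360, 45)
motor = np.array([motor_model(45, d, g=2.0, k=0.3, b=0.5) for d in dirs])
retinal = np.array([retinal_model(45, d, g=2.0, k=0.3, b=0.5) for d in dirs])
# The motor model is circular for oblique pursuit; the retinal model is
# elongated along the 45 deg pursuit axis.
print(np.linalg.norm(motor, axis=1).round(2))
print(np.linalg.norm(retinal, axis=1).round(2))
```

Raising *b* above 0.5 stretches both predictions horizontally, which is how the fitted models account for the observed deviation of the major axis toward horizontal.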

For the 64 combinations of pursuit direction and perturbation direction in each daily experiment for each of the three monkeys, we used a least-squares procedure to optimize the parameters of each model to obtain the best fit to the data. To represent the data, we used the amplitude of the projection of the first principal component from PCA onto the averages of eye velocity, and we asked whether the shape and orientation of the ellipses/circles in each experiment could be better predicted in a statistical sense by gain control in horizontal/vertical coordinates versus in polar coordinates defined by the direction of target motion. In evaluating the models, we considered only the four oblique directions of baseline pursuit because the model predictions are identical for horizontal and vertical baseline pursuit. Each model has six free parameters (four gains, one static bias, one offset).

To compare the fits provided by the two models, we used the following corrected Akaike's Information Criterion (AICc) (Motulsky and Christopoulos, 2003):

$$AIC_C = N \ln\!\left(\frac{SS}{N}\right) + 2K + \frac{2K(K+1)}{N - K - 1} \tag{11}$$

where *N* is the number of data points, *K* is the number of free parameters plus one, and *SS* is the sum of squared errors. If Δ*AIC*_{C} is the value of *AIC*_{C} for the retinal coordinate model minus that for the motor coordinate model, then the probability that the retinal model is correct is given by the following:

$$P_{retinal} = \frac{e^{-0.5\,\Delta AIC_C}}{1 + e^{-0.5\,\Delta AIC_C}} \tag{12}$$
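A minimal implementation of the model comparison follows; the sum-of-squares values are placeholders, with N = 32 points (4 oblique pursuit directions × 8 perturbations) and K = 7 (six free parameters plus one):

```python
import numpy as np

def aicc(ss, n, k):
    """Corrected Akaike Information Criterion for a least-squares fit
    (Motulsky and Christopoulos, 2003); k is free parameters plus one."""
    return n * np.log(ss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)

def prob_retinal(ss_retinal, ss_motor, n, k):
    """Probability that the retinal model is correct, from the AICc
    difference (retinal minus motor)."""
    d = aicc(ss_retinal, n, k) - aicc(ss_motor, n, k)
    return np.exp(-0.5 * d) / (1 + np.exp(-0.5 * d))

# Hypothetical fit residuals: the retinal model fits three times better.
p = prob_retinal(ss_retinal=2.0, ss_motor=6.0, n=32, k=7)
print(round(p, 4))
```

Because both models have the same number of parameters, the penalty terms cancel and the comparison reduces to the ratio of residuals; the AICc form still matters when comparing models of different complexity.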

##### Computer simulations.

We performed computations using a model that included populations of MT neurons, FEF_{SEM} neurons, and what we will call "multiplier" neurons. Each model population comprised neurons with 120 preferred directions and 60 preferred speeds. We modeled MT neurons as follows:

$$MT(ps, pd) = G\!\left(\log_2 S,\ \log_2 ps,\ \sigma_S\right)\, G\!\left(D,\ pd,\ \sigma_D\right) \tag{13}$$

where *G*(*x*, *M*, σ) represents a Gaussian function on *x* with a mean of *M* and a SD of σ. *S* and *D* represent the speed and direction of target motion, respectively; *ps* and *pd* are the preferred speed and direction of each MT unit, respectively; and σ_{S} and σ_{D} are the SDs of the respective Gaussian functions, with values of 2.5 log_{2} units of speed and 30°. These values are close to those of MT neurons [see discussion by Yang et al. (2012)], and the results of the simulation would not have changed with tuning widths that were larger or smaller by factors of 2. We have modeled the tuning curves as functions of log_{2}(speed) to provide a close characterization of the symmetry on a log axis of the speed-tuning curves of MT neurons (Lisberger and Movshon, 1999).
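The MT stage amounts to a separable Gaussian bump over a grid of preferred directions and log-spaced preferred speeds. A minimal numpy version follows; the grid sizes and tuning widths come from the text, while the speed range, function names, and variables are our own:

```python
import numpy as np

def gauss(x, m, sigma):
    return np.exp(-0.5 * ((x - m) / sigma) ** 2)

# 120 preferred directions x 60 log2-spaced preferred speeds
pd = np.arange(0.0, 360.0, 3.0)             # deg
ps = 2.0 ** np.linspace(-1.0, 6.0, 60)      # 0.5 to 64 deg/s (our choice of range)

def mt_response(S, D, sigma_s=2.5, sigma_d=30.0):
    """Separable MT population response to image motion at speed S (deg/s)
    and direction D (deg). Direction differences wrap circularly so the
    tuning behaves sensibly at 0/360 deg."""
    dd = (pd[:, None] - D + 180.0) % 360.0 - 180.0
    return gauss(dd, 0.0, sigma_d) * gauss(np.log2(ps)[None, :], np.log2(S), sigma_s)

resp = mt_response(S=10.0, D=180.0)
i, j = np.unravel_index(resp.argmax(), resp.shape)
print(pd[i], round(ps[j], 2))   # peak sits at the stimulus direction and speed
```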

In our model, we simulated the population of FEF_{SEM} neurons during fixation as a Gaussian function in log_{2} speed:

$$FEF_{fix}(ps, pd) = G\!\left(\log_2 ps,\ \log_2 S,\ \sigma_S\right) \tag{14}$$

where *S* was 0.5°/s and σ_{S} was 1.25 log_{2} units of speed. The resulting population response was not tuned for direction and peaked for FEF_{SEM} units intended to represent fixation neurons (Izawa et al., 2009). We simulated the population of FEF_{SEM} neurons during pursuit in direction *D*_{i} as a Gaussian function along the direction axis, with the fixation neurons now silenced in a direction-selective manner:

$$FEF_i(ps, pd) = G(pd, D_i, \sigma_D)\left[1 - G\!\left(\log_2 ps,\ \log_2 S,\ \sigma_S\right)\right] + G\!\left(\log_2 ps,\ \log_2 S,\ \sigma_S\right)\left[1 - G(pd, D_i, \sigma_D)\right] \tag{15}$$

where σ_{D} was 45° (Tanaka and Lisberger, 2002b).

Equation 14 creates a population of model FEF_{SEM} neurons in which the neurons with very low preferred speeds are active during fixation, whereas other neurons are silent. The neurons with low preferred speeds would respond like fixation neurons, such as those found in the rostral pole of the superior colliculus (Munoz et al., 1991) or the frontal eye fields (Izawa et al., 2009). Equation 15 creates a population of model FEF_{SEM} neurons that are direction tuned, but not speed tuned, to match the responses of real FEF_{SEM} neurons (Tanaka and Lisberger, 2002b). Note that the model FEF_{SEM} neurons are labeled with a “preferred speed” in terms of how they exert their influence on the multiplier neurons (see below), but their responses are not tuned for speed in the same sense as are those of MT neurons. Equation 15 also proposes that the fixation neurons have direction tuning and that they show reduced tuning in a direction-selective manner during pursuit; this feature of fixation neurons in the frontal eye fields is a prediction that has not been tested yet. The nature of the population responses can be appreciated by viewing the appropriate image in Figure 11.

We computed the response of each multiplier unit as the product of the responses of the corresponding model MT and FEF_{SEM} units. We contrived for modulation of the gain of visual–motor transmission to be organized in terms of the axis of motion rather than the direction of motion by providing gain control from the sum of the responses of model FEF_{SEM} neurons that preferred opposite directions:

$$M(ps, pd) = MT(ps, pd)\left[FEF_i(ps, pd) + FEF_i(ps, pd + 180°)\right] \tag{16}$$

Finally, we used a standard vector-averaging decoder to read out desired eye direction and speed from the population of model multiplier neurons:

$$\hat{D} = \tan^{-1}\!\left(\frac{\sum_n M_n \sin(pd_n)}{\sum_n M_n \cos(pd_n)}\right), \qquad \log_2 \hat{S} = \frac{\sum_n M_n \log_2(ps_n)}{\sum_n M_n} \tag{17}$$

where the sums run over all model multiplier units *n*. In Equation 17, the use of the trigonometric relationships obviates problems created by discontinuities in the value of *pd* at 0/360°. We ran the simulations for a perturbation that provided image motion at 10°/s in directions from zero to 360° in steps of 45°. Pursuit direction was leftward, or 180°.
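The whole pipeline (MT population × axis-organized FEF_{SEM} gain → multiplier units → vector-average decoder) can be sketched end to end. The precise form of the pursuit-state FEF_{SEM} population below (direction-tuned units plus fixation units silenced direction-selectively) is our reconstruction rather than the authors' code; tuning widths and grid sizes follow the Methods. The sketch shows the key behavior: the decoded speed for a 10°/s perturbation is larger along the pursuit axis than orthogonal to it:

```python
import numpy as np

def gauss(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2)

def circ(d):
    return (d + 180.0) % 360.0 - 180.0       # wrap to [-180, 180)

pd = np.arange(0.0, 360.0, 3.0)              # 120 preferred directions (deg)
ps = 2.0 ** np.linspace(-1.0, 6.0, 60)       # 60 preferred speeds (deg/s)
PD, PS = np.meshgrid(pd, ps, indexing="ij")
LS = np.log2(PS)

def mt(S, D):
    # separable MT population: direction x log2-speed Gaussians
    return gauss(circ(PD - D), 0.0, 30.0) * gauss(LS, np.log2(S), 2.5)

def fef(D_pursuit):
    # direction-tuned units plus fixation units (low preferred speed)
    # silenced direction-selectively; this exact form is our assumption
    g_dir = gauss(circ(PD - D_pursuit), 0.0, 45.0)
    g_fix = gauss(LS, np.log2(0.5), 1.25)
    return g_dir * (1.0 - g_fix) + g_fix * (1.0 - g_dir)

def decode(M):
    # vector average; sin/cos sidestep the 0/360 deg discontinuity
    w = M / M.sum()
    rad = np.radians(PD)
    d = np.degrees(np.arctan2((w * np.sin(rad)).sum(),
                              (w * np.cos(rad)).sum())) % 360.0
    return d, 2.0 ** (w * LS).sum()

# gain organized by axis of motion: sum FEF drive for opposite directions
F = fef(180.0)                               # leftward pursuit
gain = F + np.roll(F, F.shape[0] // 2, axis=0)

out = {}
for D_pert in (180.0, 90.0):                 # parallel vs orthogonal perturbation
    out[D_pert] = decode(mt(10.0, D_pert) * gain)
    print(D_pert, out[D_pert])
```

In this sketch the fixation units drag the multiplier population's peak toward low preferred speeds for off-axis perturbations, so the decoded speed drops: gain control implemented by shifting the peak of the population response along the speed axis rather than by scaling motor commands.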

The model operates under a number of simplifying assumptions. First, it makes no effort to simulate the eye velocity of baseline pursuit. Baseline pursuit is represented only as a modulatory signal, and only in the responses of a population of model FEF_{SEM} neurons that is tuned for the direction of ongoing pursuit. Thus, the model predicts only the magnitude of the response to perturbations, defined in our experiments by subtracting the eye velocity responses from companion control trials that provided baseline pursuit without perturbations. Second, the model does not include temporal dynamics and instead receives inputs that define the eye velocity of baseline pursuit and the image motion of a perturbation as scalars in polar coordinates. This restricted modeling approach seems valid because the goal of the model was to show how gain could be modulated at a level at which the responses to a perturbation are still represented in visual coordinates, and not to reproduce the detailed time course of eye velocity. The same approach could be scaled up in a model that also included temporal dynamics, but we think the larger model would introduce complexity that is not needed to demonstrate the basic principles of gain control in visual coordinates.

## Results

The critical test of whether gain control is organized in retinal versus motor coordinates lies in the shapes of the perturbation response functions for target motion along the oblique axes.

1. If gain is controlled along the horizontal and vertical (motor) axes and gain is equal for the two axes, then the response functions should be circular for the oblique directions of baseline pursuit.

2. If gain is controlled along the horizontal and vertical (motor) axes but is larger for the horizontal axis, then the response functions should be elliptical for all directions of baseline pursuit, always with a horizontal major axis.

3. If gain is controlled in visual coordinates, along the directions orthogonal versus parallel to the direction of baseline pursuit, then the response functions should be elliptical with an oblique major axis.

We illustrate the responses to perturbations with response functions like those shown in Figure 2*C* for one experiment in monkey U. Each response function reports the average responses to perturbations in eight different directions during pursuit in the given baseline direction. The graph contains eight response functions, one for each direction of baseline pursuit. The response functions are plotted in polar coordinates, and each point shows both the direction and the amplitude of the response to a given perturbation. Each response function is centered on a point that represents the speed and direction of baseline target motion in polar coordinates. The data in Figure 3 are plotted in the same way, except that the points are averages across all the experiments done on each monkey. Figure 3 is based on three, seven, and five daily experiments in monkeys U, J, and R, providing a total of 3042, 8901, and 3392 trials. Thus, in monkeys U, J, and R, each point is based on an average of 42, 123, and 47 repetitions of each combination of the directions of a perturbation and baseline pursuit.

In the data for an individual experiment (Fig. 2*C*), and in the averages across all experiments (Fig. 3), the perturbation response functions are mainly elliptical for pursuit in all eight different baseline directions. The only exceptions are two or three purely vertical directions where the perturbation responses are very small. For the oblique directions that pose the main test of whether the coordinate system of gain control is motor or retinal, the response functions are elliptical for all three monkeys, and the major axis of the ellipse appears to be aligned more or less with the axis of baseline pursuit. Because the perturbation responses tend to be quite small for upward pursuit in monkeys J and R, the elliptical nature of the perturbation response functions for up–right and up–left baseline pursuit directions are not as impressive visually as those for down–right and down–left baseline pursuit.

The data in Figures 2*C* and 3 appear to support the third alternative listed above, which is that gain control is in visual coordinates. In the next three sections below, we verify that appearance by performing statistical evaluation of (1) the elliptical versus circular nature of the response functions, (2) the alignment (or not) of the major axis with the axis of baseline pursuit, and (3) the fit of the data to models of the implementation of gain control in motor versus retinal coordinate systems.

### Elliptical versus circular perturbation response functions

We pooled all the individual trial results for a given monkey and used a bootstrap procedure based on multiple draws of eight individual responses for the eight different directions of perturbation during baseline pursuit in each of the eight directions. For each draw of eight perturbation responses, we fitted the data with both an ellipse and a circle, and we asked which fit was better (details in Materials and Methods). For monkey U (Fig. 4*A*), histograms of the ratio of the lengths of the major and minor axes of the best-fitting ellipse demonstrated that the response functions were significantly elliptical for almost all draws during baseline pursuit in all eight directions. The values of *b*/*a* could not be less than 1 but were assigned a value of 1 if the statistical tests did not indicate that the fit was significantly better for an ellipse versus a circle. We computed the median of each distribution (Fig. 4*A*, central plot, circles) and used the distributions to identify the 95% confidence interval separately for values of *b*/*a* that were above and below the median. For all eight directions of pursuit, the confidence intervals (Fig. 4*A*, error bars in central plot) extended quite far toward positive values in the outward direction and did not touch the unity circle in the inward direction.

Statistical analysis of all trials from the other two monkeys showed that the 95% confidence intervals touched the unity circle only for upward baseline pursuit in monkeys J and R. The 95% confidence intervals do not touch the unity circle for any of the 12 points for baseline pursuit in oblique directions in the three monkeys. For a baseline pursuit direction up and to the left in monkey J, the error bar comes very close to the unity circle, but the 95% confidence limit is at 1.12. We conclude that the perturbation response functions are elliptical for pursuit in all directions, with the exception of upward pursuit in monkeys R and J, who showed weak responses to perturbations in all directions during upward pursuit.

The analysis in Figure 4 excludes the first of the three options given at the beginning of Results. Our data do not support the conclusion that gain control occurs in motor coordinates with equal gains for horizontal and vertical eye motion. However, the analysis presented so far does not address the question of whether or not the major axes of the elliptical response functions lie along the axis of baseline pursuit.

### Orientation of major axis of elliptical response functions

The major axis of the response function was a free parameter in the equation we used to fit an ellipse to each draw of eight perturbation responses from individual trials. We fitted ellipses to the 1000 draws from the data for each set of eight perturbations during a single baseline pursuit direction and pooled the major angles for the 1000 fitted ellipses to obtain distributions of the angle of the major axis. The distributions allowed us to assess the mean, as well as the statistical reliability, of any deviations of the major axis from the axis of the baseline pursuit. Figure 5 shows the distributions of the major axis of the ellipses for baseline pursuit in the four cardinal directions. For baseline pursuit to the left, right, or down (Fig. 5, rows 1, 3, and 4), the distributions of the major angle were quite tight and peaked very close to the axis of baseline pursuit, indicated by the vertical dashed lines. As expected given the small perturbation responses during upward pursuit, the distributions of the axis of the major angle were much broader (Fig. 5, row 2). However, the peak of the distributions still lay near the upward direction of baseline pursuit for monkey U and deviated somewhat toward rightward for monkeys J and R.

For baseline pursuit in the four oblique directions, the angle of the major axis of the response ellipse was always oblique, and the distributions in Figure 6 would make it difficult to conclude that the major axis was horizontal (shown by the vertical lines with shorter dashes). However, most of the distributions in Figure 6 imply that the axis was deviated toward horizontal, relative to the direction of baseline pursuit. The deviations toward the horizontal axis were larger for oblique baseline pursuit with upward components (Fig. 6, rows 1 and 2), and the peaks of the distributions deviated as much as halfway between the directions of baseline pursuit (Fig. 6, vertical lines with longer dashes) and horizontal (Fig. 6, vertical lines with shorter dashes). The deviations toward the horizontal axis were generally smaller, and were present for only four of the six histograms for oblique baseline pursuit with downward components (Fig. 6, rows 3 and 4). Based on the distributions in Figure 6, it is difficult to argue that the angle of the major axis of the perturbation response ellipses was either equal to the angle of baseline pursuit or purely horizontal. We conclude that the angle of the major axis was truly intermediate and deviated more toward horizontal for oblique baseline pursuit with an upward versus a downward component.

### Quantitative tests of retinal versus motor coordinate system models

If the angle of the major axis of the ellipses fitted to the perturbation response fields had been aligned uniformly with the axis of oblique baseline pursuit, then we could have concluded without further analysis that gain control operates on signals in retinal coordinates, rather than in motor or muscle coordinates. Because the major axis is neither horizontal nor aligned perfectly with the axis of baseline pursuit, further analysis is necessary. Therefore, we developed quantitative models to describe the hypotheses that gain control occurs on signals that are in retinal versus motor coordinates (details in Materials and Methods), and we tested both models against the full set of data for each of our three monkeys. The two models differed qualitatively in how gain control was applied to modulate the response to a perturbation, but they both had a motor bias that was positioned downstream from the implementation of gain control. We started by running generative models to understand their basic predictions, using oblique baseline pursuit up and to the right.

The retinal model (see Materials and Methods, Eqs. 6–10) predicts that the perturbation response ellipse during up and right pursuit will have a major axis that is aligned with the axis of baseline pursuit (Fig. 7*A*, black symbols). When we added a motor bias toward horizontal eye movement downstream from gain control, the ellipse was stretched horizontally and compressed vertically so that it deviated toward horizontal. The resulting major axis was intermediate between the axis of baseline pursuit and pure horizontal (Fig. 7*A*, red symbols). The motor model (Materials and Methods, Eqs. 2–5), in contrast, predicts that the perturbation response function in the absence of a motor bias will be circular during baseline pursuit in an oblique direction (Fig. 7*B*, black symbols). Adding a motor bias toward horizontal eye movement downstream from gain control stretches the circle horizontally, so that the perturbation response function is elliptical. However, the model with gain control in motor coordinates obligates the major axis of the elliptical perturbation response function to be horizontal, even during oblique baseline pursuit.
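The contrast between the two hypotheses can be made concrete with a minimal sketch. This is an illustrative reduction, not the full model of Eqs. 2–10: the gains `g_par`, `g_orth`, and `g` and the motor bias `b` are arbitrary placeholder values, and each perturbation direction is reduced to a single response vector.

```python
import numpy as np

def retinal_model(pert_dir, pursuit_dir, g_par=1.0, g_orth=0.4, b=0.3):
    """Gain control in retinal coordinates: the gain is larger along the
    axis of baseline pursuit (g_par) than along the orthogonal axis
    (g_orth); a downstream motor bias b then stretches the horizontal
    component of the command.  Parameters are illustrative only."""
    rel = pert_dir - pursuit_dir
    r_par = g_par * np.cos(rel)            # component along the pursuit axis
    r_orth = g_orth * np.sin(rel)          # component orthogonal to it
    # rotate back into horizontal/vertical eye coordinates
    ex = r_par * np.cos(pursuit_dir) - r_orth * np.sin(pursuit_dir)
    ey = r_par * np.sin(pursuit_dir) + r_orth * np.cos(pursuit_dir)
    ex = ex * (1.0 + b)                    # motor bias favoring horizontal
    return ex, ey

def motor_model(pert_dir, pursuit_dir, g=0.7, b=0.3):
    """Gain control in motor (horizontal/vertical) coordinates: for
    oblique pursuit the horizontal and vertical gains are equal, so the
    response function is circular before the downstream horizontal bias.
    In this reduction pursuit_dir does not affect the response shape."""
    ex = g * np.cos(pert_dir) * (1.0 + b)
    ey = g * np.sin(pert_dir)
    return ex, ey
```

Sweeping `pert_dir` around the circle traces a response ellipse whose major axis is tilted between the pursuit axis and horizontal for the retinal model, but is always horizontal for the motor model, mirroring the contrast between the two sets of predictions.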

We compared the predictions of the two models to the perturbation response functions in the data by fitting each model to the average response functions for each day of experiments. For each fit, we used the approach outlined in Materials and Methods to compute the probability that the retinal model provided the best description of the data. We fitted the models only to the data for baseline pursuit in oblique directions. This approach provided the best discrimination between the two models, because it excluded the data for the cardinal directions, which would have been described equally well by the two models. Each model had six free parameters: the overall gain of the perturbation response for each of the four oblique directions, a downstream motor bias, and an offset term (details in Materials and Methods).

The best fit to the data was provided by the model that implemented gain control in retinal coordinates. The retinal model predicted response ellipses (Fig. 7*C*, red traces) that provided a good fit to the actual data (Fig. 7*C*, black traces) and that appeared to have oblique major axes that aligned well with the perturbation response functions from the data. In contrast, the motor model predicted response ellipses (Fig. 7*D*, blue traces) that always had a horizontal major axis. Across all experiments, the probability that the retinal model was the better model averaged 0.9999, 0.9958, and 0.9997 across the daily experiments performed in monkeys U, R, and J, respectively (statistical approach detailed in Materials and Methods).

The better performance of the retinal model can be attributed to its ability to produce response ellipses that are tilted relative to the horizontal axis. At the same time, the motor bias in the model (“*b*”), which averaged 0.2, 0.3, and 0.05 in monkeys U, R, and J, respectively, allows the major axes of the fitted response functions to be intermediate between the axis of baseline pursuit and the horizontal axis. We verified this feature of the model predictions by finding the ellipses that provided the best fit to the perturbation response functions that emerged from the best model for each monkey.

As illustrated in Figure 8 (open symbols), the major axes of the model's ellipses deviated toward the horizontal axis in monkeys U and R (Fig. 8*A*,*B*) and were aligned well with the axis of baseline pursuit for monkey J (Fig. 8*C*). The major axes of the ellipses fitted to the data (Fig. 8, filled symbols) varied considerably as a function of the oblique direction of the baseline pursuit but agreed generally with the major axes obtained from fitting the predicted response ellipses from the model. The model was not able to follow the direction-by-direction variation in the data because we used a single value of motor bias for all baseline pursuit directions (Fig. 8), whereas the biases in the monkeys' responses varied across quadrants.
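The angle comparisons above rest on extracting a major-axis angle from a set of response vectors. As a simple stand-in for the fitting procedure in Materials and Methods (which we do not reproduce here), one can take the leading eigenvector of the second-moment matrix of the response endpoints:

```python
import numpy as np

def major_axis_angle(vx, vy):
    """Angle (deg, modulo 180) of the major axis of the axial pattern
    formed by response vectors (vx, vy): the direction of the leading
    eigenvector of the second-moment matrix of the endpoints."""
    M = np.array([[np.dot(vx, vx), np.dot(vx, vy)],
                  [np.dot(vx, vy), np.dot(vy, vy)]])
    w, V = np.linalg.eigh(M)      # eigenvalues in ascending order
    major = V[:, -1]              # eigenvector of the largest eigenvalue
    return np.degrees(np.arctan2(major[1], major[0])) % 180.0
```

For responses sampled uniformly around a tilted ellipse, this recovers the tilt exactly; for eight measured response vectors, it gives a least-squares notion of the dominant axis.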

### Pursuit asymmetries

Our monkeys showed asymmetries in pursuit initiation and steady-state eye velocity that agreed qualitatively with the directional asymmetries in perturbation response gains (Fig. 3) and with the inclusion of a motor bias toward horizontal eye motion in the models (Figs. 7, 8). The polar plots in Figure 9 represent the eye velocity 100 ms after the onset of pursuit (open symbols) and the eye velocity at the time of the perturbation (filled symbols) as a function of the direction of baseline pursuit. Each point represents the end of a vector that shows both the speed and direction of eye velocity. Note that the direction of pursuit is very close to the direction of target motion both 100 ms after the onset of pursuit and at the time of the perturbations.

The main feature of the data for all three monkeys is a horizontal–vertical asymmetry that favors horizontal eye movement. Pursuit initiation is somewhat weaker for vertical pursuit compared with horizontal pursuit, with the smallest asymmetry in monkey U (Fig. 9*A*) and the largest in monkey R. There also was an up–down bias that favored downward pursuit: in monkeys U, J, and R, the ratio of the upward-to-downward eye velocity 100 ms after the onset of target motion was 0.81, 0.89, and 0.68, respectively. Figure 9 also reveals substantial horizontal–vertical asymmetries in the steady-state eye velocity at the time of the perturbation (filled symbols) in all three monkeys. In monkey U, steady-state eye velocity was only slightly lower than target velocity for upward and downward pursuit: 12.0 and 14.1°/s for target motion at 15°/s. For monkey J, steady-state eye velocity was 9.6 versus 13.0°/s for upward versus downward pursuit, and for monkey R it was 11.0 versus 10.3°/s.

The low values of steady-state eye velocity during upward pursuit in monkey J and during both upward and downward pursuit in monkey R provide an explanation for the relatively small size of the responses to perturbations under those conditions (Fig. 3). Weak eye speed leaves residual image motion at the time the perturbation is delivered. MT neurons respond poorly to modulation of image motion on top of that much baseline image motion (Lisberger and Movshon, 1999). As a result, the image motion caused by the perturbation would be signaled weakly to the pursuit system.

### Deviations of individual perturbation response directions

The analysis presented so far has characterized the angles of the major axes for the full response function for eight directions of perturbation at each baseline pursuit direction. These angles are based on fits to the collective responses to eight stimuli and were not intended to evaluate any consistent biases in the directions of the individual responses to perturbations. On the basis of predictions from our preliminary efforts to create a model that reproduced our data (see below), we also looked for, and found, a consistent bias in the direction of the responses to individual combinations of baseline pursuit and perturbation.

We used the average time-varying eye velocity of the responses to individual perturbations to analyze the direction of the response as a function of both the direction of baseline pursuit and the direction of the perturbation. We performed the analysis only for perturbations that were at an angle of 45° relative to the direction of target motion, and we computed the amount by which the direction of the response to the perturbation deviated away from the direction of the perturbation toward the direction of baseline pursuit. As shown by the preponderance of positive values of bias in the distributions shown in Figure 10*A–C*, the eye velocity response to individual perturbations almost always was biased toward the direction of target motion in all three monkeys. The bias was present whether we sorted the responses according to the direction of target motion (Fig. 10*D–F*) or the direction of the perturbation (Fig. 10*G–I*).
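The deviation measure can be computed as follows; the variable names and the angle-wrapping convention are our own reconstruction of the analysis described above, not the authors' code.

```python
import numpy as np

def response_bias(resp_vx, resp_vy, pert_dir_deg, pursuit_dir_deg):
    """Signed deviation (deg) of the perturbation-response direction away
    from the perturbation direction, counted positive when the response
    is rotated toward the direction of baseline pursuit.  Assumes the
    perturbation is not collinear with pursuit (here, offset by 45 deg)."""
    resp_dir = np.degrees(np.arctan2(resp_vy, resp_vx))
    # signed angular differences, wrapped into (-180, 180]
    dev = (resp_dir - pert_dir_deg + 180.0) % 360.0 - 180.0
    toward = (pursuit_dir_deg - pert_dir_deg + 180.0) % 360.0 - 180.0
    # positive bias = rotation in the direction of baseline pursuit
    return dev * np.sign(toward)
```

For example, with rightward pursuit (0°) and a perturbation at 45°, a response directed at 35° yields a bias of +10°, i.e., rotated from the perturbation toward the pursuit direction.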

The bias of perturbation responses toward the axis of baseline pursuit is consistent with the prediction of the retinal coordinate model we used in Figure 7 and also turns out to be an emergent property of the network model we analyze in the next section. In the model shown in Figure 7, each perturbed response is pulled toward the pursuit axis because the gain along the baseline pursuit axis is larger than the gain along the perpendicular axis (Fig. 7*A*). Depending on the axis of pursuit relative to the axis of the static motor bias toward horizontal, the forces pulling the direction of the perturbation response toward the axis of pursuit versus toward the horizontal axis can be antagonistic or synergistic, resulting in variation in the magnitude of the bias across different baseline pursuit directions (Fig. 10*D–F*). When the axes of baseline pursuit and the motor bias are orthogonal to each other, the motor bias will pull the responses away from the baseline pursuit axis. When the axes of baseline pursuit and the motor bias are aligned, the bias toward the axis of baseline pursuit will get stronger. For example, perturbation responses to upward target motion in monkey J showed a clear bias away from the baseline pursuit direction (Fig. 10*D*, point plotted at 90° on the *x*-axis). This unique situation can be explained by the antagonism between a small gain for the perturbation response and a relatively strong horizontal bias.

### A network model of gain control in visual coordinates

If gain control indeed occurs in visual coordinates, then it might operate by shifting the peak of a visual population response, as suggested by the network model outlined in Materials and Methods and analyzed in Figure 11. In the model, multiplier units receive sensory inputs related to image motion from area MT and gain control inputs related to baseline pursuit from the smooth eye movement region of the frontal eye fields (FEF_{SEM}). The activity of each model multiplier unit is equal to the product of the activity of a model MT unit and the output from the model FEF_{SEM}. To allow gain control to modulate responses to perturbations in the two directions along each axis, the model multiplies the output of one model MT neuron by the sum of the two model FEF_{SEM} units that have the same preferred speed and preferred directions either aligned with or opposite to the preferred direction of the MT neuron. Note that each model FEF_{SEM} neuron has a preferred speed determined by the preferred speed of the model multiplier neurons it controls, even though neurons in the FEF_{SEM} (and their models in Fig. 11) are tuned for the direction but not the speed of ongoing pursuit (Tanaka and Lisberger, 2002b). The model in Figure 11 is similar to a model we have proposed recently to implement Bayesian priors in the initiation of pursuit through modulation of the strength of visual–motor transmission (Yang et al., 2012).

In the colored images of Figure 11, each pixel shows the response of one model neuron and is plotted as a function of the preferred speed and preferred direction of that model neuron. We are modeling only the response to a brief perturbation of a moving or stationary target, and not the baseline pursuit. Thus, the response of the model MT population represents only the image motion produced by the perturbation; the population response peaks for model neurons that prefer the speed and direction of the perturbation. Neurons in the real FEF_{SEM} respond mainly to the ongoing eye movement and only weakly to image motion (Tanaka and Lisberger, 2002b), so the model FEF_{SEM} population represents only the eye motion of the baseline pursuit. The population response comprises a horizontal stripe that surrounds the model neurons that prefer the direction of the baseline pursuit (Fig. 11*C*,*D*). The activity in the model FEF_{SEM} can be thought of as a corollary discharge related to the baseline pursuit. Through its multiplicative action on the multiplier neurons, the model FEF_{SEM} can shift the location of the peak in the population response in the model multiplier neurons, relative to the location of the peak in the population of model MT neurons (Fig. 11, black filled circles).

If a perturbation occurs during fixation (Fig. 11*A*), we assume that only the fixation neurons along the left edge of the FEF_{SEM} image are strongly active in the frontal eye fields (Izawa et al., 2009). The population response in the model multiplier neurons has a peak in the direction of motion provided by the perturbation, but at a much lower speed than in the model MT population. As a consequence, the response to a perturbation during fixation would be small but properly directed.
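The multiplication stage can be sketched numerically as follows. This is an illustrative reduction, not the fitted model: the tuning widths, the population grids, and the speed-gain profile assumed for the model FEF_{SEM} are all arbitrary choices of ours.

```python
import numpy as np

# grids of preferred direction (deg) and preferred speed (deg/s)
dirs = np.arange(0, 360, 5)                     # 72 preferred directions
speeds = np.logspace(0.0, 5.0, 64, base=2.0)    # preferred speeds, 1-32 deg/s

def gauss_dir(d, center, sigma=30.0):
    """Direction tuning with circular wrap-around."""
    diff = (d - center + 180.0) % 360.0 - 180.0
    return np.exp(-0.5 * (diff / sigma) ** 2)

def mt_population(pert_dir, pert_speed):
    """Model MT bump at the direction and speed of the perturbation."""
    d = gauss_dir(dirs, pert_dir)[None, :]
    s = np.exp(-0.5 * ((np.log2(speeds[:, None]) - np.log2(pert_speed)) / 0.5) ** 2)
    return s * d                                # shape: speeds x dirs

def fef_population(pursuit_dir, gain_profile):
    """Stripe of activity for units preferring the pursuit direction;
    gain_profile (one value per preferred speed) is an assumed free profile."""
    return gain_profile[:, None] * gauss_dir(dirs, pursuit_dir)[None, :]

def multiplier(mt, fef):
    """Each multiplier unit = MT unit x (FEF unit aligned with its
    preferred direction + FEF unit preferring the opposite direction)."""
    opposite = np.roll(fef, fef.shape[1] // 2, axis=1)  # shift by 180 deg
    return mt * (fef + opposite)
```

Multiplying the MT bump by a gain profile that favors low preferred speeds, as a stand-in for fixation-related FEF_{SEM} output, shifts the multiplier peak toward lower preferred speeds while preserving its preferred direction, the behavior described above for perturbations during fixation.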

If a perturbation occurs during pursuit (Fig. 11*C*,*D*), we assume the direction-tuned neurons in the FEF_{SEM} are highly active in relation to the baseline pursuit (Tanaka and Lisberger, 2002b) and that the fixation neurons in the FEF_{SEM} show a decrease in firing. Although such neurons have not yet been documented in the FEF_{SEM}, we assume that the fixation neurons have direction preferences, so that only the model neurons that prefer directions near that of baseline pursuit show decreased firing. Multiplication of the model MT and FEF_{SEM} populations leads to a population response with a peak at the same location as in MT.

When we vary the direction of motion provided by a perturbation of target motion, the model in Figure 11 predicts the elliptical response pattern also found in the responses to perturbations in our data. The model also predicts (Fig. 11*B*) that the direction of responses to perturbations will have a small bias toward the direction of ongoing pursuit, which is consistent with our observations summarized in Figure 10.

## Discussion

There are many examples in humans (Matsunaga et al., 2004) and monkeys (Moore and Armstrong, 2003) of modulation of the strength of sensory signals by activity in the motor cortex. Our goal is to use pursuit eye movements as a model system to go further, by understanding the location and neural mechanism of modulation of sensory transmission for motor control. Before the present study, we knew that initial conditions could modulate the strength, or gain, of visual–motor transmission for pursuit eye movements. The response to a perturbation of target motion is stronger when a monkey tracks a moving spot versus fixates a stationary spot (Luebke and Robinson, 1988; Schwartz and Lisberger, 1994; Carey and Lisberger, 2004). The accurate eye velocity immediately after a saccade can be attributed to elevation of the gain of visual–motor processing for the sensory stimuli present just before the saccade (Lisberger, 1998; Gardner and Lisberger, 2001; Schoppik and Lisberger, 2006). The smooth eye movement region of the frontal eye fields and the supplementary eye fields are involved in modulating the gain of visual–motor transmission (Missal and Heinen, 2001; Tanaka and Lisberger, 2001; Nuding et al., 2008, 2009).

We also knew that the enhancement of the response to a perturbation is stronger along the axis of ongoing pursuit, resulting in elliptical perturbation response functions during baseline pursuit in the horizontal or vertical directions. That said, we are aware of one report that failed to find the same elongation, using a more complicated experimental design that is hard to compare with ours because it may engage more components of pursuit (Kerrigan and Soechting, 2007).

Because we had studied gain control only during horizontal and vertical pursuit, we had assumed that gain control modulates motor signals in horizontal and vertical coordinates. The present study tests that assumption, finds that it is incorrect, and proposes a new way to implement gain control. The new implementation is relevant to the observation in other systems that topographic representations of a sensory stimulus can, under specific circumstances, shift across the surface of a cortical area (Duhamel et al., 1992).

### Gain control in visual coordinates

Our data in the present study provide evidence that modulation of the gain of visual–motor transmission operates on signals that are still in visual coordinates. Our analysis raises the possibility that control of visual–motor gain might work in a completely different way from how we had imagined. We suggest that gain control may operate by shifting the peak of a sensory population response along the axes of preferred speed and direction, rather than by simply modulating the strength of motor commands. Gain modulation in visual coordinates can account most easily for the dynamic rotation of the perturbation response function toward the direction of target motion.

We do not think that the exact structure of “motor coordinates” changes our conclusions. If gain modulation occurs equally along axes that are cardinal or nearly cardinal, then the direction tuning of the response to a brief perturbation of target motion should be circular for baseline target motions along the oblique axes, contrary to what we have found. If gain modulation occurs along the cardinal axes but is stronger for horizontal than for vertical target motion, then the direction tuning of the response to a brief perturbation should be elliptical with a horizontal major axis, again contrary to what we have found. Motor coordinates with a larger number of axes would predict a more circular response pattern for perturbations presented during all directions of pursuit, even cardinal directions. Our logic would not be altered if gain control occurred along the horizontal and slightly off-vertical axes of the large majority of the Purkinje cells in the floccular complex of the cerebellum (Krauzlis and Lisberger, 1996).

If gain control occurs in visual coordinates, then it should occur before the population decoding that transforms visual signals into a motor coordinate system. Some of our prior work supports the same conclusion. Kahlon and Lisberger (1999) argued that gain control and pursuit learning share a site and showed that the site is upstream from the site where vector averaging combines the visual motion signals that are present when two targets are moving. Tanaka and Lisberger (2002a) provided separate evidence that gain control is upstream from vector averaging. Thus, previous evidence is consistent with our conclusion that gain modulation for pursuit occurs in visual coordinates.

### Models of the implementation of gain control

One virtue of our network model is that it takes the discussion one step beyond the rubric of “retinal” versus “motor” coordinates. Instead, it proposes a specific implementation that uses largely known responses of two key brain areas in the neural circuit for pursuit: area MT and the FEF_{SEM}. The important point of the network model is that gain control could be implemented by shifting the peak of a population response rather than by scaling motor commands. Of course, alternative models are possible. For example, it would be possible to devise a set of equations that impose gain modulation on signals that are already converted to the coordinates of horizontal and vertical eye velocity. However, the equations would require a complex interaction of the horizontal and vertical terms to achieve elliptical perturbation response functions with major axes deviated away from horizontal and toward the axis of baseline pursuit.

Several key elements in the network model in Figure 11 need to be tested experimentally. First, sites and mechanisms of multiplication need to be found, for gain modulation in either motor or visual coordinates. Note that the mechanism of multiplication could be hidden in the subthreshold cellular properties of a population of neurons (Chaisanguanthum and Lisberger, 2011). Or, it might be possible to make the model operate through addition rather than multiplication by changing the population responses in MT and the FEF_{SEM} to distributions of log probability (Ma et al., 2006). Second, fixation neurons that show reduced firing in a direction-selective way during pursuit are needed to create the elliptical pattern of responses to perturbations in different directions. Fixation neurons exist in the frontal eye fields (Izawa et al., 2009) but have not been characterized in a way that tests our model. Finally, it is necessary to test the idea that gain control is based on combining the responses of neurons in the FEF_{SEM} that prefer opposite directions of pursuit, to achieve axial modulation from neurons that are selective for a direction, not an axis.

There has been considerable discussion of models of pursuit that are based on retinal versus extraretinal signals. For example, we have favored models that used retinal signals to guide visually driven changes in eye velocity (Krauzlis and Lisberger, 1994; Churchland and Lisberger, 2001). Our models used extraretinal signals to sustain steady-state eye velocity during accurate tracking and to keep the gain of visual–motor transmission high during steady-state pursuit (Lisberger, 2010). Others (Robinson et al., 1986; Ringach, 1995) have suggested models that use complex feedback architectures based mainly on extraretinal signals. Studies by Goldreich et al. (1992) and Churchland and Lisberger (2001) have contradicted the predictions of these two latter models, even though we still favor a role for extraretinal signals in pursuit. Indeed, the network model proposed in Figure 11 uses a combination of retinal signals related to image motion to drive changes in eye velocity through extrastriate area MT and extraretinal signals related to eye motion to modulate the gain of visual–motor transmission through the FEF_{SEM}.

### Implications for coordinate transformations in the brain

We know that ours is an unconventional suggestion, but we take refuge in other unconventional conclusions in analysis of coordinate transformations for motor control. For example, remapping of receptive fields before a saccadic eye movement is another example of a shift in the peak of a population response (Duhamel et al., 1992). Zipser and Andersen (1988) demonstrated that the location of a target in space could be represented through the population response of neurons that individually represented target position relative to the retina. Batista et al. (1999) showed that the parietal cortex represents commands for hand position in eye coordinates. Perhaps the challenge of coordinate transformations leads to unexpected neural mechanisms for many different brain functions, including for modulation of the strength of sensory–motor transmission as suggested here.

## Footnotes

This work was supported by the Howard Hughes Medical Institute and NIH Grant EY03878. We thank K. MacLeod, E. Montgomery, S. Tokiyama, S. Ruffner, D. Kleinhesselink, D. Wolfgang-Kimball, D. Floyd, S. Happel, and K. McGary for technical assistance.

- Correspondence should be addressed to Stephen G. Lisberger, Department of Neurobiology, Duke University, 412 Research Drive, Room 101H, Durham, NC 27710-0001. LISBERGER@neuro.duke.edu