Journal of Neuroscience
Research Articles, Behavioral/Cognitive

Motion Extrapolation for Eye Movements Predicts Perceived Motion-Induced Position Shifts

Elle van Heusden, Martin Rolfs, Patrick Cavanagh and Hinze Hogendoorn
Journal of Neuroscience 19 September 2018, 38 (38) 8243-8250; DOI: https://doi.org/10.1523/JNEUROSCI.0736-18.2018
Elle van Heusden
1Melbourne School of Psychological Sciences, The University of Melbourne, Melbourne, Australia 3010,
2Helmholtz Institute, Department of Experimental Psychology, Utrecht University, 3512 Utrecht, the Netherlands,
Martin Rolfs
3Department of Psychology, Humboldt-Universität zu Berlin, 10099 Berlin, Germany,
4Bernstein Center for Computational Neuroscience Berlin, 10115 Berlin, Germany,
Patrick Cavanagh
5Department of Psychological and Brain Sciences, Dartmouth College, 03755 Hanover, NH, and
6Department of Psychology, Glendon College, 2275 Toronto, Canada
Hinze Hogendoorn
1Melbourne School of Psychological Sciences, The University of Melbourne, Melbourne, Australia 3010,
2Helmholtz Institute, Department of Experimental Psychology, Utrecht University, 3512 Utrecht, the Netherlands,

Abstract

Transmission delays in the nervous system pose challenges for the accurate localization of moving objects as the brain must rely on outdated information to determine their position in space. Acting effectively in the present requires that the brain compensates not only for the time lost in the transmission and processing of sensory information, but also for the expected time that will be spent preparing and executing motor programs. Failure to account for these delays will result in the mislocalization and mistargeting of moving objects. In the visuomotor system, where sensory and motor processes are tightly coupled, this predicts that the perceived position of an object should be related to the latency of saccadic eye movements aimed at it. Here we use the flash-grab effect, a mislocalization of briefly flashed stimuli in the direction of a reversing moving background, to induce shifts of perceived visual position in human observers (male and female). We find a linear relationship between saccade latency and perceived position shift, challenging the classic dissociation between “vision for action” and “vision for perception” for tasks of this kind and showing that oculomotor position representations are either shared with or tightly coupled to perceptual position representations. Altogether, we show that the visual system uses both the spatial and temporal characteristics of an upcoming saccade to localize visual objects for both action and perception.

SIGNIFICANCE STATEMENT Accurately localizing moving objects is a computational challenge for the brain due to the inevitable delays that result from neural transmission. To solve this, the brain might implement motion extrapolation, predicting where an object ought to be at the present moment. Here, we use the flash-grab effect to induce perceptual position shifts and show that the latency of imminent saccades predicts the perceived position of the objects they target. This counterintuitive finding is important because it not only shows that motion extrapolation mechanisms indeed work to reduce the behavioral impact of neural transmission delays in the human brain, but also that these mechanisms are closely matched in the perceptual and oculomotor systems.

  • extrapolation
  • eye movements
  • latency
  • motion
  • motion-induced position shifts
  • saccades

Introduction

When we open our eyes, we experience seeing and acting in the present. However, due to the delays inherent in neuronal transmission, the brain needs time to process what we see. Our awareness of visual events therefore lags behind the occurrence of those events in the world. Nevertheless, we are usually unaware of this delay, and are able to interact with even rapidly moving objects with surprising accuracy (Smeets et al., 1998). One explanation for how the brain might achieve this is that it overcomes its own delays through prediction. By using what it knows about how objects move in the world, the brain can work ahead to compensate for known delays, essentially predicting the present. In visual motion, for example, the future position of a moving object can be extrapolated based on previous samples (Nijhawan, 1994), and we have recently demonstrated that such neural mechanisms do indeed reduce the lag with which the brain represents the position of a moving object (Hogendoorn and Burkitt, 2018). A rapidly moving ball, which would be mislocalized if the brain did not compensate for processing delays, can be caught because its future location can be extrapolated given enough information about its past trajectory. Accurately catching the moving ball further requires that the brain compensates not only for the delays inherent in the processing of the incoming visual information, but also for the additional delays incurred by the planning and execution of the hand and arm movement. Effectively functioning in the present therefore requires that a predictive mechanism accurately encodes the time lost in the transmission and processing of sensory information, as well as the expected time that will be lost in preparing the next motor program, transmitting the associated motor commands, and actually moving the corresponding effectors.

That the brain is able to solve this computational challenge is readily apparent in the saccadic eye-movement system. Short duration, saccadic eye movements in the healthy observer are effectively open-loop, ballistic motor acts that can bring moving objects into foveal vision with remarkable precision (Becker, 1989; van Beers, 2007). Although the saccadic system is thought to program upcoming saccades based on target locations defined by retinal input, our ability to make saccades to moving objects reveals that target encoding in the saccadic system incorporates additional information about the target's anticipated position (Robinson, 1965; Barmack, 1970; Keller and Johnsen, 1990; Cassanello et al., 2008). Even when the moving object is only very briefly presented, monkeys trained to make an eye-movement to the target make saccades that land at or close to the location where the target would have been, had it still been visible (Quinet and Goffart, 2015). This shows that the additional information used by the saccadic system is predictive, and that the brain is capable of using that information to direct the eyes toward a moving object's future location.

Of course, the execution of eye movements has consequences for the visual information that lands on the retina and accordingly for what we see; that is their primary purpose. Nevertheless, our visual experience is stable across eye movements, and one of the neural mechanisms responsible for integrating successive fixations is saccadic remapping (Duhamel et al., 1992). In saccadic remapping, directly before a saccade, visual cells start responding to stimuli that will soon fall in their receptive fields, anticipating the future positions of objects on the retina. Essentially, these cells respond as if the eyes had already moved. An efference copy signal that encodes the magnitude and direction of the intended eye movement allows the visual system to predict the spatial consequences of the saccade (Sommer and Wurtz, 2002, 2006, 2008). Similarly, just before a saccade, visual attention shifts to those locations on the retina that attended objects will occupy after the movement (Rolfs et al., 2011; Jonikaitis et al., 2013), a process that might give rise to a continuous visual experience.

Importantly, the study of saccadic remapping has focused on the spatial parameters of the eye movement and the consequences for static stimuli. However, when executing a saccade to a moving object, the direction of an accurate saccade necessarily depends on its timing: a saccade made with a long latency must be directed further along the anticipated trajectory than a saccade made with a short latency. The fact that we are generally very good at making saccades to rapidly moving objects suggests that the efference copy signal that informs the visual system about imminent saccades encodes not only the direction and amplitude of those eye movements, but also their anticipated timing. The oculomotor system could then use the expected timing and duration of an imminent saccade to extrapolate the locations of moving objects at saccade landing.
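The dependence of an accurate saccade on its own timing can be made concrete with a toy calculation. The sketch below is illustrative only; the function and the numbers in it are not from the paper, but show how an efference copy carrying expected saccade latency and duration could extrapolate a rotating target's position at saccade landing.

```python
# Toy sketch (not from the paper): if the efference copy carries the expected
# latency and duration of the imminent saccade, the target's position at
# saccade landing can be linearly extrapolated along its known trajectory.

def extrapolate_target_angle(current_angle_deg, angular_velocity_dps,
                             saccade_latency_s, saccade_duration_s):
    """Predicted polar angle of the target at saccade landing."""
    time_to_landing = saccade_latency_s + saccade_duration_s
    return current_angle_deg + angular_velocity_dps * time_to_landing

# A target rotating at 200 deg/s, with saccade latency 180 ms and duration
# 40 ms, should be aimed for 44 deg further along its trajectory.
predicted = extrapolate_target_angle(180.0, 200.0, 0.180, 0.040)  # 224.0
```

A longer-latency saccade thus requires a proportionally larger extrapolation, which is exactly the linear latency–shift relationship tested below.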

Like the oculomotor system, perception also acts as if it extrapolates the position of moving objects, possibly to keep perception aligned with eye movements, or perhaps because perception depends on the eye movement system for target locations. Indeed, there is a whole class of motion-induced position illusions that has been argued to be a direct or indirect consequence of motion extrapolation, including the flash-lag (Nijhawan, 1994), flash-drag (Krekelberg et al., 2000; Whitney and Cavanagh, 2000), flash-jump (Cai and Schlag, 2001), and flash-grab (Cavanagh and Anstis, 2013) effects, as well as the Fröhlich effect (for review, see Kerzel, 2010). We recently investigated the neural basis of the flash-grab effect, and reported a strikingly early locus of interaction between visual motion and position information (Hogendoorn et al., 2015). In the flash-grab effect, an object is briefly flashed on a moving background that abruptly reverses direction. When the object is flashed concurrently with the motion reversal of the background, the result is a large shift of the flashed object's perceived position in the direction of the background's new direction of motion (Cavanagh and Anstis, 2013). One interpretation of this illusion is that the unexpected reversal of the background violates its predicted trajectory, necessitating a corrective signal of some kind. Because the object is flashed concurrently with the reversal, the object is also shifted by this corrective signal (Cavanagh and Anstis, 2013; Hogendoorn et al., 2015). We have previously postulated that this prediction-correction might occur in the superior colliculus (SC), because SC is known to play a crucial role in the preparation and execution of saccadic eye movements (Lee et al., 1988), and is specifically involved in extrapolating the future positions of moving saccade targets (Fleuriet and Goffart, 2012; Goffart et al., 2017). 
Although the cortical frontal eye fields have also been implicated in extrapolation (Cassanello et al., 2008), we observed the neural signature of extrapolation at posterior, rather than frontal, electrodes in our EEG study (Hogendoorn et al., 2015). The hypothesis is therefore that this perceptual illusion (in which no actual eye movements are made) recruits the same neural mechanisms that are responsible for extrapolating the future positions of saccade targets.

This hypothesis makes the intriguing prediction that the timing of an imminent saccade can affect the perceived position of a moving object that the saccade is targeting. Although this prediction might seem to violate intuitive causality (i.e., we know where to move our eyes because we see where the object is), it is a logical consequence of a shared neural extrapolation mechanism that compensates for both sensory and motor delays: we perceive a moving object in the position that it will occupy by the time we have made an eye movement to it.

In support of this hypothesis, it has been reported that when observers execute saccades to objects that are perceptually shifted due to the flash-drag illusion (Whitney and Cavanagh, 2000), the degree of shift depends on the latency of the saccade (de'Sperati and Baud-Bovy, 2008). Although the authors interpret the results in terms of a perception-action dissociation (Goodale and Milner, 1992; Goodale and Westwood, 2004), with early saccades driven by an accurate dorsal “vision for action” system, and later saccades drawing on ventral “vision for perception” representations that are fooled by the illusion, the results are also consistent with a predictive signal that compensates for anticipated saccade latency.

Here, we test the hypothesis that the perceived position of an object is correlated with the latency of saccades aimed at it. Using the flash-grab effect, we first replicate the relationship between saccade latency and saccade landing previously reported for the flash-drag illusion (de'Sperati and Baud-Bovy, 2008). We show that the pattern of results is explained equally well, and with fewer free parameters, by a direct, linear relationship between shift in the landing position and saccade latency as by a gradual transition from an accurate vision for action system to a vision for perception system that is susceptible to the motion-induced position shift. Altogether, we show that the visuomotor system uses both the spatial and temporal characteristics of the upcoming saccade to localize visual objects.

Materials and Methods

Observers.

Eight healthy human observers participated in the experiment (age: 20–24, 3 male). All observers had normal or corrected-to-normal vision and gave informed consent before participation. Observers received either monetary reward or course credit.

Materials.

Stimuli were presented using a 27 inch ASUS ROG Swift LCD monitor running at 100 Hz with a resolution of 2560 × 1440 pixels, controlled by a Dell Precision T3610 computer. The experiment was presented using MATLAB (MathWorks) and Psychtoolbox (Brainard, 1997; Pelli, 1997). Distance from the screen was kept constant at 55 cm with observers' heads stabilized in a chinrest. Eye movements were recorded using an Eyelink 1000 eye tracker (SR Research) with a sampling rate of 1000 Hz. The eye tracker was calibrated before each block using the standard 9-point Eyelink calibration procedure. We also recorded 64-channel EEG data during the experiment, but this dataset was not analyzed here.

Stimuli.

The stimulus used in this experiment was an annulus composed of 16 alternating black (0.34 cd/m2) and white (309 cd/m2) segments (Fig. 1). The annulus had an inner radius of 8.5° and an outer radius of 12.5° and rotated at an angular velocity of 200°/s. After 1000, 1100, 1200, 1300, 1400, or 1500 ms, the motion direction of the annulus reversed. On 90% of the trials, a target stimulus was presented at reversal. On the remaining 10% of trials, no target was presented; these trials served as catch-trials. The target was a red (62.6 cd/m2) disk with a diameter of 2.85° and was presented for 10 ms (i.e., one frame). The target was always presented 10.5° from fixation, at either 160, 180, or 200 degrees of polar angle offset from the top of the annulus. The target always appeared at the border between two segments, with equal probability that the right segment was black or white. To give the observers some reference as to where they observed the target, eight blue lines (6.1 cd/m2; width: 0.23°, height: 2.3°) were drawn around the annulus (at 0, 45, 90, 135, 180, 225, 270, and 315 degrees of polar angle offset from the top of the annulus). Two smaller black lines were presented between every two blue lines (width: 0.19°, height: 1.37°). All stimuli were presented on a gray background (76.2 cd/m2) with a white (309 cd/m2) fixation dot in the center (diameter: 0.46°).
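For concreteness, target coordinates can be derived from this polar-angle specification. The sketch below is illustrative only; the function name and sign conventions are assumptions, not taken from the paper's stimulus code.

```python
import math

# Illustrative sketch (not from the paper): convert the target's polar
# specification (eccentricity in degrees of visual angle, polar angle
# measured clockwise from the top of the annulus) into x/y offsets from
# fixation, also in degrees of visual angle.

def polar_to_cartesian(eccentricity_deg, polar_angle_deg):
    """Return (x, y) offsets from fixation, with y positive upward."""
    theta = math.radians(polar_angle_deg)
    return (eccentricity_deg * math.sin(theta),
            eccentricity_deg * math.cos(theta))

# A target at 10.5 deg eccentricity and 180 deg polar angle lies straight
# below fixation.
x, y = polar_to_cartesian(10.5, 180.0)  # x ≈ 0.0, y = -10.5
```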

Figure 1.

Experimental procedure. (1) The annulus rotates in the clockwise (CW) or counter-clockwise (CCW) direction for either 1000, 1100, 1200, 1300, 1400, or 1500 ms. (2) Reversal and target onset. The annulus reverses its motion direction and the target is presented for 10 ms (i.e., one frame). Here, the target is presented at the central position (i.e., at 180°). The two other possible target locations are depicted as the two transparent disks at 160° and 200°. Targets are always presented at the border of two adjacent segments. (3) Perception trials: the annulus continues rotating for another 400 ms, after which it gradually starts to turn dark gray. It is fully gray 100 ms later. The annulus is displayed until the observer gives a response. Saccade trials: the fixation dot is removed from the screen. The annulus continues rotating, dimming to invisibility after 500 ms as in the perception trials. A blank screen is presented for 1000 ms, after which a new trial starts.

Procedure.

The experiment consisted of 35 blocks that were of two types. First, in the Perception blocks (20 blocks of 120 trials each), at the end of each trial observers were asked to indicate the perceived position of the target using the computer mouse, while maintaining fixation. On these trials, the annulus of black and white segments continued moving but changed gradually to uniform dark-gray (26.1 cd/m2) starting at 400 ms after reversal and becoming fully gray 100 ms later. This was done to ensure that the moving segments did not distract the observers in their answer. Observers were told that they could start to move the mouse as soon as the annulus was fully gray. The image of the target was drawn at the cursor location and moved with it across the screen. Observers had unlimited time to respond. A new trial started as soon as a response was given. Observers were instructed to click at the location of the fixation dot if they did not perceive the target. A Perception block lasted ∼6 min. Second, in the Saccade blocks (15 blocks of 160 trials each) observers were asked to make an eye-movement to the target. On these trials, the fixation dot disappeared at the time of reversal. All stimuli disappeared from the screen 500 ms after reversal, and the screen remained blank for 1000 ms before a new trial started. A Saccade block lasted ∼8 min. The order in which the blocks were presented was randomized across observers. At the start of each block either the word “mouse” or “eyes” was presented, to inform the observers of an upcoming Perception or Saccade block, respectively. The 35 blocks were divided over multiple sessions spanning 3–5 separate days.

Experimental design and statistical analysis.

The experiment consisted of a 3 × 2 × 2 factorial design, with location (160°, 180°, and 200°), reversal direction (clockwise or counterclockwise) and response (perceptual report or saccade) as factors. Differences between means in each combination of condition were tested using a repeated-measures ANOVA, and individual comparisons were tested using two-tailed paired-sample t tests. In subsequent analyses of saccade latency, bivariate correlations were tested using Pearson's r.

Results

Data were analyzed offline using MATLAB (MathWorks). During Perception blocks, observers were instructed to maintain fixation at all times. Trials on which observers' gaze direction deviated >2.85° from fixation (i.e., size of the target) were discarded (15.3%). During Saccade blocks, observers were instructed to make an eye-movement toward the target. Trials on which participants failed to do so (28.0%) or on which eye-tracking data were missing (17.7%) were discarded from the analysis. On every trial, epochs of 500 ms (starting at target onset) were extracted for analysis.

Saccades were detected in two-dimensional velocity space (Engbert and Kliegl, 2003; Engbert and Mergenthaler, 2006). Data points were marked as belonging to a saccade if their velocity exceeded a threshold of 2 SD for at least five consecutive samples (i.e., 5 ms) and the total shift in gaze exceeded 1.0 degree of visual angle. Saccades were falsely detected on 9.7% and 11.2% of catch-trials in the Perception and Saccade blocks, respectively. In Perception blocks, the polar angle between the real position of the target and the position reported by the observer was taken as the position shift measure. In Saccade blocks, the polar angle between the real position of the target and the saccade landing position was taken as the position shift measure, with saccade landing defined as the last sample of the saccade.
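A minimal sketch of this detection scheme, in the spirit of the velocity-threshold approach of Engbert and Kliegl (2003), is given below. This is a stand-in for the authors' MATLAB analysis, not their actual code: velocity smoothing and the 1.0 degree amplitude criterion are simplified away here.

```python
import numpy as np

# Simplified velocity-threshold saccade detection: flag samples whose 2-D
# gaze velocity exceeds a threshold of 2 SD for at least 5 consecutive
# samples (5 ms at 1000 Hz). Smoothing and the minimum-amplitude check used
# in the full Engbert & Kliegl procedure are omitted for brevity.

def detect_saccade_samples(gaze_xy, fs=1000.0, sd_mult=2.0, min_run=5):
    """Return a boolean array marking samples that belong to a saccade.

    gaze_xy : (n, 2) array of gaze positions in degrees, sampled at fs Hz.
    """
    velocity = np.gradient(gaze_xy, axis=0) * fs   # deg/s per axis
    speed = np.linalg.norm(velocity, axis=1)       # 2-D speed
    threshold = sd_mult * speed.std()
    above = speed > threshold
    # Keep only runs of at least min_run consecutive supra-threshold samples.
    out = np.zeros_like(above)
    run_start = None
    for i, a in enumerate(above):
        if a and run_start is None:
            run_start = i
        elif not a and run_start is not None:
            if i - run_start >= min_run:
                out[run_start:i] = True
            run_start = None
    if run_start is not None and len(above) - run_start >= min_run:
        out[run_start:] = True
    return out
```

Applied to an epoch of fixation samples followed by a rapid gaze shift, the function marks only the high-velocity run as a saccade.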

In both trial types, we observed large shifts in the perceived position of the target contingent on the reversal direction of the annulus: the flash-grab effect (Cavanagh and Anstis, 2013). Figure 2 shows distributions of perceived position across all observers for both Perception and Saccade blocks. Separate one-sample t tests revealed the absolute magnitude of the shift away from veridical to be significantly different from zero for both the Perception (M = 23.6, SD = 7.9, t(7) = 8.5, p < 0.001) and the Saccade blocks (M = 18.8, SD = 10.0, t(7) = 5.3, p < 0.005). Furthermore, a 2 (reversal direction) × 3 (target location) × 2 (block type) repeated-measures ANOVA revealed a significant main effect of target location on absolute shift (160°: M = 20.2, SD = 9.0; 180°: M = 22.5, SD = 11.0; 200°: M = 20.7, SD = 9.6; F(2) = 3.77, p < 0.05). Individual post hoc t tests revealed that the absolute shift of the target at 180° was significantly greater than the absolute shift of the target presented at 160° (t = 2.85, p < 0.01). Furthermore, the ANOVA revealed a significant three-way interaction of reversal direction, block type, and target location (F(2) = 13.63, p < 0.001). This interaction is best understood as the difference between Perception and Saccade blocks being largest for the off-center locations with illusory shifts toward the midline, and smallest for off-center locations with illusory shifts away from the midline.

Figure 2.

Shifts in perceived target location (Perception blocks; red) and saccade landing position (Saccade blocks; blue). Plots show means for each condition across all eight observers. Filled and open bars represent clockwise (CW) and counter-clockwise (CCW) reversals, respectively, with error bars indicating the SEM. Dashed lines indicate the physical location of the target.

To further investigate the relation between performance on the Perception and Saccade blocks, we analyzed variance in both measures across observers (Fig. 3). This showed that although the strength of the illusion varied considerably across observers, the two shift measures were highly correlated: the larger a participant's perceptual shift, the further their saccades landed in the direction of the reversal, and vice versa. Moreover, the shifts were not only strongly correlated in magnitude, they were virtually identical in size: the slopes relating Perception and Saccade shifts across participants at the three locations did not differ significantly from 1.00 [means and 95% confidence intervals for each condition: 1.13 (0.89–1.36), 1.03 (0.77–1.29), 1.15 (0.92–1.38)].
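One way to obtain such slope estimates with confidence intervals is bootstrap resampling over observers. The sketch below is an assumption for illustration; the paper does not state how its confidence intervals were computed.

```python
import numpy as np

# Hedged sketch: slope relating perceptual shift to saccade landing shift
# across observers, with a bootstrap 95% confidence interval. The resampling
# scheme is an assumption, not the authors' procedure.

def slope_ci(perception, saccade, n_boot=2000, seed=0):
    """Return (slope, (ci_low, ci_high)) for saccade shift regressed on
    perceptual shift, bootstrapping observers with replacement."""
    perception = np.asarray(perception, float)
    saccade = np.asarray(saccade, float)
    rng = np.random.default_rng(seed)
    n = len(perception)
    slopes = []
    for _ in range(n_boot):
        i = rng.integers(0, n, n)  # resample observers with replacement
        slopes.append(np.polyfit(perception[i], saccade[i], 1)[0])
    lo, hi = np.percentile(slopes, [2.5, 97.5])
    return np.polyfit(perception, saccade, 1)[0], (lo, hi)

# A slope whose 95% CI contains 1.0 indicates that the two shift measures
# are statistically indistinguishable in size.
```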

Figure 3.

Target shifts for both Perception and Saccade blocks. Each circle denotes an individual observer's mean ± SEM. Filled circles indicate clockwise reversals, whereas open circles indicate counter-clockwise reversals. Equality lines are plotted as dotted lines. The black line shows a linear best-fit function.

Subsequently, we analyzed how trial-by-trial variation in saccade latency related to shifts in target landing location (deviation of saccade endpoint from location of flashed target) in the Saccade blocks. Figure 4 shows a plot of landing shifts as a function of saccadic onset for individual observers. The figure reveals a significant positive correlation for each individual observer: shift in the saccade landing increases with saccadic latency. Absolute correlations are small, but this can be attributed to large variability in the signal caused by visual uncertainty, motor noise, and other factors (van Beers, 2007). Figure 5 shows the relationship between mean saccadic latency and mean shift in saccade landing across observers. Again, we observe a positive correlation: the observers with the fastest saccades have the lowest shift in saccade landing and vice versa.

Figure 4.

Shift in saccade landing from target location as a function of saccade onset for eight observers. Data are collapsed across all conditions (i.e., target location, reversal direction). Note that here positive values indicate a perceived position shifted in the reversal direction, whereas negative values indicate a perceived position shifted in the opposite direction. Linear and logistic fits of the data were calculated and are overlaid as solid and dashed lines, respectively. The dotted line indicates the extrapolated position of the target: its position had it rotated with the annulus.

Figure 5.

Mean target shift for Perception blocks (a) and Saccade blocks (b) as a function of mean saccadic latency, with each circle representing an individual observer. Shaded regions represent SD. c, Shift in saccade landing as a function of saccadic latency across all individual Saccade trials across all observers. In each panel, linear and logistic fits of the data were calculated and overlaid in solid and dashed lines, respectively. Correlation coefficients for the linear fit (r) are indicated in each panel, along with corresponding p values. The dotted line indicates the position of the target, had it rotated with the annulus.

de'Sperati and Baud-Bovy (2008) performed a similar experiment and analysis, fitting logistic functions to the data, consistent with their dual-pathway hypothesis. These logistic functions related saccade latency to saccadic mislocalization and had four parameters: one asymptote for the mislocalization evident for each pathway, the latency at which the visuomotor system shifts from relying on one pathway to relying on the other, and the slope or sharpness of the transition. Here, we provide an alternative explanation: a linear relationship between saccadic onset and shift in the saccade landing, reflecting predictive compensation. To evaluate both hypotheses, we fitted both linear and logistic functions to the data (Figs. 4, 5, solid and dashed lines, respectively). Subsequently, we evaluated the explanatory performance of these fits using 10-fold cross-validation. In doing so, we divided each dataset presented in Figures 4 and 5c into 10 subsets (with a random assignment of trials), after which nine subsets were used to calculate both linear and logistic fits. Then, we calculated their ability to explain the variance in the remaining subset. This process was repeated 10 times, so that every subset of data was tested once. This showed that, even though the logistic function included two additional free parameters (4 rather than 2), it did not explain significantly more variance than the linear fit, either at the level of individual observers (all p > 0.29) or with all trials collapsed across observers (t = 0.993, p = 0.33).
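The cross-validated model comparison described above can be sketched as follows. This is assumed code, not the authors' MATLAB implementation; in particular, the coarse grid-search logistic fit is a simplified stand-in for a proper nonlinear least-squares fit.

```python
import numpy as np

# Sketch: fit a 2-parameter linear model and a 4-parameter logistic model to
# landing-shift vs. saccade-latency data, scoring both by out-of-fold
# variance explained under 10-fold cross-validation.

def logistic(t, lower, upper, t50, slope):
    # Two asymptotes, a transition latency t50, and a transition sharpness.
    return lower + (upper - lower) / (1.0 + np.exp(-slope * (t - t50)))

def fit_logistic(x, y):
    # Grid search over (t50, slope); for each pair the two asymptote
    # parameters follow from linear least squares. Coarse but dependency-free.
    best = None
    for t50 in np.quantile(x, [0.25, 0.5, 0.75]):
        for slope in (0.01, 0.03, 0.1, 0.3):
            s = 1.0 / (1.0 + np.exp(-slope * (x - t50)))
            A = np.stack([np.ones_like(s), s], axis=1)
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = float(np.sum((A @ coef - y) ** 2))
            if best is None or rss < best[0]:
                best = (rss, (coef[0], coef[0] + coef[1], t50, slope))
    return best[1]

def cv_r2(x, y, fit, predict, k=10, seed=0):
    # Mean out-of-fold R^2 across k folds with random trial assignment.
    idx = np.random.default_rng(seed).permutation(len(x))
    scores = []
    for held_out in np.array_split(idx, k):
        train = np.setdiff1d(idx, held_out)
        params = fit(x[train], y[train])
        resid = y[held_out] - predict(x[held_out], params)
        scores.append(1.0 - resid.var() / y[held_out].var())
    return float(np.mean(scores))

linear_fit = lambda x, y: np.polyfit(x, y, 1)
linear_predict = lambda x, p: np.polyval(p, x)
logistic_predict = lambda x, p: logistic(x, *p)
```

Comparing `cv_r2(x, y, linear_fit, linear_predict)` with `cv_r2(x, y, fit_logistic, logistic_predict)` then asks whether the two extra parameters buy any out-of-sample variance.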

To directly compare the linear and logistic models, we subsequently compared the Bayesian information criterion (BIC) for the two models (Schwarz, 1978). The BIC weighs explained variance against the number of parameters, with a lower BIC representing a better model. We evaluated the BIC for cross-validated model fits for individual observers (as plotted in Fig. 4). The linear model had a lower BIC for each individual observer (mean difference 10.9, range 0.7–24.2). A BIC difference of 10 or more represents "very strong evidence" in favor of the model with the lower BIC (Kass and Raftery, 1995), indicating that our data strongly support a linear, rather than a logistic, model.
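A BIC comparison of this kind can be sketched as below. This is assumed code using the standard Gaussian least-squares form of the BIC, not the authors' implementation.

```python
import numpy as np

# Hedged sketch: BIC for a model fitted by least squares, using the
# Gaussian-likelihood form BIC = n * ln(RSS / n) + k * ln(n). Lower is
# better; a difference of 10 or more counts as very strong evidence for
# the lower-BIC model (Kass and Raftery, 1995).

def bic(y, y_hat, n_params):
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    n = len(y)
    rss = float(np.sum((y - y_hat) ** 2))
    return n * np.log(rss / n) + n_params * np.log(n)

# For the comparison in the text, a 2-parameter linear fit beats a
# 4-parameter logistic fit whenever
# bic(y, linear_pred, 2) < bic(y, logistic_pred, 4).
```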

Discussion

In localizing moving objects, the brain must somehow anticipate the delays incurred during transmission and processing of visual information. One way it might do so is through motion extrapolation, and such mechanisms have been argued to underlie a range of motion-position illusions. Here, we used the flash-grab illusion (Cavanagh and Anstis, 2013) to test the hypothesis that these perceptual extrapolation mechanisms are shared with the oculomotor system, such that the timing of imminent saccades is linearly related to the perceived position of the objects they target.

We show that when observers make saccades to objects that are extrapolated along the motion path due to the flash-grab illusion, the degree of shift is directly proportional to the latency with which saccades are made. We subsequently compared this interpretation against a previously presented alternative interpretation (de'Sperati and Baud-Bovy, 2008) in terms of dissociable dorsal vision for action and ventral vision for perception mechanisms (Goodale and Milner, 1992). de'Sperati and Baud-Bovy (2008) presented a model in which the relationship between saccade latency and mislocalization was characterized by an S-shape, representing the premise that rapid saccades depend on an accurate vision for action mechanism, whereas later saccades depend on vision for perception mechanisms that are (more) susceptible to mislocalization illusions. Because the rationale is a transition between two discrete mechanisms, the function relating saccade latency and mislocalization flattens out for both very early and very late saccades. Although our data could in principle be accounted for by a transition from one mode of vision to another, we show that the relationship between saccade latency and saccade landing is equally well explained, and more parsimoniously, by a direct linear relationship consistent with a shared extrapolation mechanism. Furthermore, the characteristic S-shape with asymptotic plateaus that de'Sperati and Baud-Bovy (2008) used to describe the shift from one mode to the other was not evident in the present data (Fig. 5c).

Regardless of the precise mechanism, the link between saccade latency and saccade landing position is clear. What is the case then for perception? Here we could not measure saccade latency before each perceptual decision as no saccades were made in the perceptual report blocks. Nevertheless, there were very strong links between perceptual judgments and saccade landings: the average size of the two shifts was virtually identical across participants (Fig. 3) and the average saccade latency for each participant in Saccade blocks was a good predictor of their perceived shifts in the Perception blocks (Fig. 5a). As such, we believe that the prediction of the position of the target in this illusion uses either the same neural mechanism that is responsible for the planning and execution of a saccade that is aimed at it, or a similar mechanism that is calibrated to give approximately the same amount of shift (for purposes of keeping perception and saccades in alignment).

Because perceptual reports and saccadic responses were obtained in separate trials, we could not correlate saccade latency and perceptual position shift across trials, but only across conditions and across observers. If observers had made both responses in the same trial, any resulting correlation could be attributed to shared processes at the level of response, rather than reflecting a shared underlying mechanism for extracting the position of a moving stimulus. That is, if asked to make a saccade to the target, and subsequently report the perceived position of the target using a mouse click, observers might simply report the position they fixate at the end of the trial. Such a tendency to align responses would yield a correlation between saccade latency and perceptual position shift without necessarily requiring a shared mechanism for location. By acquiring measures in separate blocks, we eliminated this alternative explanation. The fact that correlations were evident on measures acquired on different trials therefore strengthens the evidence that the two response types draw on the same neural mechanisms.

The idea that the mechanisms subserving extrapolative localization are shared by the perceptual and visuomotor systems closely parallels a similar proposal in the domain of saccadic adaptation (the progressive correction of systematic saccade targeting errors). A number of studies have shown that saccadic adaptation influences the perceptual localization of objects presented near the saccade target (Awater et al., 2005; Georg and Lappe, 2009). This has prompted the proposal that saccade metrics involved in shaping motor space also contribute to the metric used for the organization of perceptual space (Collins et al., 2007). Our findings clearly align with that proposal.

More broadly, our findings contribute to an ongoing discussion about the degree to which action and perception systems are shared or dissociable (Goodale and Milner, 1992). Two decades ago, Aglioti et al. (1995) reported that the Ebbinghaus (or Titchener) illusion, in which the size of a central circle is perceived as larger or smaller depending on the size of the circles surrounding it, affects perceptual judgments without affecting the hand aperture of grasping movements aimed at that circle. This finding has since been challenged on the basis of methodological differences between the perceptual and grasping tasks (Pavani et al., 1999; Franz et al., 2000; Franz and Gegenfurtner, 2008). Comparable studies using other size illusions, such as the Müller-Lyer illusion (Elliott and Lee, 1995) and the size-weight illusion (Brenner and Smeets, 1996), have similarly yielded contradictory results. To date, the degree to which action and perception systems are differentially affected by illusions of size (and related object properties) remains contentious.

Conversely, evidence with regard to the localization of objects shows several examples where the perception and action systems are closely linked. For example, illusions of perceived speed affect both perception and action (Smeets and Brenner, 1995). Illusions of motion direction also affect both perception and action, although the effects on saccadic eye movements and smooth pursuit were subtly different (Zivotofsky et al., 1996, 1998). Factors affecting motion perception, including direction (Beutter and Stone, 1998; Krukowski et al., 2003), occlusion (Stone et al., 2000), motion coherence (Beutter and Stone, 2000), and trial-by-trial variability (Stone and Krauzlis, 2003), similarly affect smooth pursuit eye movements. Although dissociations have been reported (Zivotofsky, 2005; Spering and Carrasco, 2015), Spering and Montagnini (2011) conclude in a review that most results point to similarities between perception and pursuit. Our findings align with and extend this literature, showing that motion extrapolation mechanisms are shared not only with smooth pursuit eye movements, but also with ballistic saccades.

There is one motion-induced position shift (double drift; Lisi and Cavanagh, 2015) that does show a large dissociation: there is a large effect on perception, but no effect on immediate saccades. However, it is not a simple perception-action dissociation, because the illusion is present for ballistic pointing (Lisi and Cavanagh, 2017) as well as for memory-based saccades (Massendari et al., 2018). This double-drift illusion differs from other motion-induced position illusions in a number of ways. First, the position deviations are orthogonal to the motion path, not along it as in the effects studied by de'Sperati and Baud-Bovy (2008) and in the present study. Furthermore, the integration period is at least a second, as opposed to ∼100 ms for extrapolation along the motion path (Cavanagh and Anstis, 2013; Lisi and Cavanagh, 2015), and the position deviations are several times larger.

It is evident from Figures 4 and 5 that saccades never actually landed where the target would have been, had it continued to rotate with the annulus. The perceptual shift similarly falls short of perfect compensation. There are many reasons why the gain of this system might be low, including how the oculomotor system might adjust to the target being systematically absent when the saccade lands. Importantly, the size of the illusion is stable across Perception and Saccade modalities despite this low gain, consistent with a shared or linked extrapolation mechanism.
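The notion of a low gain can be made concrete by expressing the observed shift as a fraction of the shift that full compensation for the delay would require. All numbers below are invented placeholders, not values from the paper:

```python
# Back-of-the-envelope gain estimate for motion extrapolation.
# Every number here is a hypothetical placeholder.
speed = 200.0          # target speed along its path, deg/s (assumed)
latency = 0.250        # saccade latency, s (assumed)
observed_shift = 20.0  # measured shift along the motion path, deg (assumed)

full_compensation = speed * latency        # 50 deg if extrapolation gain were 1.0
gain = observed_shift / full_compensation  # fraction of the delay compensated
print(gain)  # 0.4
```

A gain below 1 means the saccade (or percept) lands short of where the target would be if it had kept moving; the key observation in the text is that this undershoot is similar in the Perception and Saccade modalities.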

Finally, we return to the alternative explanation described above, wherein extrapolation for saccades and for perception is computed independently, but the two estimates are kept similar so that perception and action remain in agreement (Greenwood et al., 2017). To differentiate between these hypotheses of common versus aligned-but-independent mechanisms, future research will need to selectively alter motion-induced perceptual shifts and monitor the consequences for saccadic targeting, or vice versa.

Footnotes

  • This work was supported by the Deutsche Forschungsgemeinschaft (Grants RO3579/2-1, RO3579/8-1, and RO3579/9-1) to M.R.; by the European Research Council under the European Union's Seventh Framework Program (FP7/2007-2013), ERC Grant Agreement AG324070, and by the Department of Psychological and Brain Sciences of Dartmouth College; and by the Australian Government through the Australian Research Council's Discovery Projects funding scheme (Project DP180102268) to E.v.H. and H.H.

  • The authors declare no competing financial interests.

  • Correspondence should be addressed to Dr. Hinze Hogendoorn, The University of Melbourne, Melbourne, Australia 3010. hhogendoorn{at}unimelb.edu.au

References

  1. Aglioti S, DeSouza JF, Goodale MA (1995) Size-contrast illusions deceive the eye but not the hand. Curr Biol 5:679–685. doi:10.1016/S0960-9822(95)00133-3
  2. Awater H, Burr D, Lappe M, Morrone MC, Goldberg ME (2005) The effect of saccadic adaptation on the localization of visual targets. J Neurophysiol 93:3605–3614. doi:10.1152/jn.01013.2003
  3. Barmack NH (1970) Modification of eye movements by instantaneous changes in the velocity of visual targets. Vision Res 10:1431–1441. doi:10.1016/0042-6989(70)90093-3
  4. Becker W (1989) The neurobiology of saccadic eye movements. Metrics. Rev Oculomot Res 3:13–67.
  5. Beutter BR, Stone LS (1998) Human motion perception and smooth eye movements show similar directional biases for elongated apertures. Vision Res 38:1273–1286. doi:10.1016/S0042-6989(97)00276-9
  6. Beutter BR, Stone LS (2000) Motion coherence affects human perception and pursuit similarly. Vis Neurosci 17:139–153.
  7. Brainard DH (1997) The Psychophysics Toolbox. Spat Vis 10:433–436.
  8. Brenner E, Smeets JB (1996) Size illusion influences how we lift but not how we grasp an object. Exp Brain Res 111:473–476.
  9. Cai RH, Schlag J (2001) Asynchronous feature binding and the flash-lag illusion. Invest Ophthalmol Vis Sci 42:S711.
  10. Cassanello CR, Nihalani AT, Ferrera VP (2008) Neuronal responses to moving targets in monkey frontal eye fields. J Neurophysiol 100:1544–1556. doi:10.1152/jn.01401.2007
  11. Cavanagh P, Anstis S (2013) The flash grab effect. Vision Res 91:8–20. doi:10.1016/j.visres.2013.07.007
  12. Collins T, Doré-Mazars K, Lappe M (2007) Motor space structures perceptual space: evidence from human saccadic adaptation. Brain Res 1172:32–39. doi:10.1016/j.brainres.2007.07.040
  13. de'Sperati C, Baud-Bovy G (2008) Blind saccades: an asynchrony between seeing and looking. J Neurosci 28:4317–4321. doi:10.1523/JNEUROSCI.0352-08.2008
  14. Duhamel JR, Colby CL, Goldberg ME (1992) The updating of the representation of visual space in parietal cortex by intended eye movements. Science 255:90–92. doi:10.1126/science.1553535
  15. Elliott D, Lee TD (1995) The role of target information on manual-aiming bias. Psychol Res 58:2–9. doi:10.1007/BF00447084
  16. Engbert R, Kliegl R (2003) Microsaccades uncover the orientation of covert attention. Vision Res 43:1035–1045. doi:10.1016/S0042-6989(03)00084-1
  17. Engbert R, Mergenthaler K (2006) Microsaccades are triggered by low retinal image slip. Proc Natl Acad Sci U S A 103:7192–7197. doi:10.1073/pnas.0509557103
  18. Fleuriet J, Goffart L (2012) Saccadic interception of a moving visual target after a spatiotemporal perturbation. J Neurosci 32:452–461. doi:10.1523/JNEUROSCI.3896-11.2012
  19. Franz V, Gegenfurtner K (2008) Grasping visual illusions: consistent data and no dissociation. Cogn Neuropsychol 25:920–950. doi:10.1080/02643290701862449
  20. Franz V, Gegenfurtner KR, Bülthoff HH, Fahle M (2000) Grasping visual illusions: no evidence for a dissociation between perception and action. Psychol Sci 11:20–25. doi:10.1111/1467-9280.00209
  21. Georg K, Lappe M (2009) Effects of saccadic adaptation on visual localization before and during saccades. Exp Brain Res 192:9–23. doi:10.1007/s00221-008-1546-y
  22. Goffart L, Cecala AL, Gandhi NJ (2017) The superior colliculus and the steering of saccades toward a moving visual target. J Neurophysiol 118:2890–2901. doi:10.1152/jn.00506.2017
  23. Goodale MA, Milner AD (1992) Separate visual pathways for perception and action. Trends Neurosci 15:20–25. doi:10.1016/0166-2236(92)90344-8
  24. Goodale MA, Westwood DA (2004) An evolving view of duplex vision: separate but interacting cortical pathways for perception and action. Curr Opin Neurobiol 14:203–211. doi:10.1016/j.conb.2004.03.002
  25. Greenwood JA, Szinte M, Sayim B, Cavanagh P (2017) Variations in crowding, saccadic precision, and spatial localization reveal the shared topology of spatial vision. Proc Natl Acad Sci U S A 114:E3573–E3582. doi:10.1073/pnas.1615504114
  26. Hogendoorn H, Burkitt AN (2018) Predictive coding of visual object position ahead of moving objects revealed by time-resolved EEG decoding. Neuroimage 171:55–61. doi:10.1016/j.neuroimage.2017.12.063
  27. Hogendoorn H, Verstraten FA, Cavanagh P (2015) Strikingly rapid neural basis of motion-induced position shifts revealed by high temporal-resolution EEG pattern classification. Vision Res 113:1–10. doi:10.1016/j.visres.2015.05.005
  28. Jonikaitis D, Szinte M, Rolfs M, Cavanagh P (2013) Allocation of attention across saccades. J Neurophysiol 109:1425–1434. doi:10.1152/jn.00656.2012
  29. Kass RE, Raftery AE (1995) Bayes factors. J Am Stat Assoc 90:773. doi:10.1080/01621459.1995.10476572
  30. Keller E, Johnsen SD (1990) Velocity prediction in corrective saccades during smooth-pursuit eye movements in monkey. Exp Brain Res 80:525–531.
  31. Kerzel D (2010) The Fröhlich effect: past and present. In: Space and time in perception and action (Nijhawan R, Khurana B, eds), pp 321–337. Cambridge, UK: Cambridge UP.
  32. Krekelberg B, Lappe M, Whitney D, Cavanagh P, Eagleman DM, Sejnowski TJ (2000) The position of moving objects. Science 289:1107a. doi:10.1126/science.289.5482.1107a
  33. Krukowski AE, Pirog KA, Beutter BR, Brooks KR, Stone LS (2003) Human discrimination of visual direction of motion with and without smooth pursuit eye movements. J Vis 3(11):16. doi:10.1167/3.11.16
  34. Lee C, Rohrer WH, Sparks DL (1988) Population coding of saccadic eye movements by neurons in the superior colliculus. Nature 332:357–360. doi:10.1038/332357a0
  35. Lisi M, Cavanagh P (2015) Dissociation between the perceptual and saccadic localization of moving objects. Curr Biol 25:2535–2540. doi:10.1016/j.cub.2015.08.021
  36. Lisi M, Cavanagh P (2017) Different spatial representations guide eye and hand movements. J Vis 17(2):12. doi:10.1167/17.2.12
  37. Massendari D, Lisi M, Collins T, Cavanagh P (2018) Memory-guided saccades show effect of a perceptual illusion whereas visually guided saccades do not. J Neurophysiol 119:62–72. doi:10.1152/jn.00229.2017
  38. Nijhawan R (1994) Motion extrapolation in catching. Nature 370:256–257. doi:10.1038/370256b0
  39. Pavani F, Boscagli I, Benvenuti F, Rabuffetti M, Farnè A (1999) Are perception and action affected differently by the Titchener circles illusion? Exp Brain Res 127:95–101. doi:10.1007/s002210050777
  40. Pelli DG (1997) The VideoToolbox software for visual psychophysics: transforming numbers into movies. Spat Vis 10:437–442.
  41. Quinet J, Goffart L (2015) Does the brain extrapolate the position of a transient moving target? J Neurosci 35:11780–11790. doi:10.1523/JNEUROSCI.1212-15.2015
  42. Robinson DA (1965) The mechanics of human smooth pursuit eye movement. J Physiol 180:569–591. doi:10.1113/jphysiol.1965.sp007718
  43. Rolfs M, Jonikaitis D, Deubel H, Cavanagh P (2011) Predictive remapping of attention across eye movements. Nat Neurosci 14:252–256. doi:10.1038/nn.2711
  44. Schwarz G (1978) Estimating the dimension of a model. Ann Stat 6:461–464. doi:10.1214/aos/1176344136
  45. Smeets JB, Brenner E (1995) Perception and action are based on the same visual information: distinction between position and velocity. J Exp Psychol Hum Percept Perform 21:19–31. doi:10.1037/0096-1523.21.1.19
  46. Smeets JB, Brenner E, Lussanet M (1998) Visuomotor delays when hitting running spiders. In: Advances in perception-action coupling (B B, A L, G D, A R-B, eds), pp 36–40. Paris: EDK.
  47. Sommer MA, Wurtz RH (2002) A pathway in primate brain for internal monitoring of movements. Science 296:1480–1482. doi:10.1126/science.1069590
  48. Sommer MA, Wurtz RH (2006) Influence of the thalamus on spatial visual processing in frontal cortex. Nature 444:374–377. doi:10.1038/nature05279
  49. Sommer MA, Wurtz RH (2008) Visual perception and corollary discharge. Perception 37:408–418. doi:10.1068/p5873
  50. Spering M, Carrasco M (2015) Acting without seeing: eye movements reveal visual processing without awareness. Trends Neurosci 38:247–258. doi:10.1016/j.tins.2015.02.002
  51. Spering M, Montagnini A (2011) Do we track what we see? Common versus independent processing for motion perception and smooth pursuit eye movements: a review. Vision Res 51:836–852. doi:10.1016/j.visres.2010.10.017
  52. Stone LS, Krauzlis RJ (2003) Shared motion signals for human perceptual decisions and oculomotor actions. J Vis 3(11):725–736.
  53. Stone LS, Beutter BR, Lorenceau J (2000) Visual motion integration for perception and pursuit. Perception 29:771–787. doi:10.1068/p2979
  54. van Beers RJ (2007) The sources of variability in saccadic eye movements. J Neurosci 27:8757–8770. doi:10.1523/JNEUROSCI.2311-07.2007
  55. Whitney D, Cavanagh P (2000) Motion distorts visual space: shifting the perceived position of remote stationary objects. Nat Neurosci 3:954–959. doi:10.1038/78878
  56. Zivotofsky AZ (2005) A dissociation between perception and action in open-loop smooth-pursuit ocular tracking of the Duncker illusion. Neurosci Lett 376:81–86. doi:10.1016/j.neulet.2004.11.031
  57. Zivotofsky AZ, Rottach KG, Averbuch-Heller L, Kori AA, Thomas CW, Dell'Osso LF, Leigh RJ (1996) Saccades to remembered targets: the effects of smooth pursuit and illusory stimulus motion. J Neurophysiol 76:3617–3632. doi:10.1152/jn.1996.76.6.3617
  58. Zivotofsky AZ, White OB, Das VE, Leigh RJ (1998) Saccades to remembered targets: the effects of saccades and illusory stimulus motion. Vision Res 38:1287–1294. doi:10.1016/S0042-6989(97)00288-5
Motion Extrapolation for Eye Movements Predicts Perceived Motion-Induced Position Shifts
Elle van Heusden, Martin Rolfs, Patrick Cavanagh, Hinze Hogendoorn
Journal of Neuroscience 19 September 2018, 38 (38) 8243-8250; DOI: 10.1523/JNEUROSCI.0736-18.2018

Keywords

  • extrapolation
  • eye movements
  • latency
  • motion
  • motion-induced position shifts
  • saccades
