Journal of Neuroscience
Research Articles, Behavioral/Cognitive

Bias in Visual Short-Term Memory for Motion Induced by Perceptually Suppressed Distractors

Takeshi Miyamoto and Kosuke Numasawa
Journal of Neuroscience 3 December 2025, 45 (49) e1027252025; https://doi.org/10.1523/JNEUROSCI.1027-25.2025
Takeshi Miyamoto
1Graduate School of Informatics, Nagoya University, Nagoya, Aichi 464-8601, Japan
Kosuke Numasawa
2Institute of Health and Sport Sciences, University of Tsukuba, Tsukuba, Ibaraki 305-8574, Japan

Abstract

Interference between visual short-term memory (VSTM) and task-irrelevant sensory distractors is a well-documented phenomenon across a wide range of visual features. Although this sensory–memory interference is believed to result from neural interactions between mnemonic and sensory processing, the nature of the interfering signals remains unclear, particularly whether the interference originates from physical stimuli entering the retina or the resulting subjective perception. Here we addressed this question by leveraging perceptual invariance during smooth pursuit eye movements, where retinal motion induced by eye movements is perceptually suppressed. Distractors were presented as either physical object motion in world coordinates or apparent motion of a stationary background induced by eye movements, allowing us to generate identical retinal motion with distinct perception. Human observers (both female and male) were tasked with comparing the speeds of two sequentially presented motion stimuli while ignoring the distractor stimuli presented between them. Our results showed that the retinal motion distractor systematically biased the memorized motion speed. Crucially, the magnitude of the bias was comparable regardless of whether the retinal motion distractor reflected physical or apparent motion. Notably, the distraction effect of the eye movement-induced retinal motion was successfully replicated using a classifier trained on data from the physical motion condition. A further experiment confirmed that this distraction effect by the apparent motion was not due to eye movements themselves. These findings provide behavioral evidence that, at least for the feature of motion speed, perception is not required for the emergence of a bias in VSTM.

  • attractive bias
  • distraction effect
  • smooth pursuit eye movements
  • visual motion
  • visual working memory

Significance Statement

Visual short-term memory (VSTM) temporarily stores visual information but is often biased by task-irrelevant distractors presented during retention. A prevailing view based on behavioral studies holds that, among retinal inputs, only consciously perceived stimuli act as distractors in VSTM. However, previous studies faced methodological limitations in isolating the distraction effects of retinal inputs and the resulting subjective perception, potentially leading to inconsistent results. To address this, we designed novel distractor stimuli targeting VSTM for motion speed, which elicit distinct motion perception from identical retinal motion. Our results showed that the systematic VSTM bias was explained by retinal motion induced by distractors, regardless of perception. This suggests that distractors bias VSTM for motion speed through retinal inputs rather than perception.

Introduction

Visual short-term memory (VSTM), also referred to as visual working memory, is the capacity to retain visual information over short durations, even after the physical stimuli are no longer present on the retina (Baddeley, 2010). For VSTM to function effectively, it must resist interference from task-irrelevant visual inputs during the memory retention interval (hereinafter referred to as the delay period). From this perspective, the ability to maintain stable memory in the face of distractors is considered a key characteristic of an individual's VSTM capacity (McNab and Dolan, 2014; Liesefeld et al., 2020). Although various mechanisms contribute to distraction resistance in VSTM, the stored visual content can still be biased when memoranda and distractors are highly similar (Lorenc et al., 2021; Roussy et al., 2021). The neural mechanisms underlying such sensory–memory interference remain a topic of ongoing debate. Computational studies have replicated distractor-induced VSTM biases and corresponding neural activity in nonhuman primates, using recurrent neural networks (RNNs) that dynamically regulate mnemonic encoding (Murray et al., 2017; Zahorodnii et al., 2025). These models predict that when distractors closely resemble the memorized items, their visual signals are not fully filtered out, thereby interfering with mnemonic representations.

To better understand the neural basis of sensory–memory interference and identify the specific brain regions involved, it is critical to determine the nature of the interfering signals, specifically whether interference stems from retinal inputs or the perception resulting from them. In this study, “retinal input” refers to physical stimuli entering the retina, whereas “perception” refers to subjective experiences that ultimately reach conscious awareness. As demonstrated in various visual illusions (Spering and Montagnini, 2011), perception does not always faithfully reflect retinal inputs (Pezzulo et al., 2024). However, this distinction has not been adequately explored in studies of sensory–memory interference. One computational model of distraction resistance, involving circuits in the prefrontal cortex (PFC) and posterior parietal cortex (PPC), posits that neural signals related to retinal inputs reach the PPC via the dorsal stream (Murray et al., 2017). Another model shows that the neural signature of the medial superior temporal (MST) area, an early part of the dorsal stream, aligns with RNNs incorporating gating and tuning-inversion mechanisms to resist distraction (Zahorodnii et al., 2025). These models, grounded in neurophysiological data, suggest that the brain filters distractors at the level of signals from early visual areas, i.e., signals that are relatively close to retinal inputs. Conversely, behavioral studies have shown that only consciously perceived distractors induce biases in VSTM (Rademaker et al., 2015; Wildegger et al., 2015), suggesting that perception, rather than retinal inputs, interferes with VSTM. However, direct comparisons between conscious and unconscious conditions in these studies were limited, as unconscious conditions typically involved weaker or noisier retinal inputs relative to conscious conditions.

In the present study, we addressed this issue by leveraging perceptual invariance during continuous ocular tracking of a moving object (i.e., smooth pursuit eye movements). During smooth pursuit, the stationary background moves on the retina in the direction opposite to the eye movement, yet this retinal motion is not consciously perceived (Furman and Gur, 2012). By using both physical object motion in world coordinates and eye movement-induced apparent motion of the background, we presented observers with distractors that generated identical retinal motion but distinct perception. This approach enabled us to overcome a key limitation of previous studies: differences in stimulus strength and noise levels between conscious and unconscious conditions (Rademaker et al., 2015; Wildegger et al., 2015). Our results demonstrate that retinal motion induced by smooth pursuit, despite the absence of physical object displacement and the suppression of motion perception, distracts VSTM for motion speed. The magnitude of this bias was comparable to that caused by physical motion distractors presented in world coordinates. A further experiment confirmed that this distraction effect during smooth pursuit was not caused by the eye movement itself. Together, these findings indicate that delay-period distractors bias VSTM for motion speed through retinal inputs rather than perception.

Materials and Methods

Observers

Eight observers (eight men; 24.8 ± 2.1 years; mean ± SD) participated in Experiment 1, eight observers (seven men and one woman; 26.0 ± 2.7 years; mean ± SD) participated in Experiment 2, and 12 observers (nine men and three women; 27.1 ± 2.2 years; mean ± SD) participated in a post hoc experiment to confirm perceptual suppression of distractors in Experiment 1. They reported having normal or corrected-to-normal vision and no known visuomotor deficits. Sample sizes were not statistically predetermined but were comparable to those in previous studies on the distraction effect of unconscious distractors (Rademaker et al., 2015; Wildegger et al., 2015). They provided written informed consent in accordance with the Declaration of Helsinki and were informed of their right to withdraw from the study at any time without penalty. The study protocol was approved by the Ethics Committee of the Department of Cognitive and Psychological Sciences at Nagoya University (No. 240801-C-04-1).

Experimental setup and stimuli

Observers were seated 57.0 cm from an LCD monitor (AW2524HF; Dell Technologies; size, 24.5 in.; resolution, 1,920 × 1,080 pixels; refresh rate, 120 Hz) with their heads stabilized by a chin rest and forehead restraint. The monitor was positioned with its height adjusted so that the observer's eye level was aligned with a fixation spot during the task.

Eye movement of the right eye was recorded using a video-based eye–tracking system. Eye position signals were detected via reflections of infrared light on the cornea, and the pupil's image was captured by an infrared camera (GS3-U3-41C6NIR; FLIR Systems; Matsuda et al., 2017). The system digitized eye position signals at 1 kHz with 12 bit precision using an A/D converter (TUSB-1612ADSM-S2Z; Turtle Industry). Prior to the task, eye position signals were calibrated by having observers fixate on target spots (diameter, 0.3°; luminance, 70.0 cd/m2) at known horizontal and vertical eccentricities under binocular viewing conditions. All stimuli during calibration and the main task were presented on a uniform gray background (luminance, 17.0 cd/m2).

The visual motion stimuli consisted of a random-dot texture, with contrast modulated by a Gaussian window (SD, 0.96; Fig. 1A; Hawken and Gegenfurtner, 2001; Miyamoto et al., 2020). This corresponded to a visible area of ∼2.4° at contrasts above 20%. The random-dot texture consisted of 50% black dots (luminance, 0.1 cd/m2) and 50% white dots (luminance, 70.0 cd/m2), each measuring 4 × 4 pixels. All visual stimuli and following experimental routine were programmed using MATLAB (MathWorks) with Psychophysics Toolbox extensions (Brainard, 1997; Pelli, 1997; Kleiner et al., 2007).
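The stimuli were generated in MATLAB with Psychtoolbox; as a rough illustration, the Gaussian-windowed random-dot texture can be sketched in Python/NumPy. The patch size, pixels-per-degree scale, and random seed below are arbitrary choices for illustration, not values from the paper:

```python
import numpy as np

def make_dot_texture(size_px=256, dot_px=4, sd_deg=0.96, px_per_deg=40, seed=0):
    """Random-dot texture (50% black / 50% white) with a Gaussian contrast window."""
    rng = np.random.default_rng(seed)
    n_dots = size_px // dot_px
    # Each 4 x 4 px dot is randomly black (-1) or white (+1) in contrast units.
    dots = rng.choice([-1.0, 1.0], size=(n_dots, n_dots))
    texture = np.kron(dots, np.ones((dot_px, dot_px)))
    # Gaussian contrast envelope (SD = 0.96 deg) centered on the patch.
    coords = (np.arange(size_px) - size_px / 2) / px_per_deg
    xx, yy = np.meshgrid(coords, coords)
    envelope = np.exp(-(xx**2 + yy**2) / (2 * sd_deg**2))
    return texture * envelope  # contrast in [-1, 1]

patch = make_dot_texture()
```

Animating the stimulus would then amount to translating either the texture, the envelope, or both between frames, which is the manipulation that distinguishes the conditions described below.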

Figure 1.

VSTM task with delay-period retinal distractors. A, Visual motion stimulus. The random-dot texture consisted of 50% black dots (luminance, 0.1 cd/m2) and 50% white dots (luminance, 70.0 cd/m2), with contrast modulated by a Gaussian window (SD, 0.96). B, Schematic illustration of distractor conditions. Each condition was designed to generate identical retinal motion with distinct perception. C, Trial sequence and conditions. Black arrows indicate the motion of the random-dot texture, while white arrows represent the motion of the Gaussian window and the fixation spot. Observers were instructed to compare the speeds of the sample and test motion stimuli while ignoring the distractor presented between them. D, Spatiotemporal diagrams of motion stimuli during the delay period. In the fixation condition of Experiment 1 (left), the random-dot texture moved within the stationary Gaussian window, the same configuration used when presenting the sample and test motion (Fig. 1C). In the pursuit condition of Experiment 1 (middle), the fixation spot moved in the opposite direction to the sample motion, while the Gaussian window moved in the same direction over the stationary random-dot texture. In Experiment 2 (right), both the Gaussian window and random-dot texture moved in the same direction at the same speed as the fixation spot.

Experimental procedure

The schematic illustration and procedure are shown in Figure 1, B and C. In each trial, observers were asked to compare the speeds of two motion stimuli presented sequentially (i.e., the sample and test motion) while ignoring task-irrelevant motion stimuli presented between them (i.e., the distractor motion). All motion stimuli were presented in pairs, positioned 4.0° above and below a fixation spot (a white Gaussian dot with SD of 0.15°).

Each trial began with the fixation spot displayed at the center of the monitor and a text instruction (“Press any key to continue”) above it. Observers were instructed to maintain their gaze on the fixation spot throughout the trial. Once any key was pressed on a handheld keypad, the text disappeared, followed by a 1.2–1.5 s fixation period. Subsequently, motion stimuli (sample motion) were presented, where each random-dot texture moved leftward or rightward within a stationary Gaussian window for 0.4 s, with the speed randomly selected from 6.0, 7.0, or 8.0°/s (Fig. 1C, sample period). The motion direction reversed between trials. Immediately after the sample motion offset, random noise masks, in which each dot was randomly replaced every 0.01 s, were presented for 0.2 s. This mask was introduced to minimize the motion aftereffect (Anstis et al., 1998) and was used after all motion stimuli (Zokaei et al., 2014). Following the mask, a 2.0 s fixation period was provided. Between 1.0 and 1.7 s into this fixation period, a text instruction was displayed above the fixation spot (i.e., “Fixation” or “Pursuit”), informing observers of the required eye movement during the upcoming motion stimulus (Fig. 1C, delay period). The delay period included a prestimulus epoch (0.6 s), a distractor stimulus epoch (0.4 s), and a noise mask epoch (0.2 s). Stimulus details varied by the experimental condition and are described later. After an additional 2.0 s fixation period, another pair of motion stimuli (test motion) were presented, similar to the sample motion but with the speed selected from −2.0, −1.0, ±0.0, +1.0, and +2.0°/s relative to the sample motion (Fig. 1C, test period). Following a 1.0 s blank screen, observers reported whether the test motion was perceived as faster (T+ response) or not (T − response) compared with the sample motion using the keypad (pressing the “6” key for T+ and the “4” key for T −). No performance feedback was provided.

In Experiment 1, to examine the distraction effect of perceptually suppressed retinal inputs on VSTM, the distractor motion was presented under either a fixation or pursuit condition. In the fixation condition, the fixation spot disappeared and reappeared at a new location 1.5–2.0° in the opposite direction of the sample motion at the onset of the prestimulus epoch. Observers were instructed to refixate on the new fixation spot, simulating the catch-up saccade typically seen at pursuit onset in the pursuit condition. This manipulation also prevented overlap between sample and distractor stimuli in screen coordinates. Distractor motion was then presented for 0.4 s, using the same procedure as the sample motion, but with the speed selected from −2.0, ±0.0, or +2.0°/s relative to the sample motion. A 0.2 s noise mask followed (Fig. 1D, left). In the pursuit condition, observers were instructed to smoothly track the fixation spot, which moved in the opposite direction of the sample motion at a constant speed selected from −2.0, ±0.0, and +2.0°/s relative to the sample motion. The fixation spot started to move at the onset of the prestimulus epoch and then stopped and disappeared at the offset of the noise mask epoch (for a total of 1.2 s). Distractor motion was presented for 0.4 s after the prestimulus epoch, during which the Gaussian window moved at the same speed and direction as the fixation spot over the stationary random-dot texture. This stimulus provided observers with retinal motion without physical displacement of the random-dot texture (Fig. 1D, middle). The retinal motion induced by the distractor stimuli was matched between the fixation and pursuit conditions, allowing a direct comparison of the effects of perceptually suppressed retinal inputs on VSTM. A 0.2 s noise mask followed. In Experiment 1, each observer completed 450 trials (2 eye conditions × 3 distractor speeds × 5 test speeds × 15 repetitions), interleaved and divided into 10 blocks.

In Experiment 2, the distractor motion was similar to the pursuit condition in Experiment 1, except that both the Gaussian window and random-dot texture moved in the same direction with the same speed as the fixation spot. Thus, the distractor stimulus physically moved but had no motion in the retina during smooth pursuit (Fig. 1D, right). The speed of the moving fixation spot during the delay period was selected from −2.0, ±0.0, and +2.0°/s relative to the sample speed. In Experiment 2, each observer completed 225 trials (3 distractor speeds × 5 test speeds × 15 repetitions), interleaved and divided into five blocks.

In addition to the main experiments, we conducted a post hoc experiment to confirm the validity of the pursuit condition in Experiment 1 by evaluating motion perception induced by retinal motion resulting from eye movements (Fig. S2A). This post hoc experiment was designed to isolate the delay period in the pursuit condition of Experiment 1, including the prestimulus, distractor stimulus, and noise mask epochs. After a 1.5–1.8 s fixation period, the fixation spot and the Gaussian windows moved rightward or leftward at a constant speed selected from 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, and 10.0°/s, corresponding to the range used in Experiment 1. The task included two types of trials: in the standard trial, the random-dot texture did not move, as in the pursuit condition of Experiment 1; in the catch trial, the random-dot texture moved in the opposite direction of the fixation spot at half its speed. Following a 1.0 s blank screen, observers reported whether the random-dot texture moved in world coordinates using the keypad. No performance feedback was provided. Each observer completed 168 trials (7 pursuit speeds × 2 trial types × 12 repetitions), interleaved and divided into three blocks.

In each experiment, observers completed one or two practice blocks for familiarization before the main task.

Data analysis

Eye motion data were processed following procedures from previous studies (Miyamoto et al., 2022, 2025). Eye position data were filtered with a second-order Butterworth low-pass filter with a 15 Hz passband. Eye velocity and acceleration were calculated by digital differentiation with the central difference algorithm and then filtered with a second-order Butterworth low-pass filter with a 30 Hz passband. Saccades were identified based on criteria of acceleration exceeding 1,000°/s2 or velocity exceeding 30°/s. Eye speed was defined as the mean eye velocity during the distractor stimulus epoch of the delay period. Pursuit gain was the ratio of eye speed to the speed of the fixation spot movement. All motion directions (stimuli and eye movements) were coded as positive to the right.
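The filtering and event-detection pipeline described above can be sketched as follows (a Python/SciPy approximation of the authors' MATLAB processing; applying the filters in zero-phase form via `filtfilt` is an assumption on our part):

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000.0  # sampling rate, Hz (eye position digitized at 1 kHz)

def process_eye_trace(position_deg):
    """Filter an eye position trace and flag saccade samples.

    Pipeline: 15 Hz low-pass on position; velocity and acceleration by
    central differences; 30 Hz low-pass on the derivatives; saccade
    detection at >30 deg/s or >1,000 deg/s^2, as in the text.
    """
    b_pos, a_pos = butter(2, 15.0 / (FS / 2))   # second-order, 15 Hz passband
    pos = filtfilt(b_pos, a_pos, position_deg)
    vel = np.gradient(pos) * FS                 # central difference, deg/s
    acc = np.gradient(vel) * FS                 # deg/s^2
    b_der, a_der = butter(2, 30.0 / (FS / 2))   # second-order, 30 Hz passband
    vel = filtfilt(b_der, a_der, vel)
    acc = filtfilt(b_der, a_der, acc)
    saccade = (np.abs(vel) > 30.0) | (np.abs(acc) > 1000.0)
    return pos, vel, acc, saccade
```

Eye speed would then be the mean of `vel` over the distractor epoch, and pursuit gain the ratio of that value to the fixation-spot speed.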

Motion speed variables were defined relative to the speed of the sample motion. Therefore, Δdist-speed represented the speed of the distractor motion relative to the sample motion (i.e., −2.0, ±0.0, and +2.0°/s), and Δtest-speed represented the speed of test motion relative to the sample motion (i.e., −2.0, −1.0, ±0.0, +1.0, and +2.0°/s). Additionally, we calculated the actual speed of distractors by subtracting eye speed from the distractor (i.e., random-dot texture) speed in world coordinates. Actual Δdist-speed was then defined as the actual speed of distractor relative to the sample speed. Δeye-speed was defined as the difference between absolute eye speed and absolute sample speed.
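Under the sign convention above (rightward positive), the derived speed variables might be computed as in the sketch below. The use of absolute values for the distractor term, by analogy with the stated definition of Δeye-speed, is an assumption:

```python
def actual_ddist_speed(dist_vel_world, eye_vel, sample_speed):
    """Actual delta-dist-speed: retinal distractor speed relative to the sample.

    dist_vel_world: random-dot texture velocity in world coordinates (deg/s,
    rightward positive); eye_vel: mean eye velocity during the distractor
    epoch; sample_speed: sample motion speed (signed).
    """
    retinal_vel = dist_vel_world - eye_vel  # eye speed subtracted, per the text
    # Comparing magnitudes is an assumption, by analogy with delta-eye-speed.
    return abs(retinal_vel) - abs(sample_speed)

def deye_speed(eye_vel, sample_speed):
    """Difference between absolute eye speed and absolute sample speed."""
    return abs(eye_vel) - abs(sample_speed)
```

For example, with a stationary texture and the eye tracking at −5.28°/s against a 6.0°/s sample (pursuit gain 0.88), the actual Δdist-speed comes out at −0.72°/s, matching the Results' point that a gain below 1.0 reduces the effective distractor speed in the pursuit condition.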

Invalid trials were identified and excluded based on the following criteria: observers blinked during any motion stimulus; shifted their gaze by 1.0° or more from the fixation spot during the sample or test periods in either condition, or during the delay period in the fixation condition; made saccades during the delay period in the pursuit condition; or had a pursuit gain <0.5 or >1.5 in the pursuit condition. As a result, 3,353 out of 3,600 trials (93.1%) in Experiment 1, 1,706 out of 1,800 trials (94.8%) in Experiment 2, and 1,862 out of 2,016 trials (92.4%) in the post hoc experiment were included in the analysis.

Statistical analysis

To assess the distraction effect on VSTM, we used generalized linear mixed models (GLMMs) with a probit link function to predict T+ responses. In each experiment, we compared three model structures with different combinations of fixed effects, reflecting the experimental design and hypotheses. For each model, we also explored which combinations of random effects best predicted behavior (details in the Results section). As a result, we constructed 20 models in Experiment 1 and 16 models in Experiment 2. Model selection was based on the Akaike information criterion (AIC). Note that model selection results were consistent when using the Bayesian information criterion (data not shown). The magnitude of the distraction effect was quantified in units of equivalent test motion speed by calculating the ratio of the estimates of fixed effects for distractor and test motion. To visualize the distraction effect, we generated psychometric curves showing the proportion of T+ response as a function of Δtest-speed using the selected models.
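As a simplified illustration of this modeling approach, the sketch below fits a fixed-effects-only probit model by maximum likelihood on synthetic data and recovers the distraction magnitude as the ratio of the distractor and test slopes. The paper's GLMMs additionally include by-observer random effects, omitted here, and all coefficients in the simulation are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_probit(X, y):
    """Fixed-effects probit regression by maximum likelihood.

    X: (n, p) design matrix including an intercept column; y: binary T+
    responses. No random effects, unlike the GLMMs described in the text.
    """
    def neg_ll(beta):
        p = np.clip(norm.cdf(X @ beta), 1e-9, 1 - 1e-9)
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return minimize(neg_ll, np.zeros(X.shape[1]), method="BFGS").x

# Synthetic demo with hypothetical coefficients: a unit effect of
# delta-test-speed and a small attractive pull (-0.15) from the distractor.
rng = np.random.default_rng(1)
n = 4000
d_dist = rng.choice([-2.0, 0.0, 2.0], size=n)
d_test = rng.choice([-2.0, -1.0, 0.0, 1.0, 2.0], size=n)
y = (rng.random(n) < norm.cdf(1.0 * d_test - 0.15 * d_dist)).astype(float)
X = np.column_stack([np.ones(n), d_dist, d_test])
beta = fit_probit(X, y)
# Distraction magnitude in units of equivalent test motion speed:
magnitude = beta[1] / beta[2]
```

The ratio `magnitude` corresponds to the paper's quantification of the distraction effect in equivalent test-speed units.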

To further examine the predictability of observers' responses from the distractor, we performed logistic regression on no-difference trials (i.e., Δtest-speed = 0) to classify responses based on the actual Δdist-speed. Classification analyses were conducted separately within each participant, and performance was assessed using a stratified twofold cross-validation procedure with 1,000 iterations. In Experiment 1, we trained a classifier on data from one condition and tested its performance on data from the same condition (within-condition classification) and from the other condition (cross-condition classification). Specifically, each classifier was trained on half of the data from a given condition and tested on the remaining half from both the same and other conditions. This procedure was repeated by swapping the training and test halves. To address the imbalance in the number of trials between conditions, we evaluated the classification performance using the area under the precision-recall curve (PR-AUC), which is more robust to imbalanced data than the area under the receiver operating characteristic curve (ROC-AUC). This entire procedure was repeated 1,000 times with different random partitions of the data, and overall classifier performance for each condition was defined as the mean PR-AUC across all iterations. To assess statistical significance, the same procedures were applied to classifiers trained on shuffled datasets, in which class labels were randomly permuted. Paired t tests were used to compare classification performance between the true and shuffled data. In Experiment 2, the classification analysis followed a similar within-condition procedure as in Experiment 1, with the classifier predicting responses based on Δeye-speed.
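The within-condition part of this procedure might look like the following sketch (Python/scikit-learn rather than the authors' MATLAB; the iteration count is reduced from 1,000 for brevity, and the cross-condition variant would simply test each fold's classifier on the other condition's held-out data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score
from sklearn.model_selection import StratifiedKFold

def pr_auc_cv(x, y, n_iter=100, seed=0):
    """Mean PR-AUC over repeated stratified twofold cross-validation.

    x: per-trial actual delta-dist-speed (1-D); y: binary T+ responses on
    no-difference trials. Shuffling y before calling this function gives
    the empirical chance level used for significance testing.
    """
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    y = np.asarray(y)
    scores = []
    for i in range(n_iter):
        cv = StratifiedKFold(n_splits=2, shuffle=True, random_state=seed + i)
        for train_idx, test_idx in cv.split(x, y):
            clf = LogisticRegression().fit(x[train_idx], y[train_idx])
            prob = clf.predict_proba(x[test_idx])[:, 1]
            scores.append(average_precision_score(y[test_idx], prob))
    return float(np.mean(scores))
```

PR-AUC is read against the positive-class prevalence (its chance level), which is why the shuffled-label baseline is computed with the identical procedure.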

In the post hoc experiment, the hit rate was defined as the proportion of correct responses in the standard trials (i.e., correctly identifying the stimulus as stationary when it did not move) and the correct rejection (CR) rate was defined as the proportion of correct responses in the catch trials (i.e., correctly identifying the stimulus as moving when it moved). These rates were compared using a paired t test to examine whether perceptual compensation during smooth pursuit reflected the stability assumption, namely, the expectation that the external world remains stationary (Weiss et al., 2002; Freeman et al., 2010). Additionally, to test whether miss responses (i.e., perceiving the stimulus as moving when it did not move) depended on eye speed, we conducted a classification analysis within the standard trials to predict responses from eye speed, using the same procedure as described above.

All statistical analyses were performed using MATLAB with a significance level of 0.05. For classification analyses in Experiment 1, the significance level was adjusted using Holm–Bonferroni’s method to account for multiple comparisons (n = 4). Statistical significance results are denoted as a single asterisk in the figures.
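The Holm–Bonferroni step-down adjustment applied to the four classification comparisons can be sketched as:

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Holm-Bonferroni step-down correction.

    Returns adjusted p-values in the original order: the k-th smallest raw
    p-value is multiplied by (n - k + 1), with a running maximum enforcing
    monotonicity and a cap at 1.0. Here n = 4, as in the text.
    """
    p = list(p_values)
    n = len(p)
    order = sorted(range(n), key=lambda i: p[i])
    adjusted = [0.0] * n
    running_max = 0.0
    for rank, i in enumerate(order):
        running_max = max(running_max, (n - rank) * p[i])
        adjusted[i] = min(1.0, running_max)
    return adjusted
```

An adjusted p-value below 0.05 is then reported as significant, matching the single-asterisk convention in the figures.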

Data and code availability

All data and custom code have been deposited at Zenodo (https://doi.org/10.5281/zenodo.17376082) and are publicly available as of the date of publication. Any additional information required to reanalyze the data reported in this paper is available from the corresponding author (T.M., miyamoto{at}i.nagoya-u.ac.jp) upon request.

Results

Retinal motion distractor induces an attractive bias in VSTM for motion speed, even when generated by eye movements

In Experiment 1, we presented distractor motion as retinal motion with or without physical displacement (i.e., for the fixation or pursuit conditions) to test the distraction effect of perceptually suppressed retinal inputs on VSTM for motion. For this purpose, we designed the pursuit condition to ensure that continuous ocular tracking of the fixation spot induced retinal motion identical to the fixation condition, but without motion in world coordinates.

We validated the distractor-induced bias in VSTM using GLMMs applied to 3,353 trials from eight observers. The constructed models can be broadly categorized into the following three types. Model 1 included the eye condition (the fixation or pursuit conditions), Δdist-speed (−2, ± 0, or +2°/s relative to the sample speed), Δtest-speed (−2, −1, ±0, +1, or +2°/s relative to the sample speed), and their interactions as fixed effects. While this model reflected our experimental design, it might not fully capture the true effect, as the actual speed of retinal motion depended on eye speed in each trial, particularly in the pursuit condition. Indeed, the pursuit gain varied across trials, with an average value of 0.88 (Fig. S1A). To address this, Model 2 included the eye condition, actual Δdist-speed [adjusted by eye speed (degrees per second)], Δtest-speed, and their interactions as fixed effects. Model 3 included the actual Δdist-speed, Δtest-speed, and their interaction as fixed effects, based on the hypothesis that retinal motion induced by the distractor biases VSTM regardless of whether it reflects actual motion in world coordinates or apparent motion induced by smooth pursuit. Finally, 20 models with different combinations of random effects added to these models were compared based on AIC (Fig. S1B), and then Model 3 with random effects of actual Δdist-speed, Δtest-speed, and their interaction was selected (Fig. 2A). The selected model revealed significant fixed effects of actual Δdist-speed (estimate, −0.16 [95% CI: −0.20, −0.12]; t(3349) = −8.47; p = 3.61 × 10−17) and Δtest-speed (estimate, 1.05 [95% CI: 0.96, 1.15]; t(3349) = 21.57; p = 9.93 × 10−97) with an r2 value of 0.92 (Fig. 2B). The negative estimate of actual Δdist-speed indicates a VSTM bias where the speed of memorized motion is attracted by the speed of distractor, referred to as the attractive bias (Lorenc et al., 2021). 
The magnitude of the distraction effect, computed as the ratio of the slopes for actual Δdist-speed and Δtest-speed, was −0.15. This subtle but significant effect is characteristic of the attractive bias and consistent with reports for VSTM of diverse visual features (Magnussen et al., 1991; Yoon et al., 2006; McKeefry et al., 2007). The attractive bias corresponds to a systematic shift of the psychometric curve as a function of Δtest-speed (Fig. 2C). Figure 2D summarizes the attractive bias induced by actual Δdist-speed, quantified using the point of subjective equality (PSE) computed from the psychometric curve. Overall, the PSE in the pursuit condition shifted in the negative direction, which can be explained by a reduced actual Δdist-speed resulting from the pursuit gain being <1.0.
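Given a probit psychometric function, the PSE follows directly from the fitted coefficients. A minimal sketch using the fixed-effect estimates above (the zero intercept is an assumption for illustration):

```python
def pse(beta0, beta_dist, beta_test, d_dist):
    """Delta-test-speed at which the probit psychometric function crosses 0.5.

    For P(T+) = Phi(beta0 + beta_dist * d_dist + beta_test * d_test),
    setting the linear predictor to zero gives
    d_test = -(beta0 + beta_dist * d_dist) / beta_test.
    """
    return -(beta0 + beta_dist * d_dist) / beta_test

# With the reported fixed effects (-0.16 for actual delta-dist-speed, 1.05
# for delta-test-speed) and an assumed zero intercept, a distractor 2 deg/s
# faster than the sample shifts the PSE by about +0.30 deg/s:
shift = pse(0.0, -0.16, 1.05, 2.0)
```

The positive shift for a faster distractor is exactly the attractive-bias signature: the memorized sample is pulled toward the distractor, so a faster test speed is needed to match it.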

Figure 2.

GLMMs predicting observers' responses (Experiment 1). A, AIC values for the three GLMMs (3,353 trials from 8 observers). Three core models were first constructed, each differing in the combination of fixed effects. Subsequently, a total of 20 models were tested by varying the structure of random effects for each core model structure. Here, only the best-fitting model for each structure is shown. Details of all models are shown in Figure S1B. Model 1, response ∼ 1 + condition × Δdist-speed × Δtest-speed + (1 + condition × Δdist-speed × Δtest-speed | observers). Model 2, response ∼ 1 + condition × actual Δdist-speed × Δtest-speed + (1 + actual Δdist-speed × Δtest-speed | observers). Model 3, response ∼ 1 + actual Δdist-speed × Δtest-speed + (1 + actual Δdist-speed × Δtest-speed | observers). Here, response indicates a T+ response (i.e., the test motion was perceived as faster than the sample). Δtest-speed refers to the test speed relative to the sample speed. Δdist-speed is the distractor speed relative to the sample speed as determined by experimental condition, and actual Δdist-speed is the distractor speed relative to the sample speed, adjusted for eye speed. B, Fixed effect estimates and 95% CIs (vertical bars) for the selected model. Asterisks denote statistical significance (p < 0.05). C, Psychometric curves plotted as a function of Δtest-speed for the fixation (top panel) and pursuit (bottom panel) conditions. For each condition combination, the mean actual Δdist-speed was computed per observer and used as input to the GLMM, generating a total of 48 curves (2 eye movement conditions × 3 Δdist-speed levels × 8 observers). D, Relationship between the PSE and mean actual Δdist-speed. The scatterplot displays a total of 48 data points corresponding to the psychometric curves in Figure 2C. The top panel summarizes the mean ± 95% CI of mean actual Δdist-speed for the fixation (Fix) and pursuit (Pur) conditions. 
The right panel shows the mean ± 95% CI of PSE for both conditions.

Predictability of observers’ responses from the distractor

The finding that the best-supported model excluded eye condition as a fixed effect suggests that the influence of distractors can be predicted from retinal motion regardless of eye movements. To examine the predictability of observers' responses from the distractor, and to further confirm that the distraction effect is independent of its origin, we used logistic regression to classify observers' responses based on the actual Δdist-speed. We focused on no-difference trials (i.e., trials where Δtest-speed = 0), in which observers were expected to make random guesses. We trained a classifier on the dataset from one condition and tested its performance both on the same condition (within-condition classification) and on the other condition's dataset (cross-condition classification; Fig. 3A). If, as suggested by the GLMM results, retinal motion induces the attractive bias in VSTM regardless of its origin, the classifier should perform above chance even when applied to data from the untrained condition. As expected, the classifier trained on data from the fixation condition performed significantly above empirical chance levels computed from label-shuffled data, both when tested on the fixation condition itself (t(7) = 2.47; adjusted p = 4.28 × 10−2; Fig. 3B) and when tested on the pursuit condition (t(7) = 5.74; adjusted p = 2.10 × 10−3; Fig. 3D). Similarly, the classifier trained on the pursuit condition also yielded significant performance (within-condition, t(7) = 3.79; adjusted p = 1.36 × 10−2; Fig. 3C; cross-condition, t(7) = 6.02; adjusted p = 2.10 × 10−3; Fig. 3E). The fact that classification performance remained significant even in the cross-condition tests indicates that observers' responses were, at least in part, biased by the actual Δdist-speed in the same manner across both conditions. These results further support the conclusion that the attractive bias in VSTM for motion speed is driven by distractor-induced retinal inputs, regardless of eye movements or perception.
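The within-/cross-condition classification procedure described above can be sketched as follows. This is a minimal illustration on synthetic no-difference trials: all data, parameter values, and variable names are made up, and scikit-learn's logistic regression and PR-AUC stand in for the authors' actual pipeline.

```python
# Sketch of the within-/cross-condition classification analysis.
# All data here are synthetic and parameter values are illustrative;
# the authors' actual pipeline is not reproduced.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)

def simulate_condition(n_trials=200, bias=1.5):
    """Simulate no-difference trials in which the actual Δdist-speed
    weakly attracts the observer's T+ response."""
    dist_speed = rng.normal(0.0, 1.0, n_trials)          # actual Δdist-speed
    p_tplus = 1.0 / (1.0 + np.exp(-bias * dist_speed))   # logistic response bias
    responses = (rng.random(n_trials) < p_tplus).astype(int)
    return dist_speed.reshape(-1, 1), responses

X_fix, y_fix = simulate_condition()   # fixation condition
X_pur, y_pur = simulate_condition()   # pursuit condition

clf = LogisticRegression().fit(X_fix, y_fix)

# Within-condition: test on the trained condition; cross-condition: the other.
within = average_precision_score(y_fix, clf.predict_proba(X_fix)[:, 1])
cross = average_precision_score(y_pur, clf.predict_proba(X_pur)[:, 1])

# Empirical chance level from label-shuffled data.
chance = np.mean([
    average_precision_score(rng.permutation(y_pur),
                            clf.predict_proba(X_pur)[:, 1])
    for _ in range(200)
])
print(f"within PR-AUC = {within:.2f}, cross PR-AUC = {cross:.2f}, "
      f"shuffled chance ~ {chance:.2f}")
```

Because both simulated conditions share the same underlying response bias, the classifier generalizes across them, mirroring the cross-condition result in the text.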

Figure 3.

Classification analysis of observers' responses based on retinal motion induced by distractors (Experiment 1). A, Schematic diagram of the within- and cross-condition classification procedure. B, Performance of within-condition classification for the fixation condition, evaluated using the PR-AUC. The classifier was trained and tested within the same condition. Colored dots represent each observer's mean performance using true labels, whereas gray dots represent performance using label-shuffled data. Horizontal and vertical bars represent the group mean and 95% CIs, respectively. An asterisk denotes statistical significance (paired t test, p < 0.05), corrected for multiple comparisons using the Holm–Bonferroni method (n = 4). C, Performance of within-condition classification for the pursuit condition, plotted in the same format as Figure 3B. D, Performance of cross-condition classification for the fixation condition, plotted in the same format as Figure 3B. The classifier was trained on data from one condition and tested on data from the other. E, Performance of cross-condition classification for the pursuit condition, plotted in the same format as Figure 3B.

Validation of motion perception suppression in the pursuit condition

Based on previous findings regarding perceptual invariance during smooth pursuit (Thier et al., 2001; Furman and Gur, 2012), the experimental protocol employed in the pursuit condition was expected to suppress motion perception resulting from eye movement-induced retinal motion. However, subjective reports on perception of the distractor were not collected during this condition, as such measurements could have directed observers' attention toward the distractor and thereby conflicted with the instruction to ignore it. To address this limitation, we conducted a separate post hoc experiment to validate the effectiveness of the pursuit condition in suppressing motion perception (Fig. S2A). The hit rate was 0.88 ± 0.07 (mean ± SD), indicating that observers accurately reported the stimulus as stationary when it did not move. This hit rate was significantly higher than the CR rate (0.76 ± 0.07; mean ± SD; t(11) = 3.31; p = 6.90 × 10−3; Fig. S2B). These results indicate that perceptual compensation during smooth pursuit more often causes moving stimuli to appear stationary than stationary stimuli to appear moving, consistent with previous findings related to the stability assumption (Weiss et al., 2002; Freeman et al., 2010).
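The hit versus correct-rejection comparison above is a standard paired t test across observers. A minimal sketch follows; the per-observer rates are made up, and only the sample size (12 observers, hence df = 11) matches the text.

```python
# Illustrative paired t test comparing per-observer hit and correct-rejection
# (CR) rates, as in the post hoc validation experiment. The rates below are
# fabricated for illustration; only n = 12 (df = 11) follows the text.
import numpy as np
from scipy import stats

hit = np.array([0.95, 0.88, 0.80, 0.91, 0.85, 0.97,
                0.86, 0.90, 0.82, 0.89, 0.93, 0.84])
cr = np.array([0.80, 0.75, 0.70, 0.82, 0.74, 0.85,
               0.72, 0.78, 0.68, 0.76, 0.81, 0.71])

t_stat, p_val = stats.ttest_rel(hit, cr)  # paired test, df = len(hit) - 1 = 11
print(f"t(11) = {t_stat:.2f}, p = {p_val:.2e}")
```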

Nevertheless, the less-than-perfect hit rate implies that in some trials, observers misperceived stationary stimuli as moving (i.e., miss trials). This aligns with reports that perceptual compensation during smooth pursuit is not always complete (Thier et al., 2001; Furman and Gur, 2012). One relevant phenomenon is the Filehne illusion (Filehne, 1922), where a stationary background is perceived to move in the opposite direction of eye movements during smooth pursuit. This illusion is known to be modulated by eye speed (Wertheim, 1987). If the Filehne illusion contributed to the misperception observed in the post hoc experiment, one would expect miss responses to vary systematically with eye speed. To test this possibility, we conducted a classification analysis on all trials containing stationary stimuli (i.e., hit and miss trials) to predict responses based on eye speed. The classifier did not perform above an empirical chance level (t(11) = 0.39; p = 0.71; Fig. S2C), suggesting that miss trials were not related to the Filehne illusion.

In sum, these results confirm that the pursuit condition effectively suppressed motion perception of the distractor and did not induce related motion illusions. It is therefore unlikely that such perceptual effects account for the results observed in Experiment 1.

Eye movement alone does not act as a distractor

In Experiment 1, because the speed of retinal motion from the distractor in the pursuit condition corresponded directly to eye speed, we could not rule out the possibility that eye movements themselves act as a distractor of VSTM for motion speed. Some studies have suggested that eye movements, or their control, during the delay period can distract VSTM for locations (Pearson and Sahraie, 2003; Postle et al., 2006). Furthermore, a subpopulation of MST neurons, a candidate brain region thought to play a critical role in providing distraction resistance of VSTM (Zahorodnii et al., 2025), exhibits firing activity related to smooth pursuit without substantial visual motion (Ilg and Thier, 2003; Ono and Mustari, 2006, 2012). Therefore, Experiment 2 was designed to test the effect of eye movements on the VSTM bias while removing retinal motion. The procedure in Experiment 2 was largely identical to the pursuit condition in Experiment 1, except that the random-dot texture and the Gaussian window moved coherently in the same direction, resulting in no retinal motion during smooth pursuit (Fig. 1C,D). If the attractive bias observed in the pursuit condition of Experiment 1 was driven by eye movements themselves rather than retinal motion, the distraction effect should persist in Experiment 2.

Here, three models with different combinations of fixed effects were designed to predict the T+ response (1,706 trials from eight observers). Model 1 mirrored the selected model in Experiment 1, including the actual Δdist-speed, Δtest-speed, and their interaction as fixed effects. Note that the actual Δdist-speed in Experiment 2 was determined solely by pursuit gain (Fig. S3A) and was consistently smaller than in Experiment 1. Model 2 included the Δeye-speed (the eye speed during smooth pursuit relative to the sample speed [deg/s]), Δtest-speed, and their interaction as fixed effects; by excluding the retinal motion of the distractor from the fixed effects, this model reflected the hypothesis that eye movements, rather than retinal motion, during the delay period led to the VSTM bias. Model 3 included variables for both retinal and eye information, namely the actual Δdist-speed, Δeye-speed, Δtest-speed, and their interactions as fixed effects. Finally, 16 models with different combinations of random effects added to these core models were compared based on AIC (Fig. S3B), and Model 2 with random effects of the Δeye-speed, Δtest-speed, and their interaction was selected (Fig. 4A). This model revealed a significant fixed effect of Δtest-speed (estimate, 1.07 [95% CI: 0.94, 1.20]; t(1702) = 15.75; p = 2.79 × 10−52) but no significant effect of Δeye-speed (estimate, −0.03 [95% CI: −0.07, 0.02]; t(1702) = −1.02; p = 0.31), with an r2 value of 0.92 (Fig. 4B). Indeed, Δeye-speed no longer explained the shift of the psychometric curve (Fig. 4C,D).

Figure 4.

Summary of Experiment 2. A, AIC values for the three GLMMs (1,706 trials from 8 observers). As in Figure 2A, only the best-fitting model for each core model structure is shown. Details of all 16 tested models are provided in Figure S3B. Model 1, response ∼ 1 + actual Δdist-speed × Δtest-speed + (1 + condition × actual Δdist-speed × Δtest-speed | observers). Model 2, response ∼ 1 + Δeye-speed × Δtest-speed + (1 + Δeye-speed × Δtest-speed | observers). Model 3, response ∼ 1 + actual Δdist-speed × Δtest-speed + (1 | observers). Here, Δeye-speed indicates the eye speed relative to the sample speed. B, Fixed effect estimates and 95% CIs (vertical bars) for the selected GLMM. Asterisks denote statistical significance (p < 0.05). C, Psychometric curves plotted as a function of Δtest-speed. For each condition combination, the mean Δeye-speed was computed per observer and used as input to the GLMM, generating 24 psychometric curves (3 Δeye-speed levels × 8 observers). D, Relationship between the PSE and mean Δeye-speed, plotted in the same format as Figure 2D. E, Performance of classification, plotted in the same format as Figure 3B. No statistically significant difference was observed (paired t test, p = 0.76).

We also trained a classifier to predict observers' responses in no-difference trials using Δeye-speed. The classifier did not perform above chance (t(11) = 0.32; p = 0.76; Fig. 4E). These results demonstrate that eye movements alone do not act as a distractor biasing VSTM for motion speed. Thus, the attractive bias observed in Experiment 1 is attributable specifically to retinal motion.

Discussion

A series of experiments demonstrate that, for the feature of motion speed, distractors presented during the delay period bias VSTM through retinal inputs rather than perception. This effect may be attributed to the hierarchical processing of motion perception. Along the dorsal stream, representations of visual motion are progressively transformed from retinal to world coordinates. While neurons in the MT primarily encode motion in retinal coordinates, some MST neurons represent motion in world coordinates that better align with perception (Ilg et al., 2004; Inaba et al., 2007; Chukoskie and Movshon, 2009). The MST has been identified as the first area in the dorsal stream to exhibit feature-selective sustained spiking activity associated with retention of VSTM for motion (Mendoza-Halliday et al., 2014) and is a potential locus for reducing sensory–memory interference (Zahorodnii et al., 2025). Moreover, a study posited that a PFC–PPC circuit involved in distraction-resistant VSTM inherits neural signals from the dorsal stream (Murray et al., 2017). However, at this stage of motion processing, the neural signals may still primarily reflect retinal inputs rather than perception and thus could underlie the observed VSTM bias dependent on retinal inputs. In fact, perceptual invariance during smooth pursuit is supported by distributed networks rather than the dorsal stream alone (Haarmeier and Kammer, 2010; Furman and Gur, 2012). Even within the MST, most neurons encode visual motion in retinal rather than world coordinates (Lee et al., 2010). A large population of neurons reflecting perceived motion has been identified in the visual posterior sylvian (VPS) area in monkeys, where neurons encode perceived as well as retinal motion (Dicke et al., 2008). The VPS, located in the caudal lateral sulcus below the classical dorsal stream, receives inputs from the MST (Guldin and Grüsser, 1998).
In humans, the putative VPS homolog exhibits perception-related activity during smooth pursuit (Thier et al., 2001; Trenner et al., 2008). A patient with bilateral lesions of the parieto-occipital cortex, possibly including the human VPS, was reported to perceive smooth pursuit-induced retinal motion as real-world motion, even though the primary visual cortex (V1) and most probably at least parts of the MT/MST were spared (Haarmeier et al., 1997). Outside the VPS, perceptual invariance during smooth pursuit has been linked to neural activity in the cerebrocerebellar circuit and the parietoinsular vestibular cortex (Lindner et al., 2006). Taken together, these neural findings on perceptual invariance, combined with our behavioral results, suggest that sensory–memory interference may occur at processing stages prior to the formation of perception within the hierarchical motion transformation.

Our findings raise a broader question of whether the VSTM bias induced by retinal inputs, rather than by perception, generalizes to other visual features. For example, VSTM for object orientation and color has been shown to be biased by distractors during the delay period, even when observers do not actively attend to them (Teng and Kravitz, 2019). While that study did not directly examine the influence of perception, it nonetheless demonstrates that distractor-induced VSTM biases in these features can occur in the absence of active attention. On the other hand, studies that explicitly examined the influence of perception have reported that VSTM for object orientation is not biased by distractors when they are rendered subliminal (Wildegger et al., 2015) or presented to the nondominant eye during binocular rivalry (Rademaker et al., 2015). However, it should be noted that these unconscious distractors may have been too weak or noisy to exert a measurable influence on VSTM, potentially leading to inconsistent results (Silvanto and Soto, 2012). Taken together, current evidence does not allow us to conclude whether our findings generalize to other visual features. Nevertheless, based on our hypothesis, for features that undergo hierarchical perceptual transformations like motion processing, distractor-induced VSTM biases dependent on retinal inputs may also occur. For example, color information is encoded differently across the visual hierarchy: it is represented as the retinal chromatic stimulus in V1 and V2 but as the perceived color in higher visual areas such as V4 and VO1 (Kim et al., 2020). If sensory–memory interference in color processing occurred in regions representing retinal inputs, then distractor-induced biases may reflect retinal inputs rather than perception.
Conversely, some visual features, such as luminance, orientation, and size, are partially represented in a manner consistent with perception even in V1 (MacEvoy and Paradiso, 2001; Murray et al., 2006; Seymour et al., 2018). For these features, sensory–memory interference might be related to perception. Although experimentally dissociating retinal inputs from perception remains challenging, further studies are warranted to determine whether our findings apply to other visual features and to identify the specific brain regions involved in sensory–memory interference.

A computational study using RNNs suggests that the mechanism for resisting distractors is dissociable from, and supplementary to, the mechanism for storing VSTM (Zahorodnii et al., 2025). Currently, the most widely accepted mechanism for storing VSTM is the attractor model, which proposes that activity in RNNs stabilizes mnemonic representations of continuous variables through persistent bumps of activity (Chaudhuri and Fiete, 2016; Panichello et al., 2019; Brennan and Proekt, 2023). This model, originally inspired by findings of persistent neural activity in the PFC during the delay period (Wimmer et al., 2014), predicts that distractors sharing features with memoranda shift the bump state toward the distractor inputs, resulting in the attractive bias (Seeholzer et al., 2019). Early visual areas are capable of providing high-fidelity visual representations to attractor networks through neurons tuned to specific visual features (Chaudhuri and Fiete, 2016), such as the direction- and speed-selective neurons in the MT and MST (Celebrini and Newsome, 1994; Perrone and Thiele, 2001). Indeed, the PFC contains similar proportions of neurons encoding ongoing retinal inputs and VSTM (Mendoza-Halliday et al., 2014). Each type of information is partially encoded by distinct neuronal populations within the PFC and can be identified with a linear neural population decoder (Mendoza-Halliday and Martinez-Trujillo, 2017), suggesting that these signals reflect copies of inputs inherited from upstream visual areas together with local computation associated with VSTM (Roussy et al., 2021). For this reason, the attractor model for storing VSTM utilizes neural signals inherited from early visual areas, and the mechanism for resisting distractors, which operates upstream of the attractor network, may utilize the same format of neural signals. Specifically, for visual motion, the PFC is expected to inherit neural signals from the MT/MST (Mendoza-Halliday et al., 2014), resulting in a VSTM bias dependent on retinal inputs.

The hypothesis that neural signals inherited from early visual areas are utilized both for resisting distractors and for storing VSTM may also be desirable for top–down modulation of early visual areas by the PFC, which facilitates encoding of incoming visual stimuli (Merrikhi et al., 2017; Bahmani et al., 2018). While this facilitation effect becomes particularly pronounced when the memorandum and incoming stimuli are presented at the same location (Zaksas et al., 2001; Teng and Postle, 2021), mnemonic neural modulation extends globally beyond the retinotopic map of the memorandum (Ester et al., 2009). This property may stem from the coarser topographic organization of receptive fields in the PFC compared with early visual areas (Viswanathan and Nieder, 2017). Importantly, the top–down signal from the PFC serves as an internal template for comparison with bottom–up visual inputs, thereby shaping a local comparison circuit within early visual areas (Rademaker et al., 2019; Hallenbeck et al., 2021). Such top–down memory signals could coexist with ongoing visual signals in early visual areas through distinct neural populations or encoding formats (Mendoza-Halliday et al., 2014; Favila et al., 2022; Huang et al., 2024). Consistent with this view, behavioral biases induced by distractors have been predicted from VSTM-related neural activity in early visual areas (Rademaker et al., 2019; Hallenbeck et al., 2021), and VSTM contents can be decoded from early visual areas during visual memory tasks, but not during nonvisual memory tasks, even when both tasks require remembering the same visual object (Lee et al., 2013). This mechanism is well suited to common VSTM tasks such as change detection or discrimination and is also applicable to VSTM for motion (Bisley et al., 2004; Zaksas and Pasternak, 2006; Lui and Pasternak, 2011).
Collectively, the encoding of VSTM in a format congruent with visual representations in early visual areas may provide functional advantages not only for storing VSTM but also for subsequent cognitive processes.

In sum, our study demonstrates that retinal motion induced by smooth pursuit can distract VSTM for motion, even when perceptually suppressed. While perception is generally understood as a process of active inference about the external world based on sensory inputs (Pezzulo et al., 2024), most VSTM studies have not clearly distinguished between retinal inputs and perception. These findings also highlight the importance of distinguishing them in future VSTM research.

Footnotes

  • This study was supported by JSPS KAKENHI (Grant Numbers 22KJ1787 and 23K16671) and JST CREST (Grant Number JPMJCR22P5).

  • The authors declare no competing financial interests.

  • This paper contains supplemental material available at: https://doi.org/10.1523/JNEUROSCI.1027-25.2025

  • Correspondence should be addressed to Takeshi Miyamoto at miyamoto{at}i.nagoya-u.ac.jp.

SfN exclusive license.

References

  1. Anstis S, Verstraten FAJ, Mather G (1998) The motion aftereffect. Trends Cogn Sci 2:111–117. https://doi.org/10.1016/S1364-6613(98)01142-5
  2. Baddeley A (2010) Working memory. Curr Biol 20:136–140. https://doi.org/10.1016/j.cub.2009.12.014
  3. Bahmani Z, Daliri MR, Merrikhi Y, Clark K, Noudoost B (2018) Working memory enhances cortical representations via spatially specific coordination of spike times. Neuron 97:967–979. https://doi.org/10.1016/j.neuron.2018.01.012
  4. Bisley JW, Zaksas D, Droll JA, Pasternak T (2004) Activity of neurons in cortical area MT during a memory for motion task. J Neurophysiol 91:286–300. https://doi.org/10.1152/jn.00870.2003
  5. Brainard DH (1997) The psychophysics toolbox. Spat Vis 10:433–436. https://doi.org/10.1163/156856897X00357
  6. Brennan C, Proekt A (2023) Attractor dynamics with activity-dependent plasticity capture human working memory across time scales. Commun Psychol 1:28. https://doi.org/10.1038/s44271-023-00027-8
  7. Celebrini S, Newsome WT (1994) Neuronal and psychophysical sensitivity to motion signals in extrastriate area MST of the macaque monkey. J Neurosci 14:4109–4124. https://doi.org/10.1523/JNEUROSCI.14-07-04109.1994
  8. Chaudhuri R, Fiete I (2016) Computational principles of memory. Nat Neurosci 19:394–403. https://doi.org/10.1038/nn.4237
  9. Chukoskie L, Movshon JA (2009) Modulation of visual signals in macaque MT and MST neurons during pursuit eye movement. J Neurophysiol 102:3225–3233. https://doi.org/10.1152/jn.90692.2008
  10. Dicke PW, Chakraborty S, Thier P (2008) Neuronal correlates of perceptual stability during eye movements. Eur J Neurosci 27:991–1002. https://doi.org/10.1111/j.1460-9568.2008.06054.x
  11. Ester EF, Serences JT, Awh E (2009) Spatially global representations in human primary visual cortex during working memory maintenance. J Neurosci 29:15258–15265. https://doi.org/10.1523/JNEUROSCI.4388-09.2009
  12. Favila SE, Kuhl BA, Winawer J (2022) Perception and memory have distinct spatial tuning properties in human visual cortex. Nat Commun 13:5864. https://doi.org/10.1038/s41467-022-33161-8
  13. Filehne WU (1922) Über das optische Wahrnehmen von Bewegungen. Z Sinnesphysiol 53:134–145.
  14. Freeman TCA, Champion RA, Warren PA (2010) A Bayesian model of perceived head-centered velocity during smooth pursuit eye movement. Curr Biol 20:757–762. https://doi.org/10.1016/j.cub.2010.02.059
  15. Furman M, Gur M (2012) And yet it moves: perceptual illusions and neural mechanisms of pursuit compensation during smooth pursuit eye movements. Neurosci Biobehav Rev 36:143–151. https://doi.org/10.1016/j.neubiorev.2011.05.005
  16. Guldin WO, Grüsser OJ (1998) Is there a vestibular cortex? Trends Neurosci 21:254–259. https://doi.org/10.1016/S0166-2236(97)01211-3
  17. Haarmeier T, Kammer T (2010) Effect of TMS on oculomotor behavior but not perceptual stability during smooth pursuit eye movements. Cereb Cortex 20:2234–2243. https://doi.org/10.1093/cercor/bhp285
  18. Haarmeier T, Thier P, Repnow M, Petersen D (1997) False perception of motion in a patient who cannot compensate for eye movements. Nature 389:849–852. https://doi.org/10.1038/39872
  19. Hallenbeck GE, Sprague TC, Rahmati M, Sreenivasan KK, Curtis CE (2021) Working memory representations in visual cortex mediate distraction effects. Nat Commun 12:4714. https://doi.org/10.1038/s41467-021-24973-1
  20. Hawken MJ, Gegenfurtner KR (2001) Pursuit eye movements to second-order motion targets. J Opt Soc Am A 18:2282–2296. https://doi.org/10.1364/JOSAA.18.002282
  21. Huang J, Wang T, Dai W, Li Y, Yang Y, Zhang Y, Wu Y, Zhou T, Xing D (2024) Neuronal representation of visual working memory content in the primate primary visual cortex. Sci Adv 10:eadk3953. https://doi.org/10.1126/sciadv.adk3953
  22. Ilg UJ, Thier P (2003) Visual tracking neurons in primate area MST are activated by smooth-pursuit eye movements of an “imaginary” target. J Neurophysiol 90:1489–1502. https://doi.org/10.1152/jn.00272.2003
  23. Ilg UJ, Schumann S, Thier P (2004) Posterior parietal cortex neurons encode target motion in world-centered coordinates. Neuron 43:145–151. https://doi.org/10.1016/j.neuron.2004.06.006
  24. Inaba N, Shinomoto S, Yamane S, Takemura A, Kawano K (2007) MST neurons code for visual motion in space independent of pursuit eye movements. J Neurophysiol 97:3473–3483. https://doi.org/10.1152/jn.01054.2006
  25. Kim I, Hong SW, Shevell SK, Shim WM (2020) Neural representations of perceptual color experience in the human ventral visual pathway. Proc Natl Acad Sci U S A 117:13145–13150. https://doi.org/10.1073/pnas.1911041117
  26. Kleiner M, Brainard D, Pelli D, Ingling A, Murray R, Broussard C (2007) What’s new in Psychtoolbox-3. Perception 36:1–16.
  27. Lee B, Pesaran B, Andersen RA (2010) Area MSTd neurons encode visual stimuli in eye coordinates during fixation and pursuit. J Neurophysiol 105:60–68. https://doi.org/10.1152/jn.00495.2009
  28. Lee SH, Kravitz DJ, Baker CI (2013) Goal-dependent dissociation of visual and prefrontal cortices during working memory. Nat Neurosci 16:997–999. https://doi.org/10.1038/nn.3452
  29. Liesefeld HR, Liesefeld AM, Sauseng P, Jacob SN, Müller HJ (2020) How visual working memory handles distraction: cognitive mechanisms and electrophysiological correlates. Vis Cogn 28:372–387. https://doi.org/10.1080/13506285.2020.1773594
  30. Lindner A, Haarmeier T, Erb M, Grodd W, Thier P (2006) Cerebrocerebellar circuits for the perceptual cancellation of eye-movement-induced retinal image motion. J Cogn Neurosci 18:1899–1912. https://doi.org/10.1162/jocn.2006.18.11.1899
  31. Lorenc ES, Mallett R, Lewis-Peacock JA (2021) Distraction in visual working memory: resistance is not futile. Trends Cogn Sci 25:228–239. https://doi.org/10.1016/j.tics.2020.12.004
  32. Lui LL, Pasternak T (2011) Representation of comparison signals in cortical area MT during a delayed direction discrimination task. J Neurophysiol 106:1260–1273. https://doi.org/10.1152/jn.00016.2011
  33. MacEvoy SP, Paradiso MA (2001) Lightness constancy in primary visual cortex. Proc Natl Acad Sci U S A 98:8827–8831. https://doi.org/10.1073/pnas.161280398
  34. Magnussen S, Greenlee MW, Asplund R, Dyrnes S (1991) Stimulus-specific mechanisms of visual short-term memory. Vision Res 31:1213–1219. https://doi.org/10.1016/0042-6989(91)90046-8
  35. Matsuda K, Nagami T, Sugase Y, Takemura A, Kawano K (2017) A widely applicable real-time mono/binocular eye tracking system using a high frame-rate digital camera. In: Lecture Notes in Computer Science, Vol 10271, pp 593–608. https://doi.org/10.1007/978-3-319-58071-5_45
  36. McKeefry DJ, Burton MP, Vakrou C (2007) Speed selectivity in visual short term memory for motion. Vision Res 47:2418–2425. https://doi.org/10.1016/j.visres.2007.05.011
  37. McNab F, Dolan RJ (2014) Dissociating distractor-filtering at encoding and during maintenance. J Exp Psychol Hum Percept Perform 40:960–967. https://doi.org/10.1037/a0036013
  38. Mendoza-Halliday D, Martinez-Trujillo JC (2017) Neuronal population coding of perceived and memorized visual features in the lateral prefrontal cortex. Nat Commun 8:15471. https://doi.org/10.1038/ncomms15471
  39. Mendoza-Halliday D, Torres S, Martinez-Trujillo JC (2014) Sharp emergence of feature-selective sustained activity along the dorsal visual pathway. Nat Neurosci 17:1255–1262. https://doi.org/10.1038/nn.3785
  40. Merrikhi Y, Clark K, Albarran E, Parsa M, Zirnsak M, Moore T, Noudoost B (2017) Spatial working memory alters the efficacy of input to visual cortex. Nat Commun 8:15041. https://doi.org/10.1038/ncomms15041
  41. Miyamoto T, Miura K, Kizuka T, Ono S (2020) Properties of smooth pursuit and visual motion reaction time to second-order motion stimuli. PLoS One 15:e0243430. https://doi.org/10.1371/journal.pone.0243430
  42. Miyamoto T, Numasawa K, Ono S (2022) Changes in visual speed perception induced by anticipatory smooth eye movements. J Neurophysiol 127:1198–1207. https://doi.org/10.1152/jn.00498.2021
  43. Miyamoto T, Numasawa K, Hirano R, Yoshimura Y, Ono S (2025) Reduced latency in manual interception with anticipatory smooth eye movements. iScience 28:111849. https://doi.org/10.1016/j.isci.2025.111849
  44. Murray JD, Jaramillo J, Wang XJ (2017) Working memory and decision-making in a frontoparietal circuit model. J Neurosci 37:12167–12186. https://doi.org/10.1523/JNEUROSCI.0343-17.2017
  45. Murray SO, Boyaci H, Kersten D (2006) The representation of perceived angular size in human primary visual cortex. Nat Neurosci 9:429–434. https://doi.org/10.1038/nn1641
  46. Ono S, Mustari MJ (2006) Extraretinal signals in MSTd neurons related to volitional smooth pursuit. J Neurophysiol 96:2819–2825. https://doi.org/10.1152/jn.00538.2006
  47. Ono S, Mustari MJ (2012) Role of MSTd extraretinal signals in smooth pursuit adaptation. Cereb Cortex 22:1139–1147. https://doi.org/10.1093/cercor/bhr188
  48. Panichello MF, DePasquale B, Pillow JW, Buschman TJ (2019) Error-correcting dynamics in visual working memory. Nat Commun 10:3366. https://doi.org/10.1038/s41467-019-11298-3
  49. Pearson DG, Sahraie A (2003) Oculomotor control and the maintenance of spatially and temporally distributed events in visuo-spatial working memory. Q J Exp Psychol A 56:1089–1111. https://doi.org/10.1080/02724980343000044
  50. Pelli DG (1997) The VideoToolbox software for visual psychophysics: transforming numbers into movies. Spat Vis 10:437–442. https://doi.org/10.1163/156856897X00366
  51. Perrone JA, Thiele A (2001) Speed skills: measuring the visual speed analyzing properties of primate MT neurons. Nat Neurosci 4:526–532. https://doi.org/10.1038/87480
  52. Pezzulo G, Parr T, Friston K (2024) Active inference as a theory of sentient behavior. Biol Psychol 186:108741. https://doi.org/10.1016/j.biopsycho.2023.108741
  53. Postle BR, Idzikowski C, Della Sala S, Logie RH, Baddeley AD (2006) The selective disruption of spatial working memory by eye movements. Q J Exp Psychol 59:100–120. https://doi.org/10.1080/17470210500151410
  54. Rademaker RL, Bloem IM, De Weerd P, Sack AT (2015) The impact of interference on short-term memory for visual orientation. J Exp Psychol Hum Percept Perform 41:1650–1665. https://doi.org/10.1037/xhp0000110
    OpenUrlCrossRefPubMed
  55. ↵
    1. Rademaker RL,
    2. Chunharas C,
    3. Serences JT
    (2019) Coexisting representations of sensory and mnemonic information in human visual cortex. Nat Neurosci 22:1336–1344. https://doi.org/10.1038/s41593-019-0428-x
    OpenUrlCrossRefPubMed
  56. ↵
    1. Roussy M,
    2. Mendoza-Halliday D,
    3. Martinez-Trujillo JC
    (2021) Neural substrates of visual perception and working memory: two sides of the same coin or two different coins? Front Neural Circuits 15:764177. https://doi.org/10.3389/fncir.2021.764177
    OpenUrlCrossRefPubMed
  57. ↵
    1. Seeholzer A,
    2. Deger M,
    3. Gerstner W
    (2019) Stability of working memory in continuous attractor networks under the control of shortterm plasticity. PLoS Comput Biol 15:e1006928. https://doi.org/10.1371/journal.pcbi.1006928
    OpenUrlCrossRefPubMed
  58. ↵
    1. Seymour KJ,
    2. Stein T,
    3. Clifford CWG,
    4. Sterzer P
    (2018) Cortical suppression in human primary visual cortex predicts individual differences in illusory tilt perception. J Vis 18:3. https://doi.org/10.1167/18.11.3
    OpenUrlCrossRefPubMed
  59. ↵
    1. Silvanto J,
    2. Soto D
    (2012) Causal evidence for subliminal percept-to-memory interference in early visual cortex. Neuroimage 59:840–845. https://doi.org/10.1016/j.neuroimage.2011.07.062
    OpenUrlCrossRefPubMed
  60. ↵
    1. Spering M,
    2. Montagnini A
    (2011) Do we track what we see? Common versus independent processing for motion perception and smooth pursuit eye movements: a review. Vision Res 51:836–852. https://doi.org/10.1016/j.visres.2010.10.017
    OpenUrlCrossRefPubMed
  61. ↵
    1. Teng C,
    2. Kravitz DJ
    (2019) Visual working memory directly alters perception. Nat Hum Behav 3:827–836. https://doi.org/10.1038/s41562-019-0640-4
    OpenUrlPubMed
  62. ↵
    1. Teng C,
    2. Postle BR
    (2021) Spatial specificity of feature-based interaction between working memory and visual processing. J Exp Psychol Hum Percept Perform 47:495–507. https://doi.org/10.1037/xhp0000899
    OpenUrlPubMed
  63. ↵
    1. Thier P,
    2. Haarmeier T,
    3. Chakraborty S,
    4. Lindner A,
    5. Tikhonov A
    (2001) Cortical substrates of perceptual stability during eye movements. Neuroimage 14:33–39. https://doi.org/10.1006/nimg.2001.0840
    OpenUrlCrossRef
  64. ↵
    1. Trenner MU,
    2. Fahle M,
    3. Fasold O,
    4. Heekeren HR,
    5. Villringer A,
    6. Wenzel R
    (2008) Human cortical areas involved in sustaining perceptual stability during smooth pursuit eye movements. Hum Brain Mapp 29:300–311. https://doi.org/10.1002/hbm.20387
    OpenUrlCrossRefPubMed
  65. ↵
    1. Viswanathan P,
    2. Nieder A
    (2017) Visual receptive field heterogeneity and functional connectivity of adjacent neurons in primate frontoparietal association cortices. J Neurosci 37:8919–8928. https://doi.org/10.1523/JNEUROSCI.0829-17.2017
    OpenUrlAbstract/FREE Full Text
  66. ↵
    1. Weiss Y,
    2. Simoncelli EP,
    3. Adelson EH
    (2002) Motion illusions as optimal percepts. Nat Neurosci 56:598–604. https://doi.org/10.1038/nn0602-858
    OpenUrl
  67. ↵
    1. Wertheim AH
    (1987) Retinal and extraretinal information in movement perception: how to invert the Filehne illusion. Perception 16:299–308. https://doi.org/10.1068/p160299
    OpenUrlCrossRefPubMed
  68. ↵
    1. Wildegger T,
    2. Myers NE,
    3. Humphreys G,
    4. Nobre AC
    (2015) Supraliminal but not subliminal distracters bias working memory recall. J Exp Psychol Hum Percept Perform 41:826–839. https://doi.org/10.1037/xhp0000052
    OpenUrlCrossRefPubMed
  69. ↵
    1. Wimmer K,
    2. Nykamp DQ,
    3. Constantinidis C,
    4. Compte A
    (2014) Bump attractor dynamics in prefrontal cortex explains behavioral precision in spatial working memory. Nat Neurosci 17:431–439. https://doi.org/10.1038/nn.3645
    OpenUrlCrossRefPubMed
  70. ↵
    1. Yoon JH,
    2. Curtis CE,
    3. D’Esposito M
    (2006) Differential effects of distraction during working memory on delay-period activity in the prefrontal cortex and the visual association cortex. Neuroimage 29:1117–1126. https://doi.org/10.1016/j.neuroimage.2005.08.024
    OpenUrlCrossRefPubMed
  71. ↵
    1. Zahorodnii A,
    2. Mendoza-Halliday D,
    3. Martinez-Trujillo JC,
    4. Qian N,
    5. Desimone R,
    6. Cueva CJ
    (2025) Overcoming sensory-memory interference in working memory circuits. bioRxiv:2025.03.17.643652. https://doi.org/10.1101/2025.03.17.643652
  72. ↵
    1. Zaksas D,
    2. Pasternak T
    (2006) Directional signals in the prefrontal cortex and in area MT during a working memory for visual motion task. J Neurosci 26:11726–11742. https://doi.org/10.1523/JNEUROSCI.3420-06.2006
    OpenUrlAbstract/FREE Full Text
  73. ↵
    1. Zaksas D,
    2. Bisley JW,
    3. Pasternak T
    (2001) Motion information is spatially localized in a visual working-memory task. J Neurophysiol 86:912–921. https://doi.org/10.1152/jn.2001.86.2.912
    OpenUrlCrossRefPubMed
  74. ↵
    1. Zokaei N,
    2. Manohar S,
    3. Husain M,
    4. Feredoes E
    (2014) Causal evidence for a privileged working memory state in early visual cortex. J Neurosci 34:158–162. https://doi.org/10.1523/JNEUROSCI.2899-13.2014
    OpenUrlAbstract/FREE Full Text
Bias in Visual Short-Term Memory for Motion Induced by Perceptually Suppressed Distractors
Takeshi Miyamoto, Kosuke Numasawa
Journal of Neuroscience 3 December 2025, 45 (49) e1027252025; DOI: 10.1523/JNEUROSCI.1027-25.2025

Keywords

  • attractive bias
  • distraction effect
  • smooth pursuit eye movements
  • visual motion
  • visual working memory

Copyright © 2025 by the Society for Neuroscience.
JNeurosci Online ISSN: 1529-2401
