Vernier acuity with stationary moving Gabors
Abstract
We examined the ability of observers to determine the vertical alignment of three Gabor patches (cosine gratings tapered in X and Y by Gaussians) when the grating within the middle patch was moving right or left. The comparison patches were flickered in counterphase, as was the test patch in a control condition. In all conditions, the Gabor patch itself (the envelope) was stationary. Vernier acuity (i.e. sensitivity) was almost as good with the moving as with the flickering Gabors, but there was a very pronounced positional bias in the case of the patterns in which the internal gratings were moving. The (stationary) patches appeared to be displaced in the direction of the grating movement. Thus if the grating were drifting rightwards, the observer would see the patches as being aligned only when the test patch position in fact was shifted far over to the left. This movement-related bias increased rapidly with retinal eccentricity, reaching 15 min of arc at 8 deg eccentricity. The bias was greatest at 4–8 Hz temporal frequency, and at low spatial frequencies. Whether the patterns were on the horizontal or the vertical meridian was largely irrelevant, but larger biases were found with patterns moving towards or away from the fovea than with those moving in a tangential direction.
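The stimulus configuration described above — a cosine carrier drifting inside a Gaussian envelope that stays fixed in space — can be sketched as follows. This is an illustrative reconstruction, not the study's actual stimulus code; the parameter values (spatial frequency, temporal frequency, envelope width) are placeholders, not the values used in the experiments.

```python
import numpy as np

def drifting_gabor(x, y, t, sf=1.0, tf=4.0, sigma=0.5, theta=0.0):
    """Luminance of a drifting Gabor at position (x, y) deg and time t s.

    The Gaussian envelope is a function of position only, so it is
    stationary; only the carrier phase advances with time, which is the
    configuration the abstract describes.
    sf: carrier spatial frequency (cycles/deg), tf: drift rate (Hz),
    sigma: envelope s.d. (deg), theta: carrier orientation (rad).
    All defaults are illustrative.
    """
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))  # fixed in space
    xr = x * np.cos(theta) + y * np.sin(theta)
    carrier = np.cos(2 * np.pi * (sf * xr - tf * t))    # phase drifts with t
    return envelope * carrier
```

Because only the carrier phase depends on `t`, the contrast envelope never moves — yet, as the abstract reports, observers mislocalize the whole patch in the direction of the carrier drift.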
Cited by (271)
High-level aftereffects reveal the role of statistical features in visual shape encoding
2024, Current Biology
Visual shape perception is central to many everyday tasks, from object recognition to grasping and handling tools [1–10]. Yet how shape is encoded in the visual system remains poorly understood. Here, we probed shape representations using visual aftereffects — perceptual distortions that occur following extended exposure to a stimulus [11–17]. Such effects are thought to be caused by adaptation in neural populations that encode both simple, low-level stimulus characteristics [17–20] and more abstract, high-level object features [21–23]. To tease these two contributions apart, we used machine-learning methods to synthesize novel shapes in a multidimensional shape space, derived from a large database of natural shapes [24]. Stimuli were carefully selected such that low-level and high-level adaptation models made distinct predictions about the shapes that observers would perceive following adaptation. We found that adaptation along vector trajectories in the high-level shape space predicted shape aftereffects better than simple low-level processes. Our findings reveal the central role of high-level statistical features in the visual representation of shape. The findings also hint that human vision is attuned to the distribution of shapes experienced in the natural environment.
Deaf individuals use compensatory strategies to estimate visual time events
2023, Brain Research
Temporal perception is so profoundly linked to hearing that congenitally and early deaf individuals appear to experience visual temporal impairments. However, most studies investigated visual temporal perception in deaf individuals using static stimuli, while ecological objects with which we interact in everyday life often move across space and time. Given that deafness does not impact spatial metric representations, we hypothesize that, while the temporal perception of static stimuli is altered after early hearing loss, it can be enhanced by providing additional, ecologically relevant information. To evaluate our hypothesis, deaf and hearing participants were tested using an oddball-like visual temporal task. In such a task, participants had to temporally discriminate a Target embedded in a series of static stimuli, whose spatiotemporal structure was dynamically manipulated during the presentation. Our results highlighted that deaf participants could not successfully discriminate the Target’s duration when only temporal information was manipulated, while their temporal sensitivity significantly improved when coherent spatiotemporal information was displayed. Our findings suggest that deaf individuals might develop compensatory strategies based on other visual, non-temporal features to estimate external time events.
Spatial and temporal proximity of objects for maximal crowding
2022, Vision Research
Crowding refers to the deleterious visual interaction among nearby objects. Does maximal crowding occur when objects are closest to one another in space and time? We examined how crowding depends on the spatial and temporal proximity, retinally and perceptually, between a target and flankers. Our target was a briefly flashed T-stimulus presented at 10° right of fixation (3-o’clock position). It appeared at different target-onset-to-flanker asynchronies with respect to the instant when a pair of flanking Ts, revolving around the fixation target, reached the 3-o’clock position. Observers judged the orientation of the target-T (the crowding task), or its position relative to the revolving flankers (the flash-lag task). Performance was also measured in the absence of flanker motion: target and flankers were either presented simultaneously (closest retinal temporal proximity) with different angular spatial offsets, or were presented collinearly (closest retinal spatial proximity) with different temporal onset asynchronies. We found that neither retinal nor perceptual spatial or temporal proximity could account for when maximal crowding occurred. Simulations using a model based on feed-forward interactions between sustained and transient channels in static and motion pathways, taking into account the differential response latencies, can explain the crowding functions observed under various spatio-temporal conditions between the target and flankers.
Motion extrapolation in the flash-lag effect depends on perceived, rather than physical speed
2022, Vision Research
In the flash-lag effect (FLE), a flash in spatiotemporal alignment with a moving object is misperceived as lagging behind the moving object. One proposed explanation for this illusion is based on predictive motion extrapolation of trajectories. In this interpretation, the diverging effects of velocity on the perceived position of the moving object suggest that FLE might be based on the neural representation of perceived, rather than physical, velocity. By contrast, alternative explanations based on differential latency or temporal averaging would predict that the FLE does not rely on such a representation of perceived velocity. Here we examined whether the FLE is sensitive to illusory changes in perceived speed that result in changes to perceived velocity, while physical speed is constant. The perceived speed of the moving object was manipulated using revolving wedge stimuli with variable pattern textures (Experiment 1) and luminance contrast (Experiment 2). The motion extrapolation interpretation would predict that the changes in FLE magnitude should correspond to the changes in the perceived speed of the moving object. In the current study, two experiments demonstrated that perceived speed and FLE magnitude increased in the dynamic pattern relative to the static pattern conditions, and that the same effect was found in the low contrast compared to the high contrast conditions. These results showed that manipulations of texture and contrast that are known to alter judgments of perceived speed also modulate perceived position. We interpret this as a consequence of motion extrapolation mechanisms and discuss possible explanations for why we observed no cross-effect correlation.
The perceptual dynamics of the contrast induced speed bias
2022, Vision Research
In this article we present a temporal extension of the slow-motion prior model to generate predictions regarding the temporal evolution of the contrast-induced speed bias. We further tested these predictions using a novel experimental paradigm that allows us to measure the dynamic perceptual difference between stimuli through a series of manual-pursuit open-loop tasks. Results show good agreement with our model’s predictions. The main findings reveal that hand speed dynamics are affected by stimulus contrast in a way that is consistent with a dynamic model of motion perception that assumes a slow-motion prior. The proposed model also confirms observations made in previous studies that suggest that the motion bias persisted even at high contrast as a consequence of the dynamics of the slow-motion prior.
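The slow-motion-prior account underlying this model can be sketched in its simplest static form: a Gaussian likelihood for the measured speed combined with a Gaussian prior centered on zero speed, so that noisier (lower-contrast) measurements are pulled more strongly toward slow. This is a minimal illustration of the general idea, not the article's actual temporal model; the inverse-contrast noise scaling and all parameter values are assumptions for illustration.

```python
def perceived_speed(v_true, contrast, sigma_prior=1.0, k=0.2):
    """Posterior-mean speed estimate under a zero-centered Gaussian prior.

    Likelihood s.d. is assumed to scale as k / contrast (an illustrative
    choice): lower contrast -> broader likelihood -> the posterior mean
    shrinks further toward zero, i.e. the stimulus looks slower.
    """
    sigma_like = k / contrast
    # Standard Gaussian-Gaussian shrinkage weight on the measurement.
    w = sigma_prior**2 / (sigma_prior**2 + sigma_like**2)
    return w * v_true
```

For example, with these placeholder parameters a 10 deg/s stimulus is perceived as nearly veridical at full contrast but markedly slower at 10% contrast — the contrast-induced speed bias the article builds on.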
Adaptation to geometrically skewed moving images: An asymmetrical effect on the double-drift illusion
2021, Vision Research
Progressive addition lenses introduce distortions in the peripheral visual field that alter both form and motion perception. Here we seek to understand how our peripheral visual field adapts to complex distortions. The adaptation was induced across the visual field by geometrically skewed image sequences, and aftereffects were measured via changes in perception of the double-drift illusion. The double-drift or curveball stimulus contains both local and object motion. Therefore, the aftereffects induced by geometrical distortions might be indicative of how this adaptation interacts with the local and object motion signals.
In the absence of the local motion components, the adaptation to skewness modified the perceived trajectory of object motion in the direction opposite to the skew of the adaptation stimulus. This effect demonstrates that the environment can also tune perceived object trajectories. Testing with the full double-drift stimulus, adaptation to a skew in the direction opposite to the local motion component induced a change in perception, reducing the illusion magnitude when the stimulus was presented on the right side of the screen; when stimuli were on the left side, the shift was not statistically significant. However, adaptation to the other orientation resulted in no change in the strength of the double-drift illusion at either stimulus location. Thus, it seems that the adaptor's orientation and the motion statistics of the stimulus jointly define the perception of the measured aftereffect.
In conclusion, the double-drift illusion is affected not only by size, contrast, or drifting speed, but also by adaptation to image distortions.