Journal of Neuroscience
Articles, Behavioral/Systems/Cognitive

Summation of Visual Motion across Eye Movements Reflects a Nonspatial Decision Mechanism

Adam P. Morris, Charles C. Liu, Simon J. Cropper, Jason D. Forte, Bart Krekelberg and Jason B. Mattingley
Journal of Neuroscience 21 July 2010, 30 (29) 9821-9830; https://doi.org/10.1523/JNEUROSCI.1705-10.2010

Abstract

Human vision remains perceptually stable even though retinal inputs change rapidly with each eye movement. Although the neural basis of visual stability remains unknown, a recent psychophysical study pointed to the existence of visual feature-representations anchored in environmental rather than retinal coordinates (e.g., “spatiotopic” receptive fields; Melcher and Morrone, 2003). In that study, sensitivity to a moving stimulus presented after a saccadic eye movement was enhanced when preceded by another moving stimulus at the same spatial location before the saccade. The finding is consistent with spatiotopic sensory integration, but it could also have arisen from a probabilistic improvement in performance due to the presence of more than one motion signal for the perceptual decision. Here we show that this statistical advantage accounts completely for summation effects in this task. We first demonstrate that measurements of summation are confounded by noise related to an observer's uncertainty about motion onset times. When this uncertainty is minimized, comparable summation is observed regardless of whether two motion signals occupy the same or different locations in space, and whether they contain the same or opposite directions of motion. These results are incompatible with the tuning properties of motion-sensitive sensory neurons and provide no evidence for a spatiotopic representation of visual motion. Instead, summation in this context reflects a decision mechanism that uses abstract representations of sensory events to optimize choice behavior.

Introduction

The stability of perception relies on a spatial coding scheme that takes into account changes in gaze direction. In principle, gaze information could be used to construct receptive fields that are selective for a region of external space rather than a region of the retina. To date, however, physiological investigations have yielded little evidence for such a spatiotopic coding scheme (but see Galletti et al., 1993; Duhamel et al., 1997). Other studies have probed for spatiotopic representations by measuring perceptual interactions between stimuli presented before and after a saccade at a common spatial position (McConkie and Zola, 1979; Bridgeman and Mayer, 1983; O'Regan and Lévy-Schoen, 1983; Irwin et al., 1988). This psychophysical approach has supported the primacy of retina-centered rather than environment-centered representations (Prime et al., 2006).

In a recent study, however, Melcher and Morrone (2003) observed changes in perceptual thresholds for visual motion that are consistent with spatiotopic coding (Fig. 1). In that study, observers monitored randomly moving dots in the periphery for the arrival of two brief probe-intervals (M1 and M2) in which a proportion of the dots moved in a common direction. Their task was to determine the direction of motion. Participants either maintained their gaze, such that both motion signals occupied the same spatial and retinal location (Fig. 1A); or they performed an eye movement such that M1 and M2 occupied the same position in space but different positions on the retina (Fig. 1B). In both conditions, the authors observed an enhancement of sensitivity compared with a single-motion baseline.

Figure 1.

Example trials of the motion discrimination task used by Melcher and Morrone (2003) to probe for transsaccadic integration of visual motion. Each trial contained either one or two intervals of coherent motion (arrows) positioned at the temporal center (±500 ms) of 10 s of spatiotemporal noise (0% coherence). The observer's task was to determine whether the direction of motion was leftward or rightward. The cross-hair indicates the position of gaze throughout the trial. A, The fixation task. Gaze was maintained on the upper fixation point for the duration of the trial. B, The saccadic task. Observers performed a 12° saccade from the upper fixation point to the lower fixation point during the ISI that separated the first (M1) and second (M2) motion signal.

The prevailing interpretation of these findings is that the enhancement reflects temporal integration of sensory inputs to spatiotopically tuned motion detectors early in the visual system (Melcher et al., 2004; Melcher, 2005; Prime et al., 2006; d'Avossa et al., 2007; Melcher, 2007). Increased sensitivity for the dual-motion condition may be expected, however, even without a spatiotopic representation in the brain. For example, observers could respond on the basis of the stronger of two sensory representations (“probability summation”; Watson, 1979; Meese and Williams, 2000; Tyler and Chen, 2000), or combine estimates of motion direction during a decision process akin to statistical inference (Gold and Shadlen, 2000, 2007; Knill and Pouget, 2004). The doubling of sensitivity in the dual-motion condition observed by Melcher and Morrone (2003) is greater than that predicted by these decision-stage accounts. In a first experiment, however, we show that sensitivity in the single-motion condition—but not the dual-motion condition—is greatly reduced by uncertainty about the onset time of the near-threshold motion signals. This factor, which was not controlled in the original study, thus confounds estimates of summation.

We measured the dual-motion advantage reported by Melcher and Morrone (2003) under a variety of novel conditions that allowed a more complete characterization of the underlying mechanism. Our results show that when temporal uncertainty is minimized, probabilistic decision mechanisms account completely both for our own findings and for those of Melcher and Morrone. This new perspective relies only on well established principles of perceptual decision-making and not on a spatiotopic representation of visual motion.

Materials and Methods

All experimental procedures were approved by the University of Melbourne Human Research Ethics Committee and the Institutional Review Board of Rutgers University.

The stimuli, procedures, and analyses used in the current experiments were comparable to those used in the original study. Some methodological details that were not reported in the original paper were obtained directly from one of the authors (M. C. Morrone, personal communication, December 3, 2006). Note that in our experiments, observers discriminated upward from downward motion and the random-dot stimuli were located to the left and/or right of fixation (i.e., as if the display used in the original study had been rotated by 90°). This configuration was selected for compatibility with a concurrent fMRI investigation of summation (in which motion signals could evoke lateralized BOLD responses) (Merriam et al., 2003). Experiment 1 replicated the summation effect reported by Melcher and Morrone (2003) using our modified design (see Fig. 3A).

Participants

A total of 12 observers (10 males, two females) participated in the experiments, including four authors (A.P.M., J.B.M., S.J.C., and J.D.F.), two of whom were naive to the specific purpose of the experiments at the time of their participation (S.J.C. and J.D.F.), and seven naive observers (L.M., K.M., P.G., A.T., J.D., F.R., and M.Q.). Four observers participated in each of the experiments. All participants had normal vision and were aged between 22 and 42 years.

Stimuli and apparatus

Visual and auditory stimuli were generated and presented using Matlab software (MathWorks) in conjunction with the OpenGL-based Psychophysics Toolbox extension (version 3) (Brainard, 1997), running on a Pentium-class computer under a Windows XP (SP2) (Microsoft) environment. Visual stimuli were displayed on a linearized 22-inch cathode ray tube monitor (1280 × 1024 resolution) with a refresh rate of 60 Hz and viewed from a distance of 57 cm. A chin rest (experiments 1–4) or bite-bar (experiment 5) was used to stabilize the viewing position. All experiments were performed in a dimly illuminated testing cubicle.

The random dot motion pattern comprised 58 circular dots (diameter = 0.15°) confined to a 6°×6° square region. Half of the dots were luminance increments and the other half luminance decrements of equal contrast (Weber contrast = −98% and +98%) against a uniform gray background (mean luminance = 35.1 cd/m2). Each frame comprised complementary proportions of “signal” and “noise” dots. Noise dots were replotted at random positions within the aperture on each frame to generate spatiotemporal noise. Signal dots were displaced from their previous positions either upward or downward (depending on the direction of motion assigned to the trial), by a distance consistent with a dot speed of 10°/s. Dots that were selected as signal dots on one frame were ineligible to be selected in the subsequent frame. Signal dots therefore had a limited lifetime of two frames and the maximum level of motion coherence possible was 50%. The frame rate of the random dot stimulus matched the refresh rate of the display (60 Hz). During noise-only intervals, the proportion of noise and signal dots was set to one and zero, respectively. During signal-plus-noise intervals (i.e., coherent motion), the proportion of signal and noise dots was determined by the coherence value assigned to that trial by the adaptive QUEST algorithm (Watson and Pelli, 1983) (see Procedure).
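The frame-by-frame dot-update rule described above (random replotting of noise dots, coherent displacement of signal dots, and a two-frame signal-dot lifetime) can be sketched in a few lines. The following Python fragment is our own illustration, not the Matlab/Psychophysics Toolbox code used in the study; all function and variable names are ours:

```python
import random

N_DOTS = 58          # dots in the aperture
APERTURE = 6.0       # side of the square region (deg)
SPEED = 10.0         # dot speed (deg/s)
FRAME_RATE = 60.0    # Hz
STEP = SPEED / FRAME_RATE  # displacement per frame (deg)

def update_frame(dots, prev_signal, coherence, direction=+1):
    """Advance the dot field by one frame.

    dots: list of (x, y) positions; prev_signal: indices that carried
    the signal on the previous frame (ineligible now, giving a
    two-frame lifetime); coherence: proportion of signal dots (<= 0.5
    given the lifetime rule). Returns (new_dots, new_signal_indices).
    """
    n_signal = round(coherence * N_DOTS)
    eligible = [i for i in range(N_DOTS) if i not in prev_signal]
    signal = set(random.sample(eligible, n_signal))
    new_dots = []
    for i, (x, y) in enumerate(dots):
        if i in signal:
            # coherent displacement, wrapping at the aperture edge
            new_dots.append((x, (y + direction * STEP) % APERTURE))
        else:
            # noise dots are replotted at random positions
            new_dots.append((random.uniform(0, APERTURE),
                             random.uniform(0, APERTURE)))
    return new_dots, signal

dots = [(random.uniform(0, APERTURE), random.uniform(0, APERTURE))
        for _ in range(N_DOTS)]
dots, sig1 = update_frame(dots, set(), coherence=0.5)
dots, sig2 = update_frame(dots, sig1, coherence=0.5)
```

Because dots that carried the signal on one frame are ineligible on the next, at most half of the dots can be signal dots on any frame, which is why the maximum possible coherence is 50%.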

The auditory cue used to reduce temporal uncertainty in all experiments was a brief, pure tone (60 dB, 500 Hz, 70 ms) presented bilaterally in free-field via speakers mounted behind each side of the display.

Eye position was recorded using an infrared eye tracking system (Eyelink II; SR Research) for the saccadic experiment (experiment 5).

Procedure

Each trial consisted of either one (“single-motion”) or two (“dual-motion”) coherent motion signals (150 ms duration) embedded within 10 s of spatiotemporal noise (0% coherence). The two motion signals in the dual-motion condition (M1 and M2) were separated by an interstimulus interval (ISI) of 1000 ms, and were yoked to the same coherence level and direction of motion (except for experiment 3, see Results for details). Motion sign (upward or downward) was selected randomly at the start of each trial. Observers were required either to identify the direction of motion (upward or downward; experiments 1, 2, 4, and 5) or to determine whether the trial contained “signal-plus-noise” or “noise-only” (experiment 3). Observers indicated their response at the end of the trial by pressing one of two buttons on a keyboard, and feedback regarding response accuracy was provided by a change in the color of the fixation point (correct: green; incorrect: red). An adaptive algorithm (QUEST; Watson and Pelli, 1983) was used to set motion coherence on each trial to the current estimate of the signal strength required to yield 75% correct responses. Note that the QUEST algorithm was used only to specify the strength of the motion signal on each trial, and not to provide a final estimate of the observer's sensitivity for a given condition (see Data analysis). At least six QUEST sessions of 40 trials each were run for each condition in each experiment. The order in which sessions were completed was counterbalanced within and across observers in each experiment.
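QUEST maintains a Bayesian posterior over the observer's threshold and places each trial's coherence at the current best estimate. The following is a minimal sketch of that updating scheme; it is our own illustration with arbitrary Weibull parameters and grid, not the Watson and Pelli (1983) implementation or the settings used in the study:

```python
import math

def weibull_p(c, thresh, beta=3.5, gamma=0.5, delta=0.01):
    """Probability correct at coherence c given a candidate threshold
    (Weibull psychometric function with a small lapse rate delta)."""
    p = 1.0 - (1.0 - gamma) * math.exp(-((c / thresh) ** beta))
    return (1.0 - delta) * p + delta * gamma

# geometrically spaced candidate thresholds with a flat prior
grid = [10 ** (-2 + 2 * i / 199) for i in range(200)]   # 0.01 .. 1.0
posterior = [1.0 / len(grid)] * len(grid)

def next_coherence():
    """Place the next trial at the posterior mean of the threshold."""
    return sum(t * p for t, p in zip(grid, posterior))

def update(c, correct):
    """Bayesian update of the threshold posterior after one trial."""
    global posterior
    like = [weibull_p(c, t) if correct else 1.0 - weibull_p(c, t)
            for t in grid]
    posterior = [l * p for l, p in zip(like, posterior)]
    z = sum(posterior)
    posterior = [p / z for p in posterior]

# a correct response at 20% coherence shifts the estimate downward
before = next_coherence()
update(0.20, correct=True)
after = next_coherence()
```

A correct response is more likely under low candidate thresholds, so the posterior mass (and hence the next test coherence) moves downward; errors move it upward.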

Note that the aim of the current study was to probe further the basic mechanisms that underlie the perceptual advantage observed for the dual-motion condition in Melcher and Morrone (2003); the hypotheses apply equally to the conditions in which gaze was fixed throughout the trial and to those in which a saccade was performed between the presentations of motion. Thus, all experiments except experiment 5 were performed without eye movements.

Experiment 1.

Observers maintained gaze on a fixation point (diameter = 0.3°) located 6° to the left of a central random dot motion stimulus for the entire trial (Fig. 2A). The motion signal in the single-motion condition occurred at the temporal center of the trial, plus a random offset of up to ±500 ms (i.e., motion onset was 4425–5425 ms after the start of the trial). The two motion signals in the dual-motion condition (separated by a 1000 ms ISI) straddled the temporal center of the trial, again with a random offset of up to ±500 ms (i.e., M1 onset occurred 3850–4850 ms after the start of the trial). These motion onset times matched those used in the study by Melcher and Morrone (2003). Trial blocks consisted of either "cued" trials or "uncued" trials. On cued trials, an auditory tone was presented 150 ms before the onset of the motion signals. In the dual-motion condition, the cue was presented before the onset of M2 (M1 was uncued). No cue was presented on uncued trials, as in the study by Melcher and Morrone (2003). In a variant of the uncued condition, the noise-only epochs at the start and end of each trial were trimmed to shorten the duration of the random-dot pattern from 10 to 5 s. The shorter interval between the onset of the random-dot pattern and the onset of coherent motion should reduce temporal uncertainty without explicit cuing (Fraisse, 1984; Gibbon et al., 1997; Leon and Shadlen, 1999, 2003; Gallistel and Gibbon, 2000; Janssen and Shadlen, 2005). To further increase the predictability of the stimulus in this variant, motion onset times were not jittered, unlike in all other experiments.

Figure 2.

Temporal uncertainty about motion onset confounds comparisons of sensitivity in single- and dual-motion conditions. A, The single-motion (left panel) and dual-motion (right panel) conditions used in experiment 1 of the current study. In cued blocks, an auditory tone marked the onset of the motion signal (M2 in the dual-motion condition) on each trial. In uncued blocks, no auditory temporal information was provided, as in the experiments of Melcher and Morrone (2003). Temporal uncertainty arises in uncued conditions because the motion signals occur at unpredictable times and are highly confusable with the leading and trailing noise. B, Psychometric (Weibull) functions relating sensitivity (d′) to motion coherence (the proportion of signal dots) for each observer in single-motion (left) and dual-motion (right) conditions, separately for cued and uncued conditions. The vertical lines that project to the abscissa indicate coherence thresholds for each curve. Error bars represent SEs. The cue had a strong effect on sensitivity in the single-motion condition but virtually no effect on sensitivity in the dual-motion condition.

Experiment 2.

The spatial layout of the fixation point and random-dot stimulus in experiment 2 was identical to that of experiment 1, as was the timing of motion signals in the dual-motion condition. Unlike experiment 1, however, thresholds were measured separately for each component of the dual-motion condition. Thus, there were two single-motion conditions: one that included M1 only, and a second that included M2 only. Each component motion signal was presented at the same time as it occurred in the dual-motion condition. A cue was presented 150 ms before the nominal onset of M1 in all conditions (see Fig. 4A).

Experiment 3.

Experiment 3 was identical to experiment 2 with the following exceptions. The motion signals (one or two, depending on the condition for that block) were presented on only half of the trials (selected at random); the remaining trials contained incoherent motion for the duration of the trial. The observer's task was to determine whether the trial contained signal-plus-noise or noise-only. There were two dual-motion conditions: one in which M1 and M2 moved in the same direction ("correlated"), as in previous experiments, and a second in which M1 and M2 moved in opposite directions ("anticorrelated"; see Fig. 5A).

Experiment 4.

Experiment 4 was identical to experiment 2 with the following exceptions. M1 and M2 were presented within separate random-dot patterns centered 6° to the right and left of a central fixation point, respectively (see Fig. 6A). Importantly, the center-to-center retinal separation of the two motion signals (12°) in the dual-motion condition of experiment 4 matched that of the transsaccadic condition of Melcher and Morrone (2003). To help observers attend covertly to the appropriate location at the appropriate time, the contrast of the dots in the right aperture (where the onset of M1 was pending) was set to double that of the dots in the left aperture for the first half of the trial, and vice versa for the second half. Specifically, the Weber contrast of the dots in the left and right apertures was set initially to ±49% and ±98%, respectively. During the middle of the 1000 ms interval between M1 and M2, these differential contrast values for the left and right apertures were switched smoothly using complementary Gaussian contrast ramps (full-width half-maximum = 500 ms), such that the sum of the contrasts across both apertures was constant. This continuous transition prevented the sense of a sudden jump in the display that would otherwise arise from switching the contrast values with a step function.
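The smooth contrast swap can be modeled as a complementary crossfade whose weight follows a cumulative Gaussian with the stated 500 ms full-width half-maximum. The sketch below is our own reconstruction under that assumption (the exact ramp function is not fully specified above); the contrast values are those given in the text:

```python
import math

FWHM = 500.0  # full-width half-maximum of the ramp (ms)
SIGMA = FWHM / (2.0 * math.sqrt(2.0 * math.log(2.0)))  # ~212 ms

def crossfade(t, t0=0.0):
    """Weight rising smoothly from 0 to 1 around t0 (cumulative Gaussian)."""
    return 0.5 * (1.0 + math.erf((t - t0) / (SIGMA * math.sqrt(2.0))))

def aperture_contrasts(t):
    """Unsigned Weber contrast of the left and right apertures at time t
    (ms relative to the mid-ISI switch); the summed contrast is constant."""
    w = crossfade(t)
    left = 0.49 + w * (0.98 - 0.49)   # ramps up from 49% to 98%
    right = 0.98 - w * (0.98 - 0.49)  # ramps down from 98% to 49%
    return left, right

early = aperture_contrasts(-1000.0)  # right aperture (M1 pending) high
late = aperture_contrasts(+1000.0)   # left aperture (M2 pending) high
```

Because the two ramps are exact complements, the total contrast in the display never changes, which is what prevents the sense of a sudden jump.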

Experiment 5.

Experiment 5 was identical to experiment 2 with the following exceptions. The display contained separate random-dot patterns positioned 6° above and below the horizontal meridian and centered horizontally (see Fig. 7A). As in experiment 4, the center-to-center retinal separation of these patches matched that of the transsaccadic condition of Melcher and Morrone (2003). Shortly after (450 ms) the nominal onset of M1, the fixation point stepped from 6° to the left of the center of the display to 6° to the right. Observers followed the fixation point with a 12° saccadic eye movement. Trials were rejected if the eye movement was performed at the wrong time (i.e., if the saccade latency was not within 80–400 ms) or was spatially inaccurate (i.e., if the primary saccade failed to land within 3° of the saccade target position). There were two dual-motion conditions: one in which M1 and M2 appeared within the upper patch ("matched" condition), and a second in which M1 and M2 appeared in the upper and lower patches, respectively ("nonmatched" condition). There were three separate single-motion conditions (M1-only [upper patch], M2-only [upper patch], and M2-only [lower patch]).
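The trial-rejection criteria can be written as a small validity check. This helper is our own illustration of the two stated criteria (saccade latency within 80–400 ms; primary saccade landing within 3° of the target); the function name and arguments are hypothetical:

```python
import math

def valid_saccade_trial(latency_ms, landing_xy, target_xy):
    """Accept a trial only if the saccade was timely and accurate."""
    if not (80.0 <= latency_ms <= 400.0):
        return False  # eye movement performed at the wrong time
    dx = landing_xy[0] - target_xy[0]
    dy = landing_xy[1] - target_xy[1]
    return math.hypot(dx, dy) <= 3.0  # landed within 3 deg of target

accepted = valid_saccade_trial(180.0, (6.5, 0.4), (6.0, 0.0))
too_late = valid_saccade_trial(450.0, (6.0, 0.0), (6.0, 0.0))
inaccurate = valid_saccade_trial(200.0, (10.0, 0.0), (6.0, 0.0))
```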

Data analysis

Calculation of sensitivity (d′).

The data from each observer were analyzed separately. For each condition in the discrimination experiments, the proportions of correct responses on upward and downward motion trials, PU(C) and PD(C), were determined separately at each coherence level. These proportion scores were then converted to a bias-free measure of sensitivity (d′) for each bin using the following formula: d′ = (z[PU(C)] + z[PD(C)]) / √2 (Eq. 1), where z[·] denotes the inverse normal (z-score) transformation. For experiment 3 (a yes-no detection task), d′ was instead calculated as z[H] − z[F], where H is the hit-rate for signal-plus-noise trials and F is the false-alarm rate for noise-only trials; these d′ values were not divided by the factor of √2. This difference in calculation ensured that sensitivity measures obtained from the different task designs (two-alternative forced-choice vs yes-no) were nevertheless comparable (Macmillan and Creelman, 1991). Corrected values of 0.99 and 0.01 were substituted for any bins in which the observed proportion correct (or incorrect) was equal to 1 or 0, respectively.
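The sensitivity calculation, including the substitution of corrected proportions, is straightforward to express in code. The sketch below is our own illustration using Python's standard-library inverse normal, not the analysis code used in the study:

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse normal (z-score) transformation

def clip(p, lo=0.01, hi=0.99):
    """Substitute corrected values for proportions of 0 or 1."""
    return min(max(p, lo), hi)

def dprime_discrimination(p_up, p_down):
    """Bias-free d' from proportions correct on upward and downward
    trials, divided by sqrt(2) as described in the text."""
    return (z(clip(p_up)) + z(clip(p_down))) / 2 ** 0.5

def dprime_yes_no(hit_rate, fa_rate):
    """Experiment 3 (yes-no detection): d' = z[H] - z[F], undivided."""
    return z(clip(hit_rate)) - z(clip(fa_rate))

d_disc = dprime_discrimination(0.75, 0.75)  # approx 0.95
d_det = dprime_yes_no(0.80, 0.20)           # approx 1.68
```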

Model-fitting.

To provide a continuous description of how sensitivity (d′) related to motion coherence, the binned data for each condition and observer were fitted with a cumulative Weibull function, F(c), of the following form: F(c) = α[1 − exp(−(c/β)^γ)] (Eq. 2), where c is the motion coherence for the bin and α, β, and γ are the asymptote, spread, and shape parameters, respectively (Wichmann and Hill, 2001a). Because observers tended to achieve near-perfect accuracy at high levels of coherence, the asymptote parameter was fixed to the value of d′ that corresponded to proportions correct, PU(C) and PD(C), of 0.99 (equivalently, hit and false-alarm rates of 0.99 and 0.01). For the detection experiment, the asymptote parameter was fixed to the value of d′ that corresponded to a hit-rate of 0.99 and a false-alarm rate equal to the average false-alarm rate across all conditions to be compared. The two free parameters of the model, β and γ, were estimated by minimizing the χ² statistic: χ² = Σᵢ [d′ᵢ(c) − Fᵢ(c)]² / var[d′ᵢ] (Eq. 3), where d′ᵢ(c) and Fᵢ(c) are the observed and fitted data points, respectively, at coherence c, summed over the N bins. The variance of the d′ estimate for each bin, var[d′ᵢ], was calculated using the method of Miller (1996) (see his Eqs. 6–8), which is the preferred method for estimating the variance of d′ estimates based on small sample sizes. This was necessary because the adaptive algorithm used to determine test values of coherence in these experiments (QUEST) often generated only a small number of trials in some regions of the coherence scale. An important property of the cost function in Equation 3 is that it weights the model fit in favor of the most reliable data points. Note that the variance estimates provided by the equations of Miller (1996) were divided by 2 for the discrimination experiments, reflecting the √2 term in the definition of d′ (Eq. 1).
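The variance-weighted fit can be reproduced with a coarse grid search over β and γ. This is our own sketch, not the study's fitting code; the asymptote value, parameter grids, and data points below are illustrative assumptions:

```python
import math

ALPHA = 3.29  # illustrative fixed asymptote (d' at the ceiling proportions)

def weibull(c, beta, gamma):
    """Cumulative Weibull: F(c) = ALPHA * (1 - exp(-(c/beta)**gamma))."""
    return ALPHA * (1.0 - math.exp(-((c / beta) ** gamma)))

def chi2(data, beta, gamma):
    """Variance-weighted squared error over the binned d' estimates,
    so the fit favors the most reliable data points."""
    return sum((d - weibull(c, beta, gamma)) ** 2 / var
               for c, d, var in data)

def fit(data):
    """Minimize chi2 over a coarse grid of spread/shape parameters."""
    best = None
    for b in [0.05 + 0.005 * i for i in range(60)]:   # beta: 0.05-0.345
        for g in [0.5 + 0.1 * j for j in range(30)]:  # gamma: 0.5-3.4
            err = chi2(data, b, g)
            if best is None or err < best[0]:
                best = (err, b, g)
    return best[1], best[2]

# illustrative binned data: (coherence, observed d', variance of d')
data = [(0.05, 0.4, 0.04), (0.10, 1.1, 0.05),
        (0.20, 2.2, 0.06), (0.40, 3.1, 0.08)]
beta, gamma = fit(data)
```

In practice a continuous optimizer would replace the grid, but the weighting by var[d′ᵢ] is the essential feature: noisy bins pull on the fit far less than well-sampled ones.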

Motion coherence thresholds.

The final coherence threshold estimate for a given condition was obtained by substituting the parameters of the best-fitting model into the equation for the Weibull function (Eq. 2) and solving for c when d′ was equal to 1.35. This point on the Weibull function corresponds to ∼75% correct detections. The SE of this point estimate was obtained by a nonparametric bootstrap procedure in which the correct and incorrect responses from each bin were resampled (with replacement) to produce new estimates of d′ across bins (Wichmann and Hill, 2001a). A psychometric function was fitted to each bootstrap sample and a corresponding threshold estimate was derived (as was done for the original dataset). For each observer, the SD of 10,000 bootstrap estimates was used to represent the SE of the reported coherence threshold.
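Because the Weibull is invertible, the threshold at the d′ = 1.35 criterion has a closed form that each bootstrap refit can reuse. A sketch with hypothetical parameter values (the α here is illustrative):

```python
import math

ALPHA = 3.29  # illustrative fixed Weibull asymptote

def weibull(c, beta, gamma):
    """F(c) = ALPHA * (1 - exp(-(c/beta)**gamma))."""
    return ALPHA * (1.0 - math.exp(-((c / beta) ** gamma)))

def threshold(beta, gamma, criterion=1.35):
    """Solve F(c) = criterion for c (the coherence threshold)."""
    return beta * (-math.log(1.0 - criterion / ALPHA)) ** (1.0 / gamma)

# hypothetical best-fitting parameters for one condition
c_thr = threshold(beta=0.15, gamma=1.3)
check = weibull(c_thr, 0.15, 1.3)  # recovers the criterion
```

The SE then follows as described: refit resampled data 10,000 times, apply this inversion to each refit, and take the SD of the resulting thresholds.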

Summation in experiment 1 was quantified by the ratio of single- and dual-motion coherence thresholds. Confidence intervals for these ratio estimates were calculated by taking the ratio of the coherence threshold estimates for each bootstrap sample, and then noting the 2.5th and 97.5th percentiles of the resultant ratio distribution. Using this percentile method, differences in sensitivity were considered significant if the confidence interval for the threshold ratio of the two conditions did not include one (Carpenter and Bithell, 2000; Wichmann and Hill, 2001b). The method for calculating estimates of summation for experiments 2–5 is described in Results.
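The percentile method amounts to sorting the bootstrapped ratios and reading off the 2.5% and 97.5% points. The sketch below is our own illustration on toy bootstrap distributions (the numbers are hypothetical, not the study's data):

```python
import random

def ratio_ci(single_boot, dual_boot):
    """95% percentile confidence interval for the single/dual
    threshold ratio, from paired bootstrap threshold estimates."""
    ratios = sorted(s / d for s, d in zip(single_boot, dual_boot))
    lo = ratios[int(0.025 * len(ratios))]
    hi = ratios[int(0.975 * len(ratios)) - 1]
    return lo, hi

rng = random.Random(1)  # fixed seed for reproducibility
single = [rng.gauss(0.20, 0.01) for _ in range(10000)]  # toy thresholds
dual = [rng.gauss(0.14, 0.01) for _ in range(10000)]
lo, hi = ratio_ci(single, dual)
significant = lo > 1.0  # summation is significant if the CI excludes one
```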

Results

Summation is modulated by temporal uncertainty

To assess the effects of temporal uncertainty on performance, experiment 1 measured sensitivity in single- and dual-motion conditions comparable to the fixation condition of Melcher and Morrone (2003), with and without an auditory cue that marked the onset time of the coherent motion (Fig. 2A). On cued trials, the cue was presented before the onset of the motion signal in the single-motion condition and before the onset of M2 in the dual-motion condition. Thus, in both single- and dual-motion conditions, there was only a single motion signal for which temporal uncertainty was eliminated. On uncued trials, motion onset was never flagged by a cue, as in Melcher and Morrone's (2003) experiments. Importantly, the visual parameters of the task were identical in cued and uncued conditions.

Figure 2B plots psychometric functions relating sensitivity (d′) to motion coherence for cued (thick lines) and uncued (thin lines) conditions in four observers. For a single motion signal embedded within the random-dot pattern (left panel), the temporal cue had a strong effect on sensitivity, as indicated by a leftward shift of the psychometric function for each of the four observers. Coherence thresholds measured for the uncued condition were reliably higher than those obtained in the cued condition (by a factor of 1.66 on average; SEM = 0.11). This strong effect of cuing on performance confirms that temporal uncertainty degrades sensitivity for discrimination of a single motion signal. In contrast, the cue had only a modest effect on performance in the dual-motion condition for most observers (right panel). Coherence thresholds for the uncued dual-motion condition were higher than for the cued dual-motion condition by a factor of just 1.19 on average (SEM = 0.09). This latter result implies that the second motion signal is subject to minimal uncertainty even in the absence of an explicit temporal cue. This would occur, for example, if partial information about the first motion signal—information that was insufficient to sustain reliable direction discrimination—provided a temporal cue to the impending onset of the second motion signal. Regardless of the specific mechanism, the differential effect of cuing on performance in the single- and dual-motion conditions implies that temporal uncertainty may have inflated estimates of summation in the study by Melcher and Morrone (2003).

Figure 3 provides a direct comparison of coherence thresholds in the single- and dual-motion conditions with and without the cue. The right panels show the ratio of coherence thresholds for single- and dual-motion conditions for each observer. A threshold ratio of one indicates that sensitivity for the single- and dual-motion conditions was equivalent (i.e., zero summation), whereas a ratio of two indicates that sensitivity in the dual-motion condition was twice that observed in the single-motion condition (i.e., as expected for a linear integrator; Morrone et al., 1995; Burr and Santoro, 2001). In the absence of a temporal cue, sensitivity for the dual-motion condition was higher than that observed for the single-motion condition for all observers (Fig. 3A). This summation is reflected by the threshold ratios, which were well above 1 [mean (M) = 1.41, SEM = 0.06]. This clear advantage for the dual-motion condition replicates that reported by Melcher and Morrone (2003), though the effect appears somewhat smaller than in the original study. We address this issue in the Discussion. Strikingly, the provision of a temporal cue abolished the dual-motion advantage entirely: threshold ratios were ∼1 for all observers (M = 1.03; SEM = 0.06; Fig. 3B). Given that the visual parameters of the task were identical in cued and uncued conditions, these findings are difficult to explain in terms of integration early in the visual system. If it is assumed that visual motion detectors accumulate inputs obligatorily within a finite temporal window (Burr and Santoro, 2001), summation should have been the same for the cued and uncued conditions.

Figure 3.

Summation is strongly modulated by temporal uncertainty. A, Performance for the uncued conditions of experiment 1. Coherence thresholds (left) and threshold ratios (right) for uncued single- and dual-motion conditions indicated that sensitivity for the dual-motion condition was reliably higher than that of the single-motion condition, replicating the findings of Melcher and Morrone (2003). Error bars represent SEs (left) and 95% confidence intervals (right) derived from bootstrap analyses (see Materials and Methods). B, Performance for the cued conditions of experiment 1, in the same format as A. Sensitivity for a cued motion signal was the same regardless of whether it was preceded by a second motion signal, even though the visual parameters of the task were the same as for the uncued trials.

In a variant of the uncued condition, we found that halving the overall duration of the trial—which increases the predictability of motion onset without explicit cuing (Fraisse, 1984; Gibbon et al., 1997; Leon and Shadlen, 1999, 2003; Gallistel and Gibbon, 2000; Janssen and Shadlen, 2005)—also markedly reduces summation, even though the duration of each coherent motion signal (and the interval between motion signals in the dual-motion condition) was identical in both cases (supplemental Fig. 1, available at www.jneurosci.org as supplemental material). This attenuated summation was mostly attributable to improved sensitivity for the single-motion condition for the short trial relative to the long trial; dual-motion performance was similar in both conditions. This result is again difficult to explain in terms of sensory integration but would be expected if thresholds (in the single-motion condition) are limited by the effects of temporal uncertainty on performance.

It might appear that the effects of decision noise arising from temporal uncertainty provide a complete account of summation in this task. This is not the case, however; in experiment 2 we had the cue announce the onset of M1 rather than M2 (Fig. 4A). In that case the cue provides good temporal information about both M1 and M2, and so it would be difficult to attribute any summation observed to the differential effects of uncertainty on performance in single- and dual-motion conditions. To provide more explicit estimates of summation, we measured thresholds for the dual-motion condition as well as for each component motion signal in isolation, that is, when M1 was present but not M2, and when M2 was present but not M1. This approach allows the data to be compared more directly with the predictions of the sensory integration account, on the one hand, and with models that attribute summation to postsensory decision processes, on the other.

Figure 4.

Summation occurs under conditions of minimal uncertainty but reflects a postsensory statistical advantage, not linear sensory integration. A, Schematic of the trial sequences for experiment 2, in which separate threshold measurements were obtained for the dual-motion condition and for each of the component single-motion conditions. A cue marked the nominal onset of M1 on all trials. B, Coherence thresholds (in the same format as Fig. 3) indicated an advantage for the dual-motion condition over the component conditions for all but one observer. C, Thresholds for the dual-motion condition (solid black symbols, from B) are compared with the predictions of a linear sensory integration model (dotted line) (Alais and Burr, 2004) and a Bayesian decision-stage integration model (dashed curve) (Ernst and Banks, 2002). The abscissa and ordinate express the strength of M1 and M2 at threshold in the dual-motion condition as proportions of their respective component thresholds. Thresholds for M1-only and M2-only conditions are by definition at (1,0) and (0,1), respectively (white and gray filled squares). The group mean is indicated by the white star. For linear integration, the two motion signals are assumed to be integrated within a single detector and share common noise. Accordingly, thresholds in the dual-motion condition should reflect the sum of the component signals and occupy a point along the diagonal line. If the motion signals are instead processed independently by sensory detectors, but are combined in a statistically optimal manner at a subsequent stage of the decision process, the dual-motion threshold should occupy a point along the unit circle. The data are consistent with the model that assumes independent sensory representations of M1 and M2 and not with the sensory integration model. Error bars represent SEs from bootstrap analyses (see Materials and Methods).

Figure 4B shows coherence thresholds for the dual-motion condition, as well as for each of the component single-motion conditions, when a cue was provided near the nominal onset time of M1. Thresholds in the dual-motion condition were typically lower than those observed in both of the component conditions, demonstrating that summation can be observed under conditions of minimal temporal uncertainty. To assess the linearity of this summation, it is not appropriate to use a simple threshold ratio because sensitivity for each of the component conditions differed within each observer. However, summation can be assessed by expressing the motion strength of each component (M1 and M2) at threshold in the dual-motion condition as a proportion of its corresponding threshold when measured in isolation (Fig. 4C). For linear integration, these normalized quantities should sum to one and the threshold in the dual-motion condition should fall at a point along the diagonal line in the figure (Alais and Burr, 2004).
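The normalization described above can be made concrete with a short sketch. The threshold values below are hypothetical, chosen only to illustrate the arithmetic; they are not data from the study.

```python
# Hypothetical coherence thresholds (NOT data from the study).
T1, T2 = 0.20, 0.30   # single-motion thresholds: M1-only, M2-only
m1, m2 = 0.10, 0.15   # component strengths at the dual-motion threshold

# Express each component at the dual-motion threshold as a
# proportion of its corresponding single-motion threshold.
x, y = m1 / T1, m2 / T2

# Linear sensory integration predicts that the normalized strengths
# sum to 1, i.e., the dual-motion threshold lies on the diagonal line.
is_linear = abs((x + y) - 1.0) < 1e-9
print(x, y, is_linear)   # → 0.5 0.5 True
```

A dual-motion threshold falling inside the diagonal (x + y < 1) would indicate stronger-than-linear summation; one falling outside indicates weaker summation.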

There is clear evidence of summation for three observers (i.e., their dual-motion thresholds are within the shaded region), but the magnitude is smaller than that expected for a linear sensory integrator. The summation is consistent, however, with the predictions of an alternative model in which M1 and M2 are assumed to be processed independently at a sensory stage and then combined in a statistical sense at a subsequent stage of the perceptual decision process (dashed curve) (Ernst and Banks, 2002). This “decision-stage integration” model, which implements a form of maximum likelihood estimation or optimal Bayesian integration, has accounted for a variety of perceptual phenomena in which sensory-level neural interactions are thought to be implausible (e.g., in experiments on multisensory integration) (Ernst and Banks, 2002; for review, see Knill and Pouget, 2004). Because the model weights information in proportion to its reliability during the decision process, it can also account for the finding that no summation was observed when the cue announced the onset of M2 rather than M1 (experiment 1). In that case, the sensory evidence derived from M1 would be unreliable due to uncertainty and thus receive a low weight in the decision. Responses would instead be guided predominantly by the more reliable information derived from M2, and thresholds in the dual-motion condition would resemble those in the single-motion condition, as we observed. Finally, for comparison, the predicted effects of probability summation are typically smaller than those for each of the models plotted in the figure (Watson, 1979; Meese and Williams, 2000; Tyler and Chen, 2000).
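In signal detection terms, the decision-stage account amounts to optimal combination of two independent estimates, so that sensitivities add in quadrature; in the normalized coordinates of Figure 4C this places the dual-motion threshold on the unit circle. A minimal numerical sketch, using hypothetical sensitivities rather than data from the study:

```python
import math

# Hypothetical single-signal sensitivities (d-prime) at a fixed
# coherence level -- illustrative values, not data from the study.
d1, d2 = 1.0, 1.5

# Maximum-likelihood combination of two independent estimates:
# sensitivities add in quadrature.
d_combined = math.sqrt(d1**2 + d2**2)

# Equivalently, in the normalized coordinates of Fig. 4C the
# dual-motion threshold (x, y) = (m1/T1, m2/T2) satisfies
# x**2 + y**2 = 1 (the unit circle); with equal scaling of the two
# components, x = y = 1/sqrt(2).
x = y = 1.0 / math.sqrt(2.0)
on_unit_circle = abs(x**2 + y**2 - 1.0) < 1e-12
print(round(d_combined, 3), on_unit_circle)   # → 1.803 True
```

Because 1/√2 ≈ 0.71 is greater than the 0.5 demanded by linear summation, the quadrature model predicts a smaller dual-motion benefit than a linear integrator, as seen in the data.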

In sum, experiments 1 and 2 demonstrate that summation in the motion discrimination task is strongly modulated by temporal uncertainty and is best explained by a model that assumes a statistical combination of independent sensory events at a decision stage. However, these quantitative comparisons of the data with linear and nonlinear models cannot rule out entirely a sensory basis for summation. In the following experiments, we demonstrate that the summation mechanism does not possess either of two fundamental characteristics of visual motion detectors: direction selectivity and spatial selectivity.

Summation is nonselective for motion direction

Experiment 3 used a variant of the task in which the direction of motion in the stimulus was irrelevant to the perceptual decision. Specifically, observers were required to distinguish trials that contained coherent motion (i.e., signal-plus-noise) from trials that did not contain coherent motion (i.e., noise-only; Fig. 5A). Crucially, the two motion signals in the dual-motion condition were either in the same direction (correlated) or in opposite directions (anticorrelated). The sensory integration hypothesis predicts that the dual-motion advantage should be greater for the correlated condition than for the anticorrelated condition, because the effects of integration will be maximal when both motion signals stimulate a common population of direction-selective units (Albright, 1984; Meese and Harris, 2001; Clifford and Ibbotson, 2002). In contrast, a decision-stage mechanism predicts no difference in sensitivity between the two conditions because the motion signals provide equally good statistical evidence for the presence of coherent motion regardless of the directional correlation between them.

Figure 5.

The mechanism that gives rise to summation is not tuned for motion direction. A, Schematic of the trial sequences for experiment 3, in which motion signals in the dual-motion condition were either in the same direction (correlated) or in opposite directions (anticorrelated). Unlike all other experiments reported in this paper, the observer's task was to discriminate signal-plus-noise trials from noise-only trials. The single-motion conditions are not shown. B, Coherence thresholds for correlated and anticorrelated motion conditions were similar for all observers and lower than those for the component motion conditions for all but one observer. C, Thresholds for the correlated (green symbols) and anticorrelated dual-motion conditions (magenta symbols) are compared with the predictions of the linear sensory integration model and the Bayesian decision-stage integration model. The format of the plot is similar to Figure 4C except that the axes have been extended to accommodate anticorrelated motion components. The linear sensory integration model predicts summation only for similar motion directions (because integration occurs within direction-selective motion detectors), whereas the Bayesian decision-stage integration model predicts equal summation for correlated and anticorrelated conditions (because motion responses are interpreted as evidence for the presence of coherent motion regardless of their sign). Similar summation was found for correlated and anticorrelated conditions. Note that the response data from the conditions represented by the upper half of the plot were pooled with those of the lower half before determining thresholds. Diametrically opposing data points are therefore identical and are shown only for clarity. Error bars represent SEs from bootstrap analyses (see Materials and Methods).

For three of the four observers, the coherence thresholds for the dual-motion conditions were lower than those for each of the component conditions (M1-only, M2-only), indicating that both correlated and anticorrelated directions yielded a dual-motion advantage (Fig. 5B). These dual-motion thresholds can be compared with the predictions of the linear sensory integration model and the nonlinear, decision-stage integration model (Fig. 5C). Figure 5C has been expanded to allow separate expression of correlated (green symbols) and anticorrelated motion components (magenta symbols). Here, the linear integration model predicts summation only for correlated motion directions (dotted line). By contrast, the decision-stage integration model predicts equivalent summation for correlated and anticorrelated conditions (dashed curve).

The coherence thresholds for the dual-motion condition are clearly inconsistent with the sensory integration model: similar improvements in sensitivity were observed for correlated and anticorrelated conditions. The symmetric thresholds are, however, entirely consistent with the predictions of the decision-stage model. Summation is slightly weaker for this coherence-detection task than for direction discrimination, making it unclear whether the data are best explained by a Bayesian integration model (Ernst and Banks, 2002), or by probability summation (Watson, 1979; Meese and Williams, 2000; Tyler and Chen, 2000).
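For comparison, probability summation (the "better of two looks" alternative mentioned above) can be sketched under the simplest high-threshold assumption: the observer reports coherent motion if either of two independent looks detects its signal. The probabilities below are illustrative, not data from the study.

```python
# Probability summation under a simple high-threshold model:
# detection succeeds if EITHER independent "look" detects its signal.
# Illustrative detection probabilities (NOT data from the study).
p1, p2 = 0.5, 0.5

p_dual = 1.0 - (1.0 - p1) * (1.0 - p2)
print(p_dual)   # → 0.75
```

The resulting improvement in the proportion correct translates into a smaller threshold benefit than quadrature (maximum likelihood) combination, which is why probability summation predicts weaker summation than either model plotted in the figure.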

Summation is nonselective for space

Melcher and Morrone's (2003) key finding was that summation occurred even when a saccade intervened between the presentation of M1 and M2. In that condition, the two motion signals occupied the same position in space but stimulated disparate positions on the retina. To ensure that this summation reflected a spatially selective integration mechanism, they conducted a control experiment in which an observer maintained gaze on a central fixation point and M1 and M2 were presented above and below fixation, respectively. This arrangement approximated the retinal events that occurred during their saccadic condition, except that there was no spatial correspondence between the two motion signals. They observed no summation under these conditions, consistent with their putative spatiotopic integration mechanism. In that experiment, however, an auditory tone was presented during the interval between the two motion signals to cue the observer's attention from one random-dot pattern to the other. In experiment 1, we showed that cuing the onset of M2 (while leaving M1 onset uncertain) in this way abolishes summation even for motion signals that occupy the same retinal and spatial location. From this perspective, the absence of summation in Melcher and Morrone's (2003) control experiments is perhaps not surprising, and prompted us to reinvestigate the question of whether the integration mechanism exhibits spatial selectivity.

In experiment 4, we examined the spatial specificity of summation using an approach similar to that of Melcher and Morrone's (2003) control experiments, except that temporal uncertainty was minimized by cuing the onset of M1 (Fig. 6A). For three of the four observers, sensitivity in the dual-motion condition exceeded that of both component conditions, indicating that summation occurs even for spatially separated motion signals (Fig. 6B). For the remaining observer (SJC), sensitivity in the dual-motion condition was notably worse than for the condition in which M1 was presented alone. The reason for this unexpected finding is not clear, but may reflect a difficulty in shifting attention from one side of the display to the other in the dual-motion condition. Figure 6C compares thresholds in the dual-motion condition with the predictions of the sensory and decision-stage integration models. A spatially selective integrator—regardless of whether it is linear or nonlinear, and of whether it operates at a sensory or decision-stage—predicts no summation for spatially nonmatched motion signals. By contrast, the predictions for a spatially invariant integrator are the same as for the spatially matched condition in experiment 2 (compare Fig. 4C). For all observers except SJC, the summation observed with spatially separated motion signals is consistent with the decision-stage integration model. Moreover, the magnitude of summation is comparable to that in which the motion signals occupied the same retinal and spatial position (compare Fig. 6C and Fig. 4C).

Figure 6.

Summation occurs for spatially nonoverlapping motion components. A, Schematic of the trial sequences for experiment 4, in which M1 and M2 occurred within distinct random-dot patterns positioned on opposite sides of the display. B, Coherence thresholds (in the same format as Fig. 3) indicated an advantage for the dual-motion condition over the component conditions for all but one observer. C, Thresholds for the dual-motion condition are compared with the predictions of the linear sensory integration model (dotted line) and the Bayesian decision-stage integration model (dashed line). The format of the plot is the same as in Figure 4C. Here, the sensory integration model predicts no summation because the two motion signals would stimulate largely disparate populations of motion detectors. The decision-stage model, by comparison, predicts the same level of summation as when the motion signals occupied a common spatial position. This is because the model considers derived quantities such as representations of the relative likelihood of the available response choices rather than sensory events per se. Such decision variables discard irrelevant information such as the spatial position of the discriminandum (Gold and Shadlen, 2000, 2007; Knill and Pouget, 2004). The data are again consistent with the predictions of the decision-stage model (compare Fig. 4C) for three of the four observers. Note that the group mean (star) excludes the atypical observer. Error bars represent SEs from bootstrap analyses (see Materials and Methods).

Finally, we conducted a further experiment to rule out the unlikely possibility that saccadic eye movements introduce spatial specificity to an otherwise nonspatial integration mechanism. As in Melcher and Morrone's (2003) saccadic task, observers performed a 12° saccadic eye movement during the 1000 ms interval that separated the two motion signals. Crucially, the two motion signals occurred at either the same position in space (that is, M1 and M2 both appeared within a patch located above the horizontal meridian; matched condition) or at different positions in space (that is, M1 and M2 appeared within a patch above and below the horizontal meridian respectively; nonmatched condition). The geometry of the task and stimuli for the matched and nonmatched conditions were equivalent, and the retinal separation of the two motion signals was equal to that used in Melcher and Morrone's (2003) experiments. We also measured sensitivity for each of the component motion signals (M1 only [upper patch], M2 only [upper patch], M2 only [lower patch]) to permit quantification of summation.

Two of the four observers showed summation in both matched and nonmatched conditions (Fig. 7). The magnitude of this effect was similar for the two conditions, and also similar to that observed during fixation (compare Fig. 7C with Figs. 4C and 6C). These findings are in clear opposition to the predictions of the spatiotopic sensory integration hypothesis, in which summation is expected only for spatially aligned motion signals. The remaining two observers performed worse in the dual-motion condition than when M1 was presented alone. These findings are also inconsistent with the sensory integration hypothesis, but can be explained by the decision-stage model if we assume that the eye movement requirements of the task compromised the ability of these observers to incorporate both motion signals effectively into the perceptual decision. The fact that summation was present in some participants but not others parallels the pattern observed in experiments 2 and 4 for discrimination of spatially matched and nonmatched motion signals during fixation (compare Figs. 4C and 6C).

Figure 7.

Summation for spatially matched and spatially nonmatched motion signals also occurs across saccadic eye movements. A, Schematic of the trial sequences for experiment 5. For the matched condition, both motion signals were presented within a random-dot pattern located above the center of the display. For the nonmatched condition, M1 and M2 were presented within random-dot patterns located above and below the center of the display, respectively. Observers made a 12° rightward saccade during the ISI. Single-motion conditions are not shown. B, Coherence thresholds (in the same format as Fig. 3) are shown for matched and nonmatched dual-motion conditions. C, Thresholds for the dual-motion condition are compared with the predictions of the linear sensory integration model and the Bayesian decision-stage integration model. The format of the plot is similar to that of previous figures. The spatiotopic sensory integration model predicts summation for spatially aligned motion signals (dotted green line) but not for spatially misaligned motion signals (dotted magenta line). The decision-stage model predicts summation regardless of the spatial alignment (dashed curve). Clear summation was found for two observers, with no evidence of a perceptual advantage for spatially aligned motion signals. Error bars represent SEs from bootstrap analyses (see Materials and Methods).

Discussion

The findings of the current study argue strongly against a sensory-integration explanation for the dual-motion advantage observed during fixation and across saccadic eye movements by Melcher and Morrone (2003). The provision of an auditory cue around the time of the (second) motion signal was sufficient to eliminate this summation effect, even though the visual parameters of the task were identical in cued and uncued conditions (experiment 1). This finding is inconsistent with a motion detector that integrates sensory inputs obligatorily over time (Burr and Santoro, 2001), regardless of whether the detector is spatiotopically or retinotopically tuned. Cuing the onset of the first motion signal (experiment 2), which provides good temporal information about both motion signals (because of the short ISI), restored the dual-motion advantage for most observers. However, the mechanism that gives rise to summation under conditions of minimal temporal uncertainty is not tuned to motion direction per se, because equivalent summation was observed for detection of motion signals in the same direction as for detection of motion signals in opposite directions (experiment 3). Moreover, the mechanism is not spatially selective: similar summation was observed for spatially matched and spatially nonmatched motion signals (in retinal and spatial coordinates), regardless of whether gaze was maintained (experiments 2 and 4) or an intervening saccade was required (experiment 5). The results of experiments 3–5 are inconsistent with the sensory integration hypothesis because motion detectors would be expected to show minimal combination of signals that are in their nonpreferred directions or located outside their receptive fields (Albright et al., 1984).

In contrast, the results from all of our experiments are consistent with a model in which the two motion signals are assumed to be processed independently at the sensory stage but are available to the decision-maker at the time of the perceptual choice. This additional sensory evidence leads to a probabilistic enhancement of perceptual sensitivity, either because the decision is based on the "better" of the two estimates (Watson, 1979; Meese and Williams, 2000; Tyler and Chen, 2000), or because a refined estimate of motion direction is obtained via postsensory computations akin to near-optimal statistical inference (Gold and Shadlen, 2000, 2007; Knill and Pouget, 2004). In that case, the quantities that are considered are samples of a decision variable—an abstract representation of sensory data that is useful for directing the particular decision at hand, such as the relative likelihood of the available choice alternatives, but which discards other properties of a visual stimulus (e.g., its spatial location, color, form, etc.) (Gold and Shadlen, 2007). This lack of spatial and feature representation suggests that these decision-stage mechanisms would be of limited use for maintaining the stability of visual perception across saccadic eye movements.

This new interpretation stands in opposition to that of Melcher and Morrone (2003) and other authors (Melcher et al., 2004; Melcher, 2005, 2007; Prime et al., 2006; d'Avossa et al., 2007), in which summation is attributed to temporal integration by spatiotopically tuned motion detectors early in the visual system. These hypothetical detectors were suggested to reside in area MT (V5) of visual cortex—a prediction that was supported by an initial neuroimaging investigation of spatial tuning in human area MT (d'Avossa et al., 2007). Subsequent work, however, demonstrated that ostensibly spatiotopic responses in MT reflect noise rather than the visual stimulus (Gardner et al., 2008). Instead, a retinal frame of reference for visual responses in human MT (and other visual areas) was confirmed, consistent with studies of nonhuman primates (Krekelberg et al., 2003). Given our reinterpretation of Melcher and Morrone's (2003) findings, we do not know of any psychophysical evidence to suggest that spatiotopic coding of visual motion should be found in visual cortex, consistent with the findings of Gardner et al. (2008).

One aspect of our results that remains unexplained is that the magnitude of summation observed in the uncued condition of experiment 1—which most closely matched the experiments of Melcher and Morrone (2003)—appears to be smaller (a threshold ratio of ∼1.73) than that observed in the original study (a mean threshold ratio of ∼2 for the fixation condition). We note two important points that could explain this apparent discrepancy. First, threshold ratios are volatile, as is evident in the large confidence intervals surrounding our estimates in Figure 3. Given that our estimates were based on a larger dataset than in the original experiment (at least six QUEST sessions per condition for each observer, compared with four in the original study), we expect that similar or larger confidence intervals would surround the ratio estimates of Melcher and Morrone (2003). Thus, the apparent difference in effect size between the two studies is not statistically reliable. Second, experiment 1 identified temporal uncertainty as an uncontrolled factor in the original study that confounds measurements of sensitivity and summation. This effect of uncertainty on performance was not small: it amounted to an almost twofold modulation of sensitivity for discrimination of a single motion signal. Such effects of stimulus uncertainty are well documented (Cohn and Lasley, 1974; Lasley and Cohn, 1981; Pelli, 1985; Shiu and Pashler, 1994; Luck et al., 1996; Prinzmetal et al., 1997; Luck and Thomas, 1999; Gould et al., 2007). Hence, small differences in levels of uncertainty between the two studies could lead to considerable changes in estimates of summation magnitude. Although we have no direct evidence for increased uncertainty in the original study, we speculate that it could be related to the wide range of ISIs used there (500–8000 ms) compared with the single ISI in the current study (1000 ms). Alternatively, given the small number of participants [four in the current study, three in the study by Melcher and Morrone (2003)], different levels of uncertainty might have arisen simply from natural intersubject variability.
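The volatility of threshold ratios noted above is the reason bootstrap confidence intervals are used throughout (see Materials and Methods). A minimal percentile-bootstrap sketch for a single/dual threshold ratio follows; the per-session thresholds are simulated stand-ins for QUEST estimates, not data from the study.

```python
import random

random.seed(1)

# Simulated per-session coherence thresholds (stand-ins for QUEST
# sessions; NOT data from the study).
single = [0.42, 0.38, 0.45, 0.40, 0.44, 0.39]
dual   = [0.24, 0.27, 0.22, 0.26, 0.25, 0.23]

def ratio(s, d):
    """Single/dual threshold ratio from per-session means."""
    return (sum(s) / len(s)) / (sum(d) / len(d))

# Percentile bootstrap: resample sessions with replacement and
# recompute the threshold ratio on each resample.
boot = []
for _ in range(2000):
    s = [random.choice(single) for _ in single]
    d = [random.choice(dual) for _ in dual]
    boot.append(ratio(s, d))
boot.sort()
ci_low = boot[int(0.025 * len(boot))]
ci_high = boot[int(0.975 * len(boot))]
print(round(ratio(single, dual), 2), round(ci_low, 2), round(ci_high, 2))
```

Even with well-behaved simulated data, a handful of sessions per condition yields a wide interval around the ratio, consistent with the volatility discussed above.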

From a broader perspective, the current findings shed new light on other studies that have used variants of the dual-pulse discrimination task introduced by Melcher and Morrone (2003). Such studies have probed feature-based attentional selection (Melcher et al., 2005), visual selection in the absence of awareness (Melcher and Vidnyánszky, 2006), and attentional modulation of sensory integration time constants (Melcher et al., 2004). For each of these paradigms, it will be important to determine the potential contribution of temporal uncertainty to the reported psychophysical phenomena.

In sum, the findings of the current study suggest that the dual-motion advantage reported by Melcher and Morrone (2003)—as observed during fixation and across saccadic eye movements—is most parsimoniously explained by a probabilistic advantage at the level of decision-making and not by sensory integration. This new perspective reconciles the findings of Melcher and Morrone with the large body of work which suggests that little information about visual features is retained and integrated across saccadic eye movements (Prime et al., 2006, 2007) and leaves open the question of how perceptual stability is realized in the brain. Further, our results highlight the importance of decision-making factors beyond the representation of sensory variables and provide a novel example of near-optimal perceptual integration in the human brain.

Footnotes

  • This work was supported by an Overseas Biomedical Fellowship from the National Health and Medical Research Council (NHMRC) of Australia awarded to A.P.M. (525487), an NHMRC Project Grant awarded to J.B.M., and a National Institutes of Health grant awarded to B.K. (EY017605). We thank David Melcher and Concetta Morrone for generously providing methodological details and data from their study.

  • Correspondence should be addressed to Dr. Adam P. Morris, Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ 07102. adam{at}vision.rutgers.edu

References

1. Alais D, Burr D (2004) No direction-specific bimodal facilitation for audiovisual motion detection. Brain Res Cogn Brain Res 19:185–194.
2. Albright TD (1984) Direction and orientation selectivity of neurons in visual area MT of the macaque. J Neurophysiol 52:1106–1130.
3. Albright TD, Desimone R, Gross CG (1984) Columnar organization of directionally selective cells in visual area MT of the macaque. J Neurophysiol 51:16–31.
4. Brainard DH (1997) The Psychophysics Toolbox. Spat Vis 10:433–436.
5. Bridgeman B, Mayer M (1983) Failure to integrate visual information from successive fixations. Bull Psychon Soc 21:285–286.
6. Burr DC, Santoro L (2001) Temporal integration of optic flow, measured by contrast and coherence thresholds. Vision Res 41:1891–1899.
7. Carpenter J, Bithell J (2000) Bootstrap confidence intervals: when, which, what? A practical guide for medical statisticians. Stat Med 19:1141–1164.
8. Clifford CW, Ibbotson MR (2002) Fundamental mechanisms of visual motion detection: models, cells and functions. Prog Neurobiol 68:409–437.
9. Cohn TE, Lasley DJ (1974) Detectability of a luminance increment: effect of spatial uncertainty. J Opt Soc Am 64:1715–1719.
10. d'Avossa G, Tosetti M, Crespi S, Biagi L, Burr DC, Morrone MC (2007) Spatiotopic selectivity of BOLD responses to visual motion in human area MT. Nat Neurosci 10:249–255.
11. Duhamel JR, Bremmer F, BenHamed S, Graf W (1997) Spatial invariance of visual receptive fields in parietal cortex neurons. Nature 389:845–848.
12. Ernst MO, Banks MS (2002) Humans integrate visual and haptic information in a statistically optimal fashion. Nature 415:429–433.
13. Fraisse P (1984) Perception and estimation of time. Annu Rev Psychol 35:1–36.
14. Galletti C, Battaglini PP, Fattori P (1993) Parietal neurons encoding spatial locations in craniotopic coordinates. Exp Brain Res 96:221–229.
15. Gallistel CR, Gibbon J (2000) Time, rate, and conditioning. Psychol Rev 107:289–344.
16. Gardner JL, Merriam EP, Movshon JA, Heeger DJ (2008) Maps of visual space in human occipital cortex are retinotopic, not spatiotopic. J Neurosci 28:3988–3999.
17. Gibbon J, Malapani C, Dale CL, Gallistel C (1997) Toward a neurobiology of temporal cognition: advances and challenges. Curr Opin Neurobiol 7:170–184.
18. Gold JI, Shadlen MN (2000) Representation of a perceptual decision in developing oculomotor commands. Nature 404:390–394.
19. Gold JI, Shadlen MN (2007) The neural basis of decision making. Annu Rev Neurosci 30:535–574.
20. Gould IC, Wolfgang BJ, Smith PL (2007) Spatial uncertainty explains exogenous and endogenous attentional cuing effects in visual signal detection. J Vis 7:4.1–17.
21. Irwin DE, Brown JS, Sun JS (1988) Visual masking and visual integration across saccadic eye movements. J Exp Psychol Gen 117:276–287.
22. Janssen P, Shadlen MN (2005) A representation of the hazard rate of elapsed time in macaque area LIP. Nat Neurosci 8:234–241.
23. Knill DC, Pouget A (2004) The Bayesian brain: the role of uncertainty in neural coding and computation. Trends Neurosci 27:712–719.
24. Krekelberg B, Kubischik M, Hoffmann KP, Bremmer F (2003) Neural correlates of visual localization and perisaccadic mislocalization. Neuron 37:537–545.
25. Lasley DJ, Cohn T (1981) Detection of a luminance increment: effect of temporal uncertainty. J Opt Soc Am 71:845–850.
26. Leon MI, Shadlen MN (1999) Effect of expected reward magnitude on the response of neurons in the dorsolateral prefrontal cortex of the macaque. Neuron 24:415–425.
27. Leon MI, Shadlen MN (2003) Representation of time by neurons in the posterior parietal cortex of the macaque. Neuron 38:317–327.
28. Luck SJ, Thomas SJ (1999) What variety of attention is automatically captured by peripheral cues? Percept Psychophys 61:1424–1435.
29. Luck SJ, Hillyard SA, Mouloua M, Hawkins HL (1996) Mechanisms of visual-spatial attention: resource allocation or uncertainty reduction? J Exp Psychol 22:725–737.
30. Macmillan NA, Creelman CD (1991) Detection theory: a user's guide (Cambridge UP, Cambridge, UK).
31. McConkie GW, Zola D (1979) Is visual information integrated across successive fixations in reading? Percept Psychophys 25:221–224.
32. Meese TS, Harris MG (2001) Broad direction bandwidths for complex motion mechanisms. Vision Res 41:1901–1914.
33. Meese TS, Williams CB (2000) Probability summation for multiple patches of luminance modulation. Vision Res 40:2101–2113.
34. Melcher D (2005) Spatiotopic transfer of visual-form adaptation across saccadic eye movements. Curr Biol 15:1745–1748.
35. Melcher D (2007) Predictive remapping of visual features precedes saccadic eye movements. Nat Neurosci 10:903–907.
36. Melcher D, Morrone MC (2003) Spatiotopic temporal integration of visual motion across saccadic eye movements. Nat Neurosci 6:877–881.
37. Melcher D, Vidnyánszky Z (2006) Subthreshold features of visual objects: unseen but not unbound. Vision Res 46:1863–1867.
38. Melcher D, Crespi S, Bruno A, Morrone MC (2004) The role of attention in central and peripheral motion integration. Vision Res 44:1367–1374.
39. Melcher D, Papathomas TV, Vidnyánszky Z (2005) Implicit attentional selection of bound visual features. Neuron 46:723–729.
40. Merriam EP, Genovese CR, Colby CL (2003) Spatial updating in human parietal cortex. Neuron 39:361–373.
41. Miller J (1996) The sampling distribution of d′. Percept Psychophys 58:65–72.
42. Morrone MC, Burr DC, Vaina LM (1995) Two stages of visual processing for radial and circular motion. Nature 376:507–509.
    OpenUrlCrossRefPubMed
  43. ↵
    1. O'Regan JK,
    2. Lévy-Schoen A
    (1983) Integrating visual information from successive fixations: does trans-saccadic fusion exist? Vision Res 23:765–768.
    OpenUrlCrossRefPubMed
  44. ↵
    1. Pelli DG
    (1985) Uncertainty explains many aspects of visual contrast detection and discrimination. J Opt Soc Am A 2:1508–1532.
    OpenUrlPubMed
  45. ↵
    1. Prime SL,
    2. Niemeier M,
    3. Crawford JD
    (2006) Transsaccadic integration of visual features in a line intersection task. Exp Brain Res 169:532–548.
    OpenUrlCrossRefPubMed
  46. ↵
    1. Prime SL,
    2. Tsotsos L,
    3. Keith GP,
    4. Crawford JD
    (2007) Visual memory capacity in transsaccadic integration. Exp Brain Res 180:609–628.
    OpenUrlCrossRefPubMed
  47. ↵
    1. Prinzmetal W,
    2. Nwachuku I,
    3. Bodanski L,
    4. Blumenfeld L,
    5. Shimizu N
    (1997) The phenomenology of attention. Conscious Cogn 6:372–412.
    OpenUrlCrossRefPubMed
  48. ↵
    1. Shiu LP,
    2. Pashler H
    (1994) Negligible effect of spatial precuing on identification of single digits. J Exp Psychol Hum Percept Perform 20:1037–1054.
    OpenUrlCrossRef
  49. ↵
    1. Tyler CW,
    2. Chen CC
    (2000) Signal detection theory in the 2AFC paradigm: attention, channel uncertainty and probability summation. Vision Res 40:3121–3144.
    OpenUrlCrossRefPubMed
  50. ↵
    1. Watson AB
    (1979) Probability summation over time. Vision Res 19:515–522.
    OpenUrlCrossRefPubMed
  51. ↵
    1. Watson AB,
    2. Pelli DG
    (1983) QUEST: a Bayesian adaptive psychometric method. Percept Psychophys 33:113–120.
    OpenUrlPubMed
  52. ↵
    1. Wichmann FA,
    2. Hill NJ
    (2001a) The psychometric function: I. Fitting, sampling, and goodness of fit. Percept Psychophys 63:1293–1313.
    OpenUrlAbstract/FREE Full Text
  53. ↵
    1. Wichmann FA,
    2. Hill NJ
    (2001b) The psychometric function: II. Bootstrap-based confidence intervals and sampling. Percept Psychophys 63:1314–1329.
    OpenUrlAbstract/FREE Full Text

Responses to this article

  • Published on: (6 December 2010)
    The “integration hypothesis” relies on an untenable assumption of constant noise for the decision.
    • Adam P Morris, Postdoctoral Fellow
    • Other Contributors:
      • Charles C. Liu, Simon J. Cropper, Jason D. Forte, Bart Krekelberg, and Jason B. Mattingley


    Morrone et al. have raised three issues with our paper that we will address.

    First, Morrone et al. claim that our findings do not address motion integration or spatiotopicity because – unlike in the study of Melcher and Morrone (2003) – we did not randomize motion onset times across trials. In fact, the blocking and randomization of stimulus parameters in our experiments were identical to those used by Melcher and Morrone. The “uncued” conditions in our critical experiment (Experiment 1) were direct replicates of the 1s ISI blocks from Melcher and Morrone (2003). The “cued” conditions differed only by the presence of an auditory cue and served to index the effects of uncertainty on performance in single- and dual-motion trials.

    Second, Morrone et al. suggest that if the advantage in the dual-motion condition reflects uncertainty reduction rather than integration, as we have argued, then there should have been an increase in the slopes of the relevant psychometric functions. Slope changes, however, are expected only for large differences in uncertainty and should not be considered a reliable diagnostic tool (Pelli, 1985; Tyler & Chen, 2000). Even so, if one were to make such an inference, the literature has shown that a reduction in uncertainty is expected to decrease slope values rather than increase them. According to Pelli (1985, p 1516): "Increasing the uncertainty M increases threshold and steepens the log-log slope of the psychometric function" (see also Tyler & Chen, 2000). Such an effect is apparent in the psychometric functions shown in Figure 3 (central panel) of Melcher and Morrone (2003), consistent with our uncertainty-reduction interpretation.
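    Pelli's slope claim is easy to reproduce in simulation. The following is a minimal Monte-Carlo sketch of the standard max-of-M uncertainty model (our own illustration; the variable names, trial counts, and signal levels are arbitrary and not taken from any of the cited studies): an observer monitors M channels, only one of which carries the signal, and chooses the 2AFC interval with the larger maximum channel output. Raising M pushes the detection threshold up, which is the effect at issue here.

```python
import numpy as np

rng = np.random.default_rng(0)

def pc_2afc(signal, M, n_trials=5000):
    """Proportion correct in 2AFC for a max-of-M uncertainty observer.

    One of the M monitored channels carries the signal; every channel adds
    independent unit-variance Gaussian noise; the observer picks the
    interval whose maximum channel output is larger."""
    signal_interval = rng.standard_normal((n_trials, M))
    signal_interval[:, 0] += signal  # the signal enters one channel
    noise_interval = rng.standard_normal((n_trials, M))
    return float(np.mean(signal_interval.max(axis=1) > noise_interval.max(axis=1)))

def threshold(M, criterion=0.75):
    """Coarse search for the weakest signal that reaches `criterion` correct."""
    for s in np.linspace(0.2, 8.0, 40):
        if pc_2afc(s, M) >= criterion:
            return s
    return 8.0

t_certain, t_uncertain = threshold(1), threshold(100)
print(t_certain, t_uncertain)  # uncertainty (M=100) raises the threshold
```

    With M = 1 the 75%-correct threshold sits just below 1 signal unit, whereas with M = 100 the signal must outgrow the expected maximum of roughly 100 noise draws, raising the threshold by roughly a factor of three; the same simulation can be extended to trace out the full psychometric functions and their slopes.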

    Finally, Morrone et al. cite their recent work on spatial integration (Burr, Baldassi, Morrone & Verghese, 2009) to argue that uncertainty is a crucial prerequisite for integration effects. We agree, and would add that uncertainty is not inevitably a problem for summation experiments, including those that examine temporal integration. Our argument, however, is that if uncertainty affects component (single-motion) and compound (dual-motion) conditions differently, the critical assumption of constant noise for the decision process is violated and it becomes inappropriate to interpret a doubling of sensitivity as evidence for sensory integration.

    Our results clearly demonstrate marked differences in uncertainty for component and compound conditions. In Experiment 1, we found that cuing the onset of a single motion signal in the task of Melcher and Morrone (2003) lowered thresholds by a factor of 1.7, whereas cuing the second motion pulse in the dual motion condition had almost no influence on performance. Similarly, halving the overall duration of the stimulus – by trimming the first and last 2.5 seconds of dynamic noise – enhanced sensitivity for the single-motion condition but had virtually no effect on performance for the dual-motion condition (Supplemental Fig. 1 of our paper). Both results suggest that “decision-noise” arising from temporal uncertainty was greater in the single-motion condition of Melcher and Morrone (2003) than in the dual-motion condition.

    The cause of these asymmetric effects of uncertainty is not clear. We note, however, that in the task of Melcher and Morrone (2003), the brief, near-threshold motion signals are embedded within an unusually long period of dynamic noise (10s), and do not occur with equal probability at each point in the trial (rather, motion signals are always positioned somewhere near the middle of the 10s interval). Observers are thus discouraged from integrating over the entire stimulus and instead are encouraged to select an epoch upon which to base the perceptual decision. This design provides observers with ample time to alter their sampling strategies mid-trial (perhaps on the basis of acquired information), and also introduces the possibility that observers sample the stimulus differently in single- and dual-motion trial blocks.

    Regardless of the precise mechanism, our results suggest that threshold ratios in the Melcher and Morrone (2003) task reflect the limiting and asymmetric effects of uncertainty on sensitivity and not the integration properties of the relevant detectors. By this account, the doubling of sensitivity reported by Melcher and Morrone is an idiosyncratic feature of their 10s stimulus presentation and does not provide evidence for a spatiotopic representation of visual space in the human brain.

    References:

    Melcher D, Morrone MC (2003) Spatiotopic temporal integration of visual motion across saccadic eye movements. Nat Neurosci 6:877–881.

    Pelli DG (1985) Uncertainty explains many aspects of visual contrast detection and discrimination. J Opt Soc Am A 2:1508–1532.

    Tyler CW, Chen CC (2000) Signal detection theory in the 2AFC paradigm: attention, channel uncertainty and probability summation. Vision Res 40:3121–3144.

    Competing Interests: None declared.
  • Published on: (6 September 2010)
    Probabilistic or decision mechanisms do not explain spatiotopic integration of motion
    • Maria Concetta Morrone, Professor
    • Other Contributors:
      • David P Melcher, and David C Burr


    Melcher and Morrone (2003) reported that motion signals that correspond either in spatiotopic or retinotopic position are summated across saccades, suggesting a potentially important mechanism for maintaining visual stability; but Morris and colleagues (2010) have challenged these results, claiming that they result from a nonspatial reduction in uncertainty.

    In order to demonstrate spatiotopic selectivity of integration, it is first necessary to demonstrate integration, a doubling in sensitivity for two stimuli compared with one, and this requires some care. We (Burr, Baldassi, Morrone & Verghese, 2009) have recently studied the circumstances under which the visual system integrates or segregates motion signals. One experiment (Fig 2 of Burr et al., 2009) showed that when the locations of the motion signals were unknown, coherence sensitivity depended linearly on stimulus area, as is predictable from signal detection theory (the system must integrate signals over the entire region, so noise levels will be constant in all conditions and sensitivity will increase in direct proportion to stimulus area). However, when the signal sectors were cued, the increase in sensitivity was much less, well approximated by a square-root relationship (again predictable from signal detection theory, as the system can exclude the non-signal sectors, so both signal and noise are reduced for small stimulus areas).
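    The linear versus square-root scaling invoked here follows from simple ideal-observer arithmetic; the sketch below (our own illustration, with arbitrary units and hypothetical function names) makes the two regimes explicit.

```python
import math

def dprime_uncued(area, total_area, signal=1.0, sigma=1.0):
    """Signal location unknown: the observer pools the entire display, so
    the summed signal grows with area while the pooled noise is fixed at
    sigma * sqrt(total_area); d' is therefore linear in area."""
    return signal * area / (sigma * math.sqrt(total_area))

def dprime_cued(area, signal=1.0, sigma=1.0):
    """Cued sectors only: signal and noise both grow with the sampled
    area, so d' grows only as sqrt(area)."""
    return signal * area / (sigma * math.sqrt(area))

# Doubling the signal area doubles d' in the uncued regime, but yields
# only a root-two improvement in the cued regime.
r_uncued = dprime_uncued(8, 16) / dprime_uncued(4, 16)  # -> 2.0
r_cued = dprime_cued(8) / dprime_cued(4)                # -> 1.414...
```

    Doubling the signal area doubles d′ when the pooled noise is fixed by the whole display, but improves d′ only by √2 when the observer samples just the cued sectors, matching the two regimes reported by Burr et al. (2009).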

    Although Burr et al. (2009) studied integration and segregation over space, the same argument obviously extends to time: motion integration requires an “uncued” paradigm with maximum uncertainty of stimulus presentation. For this reason, Melcher and Morrone (2003) interleaved their conditions, presenting randomly within the same session the conditions in which the motion stimuli preceded the saccade, followed the saccade, or were presented both before and after it. As the observers had no knowledge of which condition they were in on a given trial, their best strategy was to integrate over the entire interval, both signal and noise. These conditions of maximum uncertainty produced a factor-of-two advantage for the dual presentation compared with either single presentation, the maximum theoretically possible, but only for spatiotopically or retinotopically corresponding stimuli. Further evidence that Melcher and Morrone’s data show integration, not uncertainty reduction, is that the slopes of the psychometric functions did not increase in the less uncertain conditions, as they should have (Pelli, 1985; Verghese, 2001).

    On the other hand, Morris et al. (2010) measured thresholds in blocked trials, so observers knew in which interval the motion signal would appear. These conditions, like the cued conditions of Burr et al. (2009), led to a very weak, root-two improvement (with or without saccades), which could disappear under auditory cuing. And not surprisingly, this small improvement in sensitivity was not specific for spatial location, occurring both for spatiotopically coincident stimuli and for stimuli in unrelated locations: much as the cued stimuli in Burr et al.’s (2009) study did not have to be spatially adjacent for there to be apparent integration over space.
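    The contrast drawn here between a factor-of-two and a root-two advantage is standard signal-detection arithmetic; a minimal sketch (arbitrary units; our own illustration, not code from either study):

```python
import math

d_single = 1.0  # sensitivity (d') for one motion pulse, arbitrary units

# Full temporal integration within one fixed decision window: the two
# pulses add their signals while the pooled noise is unchanged, so
# sensitivity doubles relative to a single pulse.
d_dual_integration = 2 * d_single

# Independent combination of two separately monitored epochs (the
# cued / uncertainty-reduced regime): d' values combine in quadrature,
# giving only a root-two advantage.
d_dual_independent = math.hypot(d_single, d_single)

print(d_dual_integration / d_single)  # 2.0
print(d_dual_independent / d_single)  # 1.414...
```

    Since thresholds scale inversely with sensitivity, these two regimes predict dual-motion thresholds of one half and one over root two of the single-motion threshold, respectively.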

    So despite the title of their paper, and the conclusions drawn, Morris et al.’s study has nothing to do with spatiotopicity, or even with the integration and segregation of motion mechanisms. All their results are predictable from the particular experimental conditions chosen, both from theory and by analogy to published work investigating integration and segregation of motion; they say nothing about the spatial selectivity of integration across saccades.

    Understanding spatiotopicity is important, probably the key to understanding how space is represented in the brain. Whereas retinotopy is, at least in principle, easy to understand, in that it reflects the topographic retino-cortical connections, spatiotopicity is a more complex and subtle phenomenon. Despite the enormous research effort aimed at revealing mechanisms implicated in spatiotopicity, such as shifting receptive fields, the identification of the pathways for corollary discharge, and evidence for true spatiotopicity in both monkey and human, we are still far from a complete understanding of how spatiotopicity comes about, and how the external world is represented in our brains. Understanding the details of spatiotopicity will be one of the more exciting challenges of the next few decades, a challenge that needs to be met with careful and rigorous research, free from prejudice and preconceptions.

    Burr, D.C., Baldassi, S., Morrone, M.C. & Verghese, P. (2009). Pooling and segmenting motion signals. Vision Res, 49 (10), 1065-1072.

    Melcher, D. & Morrone, M.C. (2003). Spatiotopic temporal integration of visual motion across saccadic eye movements. Nat Neurosci, 6 (8), 877-881.

    Morris, A.P., Liu, C.C., Cropper, S.J., Forte, J.D., Krekelberg, B. & Mattingley, J.B. (2010). Summation of visual motion across eye movements reflects a nonspatial decision mechanism. J Neurosci, 30 (29), 9821-9830.

    Pelli, D.G. (1985). Uncertainty explains many aspects of visual contrast detection and discrimination. J. Opt. Soc. Am., A2, 1508-1532.

    Verghese, P. (2001). Visual search and attention: a signal detection theory approach. Neuron, 31 (4), 523-535.

    Competing Interests: None declared.

Copyright © 2025 by the Society for Neuroscience.
JNeurosci Online ISSN: 1529-2401
