Reply

Sensory signals are noisy, and noise is unpredictable: so it is natural to suppose the variability of reaction times to be due to the time it takes to detect a noisy signal, perhaps by integrating it until it diverges 'significantly' from background noise. Although this is undoubtedly how a system for detecting signals in the presence of noise ought to behave, large data sets demonstrate that when stimuli are well above their thresholds, latencies are not in practice statistically distributed in the way such models predict (Fig. 3). This suggests that the dominant factor is not detection time, but lies in some subsequent stage of processing that is more variable still. For instance, to recognize a complex object, apart from detecting the existence of its various components, we need also to determine that they are in some particular relationship to one another. Recent neurophysiological studies of this second stage of decision-making (refs. 9,10) have demonstrated rise-to-threshold mechanisms distinct from detection of the underlying visual elements, and subject to their own variability. With high-contrast stimuli, detection is quick, and latency is dominated by the variability caused by this second stage, as statistical analysis of neuronal responses in the frontal eye field shows (ref. 11). The LATER model, which forms the basis for our paper, describes this second stage. In LATER, the variability does not arise because of uncertainty in afferent signals, but represents a mechanism of deliberate and gratuitous randomization, which has the biologically desirable function of making our actions unpredictable (ref. 12).
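To make the mechanism concrete, here is a minimal simulation sketch of LATER in Python with NumPy; the baseline S0, threshold ST, and rate parameters mu and sigma are illustrative values of our choosing, not fitted to any data set. The decision signal rises linearly from S0 toward ST at a rate r drawn afresh on each trial from a Gaussian, so latency T = (ST - S0)/r and reciprocal latency is normally distributed.

import numpy as np

# LATER sketch: a decision signal rises linearly from baseline S0 to
# threshold ST at a rate r drawn on each trial from N(mu, sigma^2).
# Latency T = (ST - S0) / r, so 1/T is Gaussian ("recinormal").
# All parameter values are illustrative, not fitted to any data set.
rng = np.random.default_rng(0)
S0, ST = 0.0, 1.0            # baseline and threshold (arbitrary units)
mu, sigma = 5.0, 1.0         # mean and s.d. of the rise rate (units 1/s)

r = rng.normal(mu, sigma, size=10_000)
r = r[r > 0]                 # rates <= 0 never reach threshold (no response)
T = (ST - S0) / r            # latencies in seconds
print(f"median latency ~ {np.median(T) * 1000:.0f} ms")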

Figure 3: Reciprobit plots of large sets of latency data.

(a) Manual key presses in response to a visual stimulus, n = 825 (ref. 17). (b) Saccades in response to visual stimuli in a step task (n = 1500) and an overlap task (n = 1900; R.H.S.C., unpublished data). (c) Manual responses to an auditory stimulus, n = 2040 (ref. 18). In each case, the data follow a recinormal distribution far into the tail (99.9%), although at short latencies, a second process is also evident.
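For readers unfamiliar with the format: a reciprobit plot places cumulative probability on a probit (inverse normal) scale against the negative reciprocal of latency, so that a recinormal distribution appears as a straight line. The following sketch shows how such a plot is constructed, using Python with NumPy, SciPy and Matplotlib; the simulated latencies are merely a stand-in for real data.

import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt

# Reciprobit plot sketch: x is -1/latency, y is the probit (inverse
# normal CDF) of the empirical cumulative probability. Recinormal
# latencies fall on a straight line on these axes.
rng = np.random.default_rng(1)
rate = rng.normal(5.0, 1.0, size=5_000)        # illustrative LATER rates
T = np.sort(1.0 / rate[rate > 0])              # latencies (s), ascending
p = (np.arange(1, T.size + 1) - 0.5) / T.size  # empirical cumulative prob.

plt.plot(-1.0 / T, norm.ppf(p), ".", markersize=2)
plt.xlabel("-1 / latency (1/s)")
plt.ylabel("cumulative probability (probit scale)")
plt.show()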

The origin of LATER lies not in detection theory, but in empirical analysis of actual distributions (ref. 13). Whereas modest data sets may often be fitted by several kinds of models, the very large sets needed to provide a stringent test decide unequivocally in favor of LATER, with accurate prediction of the observed distributions well into the tail (Fig. 3); signal detection models cannot do this. The distinct population of very rapid responses that is often apparent can be explained by a simple extension of LATER (ref. 14), though this seems an unprofitable area to pursue at present: being so few, these responses can be fitted by several other perfectly plausible hypotheses. Errors may arise from competing LATER units corresponding to incorrect responses, and LATER provides a good prediction of competitive behavior of this kind in countermanding tasks (ref. 15) and where subjects are offered an overt choice (ref. 16). In the same way, although our data sets were large enough to demonstrate what we wished to demonstrate—that LATER correctly predicts what happens when subjects alter their criterion—they cannot distinguish between different models sharing the essential characteristic of some kind of rise to threshold.
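Because a recinormal distribution simply means that reciprocal latency is Gaussian, fitting LATER to a large data set reduces to fitting a normal distribution to 1/T, after which the predicted tail can be checked directly against the data. A hedged sketch of this procedure follows (Python with NumPy and SciPy; the helper names and the simulated data are ours, purely for illustration):

import numpy as np
from scipy.stats import norm

# If 1/T ~ N(mu, sigma^2), the maximum-likelihood estimates are simply
# the sample mean and s.d. of the reciprocal latencies.
def fit_recinormal(T):
    recip = 1.0 / np.asarray(T)
    return recip.mean(), recip.std()

# Latency below which a fraction q of responses fall: long latencies
# correspond to small reciprocals, hence the 1 - q.
def recinormal_quantile(q, mu, sigma):
    return 1.0 / norm.ppf(1.0 - q, loc=mu, scale=sigma)

# Illustrative check deep into the tail (99.9th percentile):
rate = np.random.default_rng(2).normal(5.0, 1.0, size=20_000)
T = 1.0 / rate[rate > 0]
mu, sigma = fit_recinormal(T)
print(f"predicted 99.9%: {recinormal_quantile(0.999, mu, sigma):.3f} s, "
      f"observed: {np.quantile(T, 0.999):.3f} s")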

To summarize, some tasks—such as responding to low-contrast spots—are dominated by detection. In others, detection is trivial, and it is decision that takes most of the time. In the former case, reaction time is diffusionist; in the latter, it is LATER. Thus, two models of reaction time may happily co-exist.

See 'Putting noise into neurophysiological models of simple decision making' by Carpenter and Reddi.