Journal of Neuroscience
Research Articles, Behavioral/Cognitive

Temporal Expectation Hastens Decision Onset But Does Not Affect Evidence Quality

Ruud L. van den Brink, Peter R. Murphy, Kobe Desender, Nicole de Ru and Sander Nieuwenhuis
Journal of Neuroscience 6 January 2021, 41 (1) 130-143; https://doi.org/10.1523/JNEUROSCI.1103-20.2020
Author affiliations: Ruud L. van den Brink (1, 2), Peter R. Murphy (1, 2), Kobe Desender (1, 3, 4), Nicole de Ru (2), Sander Nieuwenhuis (2, 5)

1. Section Computational Cognitive Neuroscience, Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, 20251, Germany
2. Department of Psychology, Leiden University, Leiden, 2333 AK, The Netherlands
3. Department of Experimental Psychology, Ghent University, Ghent, 9000, Belgium
4. Brain and Cognition, KU Leuven, Leuven, 3000, Belgium
5. Leiden Institute for Brain and Cognition, Leiden, 2333 AK, The Netherlands

Abstract

The ability to predict the timing of forthcoming events, known as temporal expectation, has a strong impact on human information processing. Although there is growing consensus that temporal expectations enhance the speed and accuracy of perceptual decisions, it remains unclear whether they affect the decision process itself, or non-decisional (sensory/motor) processes. Here, healthy human participants (N = 21; 18 female) used predictive auditory cues to anticipate the timing of low-contrast visual stimuli they were required to detect. Modeling of the behavioral data using a prominent sequential sampling model indicated that temporal expectations speeded up non-decisional processes but had no effect on decision formation. Electrophysiological recordings confirmed and extended this result: temporal expectations hastened the onset of a neural signature of decision formation but had no effect on its build-up rate. Anticipatory α band power was modulated by temporal expectation and co-varied with intrinsic trial-by-trial variability in behavioral and neural signatures of the onset latency of the decision process. These findings highlight how temporal predictions optimize our interaction with unfolding sensory events.

SIGNIFICANCE STATEMENT Temporal expectation enhances performance, but the locus of this effect remains debated. Here, we contrasted the two dominant accounts: enhancement through (1) expedited decision onset, or (2) an increase in the quality of sensory evidence. We manipulated expectations about the onset of a dim visual target using a temporal cueing paradigm, and probed the locus of the expectation effect with two complementary approaches: drift diffusion modeling (DDM) of behavior, and estimation of the onset and progression of the decision process from a supramodal accumulation-to-bound signal in simultaneously measured EEG signals. Behavioral modeling and neural data provided strong, converging evidence for an account in which temporal expectations enhance perception by speeding up decision onset, without affecting evidence quality.

  • decision-making
  • EEG
  • expectation
  • perception
  • drift diffusion modeling

Introduction

To efficiently process the large amount of sensory information that we constantly receive, the brain actively predicts upcoming sensory input rather than passively registering it. One way the brain achieves this is by exploiting temporal contingencies in the continuous stream of sensory input. These contingencies can be used to prepare for relevant events and optimize processing of those events. The temporal expectations shaped by these contingencies have a profound impact on perception and action, enhancing the speed and, in some cases, the accuracy of responding in a wide range of information-processing tasks (Niemi and Näätänen, 1981; Hackley and Valle-Inclán, 2003; Nobre et al., 2007; Denison et al., 2017; Nobre and van Ede, 2018). Recent research has established that the brain expresses such temporal expectations by synchronizing oscillatory neural dynamics (power and phase) with the temporal structure of the environment (Busch et al., 2009; Schroeder and Lakatos, 2009; Stefanics et al., 2010; Cravo et al., 2013; Henry et al., 2014; van den Brink et al., 2014). For example, fluctuations in the amplitude of ongoing neural oscillations in the α band (9–12 Hz) closely track the time course of temporal expectations (Rohenkohl and Nobre, 2011; Zanto et al., 2011; Heideman et al., 2018). In contrast, the mechanisms through which temporal expectations enhance task performance remain unclear. The goal of the current study was to identify how temporal expectations shape perception.

Although there is a growing consensus that temporal expectations enhance the speed and/or accuracy of perceptual decisions (Correa et al., 2005, 2006a; Rolke and Hofmann, 2007; Jepma et al., 2012; Rohenkohl et al., 2012a; Vangkilde et al., 2012), different studies have arrived at different conclusions as to whether this is achieved through (1) expedited decision onset (Bausenhart et al., 2010; Seibold et al., 2011; Jepma et al., 2012) or (2) an increase in the quality of the sensory evidence (Rohenkohl et al., 2012a; Vangkilde et al., 2012; Cravo et al., 2013), two accounts that are not mutually exclusive. Under most computational frameworks for decision-making, decision onset is determined by the duration of non-decisional processes (i.e., sensory encoding), whereas evidence quality is equivalent to the mean rate at which evidence is accumulated (i.e., decision formation; Rohenkohl et al., 2012a; Vangkilde et al., 2012).

One way to address this discrepancy around the locus of temporal expectation effects is by decomposing performance on cognitive tasks into latent information-processing parameters using sequential-sampling models (Forstmann et al., 2016) and examining the effects of temporal expectations on those parameters. Previous attempts to discriminate between the decision onset account and evidence quality account employed such model-based analyses of behavioral data. Rather than solely considering average response times (RTs) and error rates, model-based analyses generally consider the full distributions of RTs and their relationship with error rates, which are expected to be influenced differentially by various latent parameters, depending on task constraints such as stimulus masking or response deadlines. However, recent studies on perceptual decision-making have revealed that conclusions based on prominent sequential-sampling models can in some cases be contradicted by complementary analyses of neural signatures of decision formation (McGovern et al., 2018; Spieser et al., 2018), suggesting that it is critical to corroborate insights from modeling with neural evidence. Specifically, non-invasive human EEG recordings have identified a domain-general build-to-threshold signal, the centroparietal positivity (CPP), that has been suggested to reflect decision formation via the gradual accumulation of sensory evidence (O'Connell et al., 2018). Importantly, measures of CPP onset latency and slope can be used to dissociate decision onset from other influences on the decision process (evidence strength and the rate of accumulation; Loughnane et al., 2016).

To address the outstanding discrepancy in the literature, we combined these modeling and electrophysiological approaches to examine the effects of temporal expectations on perceptual decision-making in a temporal cuing task, a powerful paradigm for manipulating temporal expectations (Coull and Nobre, 1998). The two approaches provided strong, converging evidence for an account in which temporal expectations enhance task performance specifically by shortening the time before the onset of the decision process and not by affecting the decision process itself. In addition, time-frequency analysis of the EEG data identified peri-stimulus α band power as a significant factor underlying the effect of temporal expectation on decision onset.

Materials and Methods

Participants

A total of 30 participants took part in the study. After EEG artifact rejection and predetermined exclusion criteria (see section EEG recording and preprocessing), a total of 21 participants remained (mean age 22.6 years, SD 2.4, range 18–28; 18 female). All participants had normal or corrected-to-normal vision, and were free from any neurologic or psychiatric disorders. Participants gave written informed consent and were compensated with €7.50 or course credit. The experiment was approved by the Leiden University Institute of Psychology Ethics Committee.

Task

The task consisted of the detection of the appearance of a small opening (decrease in luminance, 0.19° by 0.23° visual angle) on either side of a white square box (side length = 2.34° visual angle) presented on a black background (Fig. 1A). Participants reported the onset of the target stimulus (i.e., the small opening) by pressing the space bar of a computer keyboard with the index finger of their right hand, regardless of whether the target appeared on the left or the right. The target remained on the screen for a limited viewing time of one second, during which the participant was required to respond. If the participant responded after this time, the trial was counted as an error. Upon target offset, the surrounding box also disappeared briefly (50 ms) to signal the onset of a new trial. If the participant responded before target onset, the text “You responded too soon!” appeared in red and the trial was aborted. Task difficulty was manipulated by adjusting the luminance of the target stimulus. The decrease in luminance relative to the surrounding box on easy and difficult targets (28% and 20%) was determined in a pilot experiment (N = 6), and set such that its effect on RT was of approximately the same size as the effect of cue validity on RT for the short cue-target interval (CTI; see below). This allowed a fair dissociation between underlying model parameters and electrophysiological markers. We opted for a detection task rather than a discrimination task because temporal cuing effects on RT are consistently found for detection tasks but not discrimination tasks (Correa et al., 2004), and we intended to leverage such effects in modeling analyses and in probing the electrophysiological data for signatures of the underlying processes at play.

In order to manipulate temporal expectation about target onset, we used the temporal cueing paradigm (Coull and Nobre, 1998). Following a 1.5-s fixation interval, target onset was probabilistically cued by a brief (150 ms) auditory signal. Cue 1 (440 Hz) signaled that the target would likely appear after 2700 ms (long CTI), and cue 2 (1320 Hz) signaled that the target would likely appear after 1350 ms (short CTI). The CTIs were chosen based on prior work (Stefanics et al., 2010). Both cues had a validity of 80%, such that on 20% of the trials the target would appear after the uncued interval (invalid trials). The cues and the target were presented in different sensory modalities so that participants could optimally distinguish them. The cue was presented in the auditory domain to ensure that participants perceived the cue even when briefly losing fixation.

As is common in the temporal cueing literature, we included a small proportion (13%) of catch trials, on which no target appeared after the cue. This ensures that participants generally await target presentation at the long CTI and make few anticipatory responses (Correa et al., 2006b). These trials, along with a small number of trials (mean = 4.0, SD = 5.1) on which participants responded before target onset (i.e., false alarms), were excluded from all analyses, with the exception that false alarms were included to assess effects on error rate. Catch trials were excluded because they do not have an associated RT to fit (see below) and because they are unmatched to non-catch trials in terms of their hazard function for errors (i.e., compared with other trials, catch trials contain a longer time window during which a response could be made that would count as a false alarm).

Participants were instructed to maintain fixation throughout the task, to use the cue to speed up target detection, and to “respond as quickly and accurately as possible.” Participants performed eight blocks of 115 trials (920 trials in total), after briefly practicing the task beforehand (two blocks of 24 trials). The total duration of the task was ∼1 h.
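Under the proportions described above (80% cue validity, 13% catch trials, two CTIs, two difficulty levels), a trial schedule could be generated as in the following sketch. The structure and field names are illustrative, not the original experiment code:

```python
import random

def make_block(n_trials=115, p_catch=0.13, p_valid=0.80, seed=0):
    """Illustrative trial-schedule generator for a temporal cueing block.

    Each non-catch trial gets a cued CTI, a validity flag, an actual CTI,
    and a target difficulty; catch trials present a cue but no target.
    """
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        cued_cti = rng.choice(["short", "long"])   # interval the cue predicts
        if rng.random() < p_catch:                  # 13% catch trials
            trials.append({"cue": cued_cti, "catch": True})
            continue
        valid = rng.random() < p_valid              # 80% cue validity
        actual_cti = cued_cti if valid else ("long" if cued_cti == "short" else "short")
        trials.append({
            "cue": cued_cti,
            "catch": False,
            "valid": valid,
            "cti": actual_cti,
            "difficulty": rng.choice(["easy", "difficult"]),
        })
    return trials

# Generate a large number of trials to check the empirical proportions.
block = make_block(n_trials=10000, seed=1)
catch_rate = sum(t["catch"] for t in block) / len(block)
n_noncatch = sum(not t["catch"] for t in block)
valid_rate = sum(t["valid"] for t in block if not t["catch"]) / n_noncatch
```

With enough trials, the empirical catch and validity rates converge on the nominal 13% and 80%.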

Behavioral data analysis

Effects of cue validity, CTI, and difficulty on RT were tested with a repeated-measures ANOVA in JASP version 0.9.2 (JASP Team, 2019), with cue validity (valid or invalid), CTI (short or long), and difficulty (easy or difficult) as within-participant factors. Planned paired-samples t tests were conducted to examine the effect of cue validity on RT for the short CTI, the effect of difficulty on RT, and differences in the size of these two effects. Because we expected the effect of cue validity for the short CTI to be approximately the same size as the effect of difficulty, we calculated a Bayes factor (using default priors) for this statistical comparison to estimate the evidence for the null hypothesis of no difference. Bayes factors between zero and one indicate evidence for the null hypothesis, with 1/10 ≤ Bayes Factor (BF) ≤ 1/3 indicating “substantial” evidence for the null hypothesis (Wetzels and Wagenmakers, 2012).
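As a rough illustration of how a Bayes factor quantifies evidence for the null in a paired comparison, the sketch below uses the BIC approximation of Wagenmakers (2007). Note that JASP's default analysis uses a JZS prior, so its values will differ from this approximation; the example difference scores are made up:

```python
import math

def bf01_paired(diffs):
    """BIC approximation to the Bayes factor for a paired comparison
    (Wagenmakers, 2007): BF01 ~ exp((BIC1 - BIC0) / 2).

    BF01 > 1 favors the null (no difference); BF01 < 1 favors a difference.
    This is an illustrative approximation, not JASP's default JZS analysis.
    """
    n = len(diffs)
    mean = sum(diffs) / n
    ss1 = sum((d - mean) ** 2 for d in diffs)   # H1: mean free to vary
    ss0 = sum(d ** 2 for d in diffs)            # H0: mean fixed at zero
    # H1 has one extra free parameter (the mean), hence the + ln(n) penalty.
    delta_bic = n * math.log(ss1 / ss0) + math.log(n)
    return math.exp(delta_bic / 2.0)

# Made-up difference scores: one set centered near zero, one clearly nonzero.
bf_null = bf01_paired([0.1, -0.2, 0.05, -0.1, 0.15, -0.05, 0.0, 0.1, -0.15, 0.05])
bf_effect = bf01_paired([1.0, 1.2, 0.9, 1.1, 1.05, 0.95, 1.15, 0.85, 1.0, 1.1])
```

Near-zero differences yield BF01 above 1 (evidence for the null), while large consistent differences yield BF01 far below 1.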

Accuracy was analyzed using generalized mixed regression modeling, separately for the two types of possible errors on the task (misses and false alarms). This method allowed analyzing accuracy data at the single-trial level and was necessary given that the bounded nature of accuracy scores results in inherent violations of the assumptions made by ANOVA. We fitted random intercepts for each participant; error variance caused by between-subject differences was accounted for by adding random slopes to the model. The latter was done only when this significantly increased the model fit. We used logistic linear mixed models, for which χ2 statistics are reported. Model fitting was done in R (R Development Core Team, 2008) using the lme4 package (Bates et al., 2015).

Hierarchical drift diffusion modeling (DDM)

We decomposed behavioral data from the target-detection task into latent parameters of the decision process using the DDM, a popular instance of sequential-sampling models of RT tasks (Ratcliff and McKoon, 2008; Forstmann et al., 2016). The DDM assumes that for two-alternative forced choice decisions, noisy sensory evidence is accumulated from a starting point z, at drift rate v, toward one of two decision bounds (thresholds), labeled 0 and a. When the accumulated evidence reaches one of the two bounds, the corresponding decision is initiated. The distance between the bounds, referred to as boundary separation, is equal to a. The model ascribes all non-decisional processes, including sensory encoding and response execution, to a non-decision time parameter Ter.
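The accumulation process just described can be sketched in a few lines. The following is a generic Euler–Maruyama simulation of a single DDM trial with illustrative parameter values; it is not the authors' fitting code (which used HDDM):

```python
import math
import random

def simulate_ddm_trial(v=1.0, a=1.0, z=None, ter=0.3, s=1.0,
                       dt=0.001, max_t=5.0, rng=None):
    """Simulate one diffusion trial with bounds at 0 and a.

    v: drift rate; a: boundary separation; z: starting point (default a/2,
    unbiased); ter: non-decision time; s: within-trial noise SD.
    Returns (choice, rt), with choice 1 for the upper bound and 0 for the
    lower bound. Parameter values are illustrative.
    """
    rng = rng or random
    x = a / 2 if z is None else z
    t = 0.0
    step_sd = s * math.sqrt(dt)
    while t < max_t:
        x += v * dt + rng.gauss(0.0, step_sd)   # noisy evidence accumulation
        t += dt
        if x >= a:
            return 1, ter + t
        if x <= 0:
            return 0, ter + t
    # Time out: report whichever bound is closer (rare for strong drift).
    return (1 if x >= a / 2 else 0), ter + max_t

# With positive drift, most trials terminate at the upper bound,
# and every RT includes the non-decision time ter.
rng = random.Random(0)
choices, rts = zip(*(simulate_ddm_trial(v=2.0, rng=rng) for _ in range(500)))
p_upper = sum(choices) / len(choices)
```

Raising v increases the proportion of upper-bound crossings and speeds them up, while raising ter shifts all RTs by a constant.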

Most decision-making tasks require selecting between two overt responses. However, our task is similar to a go/no-go task, in that the participants have to arbitrate between a simple go decision when the target is presented and a no-go decision when the target is not presented. As a consequence, RTs for a no-go decision cannot be empirically measured. Therefore, in line with previous studies (Gomez et al., 2007; Zhang et al., 2016), we assumed an implicit absorbing lower decision bound for no-go decisions and an explicit absorbing upper boundary for go decisions. We then fitted the DDM to participants' decisions (i.e., the proportion of go and no-go decisions) as well as the distributions of RTs (for regular trials with correctly timed responses).

We fitted this DDM to behavioral data (choices and RTs), using the hierarchical Bayesian model fitting procedure implemented in the HDDM toolbox (version 0.6.1; Wiecki et al., 2013). The HDDM uses Markov-chain Monte Carlo sampling, which generates full posterior distributions over parameter estimates, quantifying not only the most likely parameter value but also uncertainty associated with each estimate. Because of the hierarchical nature of the HDDM, estimates for individual participants are constrained by group-level prior distributions. In practice, this results in more stable estimates for individual participants, especially when working with low trial numbers (Vandekerckhove et al., 2011; Wiecki et al., 2013).

For each model fit, we drew 100,000 samples from the posterior distribution. The first 10% of these were discarded as burn-in and every second sample was discarded for thinning, reducing autocorrelation in the chains. Group-level chains were visually inspected to ensure convergence, ruling out sudden jumps in the posterior and ruling out autocorrelation. Additionally, the model that is reported in the main text was run three times to compute Gelman–Rubin R̂ statistics (comparing within-chain and between-chain variance). We checked and confirmed that all group-level parameters had an R̂ value between 0.98 and 1.02, suggesting convergence between these three instantiations of the same model. Because individual parameter estimates are constrained by group-level priors, data are not independent and therefore frequentist statistics cannot be used. The probability that a condition differs from another can be computed by calculating the overlap in posterior distributions. The fits of models of different complexity were compared with each other by calculating the deviance information criterion (DIC; Spiegelhalter et al., 2002). Lower DIC values indicate that a model explains the data better, while taking model complexity into account. DIC differences >10 are generally taken as strong evidence for a difference in model goodness-of-fit.
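The R̂ diagnostic used above can be computed from multiple chains as follows. This is a generic implementation of the Gelman–Rubin statistic, not HDDM's own routine, and the chains in the example are synthetic:

```python
import random
import statistics

def gelman_rubin(chains):
    """Gelman–Rubin R-hat for one parameter, from chains of equal length.

    Compares within-chain variance (W) with between-chain variance (B);
    values near 1 indicate the chains sample the same distribution.
    """
    n = len(chains[0])
    means = [statistics.fmean(c) for c in chains]
    w = statistics.fmean(statistics.variance(c) for c in chains)  # within
    b = n * statistics.variance(means)                            # between
    var_hat = (n - 1) / n * w + b / n   # pooled posterior variance estimate
    return (var_hat / w) ** 0.5

# Synthetic example: three converged chains vs. one chain stuck elsewhere.
rng = random.Random(0)
good = [[rng.gauss(0, 1) for _ in range(1000)] for _ in range(3)]
bad = good[:2] + [[rng.gauss(5, 1) for _ in range(1000)]]
rhat_good = gelman_rubin(good)
rhat_bad = gelman_rubin(bad)
```

Converged chains give R̂ near 1 (as in the 0.98–1.02 range reported above), while a divergent chain inflates R̂ well above 1.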

Cue validity was expected to reliably influence behavior for early, but not late, targets (Correa et al., 2006b). The lack of an effect for the long CTI is caused by the fact that on invalidly cued long-CTI trials, the participant can reorient temporal attention from the expected short CTI to the actual long CTI, in time for the target to appear. For this reason, we only fitted the behavioral data from trials with a short CTI. Trials on which participants failed to respond (i.e., misses, mean = 24.4 trials, SD 18.1) were considered trials on which the no-go bound was crossed. For these types of trials, RTs were set to NaN (i.e., not a number), so that the (missing) RT did not contribute to the parameter estimation (as implemented in HDDM version 0.6.1), whereas the (erroneous) decision itself did contribute, thus facilitating accurate and reliable estimates of model parameters. The trials included for fitting the DDM were thus hits (trials with an RT between target onset and the response deadline), and misses (with RT set to NaN), both in the short CTI condition only.

In the main text, we focus on a model in which both drift rate and non-decision time were free to vary as a function of cue validity, and drift rate was also free to vary as a function of difficulty. Apart from this model, we fitted and compared a variety of different models in which drift rate was always allowed to vary as a function of difficulty, and in which all possible combinations of drift rate, non-decision time and/or boundary separation were allowed to vary as a function of cue validity. Previous fits of the DDM to go/no-go tasks have allowed for the possibility of a biased starting point z (Gomez et al., 2007). This was possible because the tasks to which these fits were made produced a moderate number of errors (and associated RTs) on no-go trials, which were essential for appropriately constraining estimates of the a and z parameters when both were allowed to vary. In our case, there were very few such trials (i.e., behavioral responses within the temporal interval in which targets could appear, but on trials on which the target was not presented), and we therefore lacked the appropriate constraint to estimate a and z in the same model. For this reason, all models assumed an unbiased starting point (z = a/2; see also Ratcliff and van Dongen, 2011). We note that if the key effect of temporal expectation on behavior was in fact on starting point, in our fits, this would load onto the boundary separation parameter (intuitively, if crossing of only one of the two available boundaries occurs regularly and yields measurable behavior, then a shift in starting point toward that boundary will generate behavior identical to that produced by a decrease in boundary separation of the same magnitude). However, the most complex model (in which all parameters depended on validity) yielded no significant effect of cue validity on decision bound, making it unlikely that shifts in starting point account for our main results.

Table 1. Trial type frequency per condition, in percent

An implicit assumption of the DDM is that evidence accumulation is triggered by the onset of the imperative stimulus, and that no integration takes place before this. While this assumption is reasonable for common perceptual discrimination tasks that employ suprathreshold stimuli with clearly discernible onsets (Roitman and Shadlen, 2002), it may be less valid for tasks such as ours in which detection of a low-intensity target stimulus reflects the culmination of the decision process. We acknowledge that a more appropriate conceptualization of the decision process in such contexts may be one of continuous, possibly leaky (Usher and McClelland, 2001; Ossmy et al., 2013) evidence accumulation that is triggered at trial (rather than target) onset, with temporal expectation effects arising through transient modulations of aspects of this continuous process. We propose that modeling of observed behavior on our task using the DDM nonetheless provides a highly useful simplification of such a process, for the following reasons. First, the key behavioral measures for dissociating candidate characterizations of continuous, prestimulus accumulation, and effects of temporal expectation thereon, are the occurrence and timing of false alarms. However, false alarm rates were very low in our data (Table 1) and were not affected by temporal expectation (see Results). These findings suggest that prestimulus accumulation was not a central determinant of decisions on our task (a point corroborated by our analysis of ERPs to expected but omitted targets; see below), was not clearly modulated by temporal expectation, and moreover that our data provided insufficient constraint for modeling this process. 
Second, whether one assumes target-onset-triggered evidence accumulation or continuous accumulation that is transiently modulated by temporal expectation without affecting pretarget behavior, effects of temporal expectation on non-decision time, drift rate, or decision bound will in both cases have markedly different consequences for post-target detection behavior, and these consequences are discernible through the DDM-based modeling scheme that we employed: a change in non-decision time primarily serves to offset the RT distribution by some constant, whereas changes in drift rate and/or bound affect both RT and accuracy in dissociable ways. Thus, while our modeling likely provides a simplified account of how the decision process unfolds across entire trials of our task, it nonetheless permitted us to meaningfully test our key hypotheses.
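This dissociation, in which a non-decision time change shifts the RT distribution by a constant without touching accuracy whereas a drift rate change alters both, can be demonstrated with a toy simulation. Parameter values are illustrative; the starting point is unbiased, as in the fitted models:

```python
import math
import random

def sim(v, ter, n=2000, a=1.0, dt=0.002, seed=0):
    """Toy diffusion simulation (unbiased start at a/2).

    Returns (hit rate, mean RT) for upper-bound ('go') crossings.
    All parameter values are illustrative.
    """
    rng = random.Random(seed)
    step_sd = math.sqrt(dt)
    rts, hits = [], 0
    for _ in range(n):
        x, t = a / 2, 0.0
        while 0.0 < x < a and t < 3.0:
            x += v * dt + rng.gauss(0.0, step_sd)
            t += dt
        if x >= a:
            hits += 1
            rts.append(ter + t)
    return hits / n, sum(rts) / len(rts)

acc_base, rt_base = sim(v=1.5, ter=0.30)
acc_ter,  rt_ter  = sim(v=1.5, ter=0.20)   # shorter non-decision time
acc_v,    rt_v    = sim(v=2.5, ter=0.30)   # higher drift rate
# Ter change: identical accuracy, RT distribution offset by 0.10 s.
# Drift change: faster RTs *and* higher accuracy.
```

Because the same seed is reused, the Ter manipulation leaves every decision trajectory unchanged and only offsets RT, making the contrast with the drift manipulation exact.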

Table 2. Models ranked in terms of DIC

EEG recording and preprocessing

EEG data were recorded using a BioSemi ActiveTwo system from 64 channels placed according to the international 10/20 system. Additionally, a reference electrode was placed on each mastoid, and bipolar electrooculogram (EOG) recordings were obtained from electrodes placed ∼1 cm lateral of the outer canthi (horizontal EOG) and from electrodes placed ∼1 cm above and below the left eye (vertical EOG). During acquisition, impedances were kept below 30 kΩ. The EEG signal was preamplified at the electrode to improve the signal-to-noise ratio with a gain of 16×, and digitized at 24-bit resolution with a sampling rate of 1024 Hz. Each active electrode was measured online with respect to a common mode sense (CMS) active electrode producing a monopolar (non-differential) channel. All EEG data were analyzed in MATLAB 2012a, using the EEGLAB toolbox (Delorme and Makeig, 2004) and custom code. EEG data were re-referenced off-line to the average of the mastoid channels. Artifactual channels were interpolated with cubic spline interpolation.

In order to preserve the temporal characteristics of the slowly evolving CPP, we adopted a two-stage cleaning procedure. First, to remove drifts, the continuous EEG data were high-pass filtered offline at 0.5 Hz, and segmented from −4.8 s until +1 s surrounding target onset and surrounding response onset. An automatic algorithm detected trials that contained artifacts based on the following criteria: channel joint probability (5.5), channel kurtosis (5.5), and absolute voltage deflections (2 mV). Next, we detected eye-movement and blink artifacts using joint approximation diagonalization of eigen matrices (JADE) independent component analysis (ICA). The ICA weights were subsequently stored and projected onto the raw unsegmented data to which no high-pass filter had been applied. This back-projection served two purposes: to prevent ICA from explaining variance solely attributable to large drifts in the unfiltered data (and thus misidentifying eye-blink-related components), and to ensure that trials would not meet artifact-rejection criteria because of eye blinks alone.

After removing artifactual ICA components from the unfiltered data, the data were again high-pass filtered (using a two-way least squares FIR filter with a Hanning smoothed kernel) but with a lower frequency cutoff (0.1 Hz) to preserve slow-varying components in the data. The data were segmented, and the automatic artifact detection algorithm was applied, this time with a more stringent absolute voltage deflection criterion (300 μV). All data were then manually checked for residual artifacts, and any remaining artifactual trials were removed. Following artifact rejection, the data were low-pass filtered at 6 Hz. Participants for whom <50 trials remained in a single condition were excluded from further analysis to ensure that enough trials remained for single-trial analyses. This applied to eight participants. For one additional participant, EEG data were inadvertently not saved to disk, making the final sample 21 participants. This exclusion criterion was determined a priori, and data of the excluded participants were not analyzed further. Finally, the data were re-referenced to the common average, and converted to current source density to prevent overlap between the CPP and the frontocentral negativity (Kelly and O'Connell, 2013).
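The two-way 0.1 Hz high-pass step can be sketched as follows. The toy sampling rate, kernel length, and test signal are illustrative choices, and a windowed-sinc (Hann) kernel with scipy's zero-phase filtfilt stands in for the least-squares two-way FIR filter used in the original pipeline:

```python
import numpy as np
from scipy import signal

fs = 100.0        # toy sampling rate (Hz); the actual recording was 1024 Hz
cutoff = 0.1      # high-pass cutoff (Hz), as in the text

# A 0.1 Hz cutoff demands a long kernel; force an odd length, as required
# for a high-pass FIR design.
numtaps = int(3 * fs / cutoff) | 1
kernel = signal.firwin(numtaps, cutoff, pass_zero=False, window="hann", fs=fs)

# Toy signal: large slow drift (0.02 Hz) plus 10 Hz oscillatory activity.
t = np.arange(0, 120, 1 / fs)
eeg = 50 * np.sin(2 * np.pi * 0.02 * t) + np.sin(2 * np.pi * 10 * t)

# filtfilt applies the kernel forward and backward (zero phase), in the
# spirit of the "two-way" filtering described above.
clean = signal.filtfilt(kernel, [1.0], eeg)
```

After filtering, the central portion of the signal retains the 10 Hz activity while the large drift is suppressed.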

CPP identification and parameter estimation

Based on the response-locked scalp topography and following prior work (O'Connell et al., 2012; Twomey et al., 2015), all CPP-related analyses were conducted on the average of three centro-parietal EEG channels (P1, P2, and Pz). Significant deviation of the CPP from baseline was examined for each time point from 200-ms pretarget to 800-ms post-target onset using permutation testing (10,000 iterations), correcting for multiple comparisons using the false discovery rate (FDR; q = 0.05).

To estimate CPP slope and onset, we fitted a two-part line segment connected by a central inflection point (Fig. 2) to the data in a window starting at −200 ms prestimulus and ending at the time of the response (single-trial analyses) or the peak latency of the CPP (trial-average analyses). The difference in the length of the fitting window between single-trial and trial-average analyses was motivated by the fact that responses clearly demarcate the termination of the decision (and thus provide a good upper time limit on any single-trial electrophysiological signature of the decision process), whereas trial-average waveforms do not have an associated response, so the peak CPP provided an alternate approximation of the point at which the decision process was terminated. The fitted preinflection line segment was constrained to have zero slope and amplitude. The two free parameters of this piece-wise function (time of inflection point and slope of postinflection line) were fit via the Nelder–Mead simplex optimization routine (the fminsearchbnd MATLAB function), which minimized the sum of squared residuals between the fit and the observed CPP. The latency of the fitted inflection point was taken as the onset latency of the CPP, and the slope of the line segment following the inflection point was taken as the slope of the CPP. In single-trial analyses, trials with unrealistic onset latencies (before stimulus onset or >600 ms after stimulus onset) or an unrealistic slope (zero or negative) were excluded. Although such trials were rare (average percentage of excluded trials: 0.21%; range: 0–0.62%), excluding them was deemed necessary to avoid contamination by residual artifacts at the start of the trial and motor-related artifacts toward the end of the trial.
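A Python analogue of this piecewise fitting procedure (Nelder–Mead on onset time and slope, as with fminsearchbnd) might look like the sketch below. The synthetic CPP parameters are illustrative, and the function is a re-implementation, not the authors' code:

```python
import numpy as np
from scipy.optimize import minimize

def fit_cpp_onset(t, y):
    """Fit a two-part line: zero until onset t0, then a linear rise with
    slope b. Returns (t0, b), minimizing the sum of squared residuals
    via Nelder-Mead (a Python stand-in for fminsearchbnd)."""
    def ssr(params):
        t0, b = params
        pred = np.where(t < t0, 0.0, b * (t - t0))  # flat, then rising
        return np.sum((y - pred) ** 2)
    # Start from the window midpoint and a crude overall-slope estimate.
    x0 = [t[len(t) // 2], (y[-1] - y[0]) / (t[-1] - t[0])]
    res = minimize(ssr, x0=x0, method="Nelder-Mead")
    return res.x

# Synthetic single-trial "CPP": onset at 0.25 s, slope 40 units/s, noise.
rng = np.random.default_rng(0)
t = np.linspace(-0.2, 0.8, 500)
y = np.where(t < 0.25, 0.0, 40 * (t - 0.25)) + rng.normal(0, 0.5, t.size)
t0_hat, slope_hat = fit_cpp_onset(t, y)
```

On this synthetic trial, the fitted inflection point and slope recover the generating onset and build-up rate to good precision.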
One participant did not show a clear peak in the CPP, so for that participant, we used the time of the maximum first-order derivative (i.e., the time point where CPP build-up rate started to decline) as a substitute for peak latency for fitting in trial-average analyses. In cases where we used slope in between-participant correlations, we normalized the CPP amplitude across participants before computing slope, to counteract arbitrary differences attributable to scalp conduction properties, electrode impedances, and other sources of spurious variance. This method of estimating CPP parameters yielded accurate fits to the data: the mean correlation between the fitted function and the data was 0.98 (range: 0.95–0.99).

Because the fitting window for parameter estimation at the single-trial level varied with RT, and RT in turn varied as a function of task condition, it was conceivable that estimated CPP onset would vary as a function of task condition by chance alone. Therefore, we computed a permuted null distribution by randomly sampling a time point within the fitting window for each trial, 10,000 times. We then computed a trial-average value, separately for each condition and iteration of the permutation test, and subsequently averaged across participants. A p value for the true participant-average between-condition difference in estimated CPP onset was then computed as the proportion of observations in the permuted null distribution equal to or larger than the true value. Thus, the p value reflects the significance of the effect of condition on CPP onset latency, beyond the effect of between-condition differences in the fitting window alone. In addition to the p value, we report a 95% confidence interval (CI) around the mean difference between conditions, computed by summing the true mean difference between conditions and the 5th and 95th percentiles of the permuted null distribution.
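The logic of this permutation test can be sketched as follows. This is a simplified single-participant version; the function name and inputs are hypothetical, not the authors' code (win_a and win_b hold each trial's fitting-window start and end):

```python
import numpy as np

rng = np.random.default_rng(0)

def onset_permutation_test(onsets_a, onsets_b, win_a, win_b, n_perm=10000):
    """Test a between-condition difference in mean CPP onset against a null
    in which each trial's onset is drawn uniformly at random from that
    trial's own fitting window (so the null absorbs window differences)."""
    true_diff = np.mean(onsets_a) - np.mean(onsets_b)
    null = np.empty(n_perm)
    for i in range(n_perm):
        rand_a = rng.uniform(win_a[:, 0], win_a[:, 1])   # random onset per trial
        rand_b = rng.uniform(win_b[:, 0], win_b[:, 1])
        null[i] = np.mean(rand_a) - np.mean(rand_b)
    # p: proportion of null differences at least as large as the observed one
    p = np.mean(null >= true_diff)
    # CI: observed difference plus the 5th/95th percentiles of the null
    ci = (true_diff + np.percentile(null, 5), true_diff + np.percentile(null, 95))
    return p, ci
```

In the study the trial-level values were first averaged within condition and then across participants before comparison; the sketch collapses this into one step for clarity.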

Posterior α band power

We used Morlet wavelets to decompose the EEG data into its spectral representation. The wavelets were linearly spaced from 1 to 30 Hz with a cycle range from 3 to 12 (and were defined as in van den Brink et al., 2014). Power for each trial was expressed as a percentage change from a frequency-specific prestimulus baseline (−400 to −100 ms). Baseline power was calculated as the average across channels and conditions during the baseline period, to preserve potential between-condition differences in power before stimulus onset as well as their topographical distribution. We then averaged power in the α band (9–12 Hz) across 14 parieto-occipital channels, selected based on prior work on the relationship between α power and temporal expectations (Rohenkohl and Nobre, 2011).
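A minimal single-channel version of this decomposition might look as follows. This is an illustrative sketch, not the authors' implementation (which followed van den Brink et al., 2014); the normalization choice here is one of several common conventions:

```python
import numpy as np

def morlet_power(signal, sfreq, freqs, n_cycles):
    """Time-frequency power for one channel via complex Morlet wavelet
    convolution. Returns an array of shape (n_freqs, n_times)."""
    powers = []
    for f, n_cyc in zip(freqs, n_cycles):
        sigma_t = n_cyc / (2 * np.pi * f)              # wavelet width in time
        t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / sfreq)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wavelet /= np.sum(np.abs(wavelet))             # amplitude normalization
        analytic = np.convolve(signal, wavelet, mode="same")
        powers.append(np.abs(analytic) ** 2)           # power envelope
    return np.array(powers)
```

Percentage change from baseline would then be computed per frequency as 100 × (power − baseline) / baseline, with baseline power averaged over the −400 to −100 ms window, channels, and conditions, as described above.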

We first examined whether α power was modulated by expectations about stimulus onset by comparing valid with invalid trials at the short CTI, using nonparametric permutation testing (10,000 iterations); we expected larger α power around stimulus onset on invalid trials (i.e., when a target was presented but the participant did not expect it). We then sorted trials by α power in a −25- to 75-ms window surrounding target onset (i.e., the earliest window that showed an effect of expectation on α power), computed the average CPP and RT for three equal-sized bins of trials, and estimated CPP slope and onset using the fitting procedure described above. We performed the binning procedure separately for each of the eight conditions of the task design to ensure that a potential relationship between α power and CPP parameters was not driven by between-condition differences in RT that covaried with α power (because of the task design), but instead reflected intrinsic trial-by-trial covariation within a given condition. Finally, for each participant we averaged across the eight conditions, fitted a straight line to the values of RT, CPP slope, and CPP onset across the three bins, and compared the distribution of slopes across participants to zero using permutation testing (10,000 iterations).

Experimental design and statistical analyses

The study followed a within-participants design with one session per participant. RT was analyzed with a repeated-measures ANOVA. Error rates were analyzed with generalized mixed regression modeling (see above, Behavioral data analysis). Effects of expectation on DDM parameters were assessed with a hierarchical fitting procedure (see above, Hierarchical drift diffusion modeling). Effects of expectation on CPP parameters were examined using nonparametric permutation testing (see above, CPP identification and parameter estimation). Effects of expectation on α power, and the relationship between α power, CPP parameters, and RT, were examined using permutation testing (see above, Posterior α band power). This study was not preregistered.

Data and code availability

The raw and processed data, as well as code to reproduce the results are publicly available without restriction [osf.io/bs3gj/ and github.com/rudyvdbrink/Temporal_expectation_CPP].

Results

Twenty-one individuals each performed 920 trials of a speeded, visual target-detection task in which perceptual difficulty was manipulated by adjusting the luminance of the target stimulus (Fig. 1A). Expectations about the timing of the target were manipulated using auditory cues. On each trial, one of two cue tones probabilistically predicted the onset time of the subsequent target. One tone was followed by an early target (1350 ms after the cue) on 80% of trials and a late target (2700 ms after the cue) on 20% of trials. For the other tone, these contingencies were reversed. Participants were instructed to use the temporal information signaled by the cue to optimize their RTs.

Figure 1.

Task design (A) and behavioral results (B). The error rate includes both types of errors (false alarms and misses). Horizontal markers on the box plots indicate the median, the bottom and top edges indicate the 25th and 75th percentile, and the whiskers extend to the most extreme non-outlier data points. Outliers are shown as circles. CTI: cue-target interval.

Figure 2.

Method for estimating CPP slope and onset. Two straight lines that share an inflection point were fitted to the data with the constraint that the preonset part must have a slope and amplitude of zero, and the postonset part must have a positive slope. The window for fitting was from 200 ms before stimulus onset until the time of response (for single-trial analyses) or until the peak latency of the CPP (for trial-average analyses).

Temporal expectations influence task performance

The behavioral results were as expected, displaying clear effects of temporal expectation and task difficulty on RT (Fig. 1B). Repeated-measures ANOVAs yielded significant main effects on RT of validity (F(1,20) = 9.70, p = 0.005; ηp² = 0.327), CTI (F(1,20) = 68.33, p < 0.001; ηp² = 0.774), and difficulty (F(1,20) = 75.40, p < 0.001; ηp² = 0.790). The only significant interaction was between validity and CTI (F(1,20) = 32.11, p < 0.001; ηp² = 0.022). On short-CTI trials, participants responded more quickly when they expected the target than when they did not (t(20) = 6.40, p < 0.001; BF = 6401.65), the typical temporal cuing effect. On long-CTI trials, we did not find an effect of cue validity on RT: on invalidly cued long-CTI trials, the target hazard rate increases over time, despite a proportion of catch trials, leading participants to reorient their attention toward the long CTI and thus mitigating a temporal cuing effect (Correa et al., 2006b). Thus, in line with previous findings (Coull and Nobre, 1998; Nobre et al., 2007), RTs for the short CTI decreased with increasing target probability, indicating that participants used the temporal information conveyed by the cues.

Results from a generalized mixed regression model predicting the proportion of misses (i.e., go trials without a response; Table 1) from validity, difficulty, and CTI and all interactions between these predictors showed significant main effects of validity (χ2(1) = 4.87, p = 0.027, odds ratio = 1.18), difficulty (χ2(1) = 64.94, p < 0.001, odds ratio = 1.90), and CTI (χ2(1) = 31.05, p < 0.001, odds ratio = 1.96). There were no significant interaction effects (all ps > 0.49). We subsequently tested whether there was an effect of validity for either of the CTIs considered separately. This was not the case, both when collapsing across difficulty levels (short CTI: χ2(1) = 1.60, p = 0.21, odds ratio = 1.18; long CTI: χ2(1) = 3.75, p = 0.053, odds ratio = 1.23) and for the individual difficulty levels alone (short CTI, easy: z = −0.57, p = 0.57, odds ratio = 1.11; short CTI, difficult: z = −1.16, p = 0.25, odds ratio = 1.18; long CTI, easy: z = −1.58, p = 0.12, odds ratio = 1.41; long CTI, difficult: z = −1.22, p = 0.22, odds ratio = 1.23). We next examined the proportion of false alarms (i.e., trials with a response before target onset; Table 1). Because for this type of error participants responded before stimulus onset and thus never saw the stimulus, we collapsed across difficulty levels. Results showed a main effect of CTI (χ2(1) = 16.07, p < 0.001, odds ratio = 5.98), but not of validity (χ2(1) = 0.12, p = 0.73, odds ratio = 1.26). Although the interaction between CTI and validity was significant (χ2(1) = 4.53, p = 0.033, odds ratio = 0.46), the simple main effects of validity were not (short CTI: z = 1.70, p = 0.09; long CTI: z = −1.33, p = 0.18).

Thus, the predominant effect of temporal expectation was manifested in RT, on the short CTI, where participants responded faster when they expected the stimulus compared with when they did not expect it. The effect of cue validity on RT at the short CTI (Δ 32 ms, SD 23 ms) and the effect of difficulty (Δ 31 ms, SD 16 ms) were statistically indistinguishable (t(20) = 0.20, p = 0.84), with a Bayes factor of 0.23, indicating “substantial” evidence for the null hypothesis of no difference between these effect sizes (Wetzels and Wagenmakers, 2012). This null finding enabled a fair dissociation between the distinct computational parameters that we assumed to underlie these RT effects (see below).

DDM supports decision onset account

We examined the impact of temporal expectations on latent aspects of decision-making by fitting the DDM to participants' behavioral data, using a hierarchical Bayesian model fitting procedure (see Materials and Methods; Wiecki et al., 2013). The DDM is a prominent mathematical model of simple decisions like those made in our target-detection task, and can parsimoniously account for RT distributions and choice data across a wide array of tasks (Forstmann et al., 2016). The model assumes that noisy sensory evidence is accumulated over time until one of two opposing bounds is reached, at which point a decision is made in favor of the corresponding alternative. In the application of the DDM that we used (Gomez et al., 2007; Zhang et al., 2016), the upper and lower bounds correspond to a "go decision" (a decision to execute a response, as expected when a target is presented) and an implicit "no-go decision" (a decision to withhold an overt response, as expected when a target is not presented; Fig. 3A). Previous work has found that DDMs with an implicit boundary for no-go decisions provide a better fit to go/no-go task data than single-threshold variants of the model (Gomez et al., 2007). Core DDM parameters include drift rate, v (which is determined by the quality of the sensory evidence and relates monotonically to the mean rate of evidence accumulation toward the correct decision bound); boundary separation, a (the distance between the two decision bounds, which captures response caution); and non-decision time, Ter (capturing the time needed for sensory encoding and response execution).
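To build intuition for how these parameters shape behavior, the go/no-go variant can be simulated directly. This is an illustrative sketch of the generative model, not the hierarchical fitting procedure used in the study, and the parameter values in the test are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_gonogo_ddm(v, a, ter, n_trials=1000, dt=0.001, noise=0.1):
    """Simulate the go/no-go DDM sketched above: after non-decision time Ter,
    evidence starts unbiased at a/2 and drifts at mean rate v with Gaussian
    noise until it hits the upper (go) or lower (implicit no-go) bound.
    Returns RTs for go decisions; NaN marks no-go trials."""
    rts = np.full(n_trials, np.nan)
    for i in range(n_trials):
        x, t = a / 2.0, 0.0
        while 0.0 < x < a:
            x += v * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        if x >= a:                 # upper bound reached: overt "go" response
            rts[i] = ter + t
    return rts
```

In such simulations, reducing Ter shifts the entire go-RT distribution earlier without changing accuracy, whereas reducing v slows accumulation and increases the miss rate, which is the dissociation the model comparison below is designed to detect.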

Figure 3.

Hierarchical DDM results. A, Schematic and example traces of the go/no-go DDM. Following a period of non-decision time (Ter), noisy evidence accumulates over time at mean drift rate (v) until a threshold (0 or a) is reached. In our implementation, starting point (z) was unbiased. B, Empirical (data) and simulated (model) defective cumulative quantile probability (left) and mean RTs (right). In the right panel, small dots show individual participants. Large dots show the group means. Error bars show the within-participant SEM. C, Non-decision time (Ter) differed between valid and invalid trials. D, Drift rate (v) did not differ between valid and invalid trials but did differ between easy and difficult trials. Vertical dotted lines in C, D show the average of the participant-specific maximum a posteriori parameter estimates. Distributions show the group posterior. Statistical significance is reflected in the overlap between distributions. E, Across-participant correlation between the effects of cue validity on RT and Ter. The dotted line shows the identity line. The solid gray line shows a least-squares regression line.

To directly contrast the predictions of the decision onset account of temporal expectations with an account wherein expectations affect evidence quality (and thus accumulation rate), we examined a model fit to data from short-CTI trials in which non-decision time (Ter) and drift rate (v) were free to vary as a function of cue validity. Drift rate was also free to vary as a function of difficulty, in line with common findings that differences in stimulus strength are well captured by this parameter (Ratcliff, 2002). The model accurately reproduced observed RTs, with correlation coefficients between condition-specific mean RTs in data and model >0.84, and the empirical data fell within the 95% credibility interval of the model (Fig. 3B). Model fits showed a significant difference in Ter between valid and invalid trials (p = 0.002; Fig. 3C), indicating a temporal expectation effect on non-decisional processes. In contrast, v did not differ between valid and invalid trials (p = 0.30; Fig. 3D), which is inconsistent with the evidence quality account. Instead, as expected, v differed between easy and difficult trials (p = 0.004; collapsed across validity conditions).

Several features of our results showed that the effect of cue validity on RT was primarily driven by a change in Ter. First, the effect of cue validity on Ter (Δ 29 ms, SD 17 ms) and the effect of cue validity on RT (Δ 32 ms, SD 23 ms) were statistically indistinguishable (BF = 0.30). Second, the participant-specific effect of cue validity on RT was positively correlated with the effect of cue validity on Ter (r = 0.64, p = 0.002; Fig. 3E). Moreover, the slope of the least-squares regression line that captured this relationship (0.87; 95% CI determined via bootstrapping: 0.42–1.17) did not differ from the slope of the identity line (1.00, reflecting an idealized case where the cuing effect on RT is fully captured by Ter). The cross-participant correlation between the effects of difficulty on RT and on v was also significant (r = −0.63, p = 0.002); by contrast, there was no significant correlation between the effects of cue validity on v and RT (r = 0.32, p = 0.17). Together, these findings show that Ter, not v, was the primary factor driving the cue-validity effect on RT.

To further assess the effect of temporal expectations on the different model parameters, we compared a range of models (including the model described above) that differed with regard to which parameter or combination of parameters was allowed to vary as a function of cue validity. In all models, v was free to vary as a function of difficulty. To compare the adequacy of the seven models in explaining the observed data we used the DIC metric, a statistical criterion for model selection that takes into account model complexity. As can be seen in Table 2, the model in which only Ter was free to vary between valid and invalid trials provided the best fit of the data, a significantly better fit than the model ranked second (Ter + a + v, ΔDIC = 10) and the v-only model (ΔDIC = 179). Notably, the four best fitting models were all of the models in which Ter was allowed to vary between valid and invalid trials. Thus, our DDM analyses strongly supported an account whereby temporal expectations selectively act to hasten non-decisional processes but do not affect the process of evidence accumulation itself.

The v account predicts that a change in temporal expectations should result in a concomitant change in accuracy, which we did not reliably observe at the short CTI. However, accuracy on our task was high overall (by design), which may have absorbed a potential effect on accuracy. Alternatively, it is possible that a change in v because of expectation occurred simultaneously with a change in a, leaving accuracy levels unaffected while improving RT. To exclude this possibility, we fit an additional model in which v and a were free to vary as a function of cue validity, v was additionally free to vary as a function of difficulty, and Ter was not free to vary. The DIC score of this model (−7497) was significantly higher (indicating a poorer fit) than that of the best-performing model (Table 2) in which the effects of temporal expectation were captured by Ter only (ΔDIC = 125). Moreover, a model in which all three parameters were free to vary as a function of validity did not yield a significant effect of validity on v or on a (v: p = 0.74, a: p = 0.23). Thus, although changes in v and a could jointly produce reduced RTs without changes in accuracy, our results indicate that the effects of temporal expectations on performance were better captured by a change in Ter only.

Target detection is associated with an electrophysiological signature of decision-making

Previous work has identified an electrophysiological signature of evidence accumulation during decision-making tasks: a supramodal, centroparietal positivity (CPP) in the event-related potential that exhibits the typical dynamics of a build-to-threshold decision signal (O'Connell et al., 2012; Kelly and O'Connell, 2013; Twomey et al., 2015). The CPP is motor-independent: it is robustly observed in cases where no motor response is required (O'Connell et al., 2012) and its dynamics are clearly dissociable from motor signals (Twomey et al., 2015), indicating that this signal reflects decision formation and not merely motor preparation or execution. The CPP thus provided us with a neural measure by which to examine the effect of temporal expectation on decision formation (Loughnane et al., 2016).

In line with previous work, the CPP in our data showed a significant (p < 0.05, FDR-corrected), gradual build-up following target onset, peaked around the time of the response, and was maximal over centroparietal electrodes (Fig. 4A). Moreover, the CPP was uncontaminated by visual-evoked potentials, as our task was designed to minimize sensory transients. The CPP also exhibited the typical relationship with RT that might be expected from a build-to-threshold decision variable signal. Specifically, we split each participant's RT distribution into equal-sized fast, medium, and slow bins (within CTI, difficulty, and validity conditions) and plotted the average waveforms aligned to target presentation and response execution for each bin (Fig. 4B). Consistent with previous observations (O'Connell et al., 2012; Kelly and O'Connell, 2013) the onset and peak latencies of the stimulus-locked CPP directly scaled with RT, while its slope (i.e., build-up rate) was inversely proportional to RT (for fits of individual participants, and cross-participant correlations with behavior and model parameters, see Fig. 5). Furthermore, the response-locked CPP reached a stereotyped amplitude at the time of response execution, consistent with fixed decision bounds. Combined, these findings are consistent with the notion that during a detection task with high accuracy, the CPP tracked the gradual accumulation of stationary sensory evidence toward a decision bound.

Figure 4.

Condition-average CPP. A, Stimulus-locked and response-locked CPP waveforms and scalp topography. Error bars show the within-participant SEM. The highlighted electrodes, P1, P2, and Pz, were used for all analyses. B, CPP characteristics scale with RT. Single trials were sorted as a function of RT and divided into equal-sized fast, medium, and slow bins, separately for each participant and task condition, then averaged. Colored vertical dotted lines show the average RT per bin. Dots below the traces show the CPP onset latency. Error bars show the within-participant SEM. Gray lines show data of individual participants. CI: permutation-derived 95% confidence interval around the mean difference between conditions.

Figure 5.

CPP fits per participant, and relationship with other measures. A, Condition-averaged CPP for each participant. Solid red lines show the fitted CPP traces. Red dashed lines show the estimated CPP onset latency. B, Cross-participant correlations between model parameters, RT, and three CPP measures: CPP onset latency, CPP slope, and CPP peak latency. Before estimating slope, the individual CPP curves were scaled to fixed height to correct for arbitrary amplitude differences between participants.

Temporal expectation and task difficulty have dissociable effects on CPP onset latency and slope

The decision onset account predicts that temporal expectation reduces the onset latency of the CPP. In contrast, an effect of temporal expectation on evidence quality predicts a steeper slope (i.e., build-up rate) of the CPP (Kelly and O'Connell, 2013). Trial-average analyses (Fig. 6A,B) provided preliminary support for the decision onset account. Cue validity showed a trend-level effect on onset latency in the expected direction (p = 0.097, 95% CI: −56.71, 6.23), but no effect on slope (p = 0.555, 95% CI: −0.01, 0.01). In contrast, difficulty affected the CPP slope (p = 0.010, 95% CI: 0.002, 0.013), reflecting the expected shallower CPP slope on difficult trials, but not the onset latency (p = 0.775, 95% CI: −23.65, 7.18). This also indicated that the absence of an effect of cue validity on CPP slope was not because of a lack of sensitivity in accurately measuring CPP slope, because task difficulty and cue validity had comparable effects on RT. Furthermore, the latency between the peak of the response-aligned CPP and the motor response (a proxy for the time between decision and response execution) did not differ between valid and invalid trials (p = 0.31, BF = 0.38). This suggested that the effect of cue validity on Ter (see above, DDM supports decision onset account) was unlikely to be because of between-condition differences in postdecisional processing and, instead, likely reflected differences in decision onset.

Figure 6.

Effects of cue validity and difficulty on the CPP. Effect of cue validity (A) and difficulty on the trial-averaged CPP (B). Dots and lines below the ERP waveforms show the average estimated CPP onset latency and within-participant SEM. C, Comparison between CPP measures extracted at the single-trial level. Error bars reflect the within-participant SEM. Gray lines show data of individual participants, centered with respect to the condition and participant average. CI: permutation-derived 95% confidence interval around the mean difference between conditions.

The trial-average analyses may have underestimated the true effect of validity on CPP onset latency if invalid short-CTI trials (in which the early target was not expected) were associated with larger trial-by-trial variability in onset latency. In that case, the onset latency distribution for invalid trials might have a wider left tail, which would have led to an earlier onset of the trial-averaged CPP, diminishing the difference in CPP onset latency between valid and invalid trials. To examine this possibility, we computed single-trial measures of CPP onset latency and slope (see Materials and Methods, CPP identification and parameter estimation). Although the mean single-trial estimated CPP onset latencies were significantly correlated with trial-average estimates across participants (r = 0.73, p < 0.001), these data confirmed that invalidly cued short-CTI trials were indeed associated with increased variability in onset latency compared with validly cued short-CTI trials (p = 0.003, 95% CI: −17.91, −4.07; Fig. 6C). This was not simply a consequence of variation in the quality of CPP model fits, as the correlation between estimated CPP and data at the single-trial level did not differ between valid and invalid conditions (valid: 0.82; invalid: 0.83; t(20) = 0.63, p = 0.53, BF = 0.27). Our key predictions might therefore be addressed more reliably using single-trial analyses.

To avoid spurious effects of trial-to-trial variability on the shape of the trial-average CPP waveform, we examined the effects of temporal expectation and task difficulty on single-trial measures of CPP onset latency and CPP slope (Fig. 6C). These analyses revealed that the CPP onset latency was significantly shorter on validly cued trials than on invalidly cued trials (p = 0.008, 95% CI: −16.30, −2.93), while the CPP slope did not differ (p = 0.480, 95% CI: −0.003, 0.003, BF = 0.23). As expected, and in line with the trial-average analyses, CPP slope was shallower for difficult trials than for easy trials (p = 0.021, 95% CI: 0.0005, 0.0054).

Our finding of a cue-validity effect on CPP onset latency is consistent with the notion that temporal expectation hastens decision onset. However, an alternative possibility is that on validly cued (vs invalidly cued) short-CTI trials, when the target was expected to appear after the short CTI, participants engaged in premature sampling (of noise) on a proportion of the trials (Laming, 1979; Jepma et al., 2012; Devine et al., 2019). That is, participants may have anticipated the arrival of a target stimulus and sped up their responses by starting to sample information from the display at the moment they thought the stimulus would be presented. On a proportion of trials they may have started sampling too early, and if these premature sampling trials are included in the analysis, the average single-trial CPP onset latency may give the impression of a speeded decision onset.

Premature sampling should lead to faster but less accurate responses, because participants start by sampling noise. Accordingly, we discouraged premature sampling by instructing participants to avoid response errors and by aborting trials on which responses were made before target onset. Anticipations were not more common on short-CTI trials when a target was expected, compared with when a target was not expected (t(20) = 1.39, p = 0.18, BF = 0.52), suggesting that any premature sampling that participants may have engaged in was not modulated by temporal expectation. To further rule out a role for premature sampling, we conducted a control analysis focusing on the ERP waveform associated with invalidly cued long-CTI trials. On these trials participants expected the target to occur after the short CTI, but it was not presented. Given that the CPP appears to reflect accumulated evidence regardless of sign (O'Connell et al., 2012), premature sampling (relative to the true target onset latency) should result in a positive "pretarget" CPP deflection that cannot reflect evoked activity, because the target has not yet been presented at that time. However, our control analysis did not yield any support for this prediction (Fig. 7).

Figure 7.

Control analysis to rule out premature sampling. The figure shows the ERP for invalid long-CTI trials, at the time points surrounding the short CTI (i.e., when a target was expected but not presented), relative to a 200-ms precue baseline. If participants were starting to accumulate evidence before stimulus onset, we would expect to see a positive deflection around stimulus onset, but no time points were significantly different from zero (all FDR p > 0.05). Color and axis limits are as in Figure 4A. The error bar shows the within-participant SEM.

CPP onset latency covaries with behaviorally relevant fluctuations in peri-stimulus α power

In search of a mechanism underlying the effect of temporal expectations on decision onset, we examined a potential endogenous determinant of CPP onset latency that might account for trial-to-trial variability in CPP onset latency and corresponding fluctuations in task performance. In particular, we used EEG time-frequency analysis to examine the power of ongoing neural oscillations in the α band (9–12 Hz) over posterior cortical areas, reflecting rhythmic fluctuations in cortical excitability (Pfurtscheller, 2001; Jensen and Mazaheri, 2010). A large number of studies have found that the amplitude of α band oscillations around the onset time of a target stimulus predicts subsequent task performance (Ergenoglu et al., 2004; Thut et al., 2006; van Dijk et al., 2008; Mathewson et al., 2009), in line with the well-accepted notion that the neural and behavioral responses to a stimulus depend on the cortical state at the time of stimulus presentation. Importantly, the power and phase of α band oscillations around the time of a stimulus can be modulated by top-down attention to optimize the processing of that stimulus (Samaha et al., 2015; van Diepen et al., 2015).

The top-down modulation of the power of oscillatory α band activity has been proposed to play a pivotal role in performance on temporal expectation tasks. Previous EEG research has found that α power is reduced just before, or at, the expected onset time of a target event (Praamstra et al., 2006; Rohenkohl and Nobre, 2011; Breska and Deouell, 2017), yielding enhanced cortical excitability over the time interval in which the target event is expected. This signature of temporally focused attention, often referred to as α desynchronization, was also present in our data. Power in the α band was maximal over posterior scalp regions (Fig. 8A) and specifically posterior α was modulated by expectations about stimulus onset (Fig. 8B), with lower α power preceding the onset of an expected stimulus compared with an unexpected stimulus, possibly reflecting stronger anticipatory α desynchronization. This led us to ask whether fluctuations in this signal were associated with fluctuations in CPP onset latency, our neural marker of decision onset. To address this question, we binned trials based on single-trial, peri-stimulus posterior α power within each cell of the task design, and then averaged across cells to make sure that any difference between bins in α power was unrelated to task manipulations, including the cue validity manipulation, but instead reflected endogenous trial-by-trial variability.

Figure 8.

Relationship between posterior α (9–12 Hz) power, temporal expectations, and CPP parameters. A, Power spectrum and topography of α power. B, Comparison of power across time and frequency for valid versus invalid trials. Blue colors indicate lower power for valid trials. The topographical plot shows the comparison of power in the α (9–12 Hz) band between valid trials and invalid trials on the short CTI, collapsed across difficulty levels; highlighted channels are those averaged over to create remaining panels, chosen independently, based on Rohenkohl and Nobre (2011). Shaded areas in the lower panel indicate a significant (p < 0.05, uncorrected) difference between valid and invalid trials. C, Trial-averaged and condition-averaged CPP binned by peri-stimulus α power (left), and effect of α bins on RT, CPP onset latency and CPP slope (right). Error bars show the within-subject SEM. Gray lines show data of individual participants, centered with respect to the condition and participant average. CI: permutation-derived 95% confidence interval around the mean difference between conditions.

A statistical comparison between bins indicated that the differences in peri-stimulus posterior α power were behaviorally relevant, as decreases in α power were associated with faster RTs (p < 0.001, 95% CI: −12.62, −4.02; Fig. 8C). Importantly, decreased α power was also associated with a reduced CPP onset latency (p = 0.039, 95% CI: −35.03, −0.56), but not with differences in CPP slope (p = 0.855, 95% CI: −0.004, 0.0008; Fig. 8C). Identical results in terms of direction and significance were found using an alternate frequency band (9–14 Hz). As expected, long CTI trials, which showed no significant effect of cue validity on RT, also showed no significant effect of cue validity on α power. Combined, our results indicate that top-down modulation of α band activity putatively modulates cortical excitability in a way that expedites the onset of evidence accumulation of expected stimuli.

Discussion

Temporal contingencies in sensory input, such as music, speech, and other temporal sequences, provide a critical basis for the temporal predictions formed in the brain, which considerably improve the speed and accuracy of responses to unfolding events. Here, we used mathematical modeling of behavior and neural signal measurements to identify how temporal expectations enhance perception.

All model-based analyses suggested that the cue-validity effect, our behavioral measure of temporal expectations, was specifically driven by changes in non-decisional processes (Ter), and not by possible effects on the decision process (including an effect on evidence quality and, accordingly, the mean rate of accumulation, v). Specifically, model fits as well as model-selection procedures supported the Ter account. Furthermore, the cue-validity effects on Ter and RT were highly correlated and almost identical in size. No such support was obtained for an account whereby temporal expectations enhance the quality of the sensory evidence (Rohenkohl et al., 2012a). Although non-decision time includes the time needed for both sensory encoding and response execution, previous studies have found that temporal expectation has negligible effects on response execution time, as indicated by the interval between the onset of the lateralized readiness potential and the overt response (Müller-Gethmann et al., 2003; Hackley et al., 2007; but see Tandonnet et al., 2006). Likewise, we found that the time difference between the peak latency of the CPP and the motor response, a proxy for the duration of response execution, did not differ between valid and invalid trials. Although this metric is not without limitations because the CPP may in some cases continue to accumulate postdecision (Steinemann et al., 2018), these results indicate that our temporal expectation effect on Ter primarily reflects predecisional rather than postdecisional processing time.
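The logic of the Ter account can be illustrated with a toy random-walk simulation: shortening non-decision time shifts the entire RT distribution earlier while leaving accuracy untouched, whereas raising the drift rate speeds responses and improves accuracy. The sketch below (plain NumPy, with illustrative parameter values; the actual analyses used hierarchical diffusion-model fits) makes this dissociation concrete.

```python
import numpy as np

def simulate_ddm(v, a=1.0, ter=0.30, n_trials=2000, dt=0.002, noise=1.0, seed=0):
    """Simulate a basic drift-diffusion model (illustrative parameters).

    v: mean drift rate; a: boundary separation (start point at a/2);
    ter: non-decision time (s). Returns per-trial RTs and mean accuracy."""
    rng = np.random.default_rng(seed)
    x = np.full(n_trials, a / 2)            # unbiased starting point
    t = np.zeros(n_trials)
    active = np.ones(n_trials, dtype=bool)
    while active.any():
        n = active.sum()
        x[active] += v * dt + noise * np.sqrt(dt) * rng.standard_normal(n)
        t[active] += dt
        active &= (x > 0) & (x < a)         # freeze trials that hit a bound
    return t + ter, float(np.mean(x >= a))  # upper bound = correct response

rt_base, acc_base = simulate_ddm(v=1.5, ter=0.30)  # baseline (e.g., invalid cue)
rt_ter, acc_ter = simulate_ddm(v=1.5, ter=0.25)    # Ter account: shorter non-decision time
rt_v, acc_v = simulate_ddm(v=2.0, ter=0.30)        # drift account: better evidence quality
```

In this toy model, the Ter manipulation speeds mean RT without changing accuracy, while the drift manipulation speeds RT and raises accuracy, which is why joint RT and accuracy patterns help adjudicate between the two accounts.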

Our study is the first to complement mathematical modeling with detailed EEG analyses regarding the effects of temporal expectation on decisional and non-decisional processes. Such neural evidence is critical, given reported discrepancies between results from sequential-sampling models and neural signatures of decision formation (McGovern et al., 2018; Spieser et al., 2018). These additional analyses of the CPP allowed us to examine the effect of temporal expectation on the onset and rate of decision formation in the brain (Loughnane et al., 2016), and yielded further support for the decision onset account. Single-trial analyses, necessary to avoid a measurement artifact in the trial-average CPP, revealed a shortened CPP onset latency on validly cued trials, consistent with the notion that temporal expectation hastens decision onset. In contrast, valid and invalid trials did not differ in CPP slope, suggesting that temporal expectation did not affect the quality of the sensory evidence or other processes influencing the rate of decision formation. Control analyses excluded premature sampling of the stimulus array before target onset (Laming, 1979; Jepma et al., 2012; Devine et al., 2019) as an alternative explanation of these results.
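Single-trial onset estimation can be approximated with a simple threshold-crossing rule: find the first time the (smoothed) signal stays above a criterion, for example a fixed fraction of its peak, for a sustained run of samples. The following is a generic sketch of such a rule, not the paper's exact single-trial procedure; the toy traces, threshold fraction, and run length are illustrative.

```python
import numpy as np

def estimate_onset(trace, times, frac=0.25, min_samples=10):
    """Return the first time the trace exceeds `frac` of its peak for at
    least `min_samples` consecutive samples (NaN if it never does)."""
    thresh = frac * trace.max()
    run = 0
    for i, above in enumerate(trace > thresh):
        run = run + 1 if above else 0
        if run >= min_samples:
            return times[i - min_samples + 1]   # start of the sustained run
    return np.nan

# Two noise-free toy "CPP" ramps; the earlier ramp yields an earlier onset
times = np.linspace(-0.2, 0.8, 501)             # seconds relative to target
late_ramp = np.clip(times - 0.15, 0, None)      # accumulation starts at 150 ms
early_ramp = np.clip(times - 0.10, 0, None)     # accumulation starts at 100 ms
onset_late = estimate_onset(late_ramp, times)
onset_early = estimate_onset(early_ramp, times)
```

On real EEG, such criterion-based estimates are noisy per trial, which is why they are typically averaged over many trials or entered into condition-level statistics.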

EEG α band power is thought to reflect local cortical excitability (Pfurtscheller, 2001; Jensen and Mazaheri, 2010) and has previously been shown to track temporal expectations, enhancing perception of events occurring at expected moments (Rohenkohl and Nobre, 2011; Zanto et al., 2011; Heideman et al., 2018). In our study, peri-stimulus posterior α power was also behaviorally relevant, showing a strong positive relationship with RTs. The novel insight provided by our data was that peri-stimulus α power also covaried with CPP onset latency (and not with CPP slope), suggesting that peri-stimulus posterior α power is a key determinant of decision onset, presumably contributing to the perceptual facilitation associated with temporal expectation. This facilitation may be brought about by top-down signals that modulate visual cortex activity to expedite target processing, and result in an earlier start of the decision process (and therefore a shorter CPP onset latency). Notably, α is the dominant frequency band for feedback signaling from higher-order cortical regions to lower-level visual cortex (van Kerkoerle et al., 2014; Michalareas et al., 2016). Thus, it is possible that α power is a signature of a top-down modulatory process: co-variation between posterior α power and CPP onset latency may reflect fluctuations in top-down signaling in accordance with expectations about the appearance of a target.

An important open question is how temporal expectation hastens decision onset. An implicit assumption of many sequential sampling models is that evidence accumulation begins automatically, immediately when sensory encoding is completed. Under this assumption, an expectation-induced reduction of decision onset latency necessarily stems from faster sensory encoding. However, recent literature suggests that decision onset may be decoupled from sensory encoding, and possibly under strategic control (Teichert et al., 2016). Hence, it is possible that our findings reflect a strategic reduction in decision onset that is unrelated to sensory encoding time. While the CPP allowed us to directly estimate the onset of decision formation, it did not allow us to directly measure the process of sensory encoding. Thus, future efforts to combine measures of both decision onset and sensory encoding may prove fruitful in answering the question of whether temporal expectation hastens decision onset through faster sensory encoding, or through strategic control of the onset of the accumulation process.

There are prominent discrepancies between the current results and previously published model-based analyses of behavioral temporal expectation effects (Rohenkohl et al., 2012a; Vangkilde et al., 2012; Cravo et al., 2013). Vangkilde and colleagues (Vangkilde et al., 2012, 2013) found that temporal expectations enhanced (unspeeded) visual letter discrimination and analyzed these behavioral data using the theory of visual attention (TVA). These analyses suggested that the improvement in perceptual discrimination was caused by an increased rate of evidence accumulation (TVA parameter: perceptual processing speed), and not by faster sensory encoding (TVA parameter: temporal threshold for conscious perception). Likewise, Nobre and colleagues found that temporal expectations enhanced visual contrast sensitivity, and reported fits of a sequential-sampling model suggesting that this enhancement in perceptual discrimination was caused by an increased quality of the sensory evidence (Rohenkohl et al., 2012a; Cravo et al., 2013).

Nobre and colleagues (Rohenkohl et al., 2012a; Cravo et al., 2013) modeled their behavioral data using a diffusion model that was developed to simultaneously fit psychometric (accuracy) and chronometric (RT) functions (Palmer et al., 2005). However, we argue that this diffusion model was not entirely appropriate with regard to the task performed by the participants. Specifically, in both studies participants were asked to discriminate the orientation of visual gratings (Gabor patches) at seven contrast levels, resulting in accuracy levels spanning from near chance to near perfect. Each trial consisted of a stream of noise patches and infrequent visual gratings (targets) presented either with a fixed, rhythmic (high temporal expectation) or jittered, arrhythmic (low temporal expectation) stimulus onset asynchrony. Critically, each target was presented for only 50 ms and then followed by a noise patch after 350 ms. This means that the task potentially violated an assumption underlying the diffusion model of Palmer and colleagues, namely that the evidence on the screen does not change during the decision process. The consequences of the violation of this stationarity assumption are unclear. The brief target duration and the short time until the next noise patch meant that on that task, there was very limited time for participants to secure a robust visual short-term memory trace of the decision-relevant stimulus feature, which in turn provides input to the decision process once the stimulus is no longer visible (Smith and Ratcliff, 2009; Rohenkohl et al., 2012b; Ratcliff et al., 2016). In such conditions, expedited non-decision time could well be beneficial for discrimination accuracy in that it would allow more time for the creation of a high-quality memory trace before the perturbation of the subsequent noise patch (Rolke and Hofmann, 2007; Nieuwenhuis et al., 2012).
Importantly, such a change in accuracy is often diagnostic of a change in drift rate, while generally inconsistent with a change in non-decision time. Thus, a potentially violated assumption may have led the model to misattribute an effect on non-decision time to a change in drift rate. Taken together, these arguments invite caution when interpreting previous modeling results (Rohenkohl et al., 2012a; Cravo et al., 2013) in terms that are explicit about the locus of the effect of temporal expectation on performance.

The discrepancy between our results and those of Vangkilde et al. (2012) also deserves further theoretical consideration. To our knowledge, the formal relationship between diffusion models and the TVA, including the mapping between their model parameters, has not been studied yet. Given the model assumption and mimicry issues discussed here, it is critical that we obtained solid neural evidence for the decision onset account.

Another plausible explanation for the discrepancy between the current results and previous work concerns differences between studies in the structure of temporal information provided to participants. Nobre and van Ede (2018) distinguish four types of informative temporal structures that are commonly found in the environment and manipulated in experimental tasks: (1) temporal associations, that is, predictive temporal relations between successive stimuli, such as the auditory cue and the visual target in our study; (2) hazard rates (Vangkilde et al., 2013); (3) rhythms; and (4) sequences, recurring temporal structures that are more complex than simple rhythms. Although there is some evidence that temporal expectations based on these structures rely on similar brain mechanisms (Correa and Nobre, 2008), the differences and similarities between the types of temporal structures and associated brain mechanisms are only beginning to be investigated (Breska and Deouell, 2017; Nobre and van Ede, 2017; Shalev et al., 2019; Bouwer et al., 2020). Interestingly, the manipulations of temporal expectation of Vangkilde et al. (2012, 2013) and Nobre and colleagues (Rohenkohl et al., 2012a; Cravo et al., 2013) were based on hazard rates and rhythms, respectively. In contrast, behavioral studies that used temporal associations (i.e., temporal cuing or a foreperiod manipulation) have reported evidence that temporal expectations specifically affect decision onset and not the decision process itself (Bausenhart et al., 2010; Seibold et al., 2011; Jepma et al., 2012). Future research should examine the possibility that different types of temporal structures call on dissociable temporal expectation mechanisms. Additionally, it remains to be investigated to what extent our findings generalize to perception in other sensory modalities (Ball et al., 2018; Lee et al., 2019; Herbst and Obleser, 2019).

Our findings contribute to a growing literature documenting the usefulness of the CPP as a neural marker of perceptual decision formation (O'Connell et al., 2018), but there are also important outstanding questions regarding the methodological limitations and functional significance of the CPP (Urai and Pfeffer, 2014). Furthermore, recent work has put forward the peak latency of the N200 component as an alternative EEG marker of the onset of the evidence accumulation process (Nunez et al., 2019). Additionally, although our study focused on perception, there is marked evidence that temporal expectation can also affect aspects of motor processing (Hackley and Valle-Inclán, 2003; Fecteau and Munoz, 2007; Nobre et al., 2007), albeit perhaps not the duration of these processes. Lastly, future research may shed more light on the precise mechanistic origin of trial-by-trial co-fluctuations in posterior α power around the time of stimulus presentation and the onset of decision formation.

Footnotes

  • This work was supported by a fellowship for postdoctoral researchers funded by the Alexander von Humboldt Foundation (R.L.v.d.B.), German Research Foundation (DFG) Grants DO 1240/3-1, DO 1240/4-1, and SFB 936/A7 (to Tobias Donner), the FWO [PEGASUS]2 Marie Skłodowska-Curie Fellowship 12T9717N (to K.D.), and the Consolidator Grant of the European Research Council Grant GA 283314-NOREPI (to S.N.). We thank Konstantinos Tsetsos for helpful comments on an earlier version of this article. We also thank Iliana Samara for her help with data collection.

  • The authors declare no competing financial interests.

  • Correspondence should be addressed to Ruud L. van den Brink at r.van-den-brink{at}uke.de

SfN exclusive license.

References

1. Ball F, Michels LE, Thiele C, Noesselt T (2018) The role of multisensory interplay in enabling temporal expectations. Cognition 170:130–146. doi:10.1016/j.cognition.2017.09.015
2. Bates D, Mächler M, Bolker B, Walker S (2015) Fitting linear mixed-effects models using lme4. J Stat Softw 67:1–48.
3. Bausenhart KM, Rolke B, Seibold VC, Ulrich R (2010) Temporal preparation influences the dynamics of information processing: evidence for early onset of information accumulation. Vision Res 50:1025–1034. doi:10.1016/j.visres.2010.03.011
4. Bouwer FL, Honing H, Slagter HA (2020) Beat-based and memory-based temporal expectations in rhythm: similar perceptual effects, different underlying mechanisms. J Cogn Neurosci 32:1221–1241.
5. Breska A, Deouell LY (2017) Neural mechanisms of rhythm-based temporal prediction: delta phase-locking reflects temporal predictability but not rhythmic entrainment. PLoS Biol 15:e2001665. doi:10.1371/journal.pbio.2001665
6. Busch NA, Dubois J, VanRullen R (2009) The phase of ongoing EEG oscillations predicts visual perception. J Neurosci 29:7869–7876. doi:10.1523/JNEUROSCI.0113-09.2009
7. Correa A, Nobre AC (2008) Neural modulation by regularity and passage of time. J Neurophysiol 100:1649–1655.
8. Correa A, Lupiáñez J, Milliken B, Tudela P (2004) Endogenous temporal orienting of attention in detection and discrimination tasks. Percept Psychophys 66:264–278. doi:10.3758/bf03194878
9. Correa A, Lupiáñez J, Tudela P (2005) Attentional preparation based on temporal expectancy modulates processing at the perceptual level. Psychon Bull Rev 12:328–334. doi:10.3758/bf03196380
10. Correa A, Lupiáñez J, Madrid E, Tudela P (2006a) Temporal attention enhances early visual processing: a review and new evidence from event-related potentials. Brain Res 1076:116–128. doi:10.1016/j.brainres.2005.11.074
11. Correa A, Lupiáñez J, Tudela P (2006b) The attentional mechanism of temporal orienting: determinants and attributes. Exp Brain Res 169:58–68. doi:10.1007/s00221-005-0131-x
12. Coull JT, Nobre AC (1998) Where and when to pay attention: the neural systems for directing attention to spatial locations and to time intervals as revealed by both PET and fMRI. J Neurosci 18:7426–7435. doi:10.1523/JNEUROSCI.18-18-07426.1998
13. Cravo AM, Rohenkohl G, Wyart V, Nobre AC (2013) Temporal expectation enhances contrast sensitivity by phase entrainment of low-frequency oscillations in visual cortex. J Neurosci 33:4002–4010.
14. Denison RN, Heeger DJ, Carrasco M (2017) Attention flexibly trades off across points in time. Psychon Bull Rev 24:1142–1151. doi:10.3758/s13423-016-1216-1
15. Delorme A, Makeig S (2004) EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J Neurosci Methods 134:9–21.
16. Devine CA, Gaffney C, Loughnane GM, Kelly SP, O'Connell RG (2019) The role of premature evidence accumulation in making difficult perceptual decisions under temporal uncertainty. Elife 8:e48526. doi:10.7554/eLife.48526
17. Ergenoglu T, Demiralp T, Bayraktaroglu Z, Ergen M, Beydagi H, Uresin Y (2004) Alpha rhythm of the EEG modulates visual detection performance in humans. Cogn Brain Res 20:376–383. doi:10.1016/j.cogbrainres.2004.03.009
18. Fecteau JH, Munoz DP (2007) Warning signals influence motor processing. J Neurophysiol 97:1600–1609. doi:10.1152/jn.00978.2005
19. Forstmann BU, Ratcliff R, Wagenmakers EJ (2016) Sequential sampling models in cognitive neuroscience: advantages, applications, and extensions. Annu Rev Psychol 67:641–666. doi:10.1146/annurev-psych-122414-033645
20. Gomez P, Ratcliff R, Perea M (2007) A model of the go/no-go task. J Exp Psychol Gen 136:389–413. doi:10.1037/0096-3445.136.3.389
21. Hackley SA, Valle-Inclán F (2003) Which stages of processing are speeded by a warning signal? Biol Psychol 64:27–45. doi:10.1016/s0301-0511(03)00101-7
22. Hackley SA, Schankin A, Wohlschlaeger A, Wascher E (2007) Localization of temporal preparation effects via trisected reaction time. Psychophysiology 44:334–338. doi:10.1111/j.1469-8986.2007.00500.x
23. Heideman SG, Rohenkohl G, Chauvin JJ, Palmer CE, van Ede F, Nobre AC (2018) Anticipatory neural dynamics of spatial-temporal orienting of attention in younger and older adults. Neuroimage 178:46–56. doi:10.1016/j.neuroimage.2018.05.002
24. Henry MJ, Herrmann B, Obleser J (2014) Entrained neural oscillations in multiple frequency bands comodulate behavior. Proc Natl Acad Sci USA 111:14935–14940.
25. Herbst SK, Obleser J (2019) Implicit temporal predictability enhances pitch discrimination sensitivity and biases the phase of delta oscillations in auditory cortex. Neuroimage 203:116198. doi:10.1016/j.neuroimage.2019.116198
26. JASP Team (2019) JASP (version 0.9.2) [computer software]. Available at https://jasp-stats.org/faq/how-do-i-cite-jasp/.
27. Jensen O, Mazaheri A (2010) Shaping functional architecture by oscillatory alpha activity: gating by inhibition. Front Hum Neurosci 4:186.
28. Jepma M, Wagenmakers E-J, Nieuwenhuis S (2012) Temporal expectation and information processing: a model-based analysis. Cognition 122:426–441. doi:10.1016/j.cognition.2011.11.014
29. Kelly SP, O'Connell RG (2013) Internal and external influences on the rate of sensory evidence accumulation in the human brain. J Neurosci 33:19434–19441. doi:10.1523/JNEUROSCI.3355-13.2013
30. Laming D (1979) Choice reaction performance following an error. Acta Psychol (Amst) 43:199–224. doi:10.1016/0001-6918(79)90026-X
31. Lee CC, Clifford CW, Arabzadeh E (2019) Temporal cueing enhances neuronal and behavioral discrimination performance in rat whisker system. J Neurophysiol 121:1048–1058. doi:10.1152/jn.00604.2018
32. Loughnane GM, Newman DP, Bellgrove MA, Lalor EC, Kelly SP, O'Connell RG (2016) Target selection signals influence perceptual decisions by modulating the onset and rate of evidence accumulation. Curr Biol 26:496–502. doi:10.1016/j.cub.2015.12.049
33. Mathewson KE, Gratton G, Fabiani M, Beck DM, Ro T (2009) To see or not to see: prestimulus alpha phase predicts visual awareness. J Neurosci 29:2725–2732. doi:10.1523/JNEUROSCI.3963-08.2009
34. McGovern DP, Hayes A, Kelly SP, O'Connell RG (2018) Reconciling age-related changes in behavioural and neural indices of human perceptual decision-making. Nat Hum Behav 2:955–966. doi:10.1038/s41562-018-0465-6
35. Michalareas G, Vezoli J, van Pelt S, Schoffelen JM, Kennedy H, Fries P (2016) Alpha-beta and gamma rhythms subserve feedback and feedforward influences among human visual cortical areas. Neuron 89:384–397. doi:10.1016/j.neuron.2015.12.018
36. Müller-Gethmann H, Ulrich R, Rinkenauer G (2003) Locus of the effect of temporal preparation: evidence from the lateralized readiness potential. Psychophysiology 40:597–611. doi:10.1111/1469-8986.00061
37. Niemi P, Näätänen R (1981) Foreperiod and simple reaction time. Psychol Bull 89:133–162. doi:10.1037/0033-2909.89.1.133
38. Nieuwenhuis S, Jepma M, Wagenmakers EJ (2012) Temporal expectation may affect the onset, not the rate, of evidence accumulation [electronic response to Rohenkohl, Cravo, Wyart, and Nobre (2012), Temporal expectation improves the quality of sensory information]. J Neurosci 32:8424–8428.
39. Nobre AC, van Ede F (2018) Anticipated moments: temporal structure in attention. Nat Rev Neurosci 19:34–48. doi:10.1038/nrn.2017.141
40. Nobre AC, Correa A, Coull JT (2007) The hazards of time. Curr Opin Neurobiol 17:465–470.
41. Nunez MD, Gosai A, Vandekerckhove J, Srinivasan R (2019) The latency of a visual evoked potential tracks the onset of decision making. Neuroimage 197:93–108. doi:10.1016/j.neuroimage.2019.04.052
42. O'Connell RG, Dockree PM, Kelly SP (2012) A supramodal accumulation-to-bound signal that determines perceptual decisions in humans. Nat Neurosci 15:1729.
43. O'Connell RG, Shadlen MN, Wong-Lin K, Kelly SP (2018) Bridging neural and computational viewpoints on perceptual decision-making. Trends Neurosci 41:838–852.
44. Ossmy O, Moran R, Pfeffer T, Tsetsos K, Usher M, Donner TH (2013) The timescale of perceptual evidence integration can be adapted to the environment. Curr Biol 23:981–986. doi:10.1016/j.cub.2013.04.039
45. Palmer J, Huk AC, Shadlen MN (2005) The effect of stimulus strength on the speed and accuracy of a perceptual decision. J Vis 5:376–404. doi:10.1167/5.5.1
46. Pfurtscheller G (2001) Functional brain imaging based on ERD/ERS. Vision Res 41:1257–1260.
47. Praamstra P, Kourtis D, Kwok HF, Oostenveld R (2006) Neurophysiology of implicit timing in serial choice reaction-time performance. J Neurosci 26:5448–5455. doi:10.1523/JNEUROSCI.0440-06.2006
48. R Development Core Team (2008) R: a language and environment for statistical computing. Vienna: R Foundation for Statistical Computing. Available at http://www.R-project.org.
49. Ratcliff R (2002) A diffusion model account of response time and accuracy in a brightness discrimination task: fitting real data and failing to fit fake but plausible data. Psychon Bull Rev 9:278–291. doi:10.3758/bf03196283
50. Ratcliff R, McKoon G (2008) The diffusion decision model: theory and data for two-choice decision tasks. Neural Comput 20:873–922.
51. Ratcliff R, Van Dongen HPA (2011) Diffusion model for one-choice reaction-time tasks and the cognitive effects of sleep deprivation. Proc Natl Acad Sci USA 108:11285–11290. doi:10.1073/pnas.1100483108
52. Ratcliff R, Smith PL, Brown SD, McKoon G (2016) Diffusion decision model: current issues and history. Trends Cogn Sci 20:260–281. doi:10.1016/j.tics.2016.01.007
53. Rohenkohl G, Nobre AC (2011) α oscillations related to anticipatory attention follow temporal expectations. J Neurosci 31:14076–14084. doi:10.1523/JNEUROSCI.3387-11.2011
54. Rohenkohl G, Cravo AM, Wyart V, Nobre AC (2012a) Temporal expectation improves the quality of sensory information. J Neurosci 32:8424–8428. doi:10.1523/JNEUROSCI.0804-12.2012
55. Rohenkohl G, Cravo AM, Wyart V, Nobre AC (2012b) Re: temporal expectation may affect the onset, not the rate, of evidence accumulation [electronic response to Nieuwenhuis, S., Jepma, M., and Wagenmakers, E.-J., Temporal expectation may affect the onset, not the rate, of evidence accumulation]. J Neurosci 32:8424–8428.
56. Roitman JD, Shadlen MN (2002) Response of neurons in the lateral intraparietal area during a combined visual discrimination reaction time task. J Neurosci 22:9475–9489. doi:10.1523/JNEUROSCI.22-21-09475.2002
57. Rolke B, Hofmann P (2007) Temporal uncertainty degrades perceptual processing. Psychon Bull Rev 14:522–526. doi:10.3758/bf03194101
58. Samaha J, Bauer P, Cimaroli S, Postle BR (2015) Top-down control of the phase of alpha-band oscillations as a mechanism for temporal prediction. Proc Natl Acad Sci USA 112:8439–8444.
59. Schroeder CE, Lakatos P (2009) Low-frequency neuronal oscillations as instruments of sensory selection. Trends Neurosci 32:9–18.
60. Seibold VC, Bausenhart KM, Rolke B, Ulrich R (2011) Does temporal preparation increase the rate of sensory information accumulation? Acta Psychol (Amst) 137:56–64. doi:10.1016/j.actpsy.2011.02.006
61. Shalev N, Nobre AC, van Ede F (2019) Time for what? Breaking down temporal anticipation. Trends Neurosci 42:373–374. doi:10.1016/j.tins.2019.03.002
62. Smith PL, Ratcliff R (2009) An integrated theory of attention and decision making in visual signal detection. Psychol Rev 116:283–317. doi:10.1037/a0015156
63. Spiegelhalter DJ, Best NG, Carlin BP, van der Linde A (2002) Bayesian measures of model complexity and fit. J R Stat Soc Series B 64:583–639. doi:10.1111/1467-9868.00353
64. Spieser L, Kohl C, Forster B, Bestmann S, Yarrow K (2018) Neurodynamic evidence supports a forced-excursion model of decision-making under speed/accuracy instructions. eNeuro 5:e0159. doi:10.1523/ENEURO.0159-18.2018
65. Steinemann NA, O'Connell RG, Kelly SP (2018) Decisions are expedited through multiple neural adjustments spanning the sensorimotor hierarchy. Nat Commun 9:1–13. doi:10.1038/s41467-018-06117-0
66. Stefanics G, Hangya B, Hernádi I, Winkler I, Lakatos P, Ulbert I (2010) Phase entrainment of human delta oscillations can mediate the effects of expectation on reaction speed. J Neurosci 30:13578–13585. doi:10.1523/JNEUROSCI.0703-10.2010
67. Tandonnet C, Burle B, Vidal F, Hasbroucq T (2006) Knowing when to respond and the efficiency of the cortical motor command: a Laplacian ERP study. Brain Res 1109:158–163. doi:10.1016/j.brainres.2006.06.052
68. Teichert T, Grinband J, Ferrera V (2016) The importance of decision onset. J Neurophysiol 115:643–661. doi:10.1152/jn.00274.2015
69. Twomey DM, Murphy PR, Kelly SP, O'Connell RG (2015) The classic P300 encodes a build-to-threshold decision variable. Eur J Neurosci 42:1636–1643.
70. Thut G, Nietzel A, Brandt SA, Pascual-Leone A (2006) Alpha-band electroencephalographic activity over occipital cortex indexes visuospatial attention bias and predicts visual target detection. J Neurosci 26:9494–9502. doi:10.1523/JNEUROSCI.0875-06.2006
71. Urai AE, Pfeffer T (2014) An action-independent signature of perceptual choice in the human brain. J Neurosci 34:5081–5082. doi:10.1523/JNEUROSCI.0477-14.2014
72. Usher M, McClelland JL (2001) The time course of perceptual choice: the leaky, competing accumulator model. Psychol Rev 108:550–592. doi:10.1037/0033-295X.108.3.550
73. Vandekerckhove J, Tuerlinckx F, Lee MD (2011) Hierarchical diffusion models for two-choice response times. Psychol Methods 16:44–62. doi:10.1037/a0021765
74. van den Brink RL, Wynn S, Nieuwenhuis S (2014) Post-error slowing as a consequence of disturbed low-frequency oscillatory phase entrainment. J Neurosci 34:11096–11105. doi:10.1523/JNEUROSCI.4991-13.2014
75. van Diepen RM, Cohen MX, Denys D, Mazaheri A (2015) Attention and temporal expectations modulate power, not phase, of ongoing alpha oscillations. J Cogn Neurosci 27:1573–1586. doi:10.1162/jocn_a_00803
76. van Dijk H, Schoffelen JM, Oostenveld R, Jensen O (2008) Prestimulus oscillatory activity in the alpha band predicts visual discrimination ability. J Neurosci 28:1816–1823. doi:10.1523/JNEUROSCI.1853-07.2008
77. van Kerkoerle T, Self MW, Dagnino B, Gariel-Mathis MA, Poort J, van der Togt C, Roelfsema PR (2014) Alpha and gamma oscillations characterize feedback and feedforward processing in monkey visual cortex. Proc Natl Acad Sci USA 111:14332–14341. doi:10.1073/pnas.1402773111
78. Vangkilde S, Coull JT, Bundesen C (2012) Great expectations: temporal expectation modulates perceptual processing speed. J Exp Psychol Hum Percept Perform 38:1183–1191. doi:10.1037/a0026343
79. Vangkilde S, Petersen A, Bundesen C (2013) Temporal expectancy in the context of a theory of visual attention. Philos Trans R Soc Lond B Biol Sci 368:20130054. doi:10.1098/rstb.2013.0054
80. Wetzels R, Wagenmakers EJ (2012) A default Bayesian hypothesis test for correlations and partial correlations. Psychon Bull Rev 19:1057–1064.
81. Wiecki TV, Sofer I, Frank MJ (2013) HDDM: hierarchical Bayesian estimation of the drift-diffusion model in Python. Front Neuroinform 7:14. doi:10.3389/fninf.2013.00014
82. Zanto TP, Pan P, Liu H, Bollinger J, Nobre AC, Gazzaley AJ (2011) Age-related changes in orienting attention in time. J Neurosci 31:12461–12470. doi:10.1523/JNEUROSCI.1149-11.2011
83. Zhang J, Rittman T, Nombela C, Fois A, Coyle-Gilchrist I,
    6. Barker RA,
    7. Hughes LE,
    8. Rowe JB
    (2016) Different decision deficits impair response inhibition in progressive supranuclear palsy and Parkinson's disease. Brain 139:161–173. doi:10.1093/brain/awv331 pmid:26582559
    OpenUrlCrossRefPubMed
Keywords: decision-making; EEG; expectation; perception; drift diffusion modeling
