Research Articles, Behavioral/Cognitive

Cortical β Power Reflects a Neural Implementation of Decision Boundary Collapse in Speeded Decisions

Hans Kirschner, Adrian G. Fischer, Claudia Danielmeier, Tilmann A. Klein and Markus Ullsperger
Journal of Neuroscience 27 March 2024, 44 (13) e1713232024; https://doi.org/10.1523/JNEUROSCI.1713-23.2024
Hans Kirschner
1Institute of Psychology, Otto-von-Guericke University, Magdeburg D-39106, Germany
Adrian G. Fischer
2Department of Education and Psychology, Freie Universität Berlin, Berlin D-14195, Germany
Claudia Danielmeier
3School of Psychology, University of Nottingham, Nottingham NG7 2RD, United Kingdom
Tilmann A. Klein
1Institute of Psychology, Otto-von-Guericke University, Magdeburg D-39106, Germany
4Center for Behavioral Brain Sciences, Magdeburg D-39106, Germany
5Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig D-04103, Germany
Markus Ullsperger
1Institute of Psychology, Otto-von-Guericke University, Magdeburg D-39106, Germany
4Center for Behavioral Brain Sciences, Magdeburg D-39106, Germany

Abstract

A prominent account of decision-making assumes that information is accumulated until a fixed response threshold is crossed. However, many decisions require information to be weighed appropriately against time. Collapsing response thresholds provide a mathematically optimal solution to this decision problem, yet our understanding of the neurocomputational mechanisms underlying dynamic response thresholds remains incomplete. To investigate this issue, we used a multistage drift–diffusion model (DDM) and analyzed EEG β power lateralization (BPL), the latter serving as a neural proxy for decision signals. We analyzed a large dataset (n = 863; 434 females and 429 males) from a speeded flanker task and data from an independent confirmation sample (n = 119; 70 females and 49 males). We showed that a DDM with collapsing decision thresholds, wherein the decision boundary decreases over time, captured participants' time-dependent decision policy more accurately than a model with fixed thresholds. Previous research suggests that BPL over motor cortices reflects features of a decision signal and that its peak, coinciding with the motor response, may serve as a neural proxy for the decision threshold. We show that BPL around the response decreased with increasing RTs. Together, our findings offer compelling evidence for collapsing decision thresholds in decision-making processes.

  • EEG
  • decision-making
  • drift diffusion model
  • decision boundary collapse

Significance Statement

This study uncovers compelling evidence that, under high time pressure, dynamic decision thresholds drive the termination of decision formation, as seen in the β power lateralization (BPL) over the motor cortex during a speeded flanker task. This is demonstrated by behavioral modeling, which shows that subjects' time-dependent decision policy is best captured by drift–diffusion models that allow dynamic decision bounds, and confirmed in the neural signal, where BPL over motor cortices reflects features of the modeled decision signal.

Introduction

We often make decisions under time pressure, sacrificing extensive accumulation of evidence for or against choice alternatives. For example, in the realm of financial trading, quick decision-making is paramount. A trader might start the day with a high threshold for certainty, seeking clear signals that a trade will be profitable. However, as the trading day nears its end, the trader may lower this threshold, opting to make trades with less certainty to avoid missing potential market gains. This scenario mirrors the concept of collapsing decision thresholds that we investigate in our research. Psychology and neuroscience have successfully utilized sequential sampling models such as the drift–diffusion model (DDM) to understand the underpinnings of decision-making (Ratcliff et al., 2016). DDMs assume that the motor execution of a decision is triggered when the accumulated evidence for one alternative crosses a response threshold. Traditionally, diffusion models assume a time-invariant decision policy, reflected in fixed decision boundaries. However, this assumption may be ill-posed in situations where decisions must be made under time pressure. Consequently, diffusion models with dynamic decision bounds have recently gained popularity. Here, as time passes, decisions are triggered at decreasing decision thresholds. This facilitates terminating decision formation when no solution can be reached within the desired time frame. Yet, evidence in favor of diffusion models with dynamic decision bounds is mixed (Hawkins et al., 2015a; Ratcliff et al., 2016; Palestro et al., 2018; Bond et al., 2021; Overmeyer et al., 2023).

Neurophysiological signals that implement time-dependent decision policies have recently been demonstrated in multiple species in the form of a time-dependent build-up of movement-selective activity consistent with an additive urgency signal. This can be considered a neural implementation of dynamic decision bounds (Hanks et al., 2014; Murphy et al., 2016; Thura & Cisek, 2016; Steinemann et al., 2018; Kelly et al., 2021), yet our understanding of the mechanisms underlying such adaptations remains incomplete. It has recently been suggested that the motor cortex is involved in the decision-making process by continually sampling information for or against response options (Cisek & Kalaska, 2005; Pape & Siegel, 2016). In the EEG, this process is reflected in β band (13–25 Hz) activity over centroparietal electrodes. It has been shown that β power over the contralateral motor cortex decreases before any overt action and even reflects imagined movements (Kuhn et al., 2006). The difference in β power between the two motor cortices may thus encode the relative evidence in favor of the respective response options in a lateralized fashion (Donner et al., 2009; Pape & Siegel, 2016) or, put more broadly, decisions in action space (Cisek & Kalaska, 2005; Hunt et al., 2013). In this line of research, we recently demonstrated that model parameters of a multistage DDM and the lateralization of EEG β power convergently show a complex interplay that facilitates behavioral adaptations after erroneous responses in a flanker task (Fischer et al., 2018). Specifically, suppression of distracting evidence, together with increased response thresholds, appears to cause slower and more accurate performance on trials that follow erroneous responses. Based on this work demonstrating a remarkable similarity between the time course of the decision variable in the DDM and the β power lateralization (BPL) over the motor cortex, we argue that the BPL peak around the response can serve as a proxy for the response threshold.

Here, we follow up on this argument and investigate whether BPL can demonstrate a plausible neural mechanism for collapsing decision thresholds in a large sample of 863 healthy participants. We complemented our previous multistage DDMs (Fischer et al., 2018) with an additional free parameter that allowed decision bounds to collapse according to a cumulative Weibull distribution. This model makes a distinct prediction about the decision threshold depending on speed: later decisions are made at a lower threshold. We hypothesized that if decision thresholds collapse, the BPL peak coinciding with the motor response should be decreased for slower responses. Indeed, we found that BPL around the response decreased with increasing RT. Consistently, we found that the dynamic decision-bound DDM provided a better fit to the data. Taken together, model predictions and their independently measured neuronal proxies in β power convergently support the assumption of collapsing decision thresholds.

Materials and Methods

Participants

Eight hundred ninety-five healthy young adults were recruited into the study. After the exclusion of participants due to low task performance, recording failures, or poor data quality, 863 participants remained for subsequent analyses. The mean age of the sample including 434 female and 429 male participants was 24.2 years (range, 18–40). A detailed description of the sample characteristics and exclusion criteria can be found in the study of Fischer et al. (2018).

To replicate the key findings, we analyzed data from an independent confirmation sample of 119 subjects (mean age 22.89 years; range, 18–38; 70 females). These datasets were collected at the Otto-von-Guericke University Magdeburg as part of an ongoing genetic association study using the same approved study protocol.

Task

We employed a speeded arrow version of the Eriksen flanker paradigm in which participants were encouraged to respond as quickly and as correctly as possible to the direction of the target arrow presented in the center of the screen (Fig. 1A). To elicit a large proportion of response errors, we presented distracting flanking stimuli in visual proximity to, and slightly ahead in time of [stimulus-onset asynchrony (SOA), 83 ms], the imperative central target stimulus. On incongruent stimulus sets, a mismatch between the flanker and target arrow directions induced conflicting response tendencies. Response conflict was further enhanced by manipulating the distances between the stimuli (far distance, 6.5° and 4°; close distance, 3.5° and 1.75° visual angle; Danielmeier et al., 2009) and the time between trials [response–stimulus intervals (RSIs); short = 250 ms, long = 700 ms; Danielmeier and Ullsperger, 2011]. In total, the task comprised 1,088 trials. A short response window of 1,200 ms and short RSIs ensured high time pressure.

Figure 1.

Task information, model features, and predictions. A, In the flanker task, either congruent or incongruent flankers (four surrounding arrows) in proximity to or further away from the central target (inset) are presented on each trial. B, Individual RT means split by trial type. Congruent stimuli usually yield faster responses (green) compared with incongruent stimuli, which lead to delayed responses (orange) or induce fast errors (red). C, Conditional accuracy function. Here, the points indicate the mean accuracy of trials sorted by RT into 25 equal-sized bins. The line shows the best fits of piece-wise logistic regressions to each subject's single-trial data. The shades indicate ±SEM. D, E, Illustration of the multistage DDMs [full model without (DDM5, left) and with (DDM6, right) decision-bound collapse]. We show the drift process for three example trial types: congruent correct trials (green drift process), incongruent correct trials (orange drift process), and incongruent error trials (red drift process). Our DDMs consisted of five separate stages (depicted as bars underneath), and the full model (DDM6, right) included nine free parameters: drift rate (v), variance in drift rates (sv), boundary (a), variance in start points (sz), nondecision time (Ter, reflecting visual processing and motor execution times), variance in Ter (st), boundary collapse (k), flanker weighting (f), and variance in f (sf). The first stage reflects the prestimulus baseline. On each trial, the decision process starts from a random start point, drawn from a uniform distribution with range sz. Stage 2 starts after the flanker presentation (0 ms) and represents the nondecision time (Ter) per trial. Ter varies on each trial depending on st, which is also modeled as a uniform distribution. In this period, the decision process randomly drifts away from the start point. Stage 3 reflects a noisy diffusion in the flanker direction, with drift that varies from trial to trial (sv, sf). Thus, on each trial in this stage, the decision process drifts in the flanker direction with drift rate v1_t = v_t × f_t × fd_t, where fd_t reflects the direction (+1, flanker pointing to the left; −1, flanker pointing to the right) of the flankers. Consecutive to the flanker diffusion, evidence accumulation (v2_t) was driven by the direction of the target stimulus until the response threshold was met (Stage 4). Finally, we modeled a consecutive return of the decision variable to zero according to an Ornstein–Uhlenbeck process to facilitate comparisons between the model and BPL. To speed up the model fitting procedure, we neither simulated baseline periods nor the return to zero during fitting because these have no effect on model predictions. For the baseline, each time step of the decision signal was drawn from a normal distribution with a mean of zero and an SD of 0.001 and shifted by the start point of each trial. The return to zero was modeled as a process that reverts the decision signal from the decision threshold back to zero, with a fixed speed of 0.004 and volatility of 0.001. The height of the boundary parameter (a) determines how much evidence accumulation is required to cross the boundary and trigger a response. The boundary (a) can either be fixed (D) or collapse with increasing decision time (E). To simulate the temporal evolution of the single-trial decision processes depicted in D and E, we used the mean maximum likelihood parameters from the group fits obtained for DDM5 and DDM6.
Note: The solid colored vertical lines indicate the onset of flanker drift of the respective example drift process. The dashed colored vertical lines indicate the onset of target drift of the respective example drift process.

Empirical conditional accuracy functions

To investigate time dependency in the decision process, we used empirical conditional accuracy functions (CAF). Analyses of the CAF were conducted using slightly adapted code provided by Murphy et al. (2016). The procedure was as described by Murphy et al. (2016), and the following description is adapted therefrom.

To estimate mean accuracy as a function of RT (i.e., the CAF), we used a single-trial logistic regression. To account for fast errors and a possible decreasing CAF toward the response deadline, we constructed an algorithm that minimizes the combined sum of squared errors of piece-wise logistic regressions of accuracy (1 = correct, 0 = error) onto RT, splitting trials before and after a temporal inflection point α such that:

$$P_{\mathrm{correct}} = \begin{cases} \left(1 + e^{-(\beta_0 + \beta_1 \times (RT - \alpha))}\right)^{-1}, & RT - \alpha \le 0 \\ \left(1 + e^{-(\beta_0 + \beta_2 \times (RT - \alpha))}\right)^{-1}, & RT - \alpha > 0. \end{cases}$$

Here, β0 is the accuracy at α, β1 is the slope of the CAF before α, and β2 is the slope of the CAF after α. β1 was constrained to be ≥0 to reflect the fact that the left segment of the piece-wise fit accounts for the increasing portion of the CAF (i.e., this accounts for the fact that in our task, most response errors are made at fast RTs). The model was fit using Nelder–Mead simplex minimization to estimate the β0, β1, and β2 parameters while conducting an exhaustive search of possible α values (step size = 10 ms, ending at 1 s). Whichever piece-wise segment is fit first determines β0 and thus constrains the fit of the remaining segment; therefore, the algorithm was run twice (left segment fit first and right segment fit first) for each α to find the true minimum (Karşılar et al., 2014; Murphy et al., 2016). This single-trial regression approach allows for a reasonable approximation of the temporal evolution of the CAF throughout the whole trial length.
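For illustration, a minimal Python sketch of this piece-wise fit is given below. It is an assumed reconstruction, not the authors' original MATLAB implementation: function and variable names are hypothetical, the starting value of the α grid is unspecified in the text, and the two-pass segment-order refinement is simplified to a joint fit.

```python
import numpy as np
from scipy.optimize import minimize

def caf_sse(params, rt, acc, alpha):
    """Summed squared error of the piece-wise logistic CAF at a fixed alpha."""
    b0, b1, b2 = params
    b1 = max(b1, 0.0)                    # left slope constrained to be >= 0
    x = rt - alpha
    slope = np.where(x <= 0, b1, b2)     # slope switches at the inflection point
    p = 1.0 / (1.0 + np.exp(-(b0 + slope * x)))
    return np.sum((acc - p) ** 2)

def fit_caf(rt, acc):
    """Exhaustive search over alpha (10 ms steps up to 1 s), Nelder-Mead within."""
    best = (np.inf, None, None)
    for alpha in np.arange(0.15, 1.0, 0.01):          # RT in seconds; start assumed
        res = minimize(caf_sse, x0=[2.0, 1.0, 0.0],
                       args=(rt, acc, alpha), method="Nelder-Mead")
        if res.fun < best[0]:
            best = (res.fun, alpha, res.x)            # (sse, alpha, [b0, b1, b2])
    return best
```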

DDM and model comparison

Basic features of the multistage sequential sampling models and general fitting procedures were described by Fischer et al. (2018). The description is adapted therefrom.

DDM stages

Our DDM assumed that the decision signal in the flanker task undergoes several stages (Fig. 1D,E). The start point of each trial was modeled as a baseline period prior to flanker onset (Stage 1). Here, each time step of the decision signal was drawn from a normal distribution with a mean of zero and an SD of 0.001 and shifted by a model-free variance parameter (sz) reflecting the start point of each trial. This was mainly done because we wanted to obtain predictions of the time course of the signal comparable to the recorded EEG data. This period was followed by a second stage during which the decision signal could drift randomly away from the start point for the duration of the nondecision time of each trial (Stage 2). We allowed for noise accumulation during this period because we previously demonstrated that quick random diffusion during the nondecision time partly explains simple response errors in response conflict tasks (Fischer et al., 2018). Thereafter, we assumed that evidence accumulation was driven by the flanker direction for as long as the flankers were displayed (Stage 3). Consecutive to the flanker diffusion, evidence accumulation was driven by the direction of the target stimulus until the response threshold was met (Stage 4). Finally, we modeled a consecutive return of the decision signal to zero according to an Ornstein–Uhlenbeck process (Uhlenbeck & Ornstein, 1930) to facilitate comparisons between the model and BPL (Stage 5). In short, the Ornstein–Uhlenbeck process is a stochastic process that reverts a signal to a mean, θ, with speed, κ, and volatility, σ. We fixed these parameters to θ = 0, κ = 0.004, and σ = 0.001. To speed up the model fitting procedure, we neither simulated baseline periods nor the return to zero during fitting because these stages have no effect on model predictions and were merely added to the model to facilitate comparison between the predicted decision signal from the DDM and the time course of the EEG signal.
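As a concrete illustration of the Stage 5 return process, the following sketch simulates the Ornstein–Uhlenbeck reversion with the fixed values given above (θ = 0, κ = 0.004, σ = 0.001 per 1 ms step); the function and variable names are assumptions.

```python
import numpy as np

def ou_return(x0, n_steps, theta=0.0, kappa=0.004, sigma=0.001,
              rng=np.random.default_rng(0)):
    """Revert the decision signal from the threshold back toward theta (Stage 5)."""
    x = np.empty(n_steps)
    x[0] = x0
    for i in range(1, n_steps):
        # discrete OU update: pull toward theta with speed kappa, plus noise
        x[i] = x[i - 1] + kappa * (theta - x[i - 1]) + sigma * rng.standard_normal()
    return x

trace = ou_return(x0=0.37, n_steps=600)   # e.g., decay from the boundary over 600 ms
```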

Model parameters and description

As in most DDMs, evidence accumulation in our model is governed by a Wiener process with stepwise increments according to a Gaussian distribution with mean v (called drift rate) and within-trial variance s (reflecting the system's noise, i.e., the amount of noise per computation step of the diffusion), which was fixed to 0.1. The step size for all models was set to 1 ms. A decision (response) is triggered when the diffusion reaches a criterion (Fig. 1D,E, threshold or boundary, blue and magenta lines, respectively, with values ±a). We used symmetrical boundaries that were defined as left-hand responses when the positive boundary was reached first and as right-hand responses when the negative boundary was reached first (Fig. 1D,E). To make the signal comparable to EEG β power, we flipped all diffusion epochs to plot the given response as negative. Therefore, in all plots of the diffusion signal (except for Fig. 1D,E), more negative values reflect evidence accumulation in accordance with the selected response option. While only the flankers were on screen, the model's diffusion was driven by the direction of the flanking arrows (i.e., positive when left, negative when right), scaled by a free parameter f. Specifically, the drift rate during the flanker-only period (v1) on a given trial t was determined by the following:

$$\mathrm{Flanker\ drift}\;(v1_t) = v_t \times f_t \times fd_t.$$

Here, v_t and f_t on a given trial t were drawn from normal distributions with means v and f and their associated variances (sv and sf, respectively; see below); fd_t reflects the direction (+1, flanker pointing to the left; −1, flanker pointing to the right) of the flankers on a given trial t. After target onset, the diffusion was governed by the target direction. This drift rate (v2, whereby v2_t = v_t) reflects the combined influence of the target plus flankers on the decision signal. Moreover, we assumed that despite the disappearance of the target and flanker arrows after 33 ms of common presentation, the decision continues to form with constant speed (v_t). This was based on several studies indicating that constant drift rates account well even for cases in which visual input is masked after a certain period of time (Ratcliff & Rouder, 2000). Therefore, left-pointing incongruent flankers (Fig. 1D,E, orange drift lines) are associated with a positive drift during the SOA but a negative drift thereafter. Note that both periods are shifted in time by the nondecision time, modeled as a free parameter (Ter). Ter here simulates the translation time of stimulus evidence into decision formation, which can be expected to mainly reflect visual processing and response mapping, and varies to a certain degree on every trial, for example, due to fluctuations of alertness (see the variance parameter description below).
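A minimal single-trial simulator implementing this drift scheme is sketched below. It is not the authors' fitting code; the per-step noise scaling dx = drift·dt + s·√dt·ε is an assumption (the text fixes s = 0.1 per computation step), and the parameter defaults are the group means reported in the Results.

```python
import numpy as np

def simulate_trial(v=4.23, f=0.47, a=0.37, ter=0.29, fd=+1, td=-1,
                   soa=0.083, s=0.1, z=0.0, dt=0.001, t_max=1.2,
                   rng=np.random.default_rng(0)):
    """Simulate Stages 2-4; fd/td are flanker/target directions (+1 left, -1 right)."""
    n = int(t_max / dt)
    x = np.full(n, z)
    for i in range(1, n):
        t = i * dt
        if t < ter:                      # Stage 2: nondecision time, noise only
            drift = 0.0
        elif t < ter + soa:              # Stage 3: flanker-driven drift, v1 = v*f*fd
            drift = v * f * fd
        else:                            # Stage 4: target-driven drift, v2 = v*td
            drift = v * td
        x[i] = x[i - 1] + drift * dt + s * np.sqrt(dt) * rng.standard_normal()
        if abs(x[i]) >= a:               # boundary crossing triggers the response
            return t, np.sign(x[i]), x[:i + 1]
    return None, 0.0, x                  # no crossing before the deadline (miss)
```

An incongruent trial with a left-pointing target, for example, would be simulated with td = +1 and fd = −1, producing the initial drift toward the incorrect boundary that generates fast errors.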

Variance parameters and dynamic decision boundaries

Across trials, we assume that the start point (z) of evidence accumulation (the baseline), nondecision time (Ter), drift rate (v), and distractor suppression (f) vary to some degree. As the task comprised exactly 50% left and right responses, we fixed the mean start point (z) of the diffusion process to 0. All models were thus unbiased regarding the average starting point across the experiment. However, each individual trial's starting point was allowed to vary randomly according to a uniform distribution with lower and upper limits fit as the free parameter sz (i.e., upper/lower limit = 0 ± sz/2). This variance in start points can be interpreted as the range of bias of a participant toward a left- or right-hand response that varies between trials. Moreover, we modeled the nondecision time (Ter) single-trial variance with parameter st as a uniform distribution with borders Ter ± st/2. This simply reflects that on some trials, evidence may take longer to be processed, for example, via fluctuations in attention. To account for variance in the drift rate, we allowed drift rates to vary according to a zero-mean Gaussian distribution with variance sv. Trials with higher drift rates will reach a decision more quickly yet may also be more prone to reach the incorrect boundary on incongruent trials, where the flanking stimuli point away from the correct direction. Additionally, we assumed that the degree to which distractors influence the diffusion process may vary between trials, reflecting selective attention and attention slips. Therefore, we modeled trial-by-trial flanker suppression effects according to a zero-mean Gaussian distribution with variance sf. Finally, the free parameter k scaled a dynamic decision boundary collapse according to a Weibull distribution. Here, the dynamic boundary u at time t was calculated as follows:

$$u_t = a - \left(1 - e^{-\left(t/k\right)^{s}}\right) \times \frac{a}{2}.$$

In this equation, a represents the initial boundary value, and k scales the Weibull distribution. The shape parameter s was fixed at 3 to impose a "late collapse" decision strategy. This value was informed by results from a previous study in a large human sample (Hawkins et al., 2015a).
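The collapsing bound and the trial-by-trial variability described above can be sketched as follows (hypothetical helper names; the Gaussian and uniform distributional choices follow the text):

```python
import numpy as np

def collapsing_bound(t, a, k, s_shape=3.0):
    """Weibull collapse: u_t = a - (1 - exp(-(t/k)^s)) * a/2; u_0 = a, u_inf = a/2."""
    return a - (1.0 - np.exp(-(t / k) ** s_shape)) * a / 2.0

def sample_trial_params(v, sv, f, sf, ter, st, sz,
                        rng=np.random.default_rng(0)):
    """Draw one trial's parameters with the variability structure described above."""
    v_t = v + sv * rng.standard_normal()       # Gaussian drift-rate variability
    f_t = f + sf * rng.standard_normal()       # Gaussian flanker-weight variability
    ter_t = ter + st * rng.uniform(-0.5, 0.5)  # uniform nondecision-time variability
    z_t = sz * rng.uniform(-0.5, 0.5)          # uniform start-point variability
    return v_t, f_t, ter_t, z_t
```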

In sum, the full model (DDM6) comprised five free parameters (v, f, a, Ter, k) and four trial-by-trial variance parameters (sv, sf, sz, st). Variance parameters and the decision boundary were partly fixed for analyses of other, less complex models that are thus nested within DDM6. Specifically, in the base model (DDM1), f was fixed to 1, such that there was effectively only one drift rate during both the flanker-only period and the flanker-and-target period. In addition, the base model included neither variance in f (sf set to 0) nor dynamic decision boundaries (here, we imposed a fixed decision boundary by fixing parameter k to infinity). DDM2 extended the base model by allowing dynamic decision boundaries. DDM3 extended the base model by accommodating two separate drift rates, whereby v1 reflected the initial influence of just the advance flankers and v2 reflected the combined influence of the target plus flankers. DDM4 added dynamic decision boundaries to DDM3. DDM5 was defined as the full model with fixed decision boundaries. Finally, DDM7 and DDM8 were variants of the full model with either no variance in the drift rate (DDM7) or no variance in both the drift rate and f (DDM8).

Model fitting and comparison

We estimated the parameters for each model in our model space by fitting them to the observed RT and accuracy data using quantile maximum likelihood statistics (Heathcote et al., 2004) and differential evolution algorithms (Price et al., 2005). We excluded all trials where participants responded more than once (i.e., made correction responses) from model fitting and all other analyses (4.96% ± 4.86; confirmation sample, 4.82% ± 4.61). Additionally, all congruent error trials were omitted from model fitting (2.54% ± 1.97; confirmation sample, 2.35% ± 1.71), as is common practice when fitting sequential sampling models to RT (Vandekerckhove & Tuerlinckx, 2007). Specifically, for each subject, we split the RTs into 10 equal-sized quantiles to estimate quantile maximum likelihood statistics (similar to a χ² statistic), minimizing the negative log-likelihood of the observed participant data given each set of model parameters. The likelihood of each single observed RT is determined by the model's likelihood of predicting an observation in the corresponding bin, separated by correct and erroneous responses. Additionally, we used a mixture model assuming 2% contaminants that were distributed uniformly over the full range of RTs in correct and error responses. For model fitting in all iterations, we applied the following hard priors, which can be seen as boundary parameters: v (0.01–8.5), sv (0–1.5), a (0.01–0.45), sz (0.05–0.3), Ter (0.1–0.4), st (0–2), f (0.1–1.5), sf (0–1.5), and k (10e−4 to 3).
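A simplified sketch of the quantile maximum likelihood statistic is shown below. This is an assumed reconstruction with hypothetical names; for simplicity, the 2% contaminant mass is spread evenly over the 20 bins here rather than proportionally to bin width.

```python
import numpy as np

def qmle_negll(obs_rt, obs_correct, sim_rt, sim_correct, p_contam=0.02):
    """Negative log-likelihood of observed RTs under the model's bin probabilities."""
    negll, n_sim = 0.0, max(len(sim_rt), 1)
    for resp in (0, 1):                  # error and correct responses separately
        obs = obs_rt[obs_correct == resp]
        sim = sim_rt[sim_correct == resp]
        if len(obs) == 0:
            continue
        edges = np.quantile(obs, np.linspace(0, 1, 11))   # 10 equal-sized bins
        edges[0], edges[-1] = -np.inf, np.inf
        obs_n = np.histogram(obs, edges)[0]
        sim_p = np.histogram(sim, edges)[0] / n_sim       # model bin probabilities
        p = (1 - p_contam) * sim_p + p_contam / 20.0      # contaminant mixture
        negll -= np.sum(obs_n * np.log(np.maximum(p, 1e-12)))
    return negll
```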

Because our likelihood statistic is not a true likelihood, as it relies on summary statistics, we call the resulting BIC values approximate BIC values [approximated Bayesian Information Criterion (aBIC); akin to White et al. (2011)]. For model comparison, we first computed aBIC values according to the following:

$$\mathrm{aBIC} = \widehat{LL} - \frac{k}{2}\ln(T),$$

where $\widehat{LL}$ is the log-likelihood value at the best-fitting parameter settings, k is the number of free parameters, and T is the number of trials (Stephan et al., 2009). Next, we used these individual aBIC values to compute the protected exceedance probability, which is the probability that each model is the most likely model across all participants, taking into account the null possibility that differences in model evidence are due to chance (Rigoux et al., 2014). To test whether we can distinguish between our dynamic and fixed decision boundary models, we conducted a model recovery study (DDM5 vs DDM6). To validate the best-fitting model, we tested whether we could reliably estimate our free parameters in a parameter recovery study. For these analyses, we randomly drew model parameters from a Gaussian distribution with mean and variance equal to the observed fitted parameters of the respective model across the whole group, to reduce parameter value combinations that were extremely unlikely to occur in human data. We simulated 1,000 parameter combinations and used the same differential evolution algorithm to recover the fitted models. Models that produced no errors at all or for which constraints were not met were discarded from the analysis. Moreover, we conducted posterior predictive checks to confirm that the best-fitting model captures the key aspects of choice behavior.
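The aBIC computation itself is a one-liner (a sketch with hypothetical names):

```python
import numpy as np

def abic(loglik_hat, n_params, n_trials):
    """aBIC = LL_hat - (k/2) * ln(T); higher values indicate a better fit."""
    return loglik_hat - 0.5 * n_params * np.log(n_trials)
```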

For predictions of the neural signals and to simulate the temporal evolution of the modeled diffusion signal depicted in Figure 3, we used the mean maximum likelihood parameters from the group fit obtained for DDM6. For this simulation, we computed 5,000 simulated trials.

EEG processing

EEG was recorded at 500 Hz from 60 Ag/AgCl sintered electrodes arranged in the extended 10–20 system, with A1 as the reference channel and an electrode placed over the sternum as the ground electrode, using BrainAmp amplifiers (Brain Products). Impedances were kept below 5 kΩ. Preprocessing of the EEG data was done under Matlab 2021b (MathWorks) and the EEGlab 13 toolbox (Delorme & Makeig, 2004) using custom routines as described previously (Fischer et al., 2018). The preprocessing steps included the following: (1) filtering (0.5 Hz high-pass and 42 Hz low-pass filter), (2) rereferencing to common average, (3) segmentation into stimulus-locked epochs spanning from −1.5 to 2 s relative to target onset, and (4) automatic epoch rejection. Here, epochs contaminated with artifacts were rejected using a dynamically adjusted rejection threshold to remove at least one trial, separately for error and correct responses, and maximally 5% per condition. The algorithm rejects epochs containing artifacts based on the variance in the epochs compared with the average variance. It starts with a rejection criterion of 5 SD and subsequently adjusts this criterion until the rejection criteria are met. Because our artifact rejection is variance based and error response–related EEG activity typically has more variance than correct response activity, we ran the algorithm separately for both trial types [average number of rejected epochs: error trials, 5.49 ± 2.53; rejection threshold, 4.24 ± 0.37 (confirmation sample, 5.32 ± 2.24; rejection threshold, 4.19 ± 0.32); correct trials, 35.26 ± 7.28; rejection threshold, 4.22 ± 0.33 (confirmation sample, 34.20 ± 7.77; rejection threshold, 4.14 ± 0.28)]. The preprocessing steps also included (5) removal of blink, eye movement, and other, less homogeneous artifact components [average number of removed components = 3.40 ± 2.07 (confirmation sample, 3.42 ± 2.20)] using adaptive mixture independent component analysis (Palmer et al., 2012). Finally, we extracted response-locked epochs spanning from −1 to 1 s from the stimulus-locked epochs.
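The original pipeline was implemented in MATLAB/EEGLAB; the following MNE-Python sketch is only an assumed analog of the described steps. The file name, event handling, and the exact adaptive-rejection rule are illustrative, not the authors' routines.

```python
import numpy as np
import mne

raw = mne.io.read_raw_brainvision("sub-01.vhdr", preload=True)  # hypothetical file
raw.filter(l_freq=0.5, h_freq=42.0)                 # (1) band-pass filter
raw.set_eeg_reference("average")                    # (2) common average reference
events, _ = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, tmin=-1.5, tmax=2.0,
                    baseline=None, preload=True)    # (3) stimulus-locked epochs

def variance_reject(epochs, start_sd=5.0, max_frac=0.05, step=0.1):
    """(4) Drop epochs whose variance exceeds an adaptively lowered SD criterion."""
    var = epochs.get_data().var(axis=(1, 2))        # variance per epoch
    z = (var - var.mean()) / var.std()
    crit = start_sd
    while (z > crit).sum() < 1 and crit > step:     # ensure at least one rejection
        crit -= step
    bad = np.where(z > crit)[0]
    n_max = max(1, int(max_frac * len(z)))          # cap rejections at 5%
    bad = bad[np.argsort(z[bad])[::-1][:n_max]]
    return epochs.copy().drop(bad)

clean = variance_reject(epochs)
```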

EEG analyses

First, we convolved the artifact-free response-locked EEG signal with a series of complex Morlet wavelets between 4 and 25 Hz. We used 20 linearly spaced steps and a wavelet width of six cycles. Data were then log-transformed. In a first explorative analysis, we aimed to confirm the lateralization of the signal to the hemisphere initiating a response. Here, we used single-trial multiple robust regression and regressed the factor response hand (−1 = left, 1 = right) onto the convolved signal for every electrode and time point, while controlling for unspecific task effects. Noise regressors included congruency, flanker distance, response–stimulus interval (RSI), following RSI, trial number, and log-transformed RT. This analysis confirmed response-related β band lateralization, whereby beginning ∼100 ms before response onset, β power decreased over the sensorimotor cortex contralateral to the response. This was reflected in a positive covariation between the response hand and the signal in the β band, which was maximal at C4/CP4, and a negative covariation, which was maximal at C3/CP3 (Fig. 3A). We chose electrodes with a maximal effect (C3, C4, CP3, CP4) for all further analyses of BPL.
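Sketched below is an assumed Python analog of this decomposition and the single-trial robust regression (the original analyses were run in MATLAB; the placeholder data, array shapes, and the chosen channel/frequency/time indices are purely illustrative):

```python
import numpy as np
import mne
import statsmodels.api as sm

epoch_data = np.random.randn(200, 60, 1000)      # placeholder: trials x chans x samples
freqs = np.linspace(4, 25, 20)                   # 20 linearly spaced frequencies
power = mne.time_frequency.tfr_array_morlet(
    epoch_data, sfreq=500.0, freqs=freqs, n_cycles=6.0, output="power")
log_power = np.log(power)                        # trials x chans x freqs x times

# single-trial robust regression of response hand onto power at one
# channel/frequency/time point, with nuisance task regressors
hand = np.random.choice([-1, 1], size=200)       # placeholder design variables
nuisance = np.random.randn(200, 6)               # congruency, distance, RSIs, trial, log RT
X = sm.add_constant(np.column_stack([hand, nuisance]))
betas = sm.RLM(log_power[:, 30, 10, 500], X).fit().params
```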

Figure 2.

DDM fits and model validation. A, Cumulated approximated Bayesian Information Criterion (aBIC) scores over participants for each candidate model. Higher values indicate a better fit of the model to the behavioral data. The results indicate that the dynamic bound model (DDM6) provides the best fit to the data. Protected exceedance probability (pEP; the probability that each model is the most likely model across all participants, taking into account the null possibility that differences in model evidence are due to chance) similarly favors DDM6. Estimated model frequencies (i.e., the ratio of subjects assigned to each model in the model space) also suggest DDM6 to be the most frequent across participants. B, Plotted are correlations between simulated and recovered parameters for the full model (DDM6). The results indicate that parameter values used to simulate data from the full feature DDM (ordinate) tended to correlate with the parameter values best fit to those synthetic datasets (abscissa). The full feature DDM consisted of nine parameters: drift rate (v), variance in drift rates (sv), boundary (a), variance in start points (sz), nondecision time (Ter, reflecting visual processing and motor execution times), variance in Ter (st), boundary collapse (k), flanker suppression (f), and variance in f (sf). C–E, G–I, Quantile fits of the models (C–E, DDM5, orange; G–I, DDM6, green) against human RT data (light blue). F, J, Model and human accuracy (F, DDM5; J, DDM6). In all conditions (congruent and incongruent correct as well as incongruent error), both models capture the RT data in each quantile, suggesting a good fit to the data. However, the full model with collapsing boundaries appears to capture the variance in the data better (particularly in the later RT quantiles). B–J, Boxes = interquartile range (IQR), o = median, - = mean, whiskers = 1.5 × IQR, gray dots = outliers. Differences in model fit become more evident when comparing CAFs and missed deadlines produced by the models. K, L, CAF and proportion of missed deadlines from the full fixed bound model (DDM5; K) and the full model with collapsing boundaries (DDM6; L). The points indicate the mean accuracy of trials sorted by RT into 25 equal-sized bins informed by the empirical data. The line shows the best fits of piece-wise logistic regressions to each subject's modeled single-trial data. The shades indicate ±SEM. Note that for model fitting, we excluded all missed trials from the empirical data. M, Model recovery analysis. We simulated data with DDM5 and DDM6 and then evaluated the simulated data using each model. The results indicate that the models can reliably be recovered.

For the analyses of BPL and mean β power, we collapsed the convolved signal across the frequency range of 13–25 Hz and normalized it within each participant by dividing the power by its SD and subtracting the mean. When comparing the averaged signal over contralateral (electrodes C3/CP3 for the right hand and C4/CP4 for the left hand) and ipsilateral (electrodes C3/CP3 for the left hand and C4/CP4 for the right hand) motor cortices, we observed an overall decrease in β power ∼300 ms prior to the response, which was followed by a consecutive increase in β power (i.e., β rebound) ∼400 ms postresponse (Fig. 3C). To derive BPL, we subtracted β band power over the inactive sensorimotor cortex (i.e., the electrode side ipsilateral to the hand that gave the response in the trial) from the β power recorded over the active (contralateral) sensorimotor cortex. This difference signal thus compares the degree of β power reduction between both hemispheres, presumably reflecting differential motor activation (Fig. 3). To investigate single-trial associations between β thresholds and RT, we used the response-locked single-trial signal (mean BPL ± 12 ms around the response) and regressed the log-transformed RT onto the β threshold while accounting for unspecific task effects according to the following equation:

$$\mathrm{BPL}_{\mathrm{threshold}} = b_0 + b_1 \times \mathrm{Congruency} + b_2 \times \mathrm{Accuracy} + b_3 \times \mathrm{Flanker\ distance} + b_4 \times \mathrm{RSI} + b_5 \times \mathrm{Previous\ accuracy} + b_6 \times \mathrm{Trial} + b_7 \times \mathrm{NDL} + b_8 \times \log(RT) + e.$$

Here, NDL reflects the negative log-distance in trials from the last break.
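A sketch of the BPL measure at the response and the single-trial regression is given below (hypothetical names; ±12 ms corresponds to ±6 samples at 500 Hz, and the normalization follows the description above):

```python
import numpy as np
import statsmodels.api as sm

def bpl_at_response(beta_pow, left_ix, right_ix, resp_hand, resp_samp, win=6):
    """Contralateral minus ipsilateral beta power around the button press.

    beta_pow: trials x channels x samples (13-25 Hz average, normalized);
    left_ix/right_ix: channel indices of C3/CP3 and C4/CP4;
    resp_hand: -1 = left, +1 = right; resp_samp: sample index of the response."""
    out = np.empty(len(resp_hand))
    for i, h in enumerate(resp_hand):
        contra, ipsi = (right_ix, left_ix) if h == -1 else (left_ix, right_ix)
        sl = slice(resp_samp[i] - win, resp_samp[i] + win + 1)
        out[i] = beta_pow[i, contra, sl].mean() - beta_pow[i, ipsi, sl].mean()
    return out

def bpl_regression(bpl, task_factors, log_rt):
    """Regress log RT (plus nuisance task factors) onto the BPL threshold."""
    X = sm.add_constant(np.column_stack([task_factors, log_rt]))
    return sm.OLS(bpl, X).fit().params
```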

Figure 3.

Response-locked power spectra, β power, and DDM decision variable. A, The result of a single-trial regression analysis comparing responses made with the left and right hand, while controlling for congruency, flanker distance, RSI, following RSI, trial number, and log-transformed RT. The data suggest that β power decreases more strongly when a response is given with the contralateral compared with the ipsilateral hand. This effect was most pronounced over electrodes C3/4 and CP3/4. The BPL signal (C) is composed of the activity difference recorded over the contralateral and ipsilateral motor cortices (B). D–G, Neural stimulus processing locked to the response, separated by response speed, congruency, accuracy, and previous accuracy. First, we show that in BPL as well as in the modeled signal, lateralization toward the given response is more pronounced for fast responses (D, H). Importantly, BPL at response execution is reduced for later responses. This observation confirms dynamically changing decision bounds within a trial and is also reflected in the modeled signal (H). In addition, when comparing RT-matched correct congruent and incongruent trials, we found a significant difference at response execution (E). Exploratory analysis of the lateralized signal (Fig. 5) demonstrated that this effect is caused by reduced activation (i.e., increased β power) of the ipsilateral hemisphere on congruent trials compared with incongruent ones. This likely reflects enhanced motoric readiness caused by the flanker stimuli. This effect is not captured by the DDM signal (I) because the model does not include threshold differences between trials. When comparing RT-matched correct and error responses (F, J), we demonstrate that both conditions are associated with similar thresholds in BPL. This effect is similarly reflected in the modeled DDM signal. Finally, we demonstrate that BPL at response execution is increased on response-matched post-error trials (G). This may reflect strategic decision policy adaptations to increase accuracy after errors (Murphy et al., 2016; Fischer et al., 2018). Again, this effect is not reflected in the modeled DDM signal because the model does not include threshold differences between trials (K). Note: Contours in A represent significant clusters after cluster-based permutation testing (Maris & Oostenveld, 2007). The shades in B–K represent 99.9% CI. The gray backgrounds indicate significant time points after Bonferroni correction. In H–K, we rectified the modeled diffusion signal to always plot the unchosen response up and the chosen response down. For details about the simulation, see Materials and Methods.

In an explorative analysis, we used the same model to test the separate contributions of each hemisphere (ipsilateral and contralateral β power).

Data and code availability

The conditions of our informed consent form do not permit public archiving of the raw data because participants did not provide sufficient consent. Researchers who wish to access processed and anonymized data for a reasonable purpose should contact the corresponding author. Data will be released to researchers where this is possible under the terms of the General Data Protection Regulation. The code of the toolbox we used for the regression analysis can be found at http://www.adrianfischer.de/teaching.html. The code for the DDM models is available on request. Data and analysis scripts of the confirmation sample can be downloaded from the Open Science Framework at https://osf.io/z9svf/.

Results

Behavioral analyses confirmed that general behavioral effects were in accordance with typical results seen in flanker tasks. They reflected interference effects, with slower (ΔRT = +62 ms, t(862) = −81.79, p < 0.001, d = 1.03, BF10 > 100) and more erroneous (Δaccuracy = 20%, t(862) = 96.18, p < 0.001, d = 3.44, BF10 > 100) responses on incongruent trials, whereby these effects interacted with flanker distance. Moreover, all error-related behavioral adaptations associated with cognitive control (post-error slowing, post-error increase in accuracy, post-error reduction of interference) were present. Overall, participants responded accurately (mean accuracy > 85%), and most errors were committed at faster RTs compared with correct responses (see Fischer et al., 2018 for a detailed description of these results). Importantly, we investigated time dependency in the decision process using empirical CAFs, similar to Murphy et al. (2016). CAFs relate accuracy to RT. A time-dependent decision policy is characterized by a combination of few missed deadlines, negative CAF slopes, and decreased accuracy around the response deadline (Murphy et al., 2016). We employed single-trial logistic regression to estimate the shape of the empirical CAFs (Fig. 1C; Materials and Methods). After accounting for fast response errors, the estimated CAF slope was negative for slower responses (β2 = −4.41 ± 0.35; t(862) = −12.35, p < 0.001, d = 0.42, BF10 > 100). Consistently, there was a significant difference in accuracy between the last two RT bins (0.98 ± 0.01 vs 0.97 ± 0.01; t(862) = 7.45, p < 0.001, d = 0.25, BF10 > 100). Moreover, using the regression fit to estimate the accuracy around the deadline revealed decreased accuracies compared with the peak decision accuracy (accuracy at the inflection point of the estimated CAFs: 0.98 ± 0.01 vs 0.79 ± 0.01; t(862) = 17.67, p < 0.001, d = 0.84, BF10 > 100). Finally, the number of missed deadlines was low: 3.49% (SD = 3.59). A control analysis suggested that only 0.35% ± 0.49 of responses were made between 1,100 and 1,200 ms, suggesting that the majority of misses result from a different response strategy or reflect outliers (i.e., disengagement from the task or lapses in attention). Hence, the total number of misses likely overestimates true misses in our task. Combined, these findings indicate that participants employed a time-dependent decision policy.

An extended DDM with collapsing decision boundaries captures task behavior best

We have previously shown that an extended multistage conflict DDM (depicted in Fig. 1D,E) captures key behavioral effects of the task well (Fischer et al., 2018). Conceptually, our DDM is very similar to previously introduced DDMs that model behavioral effects in conflict tasks by assuming that drift rates vary over time [e.g., the dual-stage model (Hubner et al., 2010) and the shrinking spotlight model (White et al., 2011), or, put more broadly, models positing attentional control mechanisms that help to filter out irrelevant information (Servant et al., 2015; White et al., 2018)]. Here, we use a simplified version of these DDMs that only assumes that attention shifts from the flankers (v1) to the target once the target is on screen (v2) and that these drift rates vary from trial to trial (sf and sv). We have previously demonstrated that the approximation in our DDM, which assumes that evidence for response A counteracts evidence for response B, is valid and compatible with the neural signal reflected in BPL (Fischer et al., 2018). Specifically, we demonstrated a remarkable similarity between the time course of the decision variable in our DDM and BPL over the motor cortex, whereby β power first lateralized toward the response associated with the flankers and then toward the response associated with the target.

To investigate whether dynamic decision boundaries explain subjects' behavior better, we complemented our multistage DDMs with an additional free parameter that scaled decision-bound collapse according to a cumulative Weibull distribution. As can be seen in Figure 1E, this model makes a specific prediction: later decisions are formed at a lower threshold. Model comparison (Fig. 2) revealed that our data are best described by the full model with collapsing decision bounds (DDM6; as measured by approximated BIC and a protected exceedance probability of 100%). This indicates that individuals apply a discounting function that decreases decision thresholds as time runs out (i.e., as the response window of 1,200 ms closes). Maximum likelihood parameters of the winning model had the following values: drift rate (v) = 4.23 ± 1.46, variance in drift rates (sv) = 0.80 ± 0.54, boundary (a) = 0.37 ± 0.07, variance in start points (sz) = 0.17 ± 0.09, nondecision time (Ter) = 0.29 ± 0.02, variance in Ter (st) = 1.15 ± 0.41, flanker suppression (f) = 0.47 ± 0.27, variance in f (sf) = 0.75 ± 0.27, and boundary collapse (k) = 2.41 ± 0.60. Flanker suppression was confirmed by the value of f, which was significantly <1, where 1 would reflect equal processing of distractor and target information (t(862) = 81.92, p < 0.001, d = 2.79, BF10 > 100). The sufficiency of the winning model was evaluated through posterior predictive checks that matched the behavioral data on various summary measures. Specifically, the model captured correct versus error RT quantiles for congruent and incongruent trials and conflict-related accuracies (Fig. 2G–J). Additional model validation analyses indicated that the collapsing bound model tended to recover the parameters that generated synthetic data (Fig. 2B) and that the full model with (DDM6) and without (DDM5) collapsing boundaries can reliably be recovered (Fig. 2M). While the full fixed boundary model (DDM5) also provided a good fit to the data (Fig. 2C–F), the collapsing boundary model captures the human data slightly better in later quantiles. This difference becomes more evident in the CAFs that were fit to the modeled data (Fig. 2K,L). Here, the collapsing bound model captures the empirical data better. The poorer fit of the fixed bound model may be due to the fact that it is ill-suited to generating negative CAF slopes without increasing the number of missed deadlines (Murphy et al., 2016). To lower the CAF slope, the fixed bound model must increase the drift rate variance, which increases the number of missed deadlines. Indeed, DDM5 produced significantly more misses (i.e., percentage of trials in which the diffusion process did not reach the decision boundary before the deadline of 1,200 ms; DDM5 vs DDM6, 0.44 ± 0.03 vs 0.1148 ± 0.01; t(862) = 11.65, p < 0.001, d = 0.53, BF10 > 100).

To characterize the predictions of the collapsing boundary model (DDM6), we simulated 5,000 individual trials using the mean of the best-fitting individual parameters. When averaged across trials in a median-split analysis, the decision signal shows clear response threshold differences, whereby fast responses show a higher threshold (Fig. 3H). Interestingly, our results are at odds with previous research identifying evidence in favor of standard DDMs with fixed bounds in humans (Hawkins et al., 2015a, 2015b; Ratcliff et al., 2016). In these studies, other decision-making tasks were used (i.e., random dot motion, brightness discrimination, and dot separation), and the DDMs treat (perceptual) interference trials as trials on which drift rates are low to capture the increased RT and uncertainty on (error) trials. However, lower drift rates per trial cannot produce fast errors. Yet, fast errors are a hallmark of speeded reaction time tasks like our flanker task. Here, fast errors arise from an interaction of distractor-driven bias and trial-wise prelateralization of the decision variables, as well as noise during stimulus processing (Fischer et al., 2018). Moreover, in contrast to most other decision-making tasks, no further information could be sampled to inform decisions after flanker and target presentation (the target was presented for only 33 ms in our task). Thus, decision policies might be task-dependent and affected by various factors. It is also possible that mild collapsing decision boundaries were present in previous investigations but were not clearly identifiable in model fit comparisons (Murphy et al., 2016). Specifically, Murphy et al. (2016) argued that several popular fixed threshold DDMs include variability parameters that can capture behavioral effects similar to those of dynamic boundary DDMs (Brown & Heathcote, 2008; Murphy et al., 2014), which precludes discrimination between the two model classes. The differences between fixed and dynamic boundary models may therefore exert a hard-to-detect influence on likelihood estimates in small samples and with methods commonly used for model comparison (e.g., summed BICs). Yet, predictions of different sequential sampling model classes can be highly informative for differentiating between mechanistic accounts. In this line of thought, we also considered a different kind of dynamic decision model, in which decision-making under time pressure is facilitated by an urgency signal (Cisek et al., 2009; Hanks et al., 2014). Here, an urgency signal that increases with decision time is used as a multiplicative gain of the diffusion and noise. Both full dynamic bound DDMs capture RT quantiles and accuracies better than the time-invariant models (data not shown) but make distinct predictions regarding the response threshold: while the collapsing bound model predicts that slower responses are made at lower decision thresholds, the urgency model predicts similar decision boundaries for slow versus fast responses (the sketch below contrasts the two update rules). We test the neural plausibility of these accounts in the next section.
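To make the contrast concrete, the two accounts differ only in where time dependency enters the accumulation step, as in this illustrative sketch (the linear urgency gain g(t) = 1 + c·t is an assumed form used purely for illustration, not a fitted model):

```python
import numpy as np

rng = np.random.default_rng(0)

def step_collapsing(x, v, t, a, k, s=0.1, dt=0.001, shape=3.0):
    """Fixed increments; the boundary u(t) shrinks over time (Weibull collapse)."""
    u_t = a - (1 - np.exp(-(t / k) ** shape)) * a / 2
    x += v * dt + s * np.sqrt(dt) * rng.standard_normal()
    return x, abs(x) >= u_t

def step_urgency(x, v, t, a, c=1.0, s=0.1, dt=0.001):
    """Drift and noise share a time-growing gain; the boundary a stays fixed."""
    g = 1.0 + c * t
    x += g * (v * dt + s * np.sqrt(dt) * rng.standard_normal())
    return x, abs(x) >= a
```

Under the collapsing bound, slow responses terminate at a lower threshold; under urgency gating, they terminate at the same threshold, which is simply reached sooner because the effective signal and noise are amplified.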

Cortical BPL reflects the neural implementation of decision boundary collapse

Next, we tested the hypothesis that the response-locked BPL represents a neural implementation of decision boundary collapse. First, we confirmed that β power decreased more at centroparietal electrodes contralateral to the initiated response (Fig. 3A–C), suggesting that BPL reflects differential motor activation (Pfurtscheller, 1981; Donner et al., 2009). When averaged across trials in a median-split analysis, differences in BPL at response execution (mean BPL ± 12 ms around the response) are clearly evident. Here, fast responses show a higher BPL (i.e., a higher decision boundary) than slow responses (fast vs slow RT, −0.13 ± 0.003 vs −0.10 ± 0.003; t(862) = −9.58, p < 0.001, d = 0.25, BF10 > 100; Fig. 3D). This representation of RT is similarly reflected in the modeled DDM signal (Fig. 3H). Thus, collapsing boundaries are confirmed by response-locked BPL, indicating that humans make use of a discounting function to decide earlier and fulfill a time criterion. To confirm this finding at the single-trial level, we regressed task-related behavioral factors onto measured individual single-trial BPL at response execution (mean BPL ± 12 ms around the response). This allowed us to control for other task factors that may additionally influence BPL but are not of primary interest for this report (but see Fig. 4 for more details on these results). The regression analyses confirmed that BPL around the response decreased (i.e., boundary collapse) with increasing RTs [RT effect, 0.54 ± 0.04; t(862) = 13.26, p < 0.001, d = 0.45, CI (0.46–0.62), BF10 > 100; Fig. 5A]. Plotting the RT quantiles against the BPL at response execution confirms a boundary collapse that closely resembles a sigmoidal function (Fig. 5B). Yet, a correlation between BPL and RT by itself does not necessarily imply bound collapse in our task. Specifically, trials with slower RTs are likely to be ones in which there was greater response conflict, so one could predict greater build-up for the ultimately unchosen alternative and an overall reduction in lateralization. To address this issue, we conducted several control analyses. First, our GLM included various task factors, such as congruency, to control for confounds and the interdependence of effects. Second, in an explorative analysis, we split the regression analysis by ipsilateral (the hemisphere that does not induce the choice on the current trial) and contralateral hemispheres to better understand which hemisphere drives RT-related BPL decreases (Fig. 5C). The results indicate that threshold decreases are partly due to slightly stronger decreases in β power over the contralateral hemisphere [contralateral RT effect, 0.31 ± 0.06, t(862) = 5.13, p < 0.001, d = 0.17, CI (0.19–0.42), BF10 > 100; ipsilateral RT effect, −0.22 ± 0.06, t(862) = −3.68, p < 0.001, d = 0.12, CI (−0.33 to −0.10), BF10 = 32.10]. However, the effect is stronger when the relative signal per trial is investigated than when either hemisphere's β power is analyzed alone (contrast of the contralateral against the BPL effect, t(862) = 3.87, p < 0.001, d = 0.23, BF10 = 63.47). These data suggest that peak BPL, coinciding with the motor response, may be an especially valid marker of response threshold modulations. However, it should be noted that this assumption may be limited to speeded forced-choice tasks with short reaction times and brief evidence accumulation (Rogge et al., 2022).
In the averaged β power signal, this effect is partly explained by a reduction of β power over the contralateral hemisphere (Fig. 5D), while on average the β power decrease over the ipsilateral hemisphere barely changes when responses are given later in a trial (Fig. 5E). A closer look at β power around the response (button press ± 12 ms) divided into five equally sized RT bins indicates that β power over both the hemisphere contralateral and the hemisphere ipsilateral to the response hand is RT dependent (Fig. 5D,E). Here, RT dependency in β power amplitudes follows a U-shaped curve. Considering previous research that characterized peak β power contralateral to the response as a "motor threshold" (Kelly et al., 2021), these data suggest that fast response errors partly arise from decreased motor thresholds (see Fischer et al., 2018 for a detailed discussion of how fast response errors arise in simple tasks) and that collapsing decision boundaries terminate decision formation under time pressure, which potentially increases error likelihood. This pattern of results implies that β power around response execution is fairly variable, which adds to the accumulating evidence for time-dependent effects on β amplitudes (see Discussion). Critically, our results suggest that peak BPL (i.e., the relative β signal), coinciding with the motor response, may be an especially valid marker of collapsing decision thresholds, as it incorporates both contra- and ipsilateral RT-dependent motor preparation signals.

Figure 4.

Single-trial regression on β lateralization at response. A, The results of a single-trial regression analysis on BPL at response (mean ± 12 ms surrounding the button press), comparing relevant task factors. B, Raw value splits for congruency (congruent vs incongruent), accuracy (correct vs error), and previous accuracy (post-correct vs post-error). C, The results of separate regression analyses split by ipsilateral (the hemisphere that does not induce the choice on the current trial) and contralateral hemispheres. The results indicate that incongruent trials cause reduced BPL [t(862) = 10.21, p < 0.001, d = 0.35, CI (0.31–0.45), BF10 > 100]. This effect is driven by changes over the ipsilateral hemisphere, where β power was more strongly reduced on incongruent trials. This may reflect preactivation of the competing response tendency. Similarly, errors were associated with reduced BPL [t(862) = 6.52, p < 0.001, d = 0.22, CI (0.17–0.32), BF10 > 100]. This result is partly driven by stronger increases in β power over the contralateral hemisphere. However, it should be noted that this effect is not evident when comparing RT-matched BPL on correct versus error trials (Fig. 3F), which makes it difficult to interpret. Following errors, BPL is larger [more negative, t(862) = −9.34, p < 0.001, d = −0.32, CI (−0.4 to −0.26), BF10 > 100], which is caused by a stronger increase in β power over the ipsilateral hemisphere. This may reflect strategic decision policy adaptations to increase accuracy after errors (Murphy et al., 2016; Fischer et al., 2018). An additional interpretation is that following errors, the unchosen response is more effectively suppressed. Error bars reflect 99.9% CI. A, C, Mean within-participant t values; statistics are the results of t tests of individual within-subject regressions against zero. NDL reflects the distance from the last break in the task, and trial reflects the general time on task; together with RSI, these regressors were used as noise regressors.

Figure 5.

Single-trial regression on BPL at response. A, The results of a single-trial regression analysis on BPL at response (mean ± 12 ms surrounding button press). This analysis confirms collapsing decision boundaries on a single-trial level while controlling for other task factors that potentially influence BPL. B, BPL raw values for RT quantiles plotted against BPL at response. Fitting the mean value trajectory confirms a boundary collapse that closely resembles a sigmoidal function (black line; ymin = −0.13, ymax = −0.09, x0 = 4.89, slope = 0.49, adjusted R2 = 0.95). C, The results of separate regression analyses on β power split by ipsilateral (hemisphere that does not induce the choice on the current trial) and contralateral hemispheres. The RT-related BPL decrease is partly due to slightly stronger decreases over the contralateral hemisphere. In the averaged β power signal, this effect is partly explained by a reduction of β power over the contralateral hemisphere (D), while on average β power over the ipsilateral hemisphere is barely changed when responses are given later in a trial (E). The boxes below show β power (mean ± 12 ms surrounding button press) divided into five equally sized (within-subject) RT bins. ∗ = uncorrected significant difference from mean β power (dashed lines). A, Boxes = IQR, o = median, - = mean, whiskers = 1.5 × IQR, gray dots = outliers. Error bars indicate SEM. A, C, Mean within-participant t values; statistics are the results of t tests of individual within-subject regressions against zero. RSI reflects the response–stimulus interval, NDL reflects the distance from the last break in the task, and trial reflects the general time on task.
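The sigmoidal fit in panel B can be approximated with a standard four-parameter logistic; the parameterization below is one common choice consistent with the reported ymin/ymax/x0/slope values (a sketch, not the authors' fitting code; rt_q and bpl_q are hypothetical arrays of per-quantile means):

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, ymin, ymax, x0, slope):
    # four-parameter logistic running from ymin to ymax with midpoint x0
    return ymin + (ymax - ymin) / (1.0 + np.exp(-slope * (x - x0)))

# rt_q: mean RT per RT quantile; bpl_q: mean BPL at response per quantile
# popt, _ = curve_fit(sigmoid, rt_q, bpl_q,
#                     p0=[bpl_q.min(), bpl_q.max(), np.median(rt_q), 1.0])
```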

Specificity of the effects to the β frequency range

Similar to BPL, lateralization in the α (8–12 Hz; APL) and μ (8–14 Hz; MuPL) frequency bands over the motor cortex has been related to a variety of decision processes, such as decision formation and preparation (Donner et al., 2009; Pape & Siegel, 2016; Rogge et al., 2022) or the urgency of a decision (Murphy et al., 2016). In a control analysis, we therefore aimed to establish the specificity of our effects to the β frequency range. First, we confirmed that sensorimotor α and μ power similarly reflected response-related signals and that this was specifically lateralized to the active motor cortex (Figs. 3A, 6). Yet, the effects are smaller and peak slightly after responses are made (Fig. 6B,C). To quantify the effect differences on a single-trial level, we regressed response speed onto individual single-trial APL and MuPL at response execution (mean ± 12 ms around response) while controlling for other task factors, using the same GLM as for the BPL analysis. The results revealed that response speed is similarly reflected in movement-selective α and μ suppression, but that this effect is more pronounced in the β frequency range (BPL vs APL, 0.54 ± 0.04 vs 0.23 ± 0.04; t(862) = 6.43, p < 0.001, d = 0.26, BF10 > 100; BPL vs MuPL, 0.54 ± 0.04 vs 0.28 ± 0.04; t(862) = 6.15, p < 0.001, d = 0.26, BF10 > 100). In addition to the α and μ frequency bands, an opposite effect was present for lateralization in the θ frequency band (4–8 Hz) with a similar topography (Figs. 3A, 6A), which is not further interpreted in this report.
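The paired contrasts between frequency bands reduce to t tests on per-subject regression weights; a minimal sketch (hypothetical variable names, assuming the weights were already estimated with the GLM described above):

```python
import numpy as np
from scipy import stats

def compare_band_effects(betas_a, betas_b):
    """Paired t test plus Cohen's d for per-subject RT regression
    weights from two frequency bands (each of shape (n_subjects,))."""
    diff = betas_a - betas_b
    t, p = stats.ttest_rel(betas_a, betas_b)
    d = diff.mean() / diff.std(ddof=1)
    return t, p, d

# e.g., compare_band_effects(beta_bpl, beta_apl) for the BPL vs APL contrast
```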

Figure 6.

Movement-selective lateralization signals across frequency bands. Ai–Dii, Movement-selective lateralization signals across frequency bands. We chose the electrodes with the maximal effect (C3, C4, CP3, CP4) for analyses of lateralized signals. First, we collapsed the convolved signal across the respective frequency range and normalized it within each participant by dividing the power by its SD and subtracting the mean. To obtain the lateralized signal, we subtracted the power in the respective band over the inactive sensorimotor cortex (i.e., the electrode side ipsilateral to the hand that gave the response in the trial) from the power recorded over the active (contralateral) sensorimotor cortex. This difference signal thus compares the degree of power reduction between both hemispheres, presumably reflecting differential motor activation. The results indicate that the effect is most pronounced in the β frequency band.
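The caption's procedure translates directly into code. Here is a minimal sketch for one participant (our illustration; the channel logic and normalization steps follow the caption, while the data layout and names are assumptions):

```python
import numpy as np

def lateralized_band_power(power, freqs, band, contra_ch, ipsi_ch):
    """power : (n_channels, n_freqs, n_times) convolved power
    band     : (f_low, f_high) frequency range to collapse over
    contra_ch, ipsi_ch : channel indices over the active (contralateral)
    and inactive (ipsilateral) sensorimotor cortex, e.g. C3/CP3 vs
    C4/CP4 depending on the response hand."""
    sel = (freqs >= band[0]) & (freqs <= band[1])
    p = power[:, sel, :].mean(axis=1)   # collapse across the band
    p = p / p.std()                     # normalize within participant:
    p = p - p.mean()                    # divide by SD, subtract mean
    # lateralized signal: contralateral minus ipsilateral power
    return p[contra_ch].mean(axis=0) - p[ipsi_ch].mean(axis=0)
```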

Confirmation sample

Behavioral analyses of the confirmation sample confirmed that general behavioral effects were in accordance with typical results seen in flanker tasks. They reflected interference effects, with slower (ΔRT = +42 ms, t(118) = −43.25, p < 0.001, d = 1.67, BF10 > 100) and more erroneous (Δaccuracy = 19%, t(118) = 34.51, p < 0.001, d = 3.71, BF10 > 100) responses on incongruent trials. The overall accuracy was 86% (SD = 3.94). Time dependency in the decision policy was confirmed by a combination of a few missed deadlines (3.15% ± 3.89) and negative CAF slopes (β2 = −2.22 ± 0.45; t(118) = −4.89, p < 0.001, d = 0.45, BF10 > 100; Fig. 7).
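For reference, the binned points of a conditional accuracy function are straightforward to compute; a minimal sketch (this covers only the binned means shown in Fig. 7F, not the piece-wise logistic fits themselves):

```python
import numpy as np

def conditional_accuracy(rt, correct, n_bins=25):
    """Points of a conditional accuracy function: trials sorted by RT
    into equally sized bins; returns mean RT and mean accuracy per bin.

    rt, correct : (n_trials,) arrays (accuracy coded 0/1)
    """
    order = np.argsort(rt)
    bins = np.array_split(order, n_bins)
    rt_means = np.array([rt[b].mean() for b in bins])
    acc_means = np.array([correct[b].mean() for b in bins])
    return rt_means, acc_means
```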

Figure 7.

Model comparison and validation in the confirmation sample. A, Cumulated aBIC scores over participants for each candidate model. Higher values indicate a better fit of the models to the behavioral data. The results indicate that the dynamic bound model (DDM6) provides the best fit to the data. Protected exceedance probability (pEP; the probability that each model is the most likely model across all participants, taking into account the null possibility that differences in model evidence are due to chance) similarly favors DDM6. Estimated model frequencies (i.e., the ratio of subjects assigned to each model in the model space) also suggest DDM6 to be the most frequent across participants. B–D, Quantile fits of the model against human RT data. E, Model and human accuracy. In all conditions (congruent and incongruent correct as well as incongruent error), the model captures the RT data in each quantile, suggesting a good fit to the data. Note: boxes = interquartile range (IQR), o = median, - = mean, whiskers = 1.5 × IQR, gray dots = outliers. F, Conditional accuracy function. Here, the points indicate the mean accuracy of trials sorted by RT into 25 equally sized bins. The line shows the best fits of piece-wise logistic regressions to each subject's single-trial data. The shades indicate ±SEM. G, H, CAF and the proportion of missed deadlines from the full fixed bound model (DDM5; G) and the collapsing boundary model (DDM6; H). The points indicate the mean accuracy of trials sorted by RT into 25 equally sized bins informed by the empirical data. The line shows the best fits of piece-wise logistic regressions to each subject's modeled single-trial data. The shades indicate ±SEM. Note that for model fitting, we excluded all missed trials from the empirical data; hence, the ground truth for missed deadlines in the data is zero.
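To illustrate the model-selection logic only (a sketch: the exact aBIC penalty is defined in the Methods, and the protected exceedance probability requires the hierarchical procedure of Rigoux et al., 2014, which is omitted here):

```python
import numpy as np

def evidence_score(log_lik, n_params, n_trials):
    # BIC-style approximation to log model evidence, signed so that
    # higher values indicate better fit, as in Fig. 7A; the paper's
    # aBIC may differ in its exact penalty, so treat this as a stand-in
    return log_lik - 0.5 * n_params * np.log(n_trials)

# Cumulate over participants and compare candidate DDMs, e.g.:
# totals = {m: sum(evidence_score(ll[m][s], k[m], n[s]) for s in subjects)
#           for m in models}
# best_model = max(totals, key=totals.get)
```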

To replicate key findings, we used the same analysis procedures as for the main sample. The results replicate the findings from the main sample. First, we show that the full dynamic decision boundary model outperformed static decision boundary conflict DDMs (Fig. 7A) and captured conflict-related RT and accuracy well (Fig. 7B–E). As in the main sample, the superior fit of the full model with boundary collapse compared with the full model with fixed boundaries becomes evident in the CAFs that were fit to the modeled data (Fig. 7F–H). Moreover, as in the main sample, DDM5 produced significantly more misses (DDM5 vs DDM6, 0.42% ± 0.06 vs 0.24% ± 0.04; t(118) = 3.46, p < 0.001, d = 0.28, BF10 = 27.09).

Moreover, the EEG analysis validated our conclusion that the trough and lateralization of movement-selective β suppression around response time may reflect a collapsing decision bound (Fig. 8). First, when averaged across trials in a median-split analysis, the differences in BPL at response execution (i.e., mean BPL ± 12 ms around the response) are again clearly evident. Specifically, fast responses show a higher BPL (i.e., a higher decision boundary; fast RT vs slow RT, −0.17 ± 0.01 vs −0.12 ± 0.01; t(118) = 4.75, p < 0.001, d = 0.39, BF10 > 100, Fig. 8Ci). Consistent with the results of the main sample, this effect is partly explained by a reduction of β power over the contralateral hemisphere (Fig. 8Cii), while on average the β power decrease over the ipsilateral hemisphere barely changes when responses are given later in a trial (Fig. 8Cii). Yet, we again show that β power over both the contralateral and the ipsilateral hemisphere to the response hand is RT dependent (Fig. 8Cii). Second, to investigate this finding on a single-trial level, we regressed task-related behavioral factors onto individual single-trial BPL at response execution (mean BPL ± 12 ms around the response). Again, this analysis confirmed that BPL around the response decreased with increasing RTs [0.61 ± 0.126, t(118) = 4.85, p < 0.001, d = 0.45, CI (0.36–0.86), BF10 > 100, Fig. 8D]. As in the main sample, the effect is stronger when the relative signal per trial is investigated than when either hemisphere's β power is analyzed alone [contralateral RT effect, 0.29 ± 0.12; t(118) = 2.39, p = 0.01, d = 0.22, CI (0.05–0.55), BF10 = 1.56; ipsilateral RT effect, −0.48 ± 0.16; t(118) = 3.05, p = 0.002, d = 0.28, CI (−0.81 to −0.17), BF10 = 8.11].

Figure 8.

EEG results for the confirmation sample. A, The result of a single-trial regression analysis comparing responses with the left and right hand, while controlling for congruency, flanker distance, RSI, following RSI, trial number, and log-transformed RT. The data suggest that β power decreases more strongly when a response is given with the contralateral compared with the ipsilateral hand. This effect was most pronounced over the electrodes C3/4 and CP3/4. B, The BPL signal (Bii) is composed of the activity difference recorded over the contralateral and ipsilateral motor cortices (Bi). Ci, BPL around response time may reflect collapsing decision thresholds, whereby faster responses (based on a median split) are made at higher thresholds. This effect is partly explained by a reduction of β power over the contralateral hemisphere (Cii, first from the left), while on average β power over the ipsilateral hemisphere is barely changed when responses are given later in a trial (Cii, second from the left). The boxes next to the average β power traces show β power (mean ± 12 ms surrounding button press) divided into five equally sized (within-subject) RT bins. These results show that β power over both the contralateral and ipsilateral hemispheres to the response hand is RT dependent. ∗ = uncorrected significant difference from mean β power (dashed lines). D, The results of a single-trial regression analysis on BPL at response (mean ± 12 ms surrounding button press). This analysis confirms collapsing decision boundaries on a single-trial level while controlling for other task factors that potentially influence BPL. E, The results of separate regression analyses on β power split by ipsilateral (hemisphere that does not induce the choice on the current trial) and contralateral hemispheres. F, BPL raw values at response time plotted against RT. Note: contours in A represent significant clusters after cluster-based permutation testing (Maris & Oostenveld, 2007). Error bars and shades reflect 99.9% CI. D, Mean within-participant t values; statistics are the results of t tests of individual within-subject regressions against zero. NDL reflects the distance from the last break in the task, and trial reflects the general time on task; together with RSI, these regressors were used as noise regressors.

Discussion

This study provides compelling evidence that, under high time pressure, dynamic decision thresholds drive the termination of decision formation, as seen in the BPL over the motor cortex during a speeded flanker task. This is demonstrated by behavioral modeling, which shows that subjects' time-dependent decision policy is best captured by DDMs that allow dynamic decision bounds, and confirmed in the neural signal, whereby BPL over motor cortices reflects features of the modeled decision signal.

Taken together, our results extend previous work on BPL as a neural signal that carries important features of a decision variable (Donner et al., 2009; Fischer et al., 2018; O’Connell & Kelly, 2021). Recent work has suggested that β power suppression around the response contralateral to the chosen hand reaches a highly similar level irrespective of urgency or response speed, suggesting a fixed response threshold (Murphy et al., 2016; Steinemann et al., 2018; Feuerriegel et al., 2021; Kelly et al., 2021; Corbett et al., 2023). In line with this account, it has been suggested that β power over the right and left motor cortex reflects two race-to-(response-)threshold motor preparation signals (O’Connell et al., 2012; Kelly et al., 2021; Corbett et al., 2023). Under time pressure, speeding may be facilitated by an urgency signal that is independent of the evidence but, in addition to it, drives the signal toward the response threshold (Cisek et al., 2009; Hanks et al., 2014; Murphy et al., 2016; Steinemann et al., 2018; Corbett et al., 2023). Recent work in nonhuman primates (Hanks et al., 2014; Thura & Cisek, 2016) and humans (Murphy et al., 2016) demonstrated that under time pressure the neural urgency signal is characterized by a time-dependent increase in common activation for both the chosen and unchosen response alternatives, which translates into a diminishing contra-/ipsilateral μ lateralization with increasing RT. Our results add to this line of research in several ways. First, to the best of our knowledge, this is the first study to demonstrate a neural implementation of collapsing decision thresholds in a speeded flanker task. Second, we show that motor response thresholds contralateral to the response are more dynamic than previously thought. Third, we suggest that peak BPL (i.e., the relative β signal), coinciding with the motor response, may be an especially valid marker of response threshold modulations, as it incorporates both contra- and ipsilateral motor preparation signals. Interestingly, BPL in our task resembles a signal that dynamically reduces the threshold it must reach to trigger a response. Such a signal has previously been demonstrated in the form of a centroparietal positivity in the EEG, which reduces in amplitude alongside RT, accuracy, and urgency (Steinemann et al., 2018; Kelly et al., 2021).

Importantly, we replicated all key findings in an independent confirmation sample. Thus, this study provides strong, converging evidence that movement-selective BPL reflects a signal that dynamically adjusts response thresholds to terminate decision formation under time pressure. Our finding that BPL reflects collapsing decision boundaries under time pressure is consistent with the additive urgency signal account; in fact, at the level of an abstract mathematical model, the two accounts are closely related (see the sketch below). Yet, our finding of time-dependent motor thresholds contralateral to the response is surprising. How can this discrepancy in findings be explained? One factor that likely contributes to the differences is the nature of our task. In our task, after presentation of the flankers (83 ms) and the target (which was presented for only 33 ms), no further information could be sampled to inform decisions. In contrast, other studies typically use paradigms in which evidence can be accumulated until the response is made (e.g., random dot motion paradigms). Hence, the differences in findings may highlight that decision policies do not necessarily generalize across tasks. Indeed, there is evidence that response policies can be task-specific even across variants of conflict tasks, such as the Simon compared with the flanker task (Hübner & Töbel, 2019). It is also possible that within-trial adjustments of response thresholds are small and are more pronounced in relatively rare cases. For example, in our sample, very slow errors and slow responses in trials without visual conflict are sparse. Here, we draw inference from a large sample and likely have the power to detect even small time-dependent changes and rare response constellations. In addition, β amplitudes vary quite widely across individuals, and thus small effects might be difficult to detect in smaller samples. Indeed, even our confirmation sample is considerably larger than that of the average EEG study. Finally, our results do not exclude the possibility of an urgency signal as suggested by Thura and Cisek (2017). It may be that instead of increasing the gain in the decision process, the urgency signal in the globus pallidus modulates decision thresholds in the cortex. Yet, this argument remains speculative.
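To make the relation between the two accounts concrete, here is a minimal sketch in our own notation (an illustration, not the paper's fitted model equations). For a one-sided accumulator x(t) with a fixed response threshold B and an additive urgency signal u(t) that grows over time, a response is triggered when

\[ x(t) + u(t) \ge B \quad \Longleftrightarrow \quad x(t) \ge B - u(t) \equiv b(t), \]

that is, adding urgency to the decision signal is formally identical to letting the accumulated evidence race toward a bound b(t) that collapses at exactly the rate at which urgency grows. The two accounts therefore differ mainly in which neural signal is assumed to carry the time-dependent component.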

In summary, our findings build on previous research on movement-selective β power. They suggest that one mechanism for achieving fast-paced decisions under time pressure is a collapse of response thresholds, which facilitates dynamic decision policy adjustments according to task demands.

Footnotes

  • This research was supported by the Deutsche Forschungsgemeinschaft (KL 2337/2-1 to T.A.K., UL 196/3-3 to M.U.) and by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (ERC advanced grant agreement no. 101018805 to M.U.).

  • *H.K. and A.G.F. contributed equally to this work.

  • The authors declare no competing financial interests.

  • Correspondence should be addressed to Hans Kirschner at hans.kirschner{at}ovgu.de.

SfN exclusive license.

References

  1. Bond K, Dunovan K, Porter A, Rubin JE, Verstynen T (2021) Dynamic decision policy reconfiguration under outcome uncertainty. Elife 10:e65540. doi:10.7554/eLife.65540
  2. Brown SD, Heathcote A (2008) The simplest complete model of choice response time: linear ballistic accumulation. Cogn Psychol 57:153–178. doi:10.1016/j.cogpsych.2007.12.002
  3. Cisek P, Kalaska JF (2005) Neural correlates of reaching decisions in dorsal premotor cortex: specification of multiple direction choices and final selection of action. Neuron 45:801–814. doi:10.1016/j.neuron.2005.01.027
  4. Cisek P, Puskas GA, El-Murr S (2009) Decisions in changing conditions: the urgency-gating model. J Neurosci 29:11560–11571. doi:10.1523/JNEUROSCI.1844-09.2009
  5. Corbett EA, Martinez-Rodriguez LA, Judd C, O’Connell RG, Kelly SP (2023) Multiphasic value biases in fast-paced decisions. Elife 12:e67711. doi:10.7554/eLife.67711
  6. Danielmeier C, Ullsperger M (2011) Post-error adjustments. Front Psychol 2:233. doi:10.3389/fpsyg.2011.00233
  7. Danielmeier C, Wessel JR, Steinhauser M, Ullsperger M (2009) Modulation of the error-related negativity by response conflict. Psychophysiology 46:1288–1298. doi:10.1111/j.1469-8986.2009.00860.x
  8. Delorme A, Makeig S (2004) EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J Neurosci Methods 134:9–21. doi:10.1016/j.jneumeth.2003.10.009
  9. Donner TH, Siegel M, Fries P, Engel AK (2009) Buildup of choice-predictive activity in human motor cortex during perceptual decision making. Curr Biol 19:1581–1585. doi:10.1016/j.cub.2009.07.066
  10. Feuerriegel D, Jiwa M, Turner WF, Andrejevic M, Hester R, Bode S (2021) Tracking dynamic adjustments to decision making and performance monitoring processes in conflict tasks. Neuroimage 238:118265. doi:10.1016/j.neuroimage.2021.118265
  11. Fischer AG, Nigbur R, Klein TA, Danielmeier C, Ullsperger M (2018) Cortical beta power reflects decision dynamics and uncovers multiple facets of post-error adaptation. Nat Commun 9:5038. doi:10.1038/s41467-018-07456-8
  12. Hanks T, Kiani R, Shadlen MN (2014) A neural mechanism of speed-accuracy tradeoff in macaque area LIP. Elife 3:e02260. doi:10.7554/eLife.02260
  13. Hawkins GE, Forstmann BU, Wagenmakers EJ, Ratcliff R, Brown SD (2015a) Revisiting the evidence for collapsing boundaries and urgency signals in perceptual decision-making. J Neurosci 35:2476–2484. doi:10.1523/JNEUROSCI.2410-14.2015
  14. Hawkins GE, Wagenmakers EJ, Ratcliff R, Brown SD (2015b) Discriminating evidence accumulation from urgency signals in speeded decision making. J Neurophysiol 114:40–47. doi:10.1152/jn.00088.2015
  15. Heathcote A, Brown S, Cousineau D (2004) QMPE: estimating lognormal, Wald, and Weibull RT distributions with a parameter-dependent lower bound. Behav Res Methods Instrum Comput 36:277–290. doi:10.3758/bf03195574
  16. Hübner R, Steinhauser M, Lehle C (2010) A dual-stage two-phase model of selective attention. Psychol Rev 117:759–784. doi:10.1037/a0019471
  17. Hübner R, Töbel L (2019) Conflict resolution in the Eriksen flanker task: similarities and differences to the Simon task. PLoS One 14:e0214203. doi:10.1371/journal.pone.0214203
  18. Hunt LT, Woolrich MW, Rushworth MF, Behrens TE (2013) Trial-type dependent frames of reference for value comparison. PLoS Comput Biol 9:e1003225. doi:10.1371/journal.pcbi.1003225
  19. Karşılar H, Simen P, Papadakis S, Balcı F (2014) Speed accuracy trade-off under response deadlines. Front Neurosci 8:248. doi:10.3389/fnins.2014.00248
  20. Kelly SP, Corbett EA, O’Connell RG (2021) Neurocomputational mechanisms of prior-informed perceptual decision-making in humans. Nat Hum Behav 5:467–481. doi:10.1038/s41562-020-00967-9
  21. Kühn AA, Doyle L, Pogosyan A, Yarrow K, Kupsch A, Schneider GH, Hariz MI, Trottenberg T, Brown P (2006) Modulation of beta oscillations in the subthalamic area during motor imagery in Parkinson’s disease. Brain 129(Pt 3):695–706. doi:10.1093/brain/awh715
  22. Maris E, Oostenveld R (2007) Nonparametric statistical testing of EEG- and MEG-data. J Neurosci Methods 164:177–190. doi:10.1016/j.jneumeth.2007.03.024
  23. Murphy PR, Boonstra E, Nieuwenhuis S (2016) Global gain modulation generates time-dependent urgency during perceptual choice in humans. Nat Commun 7:13526. doi:10.1038/ncomms13526
  24. Murphy PR, Vandekerckhove J, Nieuwenhuis S (2014) Pupil-linked arousal determines variability in perceptual decision making. PLoS Comput Biol 10:e1003854. doi:10.1371/journal.pcbi.1003854
  25. O’Connell RG, Dockree PM, Kelly SP (2012) A supramodal accumulation-to-bound signal that determines perceptual decisions in humans. Nat Neurosci 15:1729–1735. doi:10.1038/nn.3248
  26. O’Connell RG, Kelly SP (2021) Neurophysiology of human perceptual decision-making. Annu Rev Neurosci 44:495–516. doi:10.1146/annurev-neuro-092019-100200
  27. Overmeyer R, Kirschner H, Fischer AG, Endrass T (2023) Unraveling the influence of trial-based motivational changes on performance monitoring stages in a flanker task. Sci Rep 13:19180. doi:10.1038/s41598-023-45526-0
  28. Palestro JJ, Weichart E, Sederberg PB, Turner BM (2018) Some task demands induce collapsing bounds: evidence from a behavioral analysis. Psychon Bull Rev 25:1225–1248. doi:10.3758/s13423-018-1479-9
  29. Palmer J, Kreutz-Delgado K, Makeig S (2012) AMICA: an adaptive mixture of independent component analyzers with shared components. San Diego: Swartz Center for Computational Neuroscience.
  30. Pape AA, Siegel M (2016) Motor cortex activity predicts response alternation during sensorimotor decisions. Nat Commun 7:13098. doi:10.1038/ncomms13098
  31. Pfurtscheller G (1981) Central beta rhythm during sensorimotor activities in man. Electroencephalogr Clin Neurophysiol 51:253–264. doi:10.1016/0013-4694(81)90139-5
  32. Price K, Storn RM, Lampinen JA (2005) Differential evolution. Springer Science & Business Media.
  33. Ratcliff R, Rouder JN (2000) A diffusion model account of masking in two-choice letter identification. J Exp Psychol Hum Percept Perform 26:127–140. doi:10.1037//0096-1523.26.1.127
  34. Ratcliff R, Smith PL, Brown SD, McKoon G (2016) Diffusion decision model: current issues and history. Trends Cogn Sci 20:260–281. doi:10.1016/j.tics.2016.01.007
  35. Rigoux L, Stephan KE, Friston KJ, Daunizeau J (2014) Bayesian model selection for group studies - revisited. Neuroimage 84:971–985. doi:10.1016/j.neuroimage.2013.08.065
  36. Rogge J, Jocham G, Ullsperger M (2022) Motor cortical signals reflecting decision making and action preparation. Neuroimage 263:119667. doi:10.1016/j.neuroimage.2022.119667
  37. Servant M, White C, Montagnini A, Burle B (2015) Using covert response activation to test latent assumptions of formal decision-making models in humans. J Neurosci 35:10371–10385. doi:10.1523/JNEUROSCI.0078-15.2015
  38. Steinemann NA, O’Connell RG, Kelly SP (2018) Decisions are expedited through multiple neural adjustments spanning the sensorimotor hierarchy. Nat Commun 9:3627. doi:10.1038/s41467-018-06117-0
  39. Stephan KE, Penny WD, Daunizeau J, Moran RJ, Friston KJ (2009) Bayesian model selection for group studies. Neuroimage 46:1004–1017. doi:10.1016/j.neuroimage.2009.03.025
  40. Thura D, Cisek P (2016) Modulation of premotor and primary motor cortical activity during volitional adjustments of speed-accuracy trade-offs. J Neurosci 36:938–956. doi:10.1523/JNEUROSCI.2230-15.2016
  41. Thura D, Cisek P (2017) The basal ganglia do not select reach targets but control the urgency of commitment. Neuron 95:1160–1170.e5. doi:10.1016/j.neuron.2017.07.039
  42. Uhlenbeck GE, Ornstein LS (1930) On the theory of the Brownian motion. Phys Rev 36:823–841. doi:10.1103/PhysRev.36.823
  43. Vandekerckhove J, Tuerlinckx F (2007) Fitting the Ratcliff diffusion model to experimental data. Psychon Bull Rev 14:1011–1026. doi:10.3758/bf03193087
  44. White CN, Ratcliff R, Starns JJ (2011) Diffusion models of the flanker task: discrete versus gradual attentional selection. Cogn Psychol 63:210–238. doi:10.1016/j.cogpsych.2011.08.001
  45. White CN, Servant M, Logan GD (2018) Testing the validity of conflict drift-diffusion models for use in estimating cognitive processes: a parameter-recovery study. Psychon Bull Rev 25:286–301. doi:10.3758/s13423-017-1271-2
Keywords

  • EEG
  • decision-making
  • drift diffusion model
  • decision boundary collapse
