Research Articles, Behavioral/Cognitive

Selective Attention and Decision-Making Have Separable Neural Bases in Space and Time

Denise Moerel, Anina N. Rich and Alexandra Woolgar
Journal of Neuroscience 18 September 2024, 44 (38) e0224242024; https://doi.org/10.1523/JNEUROSCI.0224-24.2024
Denise Moerel
1School of Psychological Sciences, Macquarie University, Sydney 2109, New South Wales, Australia
2Perception in Action Research Centre, Macquarie University, Sydney 2109, New South Wales, Australia
3The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney 2145, New South Wales, Australia
Anina N. Rich
1School of Psychological Sciences, Macquarie University, Sydney 2109, New South Wales, Australia
2Perception in Action Research Centre, Macquarie University, Sydney 2109, New South Wales, Australia
4Macquarie University Performance and Expertise Research Centre, Sydney 2109, New South Wales, Australia
Alexandra Woolgar
1School of Psychological Sciences, Macquarie University, Sydney 2109, New South Wales, Australia
2Perception in Action Research Centre, Macquarie University, Sydney 2109, New South Wales, Australia
5MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge CB2 7EF, United Kingdom

Abstract

Attention and decision-making processes are fundamental to cognition. However, they are usually experimentally confounded, making it difficult to link neural observations to specific processes. Here we separated the effects of selective attention from the effects of decision-making on brain activity obtained from human participants (both sexes), using a two-stage task where the attended stimulus and decision were orthogonal and separated in time. Multivariate pattern analyses of multimodal neuroimaging data revealed the dynamics of perceptual and decision-related information coding through time with magnetoencephalography (MEG), through space with functional magnetic resonance imaging (fMRI), and their combination (MEG-fMRI fusion). Our MEG results showed an effect of attention before decision-making could begin, and fMRI results showed an attention effect in early visual and frontoparietal regions. Model-based MEG-fMRI fusion suggested that attention boosted stimulus information in the frontoparietal and early visual regions before decision-making was possible. Together, our results suggest that attention affects neural stimulus representations in the frontoparietal regions independent of decision-making.

  • decision-making
  • fMRI
  • MEG
  • MVPA
  • selective attention

Significance Statement

Attention and decision-making processes are often experimentally confounded in neuroimaging studies, as participants are commonly asked to make categorical decisions about an attended stimulus only. Our study addresses this issue by separating the effects of selective attention from decision-making effects in human observers. We used multivariate pattern analyses to investigate the dynamics of perceptual and decision-related information coding through time (with MEG) and space (with fMRI) and applied a MEG-fMRI fusion analysis to combine data across neuroimaging modalities. Our results show that attention boosted stimulus information in the frontoparietal and early visual regions before decision-making was possible. These findings provide an important verification of claims that attention modulates information processing in the brain and highlight the importance of separating these processes.

Introduction

Selective attention is a mechanism that prioritizes relevant information from among competing sensory input. With clear behavioral benefits to attending to particular locations in space (Posner, 1980; Pestilli and Carrasco, 2005) or visual features (Sàenz et al., 2003; White and Carrasco, 2011), there is strong evidence that attention affects perception (Carrasco, 2011). However, in cognitive neuroscience experiments, attended information is also commonly the information that the participant makes a decision about, confounding the processes of selecting and maintaining relevant stimulus information with decision-making processes. In this study, we separate the effects of attention on the neural processing of information from the effects of decision-making and investigate the spatiotemporal correlates of each process separately.

According to the adaptive coding hypothesis (Duncan, 2001), single cells in the frontoparietal cortex flexibly adapt their response to code different information as needed for the task, providing a potential source of bias for more selective brain regions (Desimone and Duncan, 1995). In humans, a network of frontoparietal regions, which we will refer to as the “Multiple Demand” (MD) regions (Duncan and Owen, 2000; Duncan, 2010; Duncan et al., 2020), responds with a profile consistent with adaptive coding. Functional magnetic resonance imaging (fMRI) studies in human participants show that the MD regions respond to a wide range of cognitively demanding tasks (Duncan and Owen, 2000; Dosenbach et al., 2006; Fedorenko et al., 2013; Assem et al., 2020).

More specific evidence for an adaptive response in MD regions comes from fMRI studies using multivariate pattern analyses (MVPA). These studies show that the MD regions code for a range of task-relevant information in different contexts (Woolgar et al., 2016). In addition, several studies have found stronger coding in the MD cortex for cued (i.e., attended, task-relevant) objects (Woolgar et al., 2015b) or features (Jackson et al., 2016; Jackson and Woolgar, 2018) relative to equivalent distractors. In parallel, recent magnetoencephalography (MEG) and electroencephalography (EEG) studies have shown that the coding of a cued feature at a cued location is sustained over time, while the coding of distractor information is not (Kaiser et al., 2016; Battistoni et al., 2020; Grootswagers et al., 2021; Barnes et al., 2022; Goddard et al., 2022; Moerel et al., 2022; Noah et al., 2023). What we do not know, however, is to what extent this difference in information coding between cued and distractor stimuli is driven by decision-making processes pertaining to the cued information.

Underscoring the importance of this confound, there is clear evidence that decisions can drive decoding in early visual areas (Rens et al., 2017; Löffler et al., 2019) and the frontoparietal cortex (Löffler et al., 2019), although these studies did not explicitly investigate the role of attention, and the decisions were not driven by visual information. Bode et al. (2012) found that perceptual decisions could be decoded from 140 to 180 ms poststimulus onward from pure noise images, slightly earlier than feature-based attention effects (Goddard et al., 2022). Thus, decisions can be decoded in similar brain areas, with a time-course that could interact with that of attention effects, further complicating the interpretation of the extant literature. There is some evidence suggesting that attention effects can occur in the absence of decision-making (Hon et al., 2006). However, this study did not investigate what information was coded about the attended and unattended stimuli.

Here, we used MVPA of MEG and fMRI data to address three research questions. First, we used MEG to ask whether attention affects the coding of stimulus information when it is separated from decision-making processes in time, using a two-stage task to dissociate the coding of attended and unattended visual information from decision-related information. Second, we used fMRI to ask what type of information the MD regions hold. Finally, we used model-based MEG-fMRI fusion to formally combine the data from the two imaging modalities and examine the time-course with which the MD regions preferentially coded for cued compared with distractor information during the first phase of the trial.

Materials and Methods

MEG acquisition and analysis

Participants

The first part of this study consisted of a behavioral training session and a MEG session carried out at Macquarie University (Sydney, Australia). We tested 31 healthy volunteers in the behavioral training session. Of these, 21 performed well enough to participate in the MEG session (at least 90% accuracy in the final run of the training session). The MEG data from one participant were excluded due to a failure to complete the experiment, resulting in a final MEG sample of 20 participants (14 female/6 male; 18 right-handed/2 left-handed; mean age = 25.8 years; SD = 5.1). Participants received $15 AUD for participation in the training session and $30 AUD for participation in the MEG session. The study was approved by the Macquarie University Human Research Ethics Committee, and all participants provided informed consent.

Stimuli and experimental procedure

MEG session

Participants were instructed to maintain fixation on a central bullseye throughout the experiment and were cued to attend to either blue or orange at the start of each block (32 trials). Figure 1 shows the stimuli and trial procedure. We used a two-stage task so that the effect of attention could be separated from the decision-making process in time. In Stage 1, approximately equiluminant blue and orange oriented lines within a circular window were overlaid at fixation for 150 ms on a mid-gray background (Fig. 1A, C). Participants attended to the lines of the cued color, while ignoring the lines of the other color. After a 500 ms blank screen, a black comparison line was presented for 200 ms (Stage 2). The task was to determine which way the cued lines had to be rotated to match the orientation of the comparison line. After the comparison line was presented, there was a 500 ms blank screen, and then the response screen, consisting of arrows showing either a clockwise or an anticlockwise rotation, was shown until a response was given. If participants did not respond within 3 s, the next trial started. Participants pressed one of two buttons to indicate whether the rotation shown on the response screen was correct or incorrect. We used this response screen to separate the correct response button from the rotation direction decision, ensuring that rotation direction decoding could not be driven by motor preparation. The mapping of the buttons to indicate “correct” or “incorrect” was counterbalanced across participants. Participants received feedback on every trial; at the end of each trial, the white part of the fixation bullseye turned green (when correct) or red (when incorrect) for 250 ms to give feedback on accuracy.

Figure 1.

Stimuli and design. The stimulus, shown in A, consisted of oriented blue and orange lines within a circular window. Participants attended to the lines of one color, while ignoring the lines of the other color. B, The 16 possible trial types, determined by a combination of cued orientation, distractor orientation, and rotation direction. The comparison orientation was not used in the analysis but is shown for completeness. The cued orientation, distractor orientation, and rotation direction are orthogonal dimensions in this design, but only within each of the two pairs separately (top half and bottom half of the table). Decoding was done within each pair to ensure all of these factors are balanced. C, An example of a trial sequence. Participants were cued to a particular color for a block of 32 trials. On each trial, the cued and distractor oriented lines were shown for 150 ms followed by a 500 ms delay, and then the comparison orientation was shown for 200 ms. After another 500 ms delay, the response screen appeared until a response (“correct” vs “incorrect”) was given or until the 3 s timeout. In this trial, for example, if the cued color was blue, the participant would have to decide whether the blue lines needed to rotate clockwise or anticlockwise to match the comparison orientation. At the response screen, which in this trial shows clockwise, the correct response would be the button indicating “correct.”

The colored oriented lines were presented within a circular window of 3 degrees of visual angle (DVA), had a spatial frequency of 2 cycles/degree, were of two possible colors (blue, RGB = 72, 179, 217; orange, RGB = 239, 159, 115), and were phase randomized (Fig. 1A). The comparison line had a length of 0.8 DVA and a width of 0.1 DVA. The response mapping arrows were 1.5 DVA, and the central fixation bullseye had a diameter of 0.4 DVA. There were four possible orientations for the cued, distractor, and comparison orientations, which were divided into two pairs: 22.5° was paired with 112.5°, and 67.5° was paired with 157.5° (Fig. 1B). When the cued orientation was from one pair, the distractor and comparison orientation were always from the other pair. This ensured that the two overlaid orientations were never the same. The comparison orientation was always rotated 45° from the cued orientation, either in a clockwise or an anticlockwise direction.

Cued orientation (4) × distractor orientation (2) × rotation direction (2) × response screen (2) were counterbalanced within each block, and the cue color was counterbalanced over blocks. Participants were instructed which color to attend at the start of each block of 32 trials, and the cue color switched between blocks. Participants completed 16 experimental runs, with two blocks per run (one block of 32 trials per cue color). One participant completed 14 instead of 16 experimental runs. In addition to the feedback on every trial, participants received feedback on their accuracy at the end of each block. The order of the trial types was randomized within each block. To increase the number of correct trials, any trial on which the participant made an error was presented again later in the block. This was done a maximum of two times per block and only if it was not the last trial in the block. We replaced error trials with successful retakes in the analysis, thus increasing the number of correct trials available for the decoding analysis. We did not record eye movements, but to influence decoding accuracy, eye movements during the trials would have to systematically vary between the different conditions within each pair (Fig. 1B). This seems unlikely as all cued and distractor orientations were presented at fixation with no variance between conditions.

Training session

Because the task was challenging, participants completed a separate behavioral training session before performing the task in the MEG. The training session consisted of three parts. First, participants practiced the rotation task without selection. Only lines of the cued color were shown, and the task was slower: the cued stimulus and comparison line were both presented for 500 ms instead of 150 and 200 ms, respectively. Next, the selection element was introduced. Both the cued and the distractor color lines were shown, with the timing of the task again slowed as in Part 1 of the training. Finally, the task was presented at the same speed as used in the MEG session: the cued stimulus was shown for 150 ms and the comparison line was presented for 200 ms. Feedback was given at the end of each trial by the white part of the fixation bullseye turning green (correct) or red (incorrect) for 250 ms. At the end of each block, participants also received feedback about their overall accuracy for that block.

Each run within the training session consisted of two blocks of 32 trials, one block per cued color. The number of runs per training part was adjusted flexibly depending on task performance. When participants had finished at least two runs and achieved an average accuracy of at least 90% for the previous run, they moved onto the next part of training. Participants who were able to complete all three training steps within an hour-long session were invited to participate in the MEG session.

MEG acquisition

Participants performed the task in a supine position. The MEG signal was continuously sampled at 1,000 Hz on a whole-head MEG system with 160 axial gradiometers inside a magnetically shielded room (Model PQ1160R-N2, Kanazawa Institute of Technology (KIT)). All MEG recordings were obtained at the KIT-Macquarie Brain Research Laboratory, National Imaging Facility. Recordings were filtered online between 0.03 and 200 Hz. To track head movements during the experiment, participants were fitted with a cap with five marker coils. We recorded the head shape of participants using a Polhemus Fastrak Digitizer pen. The stimuli were projected onto the ceiling of the magnetically shielded room. To correct for delays in stimulus onsets of the projector compared with the triggers, photodiode responses were used to determine the onset of the trial. We adjusted the stimuli to correct for distortions introduced by the projector. The stimuli were presented using the Psychtoolbox extension in MATLAB (Brainard, 1997; Pelli, 1997; Kleiner et al., 2007). Participants used a two-button response pad to respond on each trial.

Preprocessing

The data were preprocessed using the FieldTrip extension in MATLAB (Oostenveld et al., 2011). The recordings were sliced into 3,100 ms epochs, from 100 ms before the onset of the first stimulus to 3,000 ms after stimulus onset. The signal was then downsampled to 200 Hz. Error trials that had a successful retake were exchanged for the correct retake. To keep the counterbalancing of all conditions intact, we included the few error trials that did not have a successful retake in the analysis (0.33% of trials on average).
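As a rough illustration of this step, the sketch below epochs the continuous recording and downsamples it with FieldTrip; the dataset path, trigger settings, and variable names are placeholders rather than the authors' code.

% Minimal FieldTrip preprocessing sketch; dataset path and trigger settings are
% hypothetical placeholders, not taken from the study.
cfg                     = [];
cfg.dataset             = 'sub01_meg.con';   % hypothetical raw MEG recording
cfg.trialdef.eventtype  = 'trigger';         % assumed event type
cfg.trialdef.eventvalue = 1;                 % assumed stimulus-onset trigger code
cfg.trialdef.prestim    = 0.1;               % 100 ms before stimulus onset
cfg.trialdef.poststim   = 3.0;               % 3,000 ms after stimulus onset
cfg                     = ft_definetrial(cfg);
data_epoched            = ft_preprocessing(cfg);      % slice recording into 3,100 ms epochs

cfg            = [];
cfg.resamplefs = 200;                                 % downsample from 1,000 Hz to 200 Hz
data_200hz     = ft_resampledata(cfg, data_epoched);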

Decoding analysis

We used MVPA to determine whether there was information present in the pattern of activation for each timepoint about (1) the cued orientation, (2) the distractor orientation, and (3) the rotation direction. Our two-stage task design allowed the effect of attention to be assessed from the first stage onward, whereas the decision can only be made from the second stage onward. This allows us to separate the effect of attention and decision-making in time. In Stage 1 of the task, the cued and distractor orientation were shown. Comparing the coding of the cued and distractor orientations allowed us to determine whether attended visual information was preferentially coded. In Stage 2, the comparison stimulus was shown. The coding of the rotation direction reflected the decision participants had to make about the stimulus. Critically, the cued orientation, distractor orientation, and rotation direction were all orthogonal dimensions, allowing us to investigate the coding of each of these separately. For each timepoint, we trained a support vector machine (SVM) classifier on the activation across all 160 sensor channels to distinguish the conditions of interest. Classifier performance was determined using leave-one-run-out cross-validation. Data were split into a training dataset, containing the data from 15 runs, and a testing dataset, containing the data from the left-out run. This was done 16 times, using a different testing run each time. The analysis was repeated for each point in time, resulting in a decoding accuracy over time. We used the CoSMoMVPA toolbox for MATLAB to conduct our MVPA analyses (Oosterhof et al., 2016).
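To make the analysis pipeline concrete, a minimal CoSMoMVPA sketch of the time-resolved decoding is given below; data_200hz, cond_labels, and run_numbers are placeholders for the preprocessed FieldTrip data, the condition labels of interest, and the run each trial belongs to, and are not taken from the authors' code.

% Time-resolved decoding sketch with CoSMoMVPA; variable names are illustrative.
ds            = cosmo_meeg_dataset(data_200hz);       % convert FieldTrip data to a CoSMoMVPA dataset
ds.sa.targets = cond_labels;                          % e.g., cued orientation within one pair
ds.sa.chunks  = run_numbers;                          % runs define the cross-validation folds

time_axis = ds.a.fdim.values{strcmp(ds.a.fdim.labels, 'time')};
accuracy  = nan(1, numel(time_axis));

args            = struct();
args.classifier = @cosmo_classify_svm;                % linear support vector machine
args.partitions = cosmo_nfold_partitioner(ds);        % leave-one-run-out cross-validation

for t = 1:numel(time_axis)
    ds_t        = cosmo_slice(ds, ds.fa.time == t, 2);    % all 160 sensors at one timepoint
    result      = cosmo_crossvalidation_measure(ds_t, args);
    accuracy(t) = result.samples;                          % mean accuracy across folds
end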

Cued and distractor orientation

To examine whether there was information about the cued orientation, we trained a classifier to distinguish between the four possible cued orientations. These four orientations were divided into two pairs (Fig. 1B); 22.5° was paired with 112.5°, and 67.5° was paired with 157.5°. The classification of orientation was done within each pair separately, and the decoding accuracies were then averaged over pairs. This ensured that the cued orientation was orthogonal to the distractor orientation and comparison orientation. For example, in Pair 1, cued orientation 22.5° and 112.5° occurred equally often with each of distractor orientations 67.5° and 157.5° (Fig. 1B). Cued and distractor orientations were also orthogonal to the rotation decision, so that only the orientation of the stimulus, and not the participant's decision about it, could drive the classifier. Because the decoding was done within a pair, theoretical chance level was 50%. The decoding analysis for the distractor orientation was the same, as the possible cued and distractor orientations were identical.

Rotation direction

We used coding of the rotation direction as a measure for coding of the decision participants had to make. To determine whether there was information about the rotation direction, we trained a classifier to distinguish between a clockwise and anticlockwise rotation. The rotation direction was orthogonal to both the cued and distractor orientation conditions (e.g., a clockwise decision was equally associated with all four possible orientations). There were two possible rotation directions, giving a theoretical chance level of 50%.

Response button

For completeness, we also examined coding of the response button. For this, we trained a classifier to distinguish between the two buttons participants used to indicate a “correct” or “incorrect” rotation shown on the response screen. We chose to decode the correct response button, which should have been pressed on each trial, rather than the pressed response button, to maintain the full counterbalancing of trial types. We do not expect error trials to introduce much noise, given the high mean behavioral accuracy of 99.67% (after replacing error trials where possible). The correct response button was orthogonal to the rotation direction decision, and decoding of the response button is likely driven by motor preparation and execution.

Channel searchlight

We performed an exploratory channel searchlight analysis to investigate which MEG sensors were driving the observed classification accuracies of (1) the cued orientation, (2) the distractor orientation, (3) the rotation direction, and (4) the response button. We followed an established pipeline (Grootswagers et al., 2019; Robinson et al., 2019, 2022), where we ran the decoding analysis described above, in the Decoding analysis section, for a cluster of sensors around each MEG sensor. We used a Linear Discriminant Analysis classifier instead of an SVM for this analysis to reduce computing time. To obtain the clusters, we took the closest neighboring channels for each MEG channel, resulting in 2–7 neighbors per channel. We then ran the decoding for each sensor cluster and timepoint and stored the obtained decoding accuracies in the center channel, resulting in a time-by-channel decoding accuracy map for each participant and decoding analysis. For visualization purposes, we averaged the topographies across 200 ms time bins, with the exception of the first topography (−100 to 0 ms) and the last topography (2,400–2,500 ms), which were averaged over 100 ms time bins instead.
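A hedged sketch of this sensor-space searchlight, following the CoSMoMVPA conventions used above, is shown below; the neighbor count of five channels is illustrative (the actual neighborhoods contained 2–7 channels), and ds is the sensor-by-time dataset from the previous sketch.

% Channel-by-time searchlight sketch; the neighbor count is illustrative.
chan_nbrhood = cosmo_meeg_chan_neighborhood(ds, 'count', 5);          % cluster of nearby sensors
time_nbrhood = cosmo_interval_neighborhood(ds, 'time', 'radius', 0);  % each timepoint on its own
nbrhood      = cosmo_cross_neighborhood(ds, {chan_nbrhood, time_nbrhood});

args            = struct();
args.classifier = @cosmo_classify_lda;                % LDA instead of SVM to reduce computing time
args.partitions = cosmo_nfold_partitioner(ds);        % leave-one-run-out cross-validation

% Accuracy for each sensor cluster and timepoint is stored at the center channel,
% yielding a channel-by-time decoding map per participant and analysis.
sl_map = cosmo_searchlight(ds, nbrhood, @cosmo_crossvalidation_measure, args);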

Previous work has shown that eye movements can contribute to decoding, including decoding of orientation (Mostert et al., 2018; Quax et al., 2019; Linde-Domingo and Spitzer, 2024). In the present study, participants were instructed to fixate on a bullseye presented during the whole trial, and all stimuli were presented at fixation. In addition, the stimulus was presented for 150 ms only, reducing the incentive to make saccades. However, we cannot fully rule out the contribution of eye movements. The channel searchlight analysis may provide some insight, as one would expect the decoding of visual information to be driven by posterior brain regions, whereas any eye movement-related signals would likely come from frontal sensors near the eyes.

Statistics

We used Bayesian statistics to determine the evidence for the coding of information under the null hypothesis (chance decoding) and the alternative hypothesis (above-chance decoding), for each timepoint (Kass and Raftery, 1995; Jeffreys, 1998; Wagenmakers, 2007; Rouder et al., 2009; Dienes, 2011), using the BayesFactor package in R (Morey and Rouder, 2018). To test whether the decoding accuracy was above chance, we used a half-Cauchy prior for the alternative, centered around d = 0, with the default width of 0.707 (Jeffreys, 1998; Rouder et al., 2009; Wetzels et al., 2011). To exclude irrelevant effect sizes, we excluded the d = 0–0.5 interval from the prior (Morey and Rouder, 2011; Teichmann et al., 2022) and used a point null at d = 0. To test whether there was stronger information coding for the cued compared with the distractor orientation, we calculated the difference between decoding accuracies for the cued and distractor orientation. We used a half-Cauchy prior, centered around 0, with the same width and null interval described above, for the alternative hypothesis to capture directional effects (cued > distractor).

Bayes factors (BFs) below 1 show evidence for the null hypothesis, while BFs above 1 show evidence for the alternative hypothesis. BFs between 1/3 and 3 are typically interpreted as showing insufficient evidence, BFs < 1/3 or > 3 as substantial evidence, and BFs < 1/10 or > 10 as strong evidence (Wetzels et al., 2011). In this study, we defined the onset of strong evidence for above-chance decoding as the second consecutive timepoint with a BF > 10, and the return to baseline decoding as the second consecutive timepoint with a BF < 1/10.
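As a small illustration of this onset rule (not the authors' code), the snippet below finds the second of the first two consecutive timepoints whose Bayes factor exceeds 10, given assumed vectors bf and time_axis over timepoints.

% Onset of strong evidence: the second consecutive timepoint with BF > 10.
above    = bf > 10;
pair_idx = find(above(1:end-1) & above(2:end), 1);   % first of two consecutive BFs > 10
onset_ms = time_axis(pair_idx + 1);                  % onset defined at the second timepoint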

fMRI acquisition and analysis

Participants

The second part of the study consisted of a behavioral training session and an fMRI session carried out at the MRC Cognition and Brain Sciences Unit (Cambridge, United Kingdom). Forty-two healthy volunteers participated in the initial training session. Of these, 27 volunteers performed well enough to participate in the fMRI session (at least 90% accuracy in the final run of the training session). The fMRI data from three participants were excluded due to excessive movement, failure to complete the experiment, or low task performance (accuracy < 80% in the fMRI session), resulting in a final fMRI sample of 24 participants (15 female/9 male; 23 right-handed/1 left-handed; mean age = 27.33 years; SD = 5.53). One of these participants also participated in the MEG experiment and 23 of the participants were new recruits. Participants received £6 for participation in the training session and £20 for participation in the fMRI session, as well as £2.50–£3 travel costs per session. All participants provided informed consent, and the study was approved by the Cambridge Psychology Research Ethics Committee.

Stimuli and experimental procedure

fMRI session

Participants performed the same task as in the MEG, except for the following three changes due to the lower temporal resolution of fMRI. First, the order of consecutive trial types was not fully randomized as it was in the MEG version but was instead counterbalanced within each run, making sure each combination of cued orientation × distractor orientation × comparison orientation was equally likely to follow each other combination as well as itself. The order of the rotation direction was balanced separately within each run. Second, to keep the order of consecutive trial types intact, error trials were not repeated. Third, participants did not receive trial-wise feedback to avoid feedback from the previous trial influencing the signal of the next trial. Participants still received feedback about their accuracy at the end of each block. Participants completed eight runs, with four blocks per run, and 32 trials per block. The total number of blocks and the number of trials per block stayed the same as in the MEG experiment; however, the blocks were divided into eight runs of four blocks, instead of 16 runs of two blocks. One participant completed seven instead of eight runs.

The timing of the task was kept the same as in the MEG experiment. Due to the lower temporal resolution of fMRI, we were not able to separate the effect of attention from the decision-making process in time. However, the rotation direction, about which participants made a decision, was an orthogonal dimension to both the cued and distractor orientation. This means that we could still separate the effect of attention on stimulus coding from the coding of decision information.

Training session

The training session was the same as described in the MEG experiment section.

fMRI acquisition

fMRI scans were acquired using a Siemens 3 T Prisma-Fit scanner (Siemens Healthcare), with a 32-channel head coil at the MRC Cognition and Brain Sciences Unit, Cambridge, United Kingdom. We used a multiband T2*-weighted echoplanar imaging (EPI) acquisition sequence with the following parameters: repetition time (TR), 1,208 ms; echo time (TE), 30 ms; flip angle, 67°; field of view, 192 mm; multiband acceleration factor, 2; no in-plane acceleration; in-plane resolution, 3 × 3 mm; 38 interleaved slices of 3 mm slice thickness with 10% interslice gap. T1-weighted MPRAGE structural images were acquired at the start of the session (resolution, 1 × 1 × 1 mm). The stimuli were presented on a NNL LCD screen (resolution, 1,920 × 1,080; refresh rate, 60 Hz) using the Psychtoolbox extension in MATLAB (Brainard, 1997; Pelli, 1997; Kleiner et al., 2007). Participants used a two-button response pad to respond on each trial. To get comfortable with performing the task in the MRI scanner, participants completed two practice blocks of the task during the acquisition of the structural scan at the start of the scanning session. Feedback about accuracy was given at the end of each of these practice blocks.

Preprocessing and first-level model

The data were preprocessed using SPM8 (Wellcome Department of Imaging Neuroscience) in MATLAB. EPI images were converted to NIfTI format, spatially realigned to the first image and slice-time-corrected (slice timing correction used the routine from SPM12 to allow for multiband acquisitions). Structural scans were coregistered to the mean EPI image and normalized to derive the normalization parameters needed for the definition of the regions of interest (ROIs).

To obtain activation patterns for the MVPA analysis, we estimated a general linear model (GLM) for each participant. There were 16 regressors per run, reflecting a combination of 4 cued orientations × 2 distractor orientations × 2 rotation directions (Fig. 1B). Whole trials were modeled as single events, lasting from the onset of the first stimulus until the response was given, to account for trial-to-trial variability in response time (Woolgar et al., 2014). Regressors were convolved with the hemodynamic response function of SPM8.
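The sketch below illustrates this design specification via the standard SPM batch interface; the directory, scan lists, condition names, onsets, and durations are placeholders, with each trial's duration running from stimulus onset to the response.

% Hedged SPM batch sketch for the first-level model; paths and event timings are placeholders.
matlabbatch{1}.spm.stats.fmri_spec.dir          = {'/path/to/first_level'};   % hypothetical output folder
matlabbatch{1}.spm.stats.fmri_spec.timing.units = 'secs';
matlabbatch{1}.spm.stats.fmri_spec.timing.RT    = 1.208;                      % TR in seconds
for r = 1:n_runs
    matlabbatch{1}.spm.stats.fmri_spec.sess(r).scans = epi_files{r};          % preprocessed EPI volumes
    for c = 1:16    % 4 cued orientations x 2 distractor orientations x 2 rotation directions
        matlabbatch{1}.spm.stats.fmri_spec.sess(r).cond(c).name     = cond_names{c};
        matlabbatch{1}.spm.stats.fmri_spec.sess(r).cond(c).onset    = onsets{r, c};     % stimulus onsets
        matlabbatch{1}.spm.stats.fmri_spec.sess(r).cond(c).duration = durations{r, c};  % onset to response
    end
end
spm_jobman('run', matlabbatch);   % model estimation would follow as a separate batch step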

Regions of interest

Thirteen frontal and parietal MD ROIs were taken from the parcellated map provided by Fedorenko et al. (2013), which is freely available online at imaging.mrc-cbu.cam.ac.uk/imaging/MDsystem. The definition of the MD network is activation based, as the map indexes regions that show a univariate increase in activation with increased task demands, across a range of tasks. This is an updated definition of the MD system, with a high degree of overlap with the previous definition, derived from meta-analytic data (Duncan and Owen, 2000) that we used in previous work (Woolgar et al., 2015a,b; Jackson et al., 2016; Jackson and Woolgar, 2018), and a more recent definition derived from multimodal imaging (Assem et al., 2020). The ROIs comprised the left and right anterior inferior frontal sulcus [aIFS; center of mass (COM) = ±35 47 19, volume = 5.0 cm3], left and right posterior inferior frontal sulcus (pIFS; COM ±40 32 27, 5.7 cm3), left and right premotor cortex (PM; COM ±28 −2 56, 9.0 cm3), left and right inferior frontal junction (IFJ; COM ±44 4 32, 10.1 cm3), left and right anterior insula/frontal operculum (AI/FO; COM ±34 19 2, 7.9 cm3), left and right intraparietal sulcus (IPS; COM ±29 −56 46, 34.0 cm3), and bilateral anterior cingulate cortex (ACC; COM 0 15 46, 18.6 cm3). Early visual cortex was defined as BA 17 (COM = −13 −81 3 for the left hemisphere and 16 −79 3 for the right hemisphere, volume = 54 cm3) from the Brodmann template provided with MRIcro (Rorden and Brett, 2000). ROIs were deformed into native space by applying the inverse of the normalization parameters for each participant.

Decoding analysis

We used the same decoding approach as described for the MEG experiment, except that instead of determining what information was present in the pattern of activation at each point in time, we determined what information was present in the pattern of activation in each ROI. Within each ROI, we trained an SVM classifier on the betas obtained from the GLM to distinguish the conditions of interest. Classifier performance was determined using leave-one-run-out cross-validation. Decoding accuracies were then averaged over hemispheres.
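A minimal CoSMoMVPA sketch of this ROI decoding on the GLM betas is given below; the beta image and mask file names are placeholders, and the targets and chunks follow the same labeling logic as in the MEG analysis.

% ROI decoding sketch on GLM betas; file names are hypothetical.
ds_roi = cosmo_fmri_dataset('betas_4D.nii', ...
                            'mask',    'roi_IPS_native.nii', ...    % hypothetical native-space ROI mask
                            'targets', cond_labels, ...             % condition of interest per beta
                            'chunks',  run_numbers);                % run labels for cross-validation

args            = struct();
args.classifier = @cosmo_classify_svm;
args.partitions = cosmo_nfold_partitioner(ds_roi);                  % leave-one-run-out cross-validation

result  = cosmo_crossvalidation_measure(ds_roi, args);
roi_acc = result.samples;    % decoding accuracy for this ROI (later averaged over hemispheres)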

Statistics

To test for an effect of attention, MD region, and the interaction between these factors, we used a Bayesian analysis of variance (ANOVA) with attention (cued vs distractor orientation) and MD region (aIFS, pIFS, PM, IFJ, AI/FO, IPS, and ACC; data collapsed across hemispheres) as within-subject factors, using the default Jeffreys prior of medium width (1/2; Rouder et al., 2012). We conducted further Bayesian t tests using the same parameters described in the MEG experiment. To determine the contribution of individual MD regions and V1 to the effect of attention on stimulus coding, we performed Bayesian t tests of the difference between decoding of the cued and distractor orientation per MD ROI and V1. In addition, we performed Bayesian t tests for the decoding of the cued orientation, distractor orientation, and rotation direction against chance for the mean MD regions, per individual MD region, and for V1. We used the same parameters for these tests as described in the MEG section (Statistics).

Model-based MEG-fMRI fusion

To gain insight into the coding of attended information, unattended information, and decision-related information in both space and time simultaneously, we used model-based MEG-fMRI fusion (Cichy et al., 2014, 2016; Hebart et al., 2018). This method uses representational dissimilarity matrices (RDMs; Kriegeskorte et al., 2006, 2008) to abstract away from the imaging modality, allowing us to compare the pattern similarity across neuroimaging modalities. In addition, we can abstract away from the pattern of activation of specific participants, comparing the representational similarity across different groups. Model-based MEG-fMRI fusion further allows us to determine the match between the representational structure for each timepoint (MEG) and each ROI (fMRI), which can be uniquely explained by a theoretical model (Hebart et al., 2018; see Fig. 6A for an overview of this method). We used three orthogonal models, coding for the cued orientation, distractor orientation, and rotation direction.

Each RDM was a 16 × 16 matrix comprising every combination of cued orientation (4) × distractor orientation (2) × rotation direction (2) (see Fig. 1B for an overview of combinations). For each cell in the RDM, we used decoding accuracy as our measure of dissimilarity, with greater decoding accuracy reflecting greater dissimilarity of activation patterns (i.e., distinctiveness of patterns between the conditions). Decoding accuracies were obtained for each cell in the RDM (i.e., pair of trial types) using the same cross-validation procedure as described above and were z-scored per ROI and timepoint. For the MEG data, we constructed an RDM at each timepoint. For each cell in the RDM, we obtained decoding accuracies using a 25 ms sliding window centered around each timepoint. For the fMRI data, we constructed an RDM for nine different ROIs: the mean MD, V1, aIFS, pIFS, PM, IFJ, AI/FO, IPS, and ACC. The mean MD ROI was obtained by calculating the RDM for each individual MD region and subsequently averaging RDMs across regions. We could instead have calculated the RDM by extracting the data from one large ROI including all MD regions; in practice this gave a similar RDM, so it was not considered further. The individual MD ROIs, except for the ACC, were obtained by calculating separate RDMs for the left and right hemispheres and then averaging across RDMs. We did this separately for each participant and then averaged the RDMs over participants, resulting in a single RDM per timepoint for MEG, and for each of the nine ROIs for fMRI.

We constructed three model RDMs: for the cued orientation, distractor orientation, and rotation direction (Fig. 6A). The cued and distractor orientation models coded for the following: the same orientation (0), a difference of 45° (0.5), or a difference of 90° (1); the rotation direction model coded for the same rotation (0) or a different rotation (1).
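These model RDMs can be written compactly as in the sketch below; cued_ori, distr_ori (in degrees), and rot_dir are assumed 16 × 1 label vectors corresponding to the trial types in Figure 1B, which are not reproduced here.

% Model RDM sketch (requires MATLAB implicit expansion, R2016b or later).
% Orientation models: 0 = same orientation, 0.5 = 45 deg apart, 1 = 90 deg apart;
% rotation model: 0 = same rotation, 1 = different rotation.
ori_diff   = @(a, b) min(mod(a - b, 180), mod(b - a, 180));    % orientation difference, 0-90 deg
model_cued = ori_diff(cued_ori, cued_ori')   / 90;             % 16 x 16, values 0, 0.5, or 1
model_dist = ori_diff(distr_ori, distr_ori') / 90;
model_rot  = double(rot_dir ~= rot_dir');                      % 16 x 16, values 0 or 1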

We used commonality analysis to estimate the shared variance between three RDMs: the MEG RDM for each point in time, the fMRI RDM for each ROI, and the model RDM. For each timepoint, ROI, and model, we calculated the difference between two squared semipartial correlation coefficients, using Spearman correlation. Both semipartial correlation coefficients reflected the proportion of variance shared between MEG and fMRI; one with all models partialled out from the MEG RDM, and the other with all models except the model of interest partialled out from the MEG RDM. By comparing the variance when our model of interest was included or not, we get a measure of the variance shared between MEG and fMRI that can be uniquely explained by each model (Hebart et al., 2018). In this way we derive a time-course of the model fit for each region.
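A minimal sketch of the commonality computation for a single ROI, MEG timepoint, and model of interest is given below, assuming each RDM has been vectorized to its 120 lower-triangle entries; the helper function and variable names are illustrative and not the authors' implementation.

% Commonality sketch for one ROI, one MEG timepoint, and the cued-orientation model.
% rdm_fmri, rdm_meg, rdm_cued, rdm_distractor, and rdm_rotation are assumed 120 x 1
% vectors (lower triangle of each 16 x 16 RDM).
all_models   = [rdm_cued, rdm_distractor, rdm_rotation];
other_models = [rdm_distractor, rdm_rotation];          % all models except the model of interest

r_full      = semipartial_spearman(rdm_fmri, rdm_meg, all_models);
r_reduced   = semipartial_spearman(rdm_fmri, rdm_meg, other_models);
commonality = r_reduced^2 - r_full^2;   % MEG-fMRI variance uniquely explained by the cued model

function r = semipartial_spearman(y, x, covariates)
% Spearman semipartial correlation: rank-transform, regress the covariates out of
% x (the MEG RDM) only, then correlate the residuals with y (the fMRI RDM).
    ry = tiedrank(y);
    rx = tiedrank(x);
    rc = tiedrank(covariates);               % ranks each covariate column separately
    X  = [ones(size(rc, 1), 1), rc];
    x_resid = rx - X * (X \ rx);             % residualize the MEG RDM
    r  = corr(ry, x_resid);
end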

Statistics

Because we averaged the RDMs over participants for both MEG and fMRI, we had a single RDM for each timepoint for MEG and a single RDM for each ROI for fMRI. It was therefore not possible to calculate random effects statistics at the group level, as was done for the MEG and fMRI decoding analyses, so we used a permutation test instead. To estimate the null distribution, we computed 10,000 permutations by shuffling the rows and columns of the group average MEG RDM and ran model-based MEG-fMRI fusion for each ROI using the permuted MEG matrices. For each ROI, we then determined a cluster-inducing threshold for each timepoint by taking the 95th percentile of the null distribution at that timepoint. We then determined the largest maximum cluster size in the null distribution across all ROIs, to account for multiple comparisons over our ROIs. We compared the clusters in our fusion data to this maximum cluster size obtained from the null distribution (equivalent to p < 0.05, one-tailed, corrected for multiple comparisons for all the ROIs and all the timepoints).

Results

MEG results

Behavioral results for MEG session

Participants performed with high accuracy in the MEG session. The mean accuracy was 94.23% (SD = 3.90) before replacing error trials (see Materials and Methods), and 99.67% (SD = 0.56) after replacing error trials where possible, which means very few error trials were included in the analysis (0.33% of trials on average). The mean response time was 667 ms (SD = 104 ms).

Coding of the cued and distractor orientation over time

To examine the timepoints at which attended visual information was preferentially coded over unattended visual information, we compared the decoding accuracies for the same orientations when presented as the cued or distractor orientation for each point in time. Critically, the coding of the cued and distractor information could not be driven by decision-related information, as this was an orthogonal dimension in the design. The decoding accuracies for the cued and distractor orientations are shown in Figure 2; the topographies below the plots show the contribution of individual sensors to the decoding accuracy. There was strong evidence for above-chance orientation decoding for the cued orientation starting at 85 ms after stimulus onset, which was maintained until after the mean response time. The distractor stimulus orientation could be decoded with a very similar onset to the cued orientation, with strong evidence for above-chance coding from 90 ms after stimulus onset. However, unlike the coding of the cued information, coding of the distractor orientation was not sustained over time, dropping back to chance within 420 ms after stimulus onset. The difference between coding of the cued and distractor orientations, which reflects the attentional selection and maintenance of task-relevant information, was evident from 215 ms after stimulus onset. This effect of attention, with stronger coding of the cued compared with distractor orientation, was maintained until after the mean response time.

Figure 2.

Decoding accuracy of the cued and distractor orientation over time. Classifiers were trained to discriminate stimulus orientation when the orientation was either cued (blue) or not (light green). Theoretical chance is 50% decoding accuracy, and shaded areas around the plot lines show the 95% confidence intervals. The gray shaded areas show from left to right: (1) when the stimulus was on the screen (oriented lines in cued and distractor colors; labeled Stimulus), (2) when the comparison line was on the screen (labeled Comparison), and (3) when the response screen was shown (labeled Response screen). The response screen remained on until participants responded (or 3 s). The vertical gray lines show the mean response time with 95% confidence intervals. The topographies below the plot show the sensor searchlight decoding results for the cued orientation (top, blue) and distractor orientation (bottom, light green). The topographies were averaged over 200 ms time bins, except for the first topography (−100 to 0 ms) and the last topography (2,400–2,500 ms), which are based on a 100 ms time bin. The color bars next to the topographies show the scale of the decoding accuracies. There do not seem to be specific frontal contributions to the orientation decoding accuracy, suggesting that eye movements do not have a strong contribution to the observed orientation decoding. Bayes factors are given below the plot on a logarithmic scale for the cued orientation (blue), the distractor orientation (light green), and the difference between the cued and distractor orientation (red). BFs below 1/10 are shown in gray, indicating strong evidence for the null hypothesis. BFs above 10 are shown in the plot color (blue for the cued orientation, light green for the distractor orientation, and red for the effect of attention), indicating strong evidence for the alternative hypothesis. BFs between 1/10 and 10 are shown in white. There was strong evidence for coding of the cued orientation from 85 ms after the onset of the stimulus which was maintained until after the response was given. There was strong evidence for coding of the distractor orientation from 90 ms after the onset of the stimulus, but this was not sustained over time. There was strong evidence for an effect of attention from 215 ms after the onset of the stimulus until after the mean response time.

Coding of the rotation direction over time

To examine the time-course of decision-related information coding, we determined the coding of the rotation direction over time by training a linear classifier to distinguish between clockwise and anticlockwise rotations. Participants had to actively manipulate the cued orientation in combination with the comparison orientation to determine the rotation. The decoding of the rotation direction over time is shown in Figure 3. We observed above-chance decoding of the rotation direction from 170 ms after the onset of the comparison line, 820 ms after the onset of the attended and unattended orientations, which was maintained until after the mean response time. For completeness, we also decoded the correct response button (Fig. 3). We observed information about the response button from 1,580 ms onward, 230 ms after the onset of the response screen.

Figure 3.

Decoding accuracy of the rotation direction and the correct response button over time. Classifiers were trained to discriminate the rotation direction, which could be clockwise or anticlockwise. The rotation direction (orange), about which participants had to make a decision, could be decoded from 170 ms after the onset of the comparison line (820 ms after stimulus onset) and was maintained until after the mean response time. The time-course of decoding accuracies for the response button (purple) was included for completeness. The topographies for the decoding of the rotation direction (orange) and the response button (purple) are shown below the plot. Plotting conventions are as in Figure 2.

fMRI results

Behavioral results for fMRI session

Participants performed the task in the fMRI session with high accuracy (mean accuracy = 94.45%; SD = 4.91). Included participants all had a mean accuracy above 80%. The mean response time was 658 ms (SD = 140 ms) from response screen onset.

Coding of the cued and distractor orientation in the MD regions and V1

To examine whether the effect of attention observed in the MEG data pertained to V1 and the MD regions, we decoded the cued and distractor orientations for these ROIs. We were able to separate these conditions due to the orthogonal design. The decoding accuracies for the cued and distractor orientations, averaged across the MD regions and in V1, are shown in Figure 4A. We used a Bayesian ANOVA with attention (orientation coding of cued vs distractor stimuli) and MD region (aIFS, pIFS, PM, IFJ, AI/FO, IPS, and ACC; data collapsed across hemispheres) as within-subject factors. We found a main effect of attention (BF > 100), no main effect of region (BF = 0.44), and an interaction between attention and region (BF = 9.60), justifying a further analysis of the effect of attention in individual MD regions (Fig. 4B). There was strong evidence for an effect of attention (cued > distractor coding) in aIFS, pIFS, PM, IFJ, and IPS, and some evidence for an effect of attention in AI/FO and ACC (Fig. 4B).

Figure 4.

Decoding accuracy of the cued and distractor orientation (A) averaged across the MD regions and in V1 and (B) in individual MD regions. In both A and B, classifiers were trained to discriminate stimulus orientation when the orientation was either cued (blue) or not (light green). Theoretical chance is 50%, and error bars indicate 95% confidence intervals. The Bayes factors for the decoding accuracies are shown below the plot in blue (cued orientation) and light green (distractor orientation), and the BFs for the difference (cued > distractor coding) are shown in red. All BFs are shown on a logarithmic scale. BFs > 10 are marked in the plot color, BFs < 1/10 are marked in gray, and BFs in between this range are marked in white. In all of the MD regions, and in V1, there was strong evidence for coding of the cued orientation, but not for the distractor orientation. There was strong evidence for an effect of attention for most of the MD regions (aIFS, pIFS, PM, IFJ, AI/FO, and IPS) as well as average MD and V1.

To determine whether the orientation coding for the cued and distractor orientations was above chance, we performed Bayesian t tests for the mean MD regions as well as the individual MD regions. There was strong evidence for above-chance coding for the cued orientation in the mean MD regions (mean accuracy = 55.62%, BF > 100), while decoding of the distractor orientation was at chance (mean accuracy = 49.56%, BF < 0.01). There was above-chance decoding for the cued orientation in all of the MD regions individually. Strong evidence for chance level decoding for the distractor orientation was found for aIFS, pIFS, PM, IFJ, AI/FO, and IPS (BF < 0.01). There was evidence for chance decoding for the distractor orientation in ACC (BF = 0.20), and the 95% confidence intervals for this region overlapped with chance, consistent with evidence for the null.

We also found an effect of attention in V1 (BF > 100), with stronger coding for the cued orientations compared with the distractor orientations. V1 coded for the cued orientation (mean accuracy = 66.34%; BF > 100), while when these identical orientations were distractors, there was evidence for chance decoding (mean accuracy = 51.29%, BF = 0.28), and the 95% confidence intervals overlapped with chance, consistent with evidence for the null.

Coding of the rotation direction in the MD regions and V1

To examine whether there was information about the rotation direction, which is our indicator of the decision, in the MD regions and V1, we trained a linear classifier to distinguish between a clockwise and anticlockwise rotation direction. Since error trials were included in the analysis to maintain the orthogonality of the different features, on a small number of trials (∼6%), the decoded rotation direction does not reflect the decision of the participant, but the predominant influence should be the decision. Figure 5A shows the decoding accuracies for the rotation direction in the MD regions and V1. We used Bayesian t tests to determine the evidence for chance versus above-chance decoding. We observed strong evidence for above-chance coding of rotation direction across the MD regions on average (mean accuracy = 52.43%; BF = 45.89) and in PM and IPS individually (Fig. 5B). Although the average decoding accuracy found in V1 was similar to the MD regions, greater variance in V1 was reflected in an inconclusive BF (mean accuracy = 52.67%; BF = 1.85). There was substantial, but not strong, evidence for chance decoding in aIFS, pIFS, and IFJ and insufficient evidence for rotation direction coding in the other MD regions.

Figure 5.

Decoding accuracy of the rotation direction (decision) (A) averaged across the MD regions and in V1 and (B) in individual MD regions. In both A and B, classifiers were trained to discriminate the rotation direction, which could be clockwise or anticlockwise. Plotting conventions are as in Figure 4. The rotation direction could be decoded from the MD regions (average, PM, and IPS), while there was insufficient evidence for rotation direction coding in V1.

Model-based MEG-fMRI fusion results

We used model-based MEG-fMRI fusion to ask whether the MD regions coded attended information before a decision was made. The commonality coefficient shown in Figure 6A refers to the part of the variance that is shared between the MEG and fMRI RDMs which can be uniquely explained by our models for the cued orientation, distractor orientation, and rotation direction. In other words, it plots the dynamics with which the MD pattern, observed in fMRI and explained by each of our theoretical models, arises in the MEG data. We ran this analysis for the mean MD regions, individual MD regions, and V1.

Figure 6.

Model-based MEG-fMRI fusion methods (A) and results over time (B). A gives an overview of the model-based MEG-fMRI fusion method. We used RDMs to abstract away from the imaging modality. Each cell in the RDM corresponds to the dissimilarity between two specific trial types. A trial type is a combination of the cued orientation, distractor orientation, and rotation direction. Trial types 1–16 in the RDMs correspond to the 16 trial types listed in Figure 1B. We created an RDM for each region of interest (V1, mean MD regions, and individual MD regions) for the fMRI data, for each timepoint in the MEG data, and for the three models (cued orientation, distractor orientation, and rotation direction). We then determined commonality: the part of the variance that is shared between the MEG RDM for each timepoint and the fMRI RDM for each ROI, that can be uniquely explained by each theoretical model. This results in a time-course of commonality coefficients for each ROI and model. Note that the ROIs are plotted by projecting a 3D region onto the cortical surface. B, The time-course of commonality for all ROIs: the mean MD regions, V1, aIFS, pIFS, PM, IFJ, AI/FO, IPS, and ACC, respectively. The different lines depict the commonality explained by the model for the cued orientation (blue), the distractor orientation (light green), and the rotation direction (orange). Significant timepoints (equivalent to p < 0.05, one-tailed) were obtained with a cluster-corrected randomization test, corrected for multiple comparisons across ROIs and timepoints, and are shown at the bottom of the plots. Plotting conventions are as in Figures 2 and 3 except that the dots in the bottom of this plot show significant timepoints, not Bayes factors.

Cued orientation

In the MD regions, there was a significant cluster of commonality for the cued orientation from 555 to 640 ms after the onset of the stimulus (Fig. 6B). Critically, this cluster occurred before the onset of the comparison line and therefore before participants could make their decision. A second cluster started at 1,140 ms, 490 ms after the onset of the comparison line, and was maintained until after the average response time. The observed pattern was similar for all individual MD regions. However, the initial cluster before the onset of the comparison line only reached significance for PM, IFJ, and AI/FO. In V1, cued orientation commonality coefficients were significant from 225 to 505 ms after the onset of the stimulus and again from 1,210 ms, 560 ms after the onset of the comparison line, to after the mean response time.

Distractor orientation

There were no significant clusters of commonality for the distractor orientation in the mean MD region, V1, or in the individual MD regions except for aIFS. In aIFS, a brief cluster was observed between 1,180 and 1,270 ms after stimulus onset.

Rotation direction

We observed a cluster of significant commonality for the rotation direction, which is an index of the decision participants had to make, in the mean MD regions starting from 1,140 ms, 490 ms after the onset of the comparison line, which lasted until after the participant responded. Individual MD regions showed a similar pattern. V1 also showed a similar pattern over time, with a significant rotation direction cluster from 1,160 ms, 510 ms after comparison line onset.

Discussion

A wealth of fMRI, M/EEG, and nonhuman primate studies have shown preferential coding of cued compared with distractor information (Rao et al., 1997; Li et al., 2007; Sigala et al., 2008; Bode and Haynes, 2009; Stiers et al., 2010; Stokes et al., 2013; Harel et al., 2014; Waskom et al., 2014; Woolgar et al., 2015a,b; Jackson et al., 2016; Kaiser et al., 2016; Battistoni et al., 2020; Grootswagers et al., 2021; Barnes et al., 2022; Goddard et al., 2022; Moerel et al., 2022). However, most paradigms manipulate attention by requiring participants to respond to the cued (but not the distractor) stimulus, which means they cannot distinguish between preferential coding of cued information due to attentional selection and maintenance of relevant information, and coding due to decision-making processes performed on the attended information. We used MEG, fMRI, and model-based MEG-fMRI fusion to distinguish between these explanations. Our MEG data showed that attention affects stimulus processing ∼215 ms after stimulus onset, before participants had the necessary information to make an explicit decision in our task. Then, when the comparison line was presented, the decision about the rotation direction could be decoded ∼170 ms later. Our fMRI data showed the MD regions coded attended visual information and did so more strongly than distracting visual information, even though the decision made was orthogonal to the stimulus itself. The orthogonal decision representation was also present in the MD system, specifically in PM and IPS. We also found an effect of attention in V1, with stronger coding of the cued compared with distractor orientations, in line with previous studies (Jehee et al., 2012; Pratte et al., 2013; Jackson et al., 2016). Our model-based MEG-fMRI fusion analysis showed evidence for the coding of attended orientation information in the MD regions before participants could make a decision. These data are consistent with the interpretation of previous results as effects of attention on the processing of visual stimuli, with a stronger coding of attended compared with unattended visual information in the brain. This could be driven by an enhancement of the attended visual information, a suppression of the unattended visual information, or both.

The finding that the MD regions preferentially code for attended information over unattended information, even when the decision is orthogonal, is in line with the finding of Hon et al. (2006), who used univariate analyses to show that the average MD response reflected changes in cued information when no decision was needed, or when the decision was unrelated to the cued information. Our results take the inference beyond simple activation changes and confirm that decision-making processes are not required for the prioritization of cued over distractor information.

Our results also show that, in addition to coding the attended information, MD regions code decision-related information. The PM and IPS, which are part of the MD network, held information about the rotation direction, which was the decision participants had to make. The time-course with which this information emerged, from ∼170 ms after the onset of the comparison line, fits well with a previous study in which a perceptual decision about a pure noise image could be decoded ∼140–180 ms after stimulus onset (Bode et al., 2012). This fast time-course suggests that the coding of rotation direction information could reflect the accumulation of decision-related information before the participant is able to respond. This is in line with descriptions of decision-making as the accumulation of evidence toward one of two decision alternatives until a decision boundary is reached (Gold and Shadlen, 2007; Ratcliff and McKoon, 2007; Heekeren et al., 2008). We cannot comment on the exact delay between the onset of decision information coding and the response, as our task involved a delayed response: participants could only generate a response (which was again orthogonal to the decision) after the response screen came on.
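To make the evidence-accumulation framing explicit, the toy simulation below integrates noisy evidence toward one of two bounds, as in standard diffusion accounts of two-choice decisions (e.g., Gold and Shadlen, 2007; Ratcliff and McKoon, 2007). All parameter values are illustrative assumptions; the code is not fitted to our data.

```python
"""Toy two-bound evidence-accumulation (diffusion) simulation, for illustration only."""
import numpy as np

def simulate_ddm(drift=0.15, bound=1.0, noise=1.0, dt=0.001, max_t=3.0, seed=0):
    """Return (choice, decision_time) for one trial of a two-bound diffusion process."""
    rng = np.random.default_rng(seed)
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()  # noisy evidence step
        t += dt
    choice = 1 if x >= bound else (0 if x <= -bound else None)  # None = no decision reached
    return choice, t

choices, rts = zip(*(simulate_ddm(seed=s) for s in range(200)))
print(np.mean([c == 1 for c in choices if c is not None]),  # proportion of upper-bound choices
      np.mean(rts))                                          # mean decision time in seconds
```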

In addition to separating the effects of attention and decision-making, our study adds to the literature by combining neuroimaging techniques, giving us resolution in both space and time for a single paradigm. Our MEG data show that the effect of attention emerges before participants have all of the information needed to make a decision, while our fMRI data show that the MD regions code both attended visual information and the decision participants have to make about the stimulus. Moreover, we formally related the two results using model-based MEG-fMRI fusion to estimate the time-course of the response in the MD network and V1 separately. The data suggest that visual information arises first in V1, but that both V1 and the MD regions code information about the attended orientation before participants can begin to make a decision. Orthogonal decision-related signals in MD cortex arose only after the comparison line was presented. The MEG-fMRI fusion method is an exciting step toward resolving neural processes in both time and space, but it is limited in sensitivity because it can only pick up effects that are present in both the MEG data and the fMRI data [see Cichy and Oliva (2020) for an overview of possible limitations]. For instance, although our MEG data revealed transient coding of unattended information, this was not evident in the fusion results, presumably because the much lower temporal resolution of fMRI obscured its detection. This highlights the utility of considering the data from each neuroimaging technique separately, as well as together.

Our findings emphasize the adaptable response of the MD regions, in line with the nonhuman primate literature showing that the same neural populations can simultaneously encode multiple task parameters (Rigotti et al., 2013; Fusi et al., 2016; Aoi et al., 2020). For example, Aoi et al. (2020) showed that a single neural population in monkey prefrontal cortex can maintain relevant and irrelevant sensory information, as well as the saccade response, over the course of a trial. Another line of evidence for the flexible response of the MD regions comes from fMRI studies in humans. These studies show that multivariate patterns of activity code a range of information across different tasks (Woolgar et al., 2016), with single MD voxels reused to code multiple task features between tasks (Jackson and Woolgar, 2018). In addition, Woolgar et al. (2015b) found stronger visual object coding in the MD regions for cued relative to distractor objects, and Jackson et al. (2016) reported stronger MD coding of cued compared with distractor features of the same object. Moreover, MD coding of cued information can be selectively impaired by noninvasive stimulation of the right dorsolateral prefrontal cortex, one of the nodes in the MD network (Jackson et al., 2021). Our study also provides evidence for a multifaceted response of the MD regions, although we cannot determine whether this response is driven by the same population of neurons within those brain areas. The multifaceted MD response observed in our study, reflecting relevant visual information as well as decisions about that information, is consistent with the proposal that these regions play a key role in associating different types of information by integrating signals from multiple sources (Duncan et al., 2020). The MD system is well placed to achieve this, being widely distributed and strongly interconnected across the cortex (Assem et al., 2020; Duncan et al., 2020). In addition, our results point to an explicit role for the MD regions in selective attention, with preferential coding of task-relevant information arising separately from decisions made about that information.
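The mixed-selectivity idea referenced above can be illustrated with a toy simulation: when each unit carries a random mixture of two task variables, both an attended feature and an orthogonal decision can be decoded from the same population. The Python sketch below is purely illustrative, with simulated units and assumed parameters, and is not a reanalysis of any dataset.

```python
"""Toy illustration: one simulated population carries two orthogonal task variables."""
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_units = 400, 100
orientation = rng.integers(0, 2, n_trials)   # attended stimulus feature
decision = rng.integers(0, 2, n_trials)      # orthogonal decision (e.g., rotation direction)

# Mixed selectivity: every unit carries a random mixture of both variables plus noise.
w_orient = rng.standard_normal(n_units)
w_decide = rng.standard_normal(n_units)
responses = (np.outer(orientation - 0.5, w_orient)
             + np.outer(decision - 0.5, w_decide)
             + rng.standard_normal((n_trials, n_units)))

for name, labels in [("orientation", orientation), ("decision", decision)]:
    acc = cross_val_score(LogisticRegression(max_iter=1000), responses, labels, cv=5).mean()
    print(name, round(acc, 2))   # both variables decode above chance from the same units
```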

In summary, our MEG results show that attention affects the representation of visual stimuli in advance of decision-making processes, while the fMRI results show that the MD regions code for attended visual information as well as decisions about this information. The MEG-fMRI fusion results show the spatiotemporal unfolding of these processes, suggesting that the MD regions represent information about the attended stimulus before participants can begin to make a decision. These results emphasize a key role for the MD regions in selective attention and are consistent with the proposal that these regions integrate different types of information for decision-making. Our multimodal data demonstrate that selective attention and decision-making have separable bases in neural coding.

Footnotes

  • This work was supported by the Australian Research Council (ARC) Centre of Excellence in Cognition and its Disorders (CE110001021), International Research Training Program Scholarships from Macquarie University awarded to D.M., an ARC Discovery Project (DP170101840) awarded to A.N.R. and A.W., an ARC Future Fellowship (FT230100119) awarded to A.N.R., and Medical Research Council (UK) intramural funding (SUAG/093/G116768) awarded to A.W. We acknowledge the facilities and scientific and technical assistance of the National Imaging Facility, a National Collaborative Research Infrastructure Strategy (NCRIS) capability, at Macquarie University. For the purpose of open access, we have applied a Creative Commons Attribution (CC BY) license to any Author Accepted Manuscript version arising from this submission.

  • *A.N.R. and A.W. contributed equally to this work.

  • The authors declare no competing financial interests.

  • Correspondence should be addressed to Denise Moerel at denise.moerel@hdr.mq.edu.au.

SfN exclusive license.

References

  1. Aoi MC, Mante V, Pillow JW (2020) Prefrontal cortex exhibits multidimensional dynamic encoding during decision-making. Nat Neurosci 23:1410–1420. https://doi.org/10.1038/s41593-020-0696-5
  2. Assem M, Glasser MF, Van Essen DC, Duncan J (2020) A domain-general cognitive core defined in multimodally parcellated human cortex. Cereb Cortex 30:4361–4380. https://doi.org/10.1093/cercor/bhaa023
  3. Barnes L, Goddard E, Woolgar A (2022) Neural coding of visual objects rapidly reconfigures to reflect subtrial shifts in attentional focus. J Cogn Neurosci 34:806–822. https://doi.org/10.1162/jocn_a_01832
  4. Battistoni E, Kaiser D, Hickey C, Peelen MV (2020) The time course of spatial attention during naturalistic visual search. Cortex 122:225–234. https://doi.org/10.1016/j.cortex.2018.11.018
  5. Bode S, Haynes J-D (2009) Decoding sequential stages of task preparation in the human brain. Neuroimage 45:606–613. https://doi.org/10.1016/j.neuroimage.2008.11.031
  6. Bode S, Sewell DK, Lilburn S, Forte JD, Smith PL, Stahl J (2012) Predicting perceptual decision biases from early brain activity. J Neurosci 32:12488–12498. https://doi.org/10.1523/JNEUROSCI.1708-12.2012
  7. Brainard DH (1997) The Psychophysics Toolbox. Spat Vis 10:433–436. https://doi.org/10.1163/156856897X00357
  8. Carrasco M (2011) Visual attention: the past 25 years. Vision Res 51:1484–1525. https://doi.org/10.1016/j.visres.2011.04.012
  9. Cichy RM, Oliva A (2020) A M/EEG-fMRI fusion primer: resolving human brain responses in space and time. Neuron 107:772–781. https://doi.org/10.1016/j.neuron.2020.07.001
  10. Cichy RM, Pantazis D, Oliva A (2014) Resolving human object recognition in space and time. Nat Neurosci 17:455–462. https://doi.org/10.1038/nn.3635
  11. Cichy RM, Pantazis D, Oliva A (2016) Similarity-based fusion of MEG and fMRI reveals spatio-temporal dynamics in human cortex during visual object recognition. Cereb Cortex 26:3563–3579. https://doi.org/10.1093/cercor/bhw135
  12. Desimone R, Duncan J (1995) Neural mechanisms of selective visual attention. Annu Rev Neurosci 18:193–222. https://doi.org/10.1146/annurev.ne.18.030195.001205
  13. Dienes Z (2011) Bayesian versus orthodox statistics: which side are you on? Perspect Psychol Sci 6:274–290. https://doi.org/10.1177/1745691611406920
  14. Dosenbach NUF, Visscher KM, Palmer ED, Miezin FM, Wenger KK, Kang HC, Burgund ED, Grimes AL, Schlaggar BL, Petersen SE (2006) A core system for the implementation of task sets. Neuron 50:799–812. https://doi.org/10.1016/j.neuron.2006.04.031
  15. Duncan J (2001) An adaptive coding model of neural function in prefrontal cortex. Nat Rev Neurosci 2:820–829. https://doi.org/10.1038/35097575
  16. Duncan J (2010) The multiple-demand (MD) system of the primate brain: mental programs for intelligent behaviour. Trends Cogn Sci 14:172–179. https://doi.org/10.1016/j.tics.2010.01.004
  17. Duncan J, Assem M, Shashidhara S (2020) Integrated intelligence from distributed brain activity. Trends Cogn Sci 24:838–852. https://doi.org/10.1016/j.tics.2020.06.012
  18. Duncan J, Owen AM (2000) Common regions of the human frontal lobe recruited by diverse cognitive demands. Trends Neurosci 23:475–483. https://doi.org/10.1016/S0166-2236(00)01633-7
  19. Fedorenko E, Duncan J, Kanwisher N (2013) Broad domain generality in focal regions of frontal and parietal cortex. Proc Natl Acad Sci U S A 110:16616–16621. https://doi.org/10.1073/pnas.1315235110
  20. Fusi S, Miller EK, Rigotti M (2016) Why neurons mix: high dimensionality for higher cognition. Curr Opin Neurobiol 37:66–74. https://doi.org/10.1016/j.conb.2016.01.010
  21. Goddard E, Carlson TA, Woolgar A (2022) Spatial and feature-selective attention have distinct, interacting effects on population-level tuning. J Cogn Neurosci 34:290–312. https://doi.org/10.1101/530352
  22. Gold JI, Shadlen MN (2007) The neural basis of decision making. Annu Rev Neurosci 30:535–574. https://doi.org/10.1146/annurev.neuro.29.051605.113038
  23. Grootswagers T, Robinson AK, Shatek SM, Carlson TA (2019) Untangling featural and conceptual object representations. Neuroimage 202:116083. https://doi.org/10.1016/j.neuroimage.2019.116083
  24. Grootswagers T, Robinson AK, Shatek SM, Carlson TA (2021) The neural dynamics underlying prioritisation of task-relevant information. Neuron Behav Data Anal Theory 5:1–17. https://doi.org/10.51628/001c.21174
  25. Harel A, Kravitz DJ, Baker CI (2014) Task context impacts visual object processing differentially across the cortex. Proc Natl Acad Sci U S A 111:E962–E971. https://doi.org/10.1073/pnas.1312567111
  26. Hebart MN, Bankson BB, Harel A, Baker CI, Cichy RM (2018) The representational dynamics of task and object processing in humans. Elife 7:e32816. https://doi.org/10.7554/eLife.32816
  27. Heekeren HR, Marrett S, Ungerleider LG (2008) The neural systems that mediate human perceptual decision making. Nat Rev Neurosci 9:467–479. https://doi.org/10.1038/nrn2374
  28. Hon N, Epstein RA, Owen AM, Duncan J (2006) Frontoparietal activity with minimal decision and control. J Neurosci 26:9805–9809. https://doi.org/10.1523/JNEUROSCI.3165-06.2006
  29. Jackson JB, Feredoes E, Rich AN, Lindner M, Woolgar A (2021) Concurrent neuroimaging and neurostimulation reveals a causal role for dlPFC in coding of task-relevant information. Commun Biol 4:1–16. https://doi.org/10.1038/s42003-021-02109-x
  30. Jackson J, Rich AN, Williams MA, Woolgar A (2016) Feature-selective attention in frontoparietal cortex: multivoxel codes adjust to prioritize task-relevant information. J Cogn Neurosci 29:310–321. https://doi.org/10.1162/jocn_a_01039
  31. Jackson JB, Woolgar A (2018) Adaptive coding in the human brain: distinct object features are encoded by overlapping voxels in frontoparietal cortex. Cortex 108:25–34. https://doi.org/10.1016/j.cortex.2018.07.006
  32. Jeffreys H (1998) The theory of probability. OUP Oxford.
  33. Jehee JFM, Ling S, Swisher JD, van Bergen RS, Tong F (2012) Perceptual learning selectively refines orientation representations in early visual cortex. J Neurosci 32:16747–16753. https://doi.org/10.1523/JNEUROSCI.6112-11.2012
  34. Kaiser D, Oosterhof N, Peelen M (2016) The neural dynamics of attentional selection in natural scenes. J Neurosci 36:10522–10528. https://doi.org/10.1523/JNEUROSCI.1385-16.2016
  35. Kass RE, Raftery AE (1995) Bayes factors. J Am Stat Assoc 90:773–795. https://doi.org/10.1080/01621459.1995.10476572
  36. Kleiner M, Brainard D, Pelli D (2007) What’s new in Psychtoolbox-3?
  37. Kriegeskorte N, Goebel R, Bandettini P (2006) Information-based functional brain mapping. Proc Natl Acad Sci U S A 103:3863–3868. https://doi.org/10.1073/pnas.0600244103
  38. Kriegeskorte N, Mur M, Bandettini PA (2008) Representational similarity analysis—connecting the branches of systems neuroscience. Front Syst Neurosci 2:249. https://doi.org/10.3389/neuro.06.004.2008
  39. Li S, Ostwald D, Giese M, Kourtzi Z (2007) Flexible coding for categorical decisions in the human brain. J Neurosci 27:12321–12330. https://doi.org/10.1523/JNEUROSCI.3795-07.2007
  40. Linde-Domingo J, Spitzer B (2024) Geometry of visuospatial working memory information in miniature gaze patterns. Nat Hum Behav 8:336–348. https://doi.org/10.1038/s41562-023-01737-z
  41. Löffler A, Haggard P, Bode S (2019) Decoding changes of mind in voluntary action—dynamics of intentional choice representations. Cereb Cortex 30:1199–1212. https://doi.org/10.1093/cercor/bhz160
  42. Moerel D, Grootswagers T, Robinson AK, Shatek SM, Woolgar A, Carlson TA, Rich AN (2022) The time-course of feature-based attention effects dissociated from temporal expectation and target-related processes. Sci Rep 12:6968. https://doi.org/10.1038/s41598-022-10687-x
  43. Morey RD, Rouder JN (2011) Bayes factor approaches for testing interval null hypotheses. Psychol Methods 16:406–419. https://doi.org/10.1037/a0024377
  44. Morey RD, Rouder JN (2018) BayesFactor: computation of Bayes factors for common designs. Available at: https://CRAN.R-project.org/package=BayesFactor
  45. Mostert P, Albers AM, Brinkman L, Todorova L, Kok P, de Lange FP (2018) Eye movement-related confounds in neural decoding of visual working memory representations. eNeuro 5:ENEURO.0401-17.2018. https://doi.org/10.1523/ENEURO.0401-17.2018
  46. Noah S, Meyyappan S, Ding M, Mangun GR (2023) Time courses of attended and ignored object representations. J Cogn Neurosci 35:645–658. https://doi.org/10.1162/jocn_a_01972
  47. Oostenveld R, Fries P, Maris E, Schoffelen J-M (2011) FieldTrip: open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data. Comput Intell Neurosci 2011:156869. https://doi.org/10.1155/2011/156869
  48. Oosterhof NN, Connolly AC, Haxby JV (2016) CoSMoMVPA: multi-modal multivariate pattern analysis of neuroimaging data in Matlab/GNU Octave. Front Neuroinform 10:27. https://doi.org/10.3389/fninf.2016.00027
  49. Pelli DG (1997) The VideoToolbox software for visual psychophysics: transforming numbers into movies. Spat Vis 10:437–442. https://doi.org/10.1163/156856897X00366
  50. Pestilli F, Carrasco M (2005) Attention enhances contrast sensitivity at cued and impairs it at uncued locations. Vision Res 45:1867–1875. https://doi.org/10.1016/j.visres.2005.01.019
  51. Posner MI (1980) Orienting of attention. Q J Exp Psychol 32:3–25. https://doi.org/10.1080/00335558008248231
  52. Pratte MS, Ling S, Swisher JD, Tong F (2013) How attention extracts objects from noise. J Neurophysiol 110:1346–1356. https://doi.org/10.1152/jn.00127.2013
  53. Quax SC, Dijkstra N, van Staveren MJ, Bosch SE, van Gerven MAJ (2019) Eye movements explain decodability during perception and cued attention in MEG. Neuroimage 195:444–453. https://doi.org/10.1016/j.neuroimage.2019.03.069
  54. Rao SC, Rainer G, Miller EK (1997) Integration of what and where in the primate prefrontal cortex. Science 276:821–824. https://doi.org/10.1126/science.276.5313.821
  55. Ratcliff R, McKoon G (2007) The diffusion decision model: theory and data for two-choice decision tasks. Neural Comput 20:873–922. https://doi.org/10.1162/neco.2008.12-06-420
  56. Rens N, Bode S, Burianová H, Cunnington R (2017) Proactive recruitment of frontoparietal and salience networks for voluntary decisions. Front Hum Neurosci 11:610. https://doi.org/10.3389/fnhum.2017.00610
  57. Rigotti M, Barak O, Warden MR, Wang X-J, Daw ND, Miller EK, Fusi S (2013) The importance of mixed selectivity in complex cognitive tasks. Nature 497:585–590. https://doi.org/10.1038/nature12160
  58. Robinson AK, Grootswagers T, Carlson TA (2019) The influence of image masking on object representations during rapid serial visual presentation. Neuroimage 197:224–231. https://doi.org/10.1016/j.neuroimage.2019.04.050
  59. Robinson AK, Rich AN, Woolgar A (2022) Linking the brain with behavior: the neural dynamics of success and failure in goal-directed behavior. J Cogn Neurosci 34:639–654. https://doi.org/10.1162/jocn_a_01818
  60. Rorden C, Brett M (2000) Stereotaxic display of brain lesions. Behav Neurol 12:191–200. https://doi.org/10.1155/2000/421719
  61. Rouder JN, Morey RD, Speckman PL, Province JM (2012) Default Bayes factors for ANOVA designs. J Math Psychol 56:356–374. https://doi.org/10.1016/j.jmp.2012.08.001
  62. Rouder JN, Speckman PL, Sun D, Morey RD, Iverson G (2009) Bayesian t tests for accepting and rejecting the null hypothesis. Psychon Bull Rev 16:225–237. https://doi.org/10.3758/PBR.16.2.225
  63. Sàenz M, Buraĉas GT, Boynton GM (2003) Global feature-based attention for motion and color. Vision Res 43:629–637. https://doi.org/10.1016/S0042-6989(02)00595-3
  64. Sigala N, Kusunoki M, Nimmo-Smith I, Gaffan D, Duncan J (2008) Hierarchical coding for sequential task events in the monkey prefrontal cortex. Proc Natl Acad Sci U S A 105:11969–11974. https://doi.org/10.1073/pnas.0802569105
  65. Stiers P, Mennes M, Sunaert S (2010) Distributed task coding throughout the multiple demand network of the human frontal–insular cortex. Neuroimage 52:252–262. https://doi.org/10.1016/j.neuroimage.2010.03.078
  66. Stokes MG, Kusunoki M, Sigala N, Nili H, Gaffan D, Duncan J (2013) Dynamic coding for cognitive control in prefrontal cortex. Neuron 78:364–375. https://doi.org/10.1016/j.neuron.2013.01.039
  67. Teichmann L, Moerel D, Baker CI, Grootswagers T (2022) An empirically-driven guide on using Bayes factors for M/EEG decoding. Apert Neuro 1:1–10. https://doi.org/10.1101/2021.06.23.449663
  68. Wagenmakers E-J (2007) A practical solution to the pervasive problems of p values. Psychon Bull Rev 14:779–804. https://doi.org/10.3758/BF03194105
  69. Waskom ML, Kumaran D, Gordon AM, Rissman J, Wagner AD (2014) Frontoparietal representations of task context support the flexible control of goal-directed cognition. J Neurosci 34:10743–10755. https://doi.org/10.1523/JNEUROSCI.5282-13.2014
  70. Wetzels R, Matzke D, Lee MD, Rouder JN, Iverson GJ, Wagenmakers E-J (2011) Statistical evidence in experimental psychology: an empirical comparison using 855 t tests. Perspect Psychol Sci 6:291–298. https://doi.org/10.1177/1745691611406923
  71. White AL, Carrasco M (2011) Feature-based attention involuntarily and simultaneously improves visual performance across locations. J Vis 11:15. https://doi.org/10.1167/11.6.15
  72. Woolgar A, Afshar S, Williams MA, Rich AN (2015a) Flexible coding of task rules in frontoparietal cortex: an adaptive system for flexible cognitive control. J Cogn Neurosci 27:1895–1911. https://doi.org/10.1162/jocn_a_00827
  73. Woolgar A, Golland P, Bode S (2014) Coping with confounds in multivoxel pattern analysis: what should we do about reaction time differences? A comment on Todd, Nystrom & Cohen 2013. Neuroimage 98:506–512. https://doi.org/10.1016/j.neuroimage.2014.04.059
  74. Woolgar A, Jackson J, Duncan J (2016) Coding of visual, auditory, rule, and response information in the brain: 10 years of multivoxel pattern analysis. J Cogn Neurosci 28:1433–1454. https://doi.org/10.1162/jocn_a_00981
  75. Woolgar A, Williams MA, Rich AN (2015b) Attention enhances multi-voxel representation of novel objects in frontal, parietal and visual cortices. Neuroimage 109:429–437. https://doi.org/10.1016/j.neuroimage.2014.12.083

Keywords

  • decision-making
  • fMRI
  • MEG
  • MVPA
  • selective attention
