Research Articles, Behavioral/Cognitive

Common Neural Mechanisms Control Attention and Working Memory

Ying Zhou (周颖), Clayton E. Curtis, Kartik K. Sreenivasan and Daryl Fougnie
Journal of Neuroscience 14 September 2022, 42 (37) 7110-7120; DOI: https://doi.org/10.1523/JNEUROSCI.0443-22.2022
Ying Zhou (周颖)
1Program in Psychology, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates 129188
2Department of Psychology, New York University, New York, New York 10003
Clayton E. Curtis
2Department of Psychology, New York University, New York, New York 10003
3Center for Neural Science, New York University, New York, New York 10003
Kartik K. Sreenivasan
1Program in Psychology, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates 129188
2Department of Psychology, New York University, New York, New York 10003
4Program in Biology, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates 129188
Daryl Fougnie
1Program in Psychology, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates 129188
2Department of Psychology, New York University, New York, New York 10003

Abstract

Although previous studies point to qualitative similarities between working memory (WM) and attention, the degree to which these two constructs rely on shared neural mechanisms remains unknown. Focusing on one such potentially shared mechanism, we tested the hypothesis that selecting an item within WM utilizes similar neural mechanisms as selecting a visible item via a shift of attention. We used fMRI and machine learning to decode both the selection among items visually available and the selection among items stored in WM in human subjects (both sexes). Patterns of activity in visual, parietal, and to a lesser extent frontal cortex predicted the locations of the selected items. Critically, these patterns were strikingly interchangeable; classifiers trained on data during attentional selection predicted selection from WM, and classifiers trained on data during selection from memory predicted attentional selection. Using models of voxel receptive fields, we visualized topographic population activity that revealed gain enhancements at the locations of the externally and internally selected items. Our results suggest that selecting among perceived items and selecting among items in WM share a common mechanism. This common mechanism, analogous to a shift of spatial attention, controls the relative gains of neural populations that encode behaviorally relevant information.

SIGNIFICANCE STATEMENT How we allocate our attention to external stimuli that we see and to internal representations of stimuli stored in memory might rely on a common mechanism. Supporting this hypothesis, we demonstrated that not only could patterns of human brain activity predict which items were selected during perception and memory, but that these patterns were interchangeable during external and internal selection. Additionally, this generalized selection mechanism operates by changes in the gains of the neural populations both encoding attended sensory representations and storing relevant memory representations.

  • attention
  • decoding
  • fMRI
  • selection
  • working memory

Introduction

Although theories of attention and working memory (WM) often emphasize their interrelatedness (Cowan, 1998a; Awh and Jonides, 2001; Oberauer, 2002; Chun, 2011; Gazzaley and Nobre, 2012), they are typically studied in isolation. Several lines of empirical evidence highlight commonalities between attention and WM. For example, attention and WM share similar capacity and resource limitations (Cowan, 1998b; Marois and Ivanoff, 2005; but see Fougnie and Marois, 2006) and engage similar brain regions (LaBar et al., 1999; Pollmann and von Cramon, 2000; Awh and Jonides, 2001; Ranganath and D'Esposito, 2005; Ikkai and Curtis, 2011; Jerde et al., 2012). Moreover, the process of rehearsing items (Awh et al., 1999, 2000; Awh and Jonides, 2001; Jha, 2002; Postle et al., 2004; Theeuwes et al., 2005; Katus et al., 2014; Shen et al., 2015; Souza et al., 2020) and suppressing distracting information (Gazzaley et al., 2005; Sreenivasan and Jha, 2007) in WM may be attention-based. In turn, representations in WM can guide how we attend to sensory information (Olivers et al., 2006; Woodman and Luck, 2007; Gayet et al., 2013; Bahle et al., 2020; Sasin and Fougnie, 2020; Williams et al., 2022). However, since attention and WM are complex processes involving multiple cognitive operations, it remains unclear which underlying components may be shared between the two. Here, we focus on the process of selection, which refers to how task-relevant information is prioritized over task-irrelevant information in both attention and WM. Critically, selection helps mitigate the strict resource or capacity limitations of attention and WM (Marois and Ivanoff, 2005).

The notion that a common process both selects among external sensory information and among internal WM representations is intriguing because it appeals to an intuitive common mechanism used to highlight relevant information. Nonetheless, there is little evidence linking attentional selection and WM selection. Researchers have intensively investigated attentional selection using pre-cueing paradigms, where a cue indicates which forthcoming stimulus to attend (Posner, 1980; Eriksen and Yeh, 1985; Murphy and Eriksen, 1987). Behaviorally, pre-cueing benefits the processing of selected information (Carrasco et al., 2000; Pestilli et al., 2009) at the cost of the processing of unselected information (Pestilli and Carrasco, 2005; Pestilli et al., 2007). Both neurons (Bushnell et al., 1981; Luck et al., 1997; Reynolds et al., 2000; Bisley and Goldberg, 2003) and voxels (Gandhi et al., 1999; Hopfinger et al., 2001; Liu et al., 2005; Serences and Boynton, 2007; Silver et al., 2007; Ikkai and Curtis, 2008) with receptive fields that match the locations of pre-cued items exhibit increased activity relative to those that match the locations of unattended items. Theories of selective attention, including computational models, posit that the benefits of attention stem from gain enhancements within the populations of neurons encoding selected task-relevant stimuli (Reynolds and Heeger, 2009; Carrasco, 2011). To study the process of selection in WM, researchers have used retro-cueing experimental paradigms, where a cue presented after WM items have been encoded signifies which memorandum will later be tested (Griffin and Nobre, 2003). Behaviorally, the quality of memory is better for the retroactively cued item compared with noncued items (Griffin and Nobre, 2003; Landman et al., 2003; Souza et al., 2016; Li et al., 2021). Neurally, WM representations are enhanced within the neural populations encoding the selected WM item (Lepsien et al., 2011; Sprague et al., 2016; Ester et al., 2018; Yoo et al., 2022).

We directly tested the hypothesis that a common neural mechanism underlies attentional and WM selection using fMRI and machine learning. Remarkably, classifiers trained on each type of selection were interchangeable in predicting the other, providing novel quantitative evidence for theories that posit a shared mechanism (Chun et al., 2011). In addition, using models of voxel receptive fields to visualize population activity, we observed elevated responses corresponding to the locations of the externally and internally selected items, suggesting that the shared selection mechanism involves differential gain.

Materials and Methods

Participants

Eleven neurologically healthy participants (ages 23–53; six females) with normal or corrected-to-normal vision participated in the experiment. The sample size was determined based on previous fMRI studies comparing selection of perceptual and WM representations (Nobre et al., 2004; Tamber-Rosenau et al., 2011), and is equal to or larger than that of previous studies that compared within-condition and across-condition decoding performance of classifiers trained on fMRI data (Jerde et al., 2012; Rademaker et al., 2019; Kwak and Curtis, 2022), as well as those that used population receptive field (pRF)-weighted reconstruction analyses (Kwak and Curtis, 2022; Yoo et al., 2022). Participants provided written informed consent in accordance with procedures approved by the Institutional Review Board at New York University.

Experimental design

We generated stimuli and interfaced with the MRI scanner, button box, and eye tracker using MATLAB software (The MathWorks) and Psychophysics Toolbox 3 (Brainard, 1997). Stimuli were presented using a PROPixx DLP LED projector (VPixx) located outside the scanner room and projected through a waveguide onto a translucent screen located at the head of the scanner bore. Participants viewed the screen at a total viewing distance of 63 cm through a mirror attached to the head coil. The display was a circular aperture ∼32 degrees of visual angle (dva) in diameter. A trigger pulse from the scanner synchronized the onsets of stimulus presentation and image acquisition.

Participants performed a pre-cue task and a retro-cue task across the two scanning sessions. The task procedures are illustrated in Figure 1A. The fixation symbol in both tasks was a centrally presented filled circle with a 0.3-dva radius, and subjects were required to maintain fixation at the center of the screen. Each pre-cue trial began with a 750-ms colored central fixation (0.4-dva radius) surrounded by three black placeholders. The color of the fixation indicated the target location on the upcoming stimulus screen. The distance from the screen center to the center of each placeholder was 6 dva, and the diameter of each placeholder was 8 dva. The pre-cue was followed by a 1500-ms interstimulus interval (ISI), then by the stimulus for 1500 ms. The stimulus consisted of three Gabor patches, one in each placeholder. The three placeholders were drawn in three different colors, and subjects had to select the target Gabor in the placeholder whose color matched the pre-cue. The three colors used on each trial were randomly selected from four colors (RGB = [255, 0, 0], [0, 200, 0], [0, 0, 255], [255, 165, 0]) and randomly distributed across the three locations (left, right, bottom) so that the target location could not be predicted from the pre-cue color itself. The stimulus was followed by a 750-ms mask to diminish iconic memory (Sperling, 1960), then by a 3000-ms delay. This was followed by a probe consisting of a circle and an oriented line; the length of the line and the diameter of the circle were both 6 dva. Subjects judged whether the line was rotated clockwise or counterclockwise relative to the orientation of the selected Gabor. We adjusted the difference between the target and probe orientations to titrate behavioral performance to ∼80% correct. Specifically, the orientation difference started at 20° and either increased by 1° after each error or decreased by 1° after four consecutive correct trials (cf. Levitt, 1971). Subjects responded by pressing "1" for clockwise or "2" for counterclockwise. The probe screen lasted 2250 ms regardless of subjects' responses. Subjects then received feedback consisting of the selected Gabor overlaid with the probe; the color of the probe indicated whether the response was correct (green: correct; red: incorrect). The intertrial interval lasted 9750 ms.

Each retro-cue trial began with a 1500-ms stimulus screen containing three Gabor patches in three black placeholders. The stimulus was followed by a 750-ms mask and a 1500-ms ISI. Following the ISI, subjects saw a retro-cue consisting of three colored placeholders surrounding a colored fixation. The color of the fixation point matched the color of one of the placeholders and indicated the location of the target Gabor. The colors on each trial were randomly selected from the four possible colors (see above) and randomly distributed across the three locations. The delay, probe, feedback, and intertrial interval were the same as in the pre-cue task.

Each subject completed two scanning sessions of 10 runs each, with five pre-cue runs and five retro-cue runs presented in an interleaved order. Each run contained 18 trials, yielding 180 trials per condition (90 per session). Each run started with 13 dummy TRs (9750 ms) of a central fixation screen to allow for magnetic field stabilization.
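For concreteness, the staircase rule can be sketched in a few lines. The following minimal Python illustration implements the 1-up/4-down logic described above (the experiment itself was run in MATLAB with Psychophysics Toolbox; the function names and the 1° floor below are ours, not the authors' code):

```python
# Hypothetical sketch of the 1-up/4-down staircase on the probe-target
# orientation offset: +1 deg after each error, -1 deg after four consecutive
# correct trials, starting at 20 deg. The 1-deg floor is our assumption.

def update_offset(offset, correct, streak, step=1.0, floor=1.0):
    """Return the updated orientation offset and run of correct responses."""
    if correct:
        streak += 1
        if streak == 4:                     # four in a row -> make it harder
            offset, streak = max(floor, offset - step), 0
    else:                                   # any error -> make it easier
        offset, streak = offset + step, 0
    return offset, streak

offset, streak = 20.0, 0
for correct in [True, True, True, True, False]:
    offset, streak = update_offset(offset, correct, streak)
print(offset)  # 20.0: -1 after four correct trials, +1 after the error
```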

Eye movements

Eye position coordinates (x, y) and pupil size were recorded at 500 Hz in the scanner using an EyeLink 2k (SR Research). Before each scanning session, eye position was calibrated using a 9-point calibration. Eye data were preprocessed in MATLAB with the freely available iEye toolbox (https://github.com/clayspacelab/iEye) in four steps: (1) data were transformed from raw pixel screen coordinates into dva; (2) extreme values associated with loss of track and blinks were removed; (3) data were smoothed with a Gaussian kernel (5-ms SD); and (4) each trial was drift-corrected by taking the mean eye position over known fixation epochs (the whole delay in the pre-cue task, and the first delay before the cue in the retro-cue task) and subtracting that value from the entire trial.
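Steps 3 and 4 are simple signal-processing operations; a minimal Python sketch of them follows (the actual preprocessing used the iEye MATLAB toolbox, so the function names and array conventions below are ours):

```python
import numpy as np

def gaussian_smooth(trace, fs=500, sd_ms=5):
    """Smooth a 1D gaze trace with a Gaussian kernel (SD given in ms)."""
    sd = sd_ms / 1000 * fs                    # 5 ms at 500 Hz = 2.5 samples
    half = int(np.ceil(3 * sd))
    t = np.arange(-half, half + 1)
    kernel = np.exp(-t**2 / (2 * sd**2))
    kernel /= kernel.sum()
    return np.convolve(trace, kernel, mode="same")

def drift_correct(trace, fixation_idx):
    """Subtract mean gaze position over a known fixation epoch (step 4)."""
    return trace - np.nanmean(trace[fixation_idx])
```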

fMRI methods

MRI data acquisition

Participants underwent three fMRI scanning sessions, two for the main selection experiment and one for retinotopic mapping. All data were acquired at the NYU Center for Brain Imaging on a 3T Siemens Prisma scanner with a 64-channel head/neck coil. Functional scans for the selection experiment were acquired using an EPI pulse sequence with 44 slices and a voxel size of 2.5 mm³ (4× simultaneous multi-slice acceleration; FoV = 200 × 200 mm, no in-plane acceleration, TE/TR = 30/750 ms, flip angle = 50°, bandwidth = 2290 Hz/pixel, 0.56-ms echo spacing, P → A phase encoding). Data for the retinotopic mapping session were acquired in a separate session at a higher resolution, with 56 slices and a voxel size of 2.0 mm³ (4× simultaneous multi-slice acceleration, FoV = 208 × 208 mm, no in-plane acceleration, TE/TR = 36/1200 ms, flip angle = 66°, bandwidth = 2604 Hz/pixel, 0.51-ms echo spacing, P → A phase encoding). To correct for local spatial distortions in the functional data offline, we estimated a map of the field inhomogeneities by acquiring pairs of spin echo images with normal and reversed phase-encoding directions, an identical slice prescription to the functional data, and no simultaneous multi-slice acceleration (TE/TR = 45.6/3537 ms, three volumes per phase-encoding direction). To enable precise localization of functional data, we collected T1-weighted whole-brain anatomic scans for each participant using an MP-RAGE sequence with 192 slices and a voxel size of 0.8 mm³ (FoV = 256 × 240 mm, TE/TR = 2.24/2400 ms, flip angle = 8°, bandwidth = 210 Hz/pixel).

MRI data preprocessing

T1-weighted anatomic images were segmented and cortical surfaces were constructed using the recon-all command in Freesurfer (version 6.0). Functional data were preprocessed with custom scripts using functions provided by AFNI. First, we applied the B0 field map correction and reverse-polarity phase-encoding (reverse blip) correction. Next, we performed motion correction using a six-parameter affine transform, aligned the functional data to the anatomic images, and projected the data to the cortical surface. Spatial smoothing (5-mm FWHM on the cortical surface) was applied to the retinotopic mapping data, but no explicit smoothing was applied to the data from the selection experiment. Data from the selection experiment were re-projected into volume space, which incurs a small amount of smoothing along vectors perpendicular to the cortical surface. Finally, we removed linear trends from the time series data and normalized (z-scored) across all time points within each run.

Estimating selection-related blood oxygen level-dependent (BOLD) activity

To identify activity related to selection in both tasks, we used the 3dDeconvolve and 3dLSS commands in AFNI (https://afni.nimh.nih.gov/) to fit a least-squares-separate general linear model (LSS-GLM) to the preprocessed BOLD time series. LSS-GLM has been shown to isolate single-trial activity in rapid event-related designs (Mumford et al., 2012; Arco et al., 2018). In the pre-cue condition, we modeled the selection, delay, and response events; the stimulus event was not modeled because it overlapped with the selection event. Based on previous estimates that symbolic central cues direct selection within 300 ms (Carrasco, 2011), we set the duration of the selection event equal to one TR (750 ms). The retro-cue model was identical, except that the selection event was time-locked to the 750-ms retro-cue screen. Each event was modeled as a boxcar with the duration of the event, convolved with a hemodynamic response function (HRF; GAM(10.9, 0.54) in 3dDeconvolve). The β coefficients for the selection event on each trial were estimated separately, resulting in 180 GLM iterations per task condition. In each iteration, we modeled the selection event on the trial of interest with a single regressor and the selection events on all other trials with a separate regressor; the delay and response events were modeled with one regressor each. Thus, the GLM for each task condition included four regressors on each iteration (selection for the trial of interest, selection for all other trials, delay, and response), plus six regressors for head motion and four regressors for signal drift. The procedure was repeated with each trial in turn serving as the trial of interest, resulting in 180 selection βs per condition. Selection βs were normalized via z-scoring on a voxel-by-voxel and run-by-run basis before further analysis.
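The LSS logic can be summarized as follows: for each trial, build a design matrix in which that trial's selection event has its own regressor, refit the GLM, and keep only that β. A schematic Python sketch is below (the analysis itself used AFNI's 3dDeconvolve/3dLSS; the HRF, onsets, and durations are placeholders, and the motion and drift regressors are omitted for brevity):

```python
import numpy as np

def boxcar(onsets, dur, n_tr, tr=0.75):
    """Binary regressor: 1 during each event, 0 elsewhere (times in s)."""
    x = np.zeros(n_tr)
    for onset in onsets:
        start = int(round(onset / tr))
        x[start:start + max(1, int(round(dur / tr)))] = 1
    return x

def lss_selection_betas(bold, hrf, sel_on, delay_on, resp_on, n_tr):
    """bold: (n_tr, n_vox). Returns one selection beta per trial per voxel."""
    conv = lambda x: np.convolve(x, hrf)[:n_tr]
    betas = []
    for i, onset in enumerate(sel_on):
        others = [o for j, o in enumerate(sel_on) if j != i]
        X = np.column_stack([
            conv(boxcar([onset], 0.75, n_tr)),   # selection, trial of interest
            conv(boxcar(others, 0.75, n_tr)),    # selection, all other trials
            conv(boxcar(delay_on, 3.0, n_tr)),   # delay
            conv(boxcar(resp_on, 2.25, n_tr)),   # response
            np.ones(n_tr),                       # intercept
        ])
        b, *_ = np.linalg.lstsq(X, bold, rcond=None)
        betas.append(b[0])                       # keep the trial-i beta only
    return np.array(betas)
```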

pRF mapping

pRF mapping was conducted using procedures described previously (Mackey et al., 2017). Participants maintained central fixation while covertly tracking a bar aperture that swept across the screen in discrete steps in one of four orientations/directions: oriented vertically and traveling left to right or right to left, or oriented horizontally and traveling top to bottom or bottom to top. The bar aperture was divided into three rectangular segments of equal size (a central segment and two flanking segments), each containing a random dot kinematogram (RDK). Participants' task was to identify which of the two flanking RDKs had the same direction of motion as the central RDK; the dot motion of all three segments changed with each discrete step. Participants reported their answer with a button press. We adjusted the coherence of the RDKs using a staircase procedure to maintain accuracy at ∼75%. Each session contained eight to nine runs; each 5-min run consisted of 12 sweeps, and each sweep consisted of 12 discrete steps (one step every 2 s). The order of the four sweep directions was randomized within each run.

We fit the preprocessed BOLD time series of each voxel for each participant using the compressive spatial summation model (Kay et al., 2013): $\hat{r}(t) = \gamma \left[ \iint S(x,y)\, N\big((x,y), I\sigma^2\big)\, dx\, dy \right]^n$, where $S$ is a binary stimulus image (1 where the stimulus was presented, 0 elsewhere) and $N\big((x,y), I\sigma^2\big)$ is a normal distribution with mean $(x,y)$ and covariance $I\sigma^2$, where $I$ is a two-dimensional identity matrix, describing a circular, symmetric Gaussian. The parameters of this model are the voxel's receptive field center $(x,y)$ in dva, SD $\sigma$ in dva, amplitude $\gamma$, and compressive spatial summation exponent $n$ (where $n \leq 1$). Parameters were fit with a GPU-accelerated coarse grid search over parameters, followed by a local optimization method.
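For intuition, the model's prediction for a single voxel reduces to a weighted overlap between the stimulus image and the voxel's Gaussian pRF, passed through a compressive power law. A minimal numerical sketch follows (grid resolution and parameter values are illustrative; this is not the fitting code):

```python
import numpy as np

def css_prediction(stim, xs, ys, x0, y0, sigma, gamma, n):
    """stim: (n_frames, H, W) binary movie; xs, ys: (H, W) grids in dva."""
    prf = np.exp(-((xs - x0)**2 + (ys - y0)**2) / (2 * sigma**2))
    overlap = (stim * prf).sum(axis=(1, 2))   # ~ integral of S(x,y) * N(...)
    return gamma * overlap**n                 # compressive summation, n <= 1

# Example: random binary frames on a 64 x 64 grid spanning +/- 16 dva
xs, ys = np.meshgrid(np.linspace(-16, 16, 64), np.linspace(-16, 16, 64))
stim = (np.random.rand(100, 64, 64) > 0.5).astype(float)
pred = css_prediction(stim, xs, ys, x0=3.0, y0=-1.0, sigma=2.0, gamma=1.0, n=0.8)
```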

Voxels' preferred polar angle ($\arctan(y/x)$) and eccentricity ($\sqrt{x^2 + y^2}$) were visualized on the cortical surface. To define retinotopically organized regions of interest (ROIs), we restricted our analysis to voxels with >10% of response variability explained by the pRF model. We then drew ROIs on each participant's cortical surface using reversals in the voxels' preferred polar angle as boundaries between neighboring visual regions (Mackey et al., 2017). We defined four ROIs in visual cortex (V1–V3AB), four ROIs along the caudal–rostral intraparietal sulcus (IPS0–IPS3), and two ROIs along the dorsal–ventral precentral sulcus (sPCS and iPCS), each containing a full visual field representation (for an illustration of the ROIs for one subject, see Fig. 2A). Further fMRI analyses were conducted in ROIs that combined regions sharing a foveal confluence (V1, V2, and V3; IPS0 and IPS1; IPS2 and IPS3; Wandell et al., 2005, 2007; Mackey et al., 2017). We also combined voxels in sPCS and iPCS into a single PCS ROI to roughly match the size of our other ROIs, although the results were comparable for the individual PCS ROIs.

Decoding analyses

We examined whether classifiers trained on selection-related activity in the pre-cue task could decode the internally selected location in the retro-cue task, and vice versa. To decode the selected location, we trained sparse multinomial logistic regression (SMLR) classifiers in MATLAB using the Princeton MVPA toolbox (http://www.csbmb.princeton.edu/mvpa). The SMLR classifier is widely used to decode multiclass conditions (Krishnapuram et al., 2005; Pereira et al., 2009). The z-scored β coefficients for the selection event on each trial, estimated using the LSS-GLM, were used to train the classifiers. First, the classifiers were tested on data from the same task condition used for training (i.e., within-condition decoding) to examine whether the neural representations we extracted could predict the selected location (Fig. 1C). Since there were three possible locations, we used a leave-three-trials-out cross-validation scheme, in which the classifier was trained on the data from all but three trials (one per location) and then tested on the three left-out trials, resulting in 60 cross-validation folds per task condition. Decoding accuracy was computed by comparing the true location labels with the classified labels across the 60 folds. Next, the classifier was trained on all of the data from one task condition and tested on each trial from the other task condition (i.e., across-condition decoding) to examine whether the neural representations were comparable across the two task conditions. To increase the reliability of our decoding estimates, we repeated the entire procedure 10 times, taking the mean decoding accuracy across iterations as the final within-condition and across-condition decoding accuracy.
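The cross-validation and transfer logic can be sketched compactly. The sketch below uses scikit-learn's multinomial logistic regression as a stand-in for SMLR (which the authors ran in MATLAB via the Princeton MVPA toolbox) and simplifies fold construction to random draws of one trial per location:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def within_decoding(X, y, n_folds=60):
    """Leave-three-out decoding: hold out one trial per location per fold."""
    accs = []
    for _ in range(n_folds):
        test = np.array([rng.choice(np.where(y == c)[0]) for c in np.unique(y)])
        train = np.setdiff1d(np.arange(len(y)), test)
        clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
        accs.append(clf.score(X[test], y[test]))
    return np.mean(accs)

def across_decoding(X_train, y_train, X_test, y_test):
    """Train on all trials of one condition, test on the other condition."""
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return clf.score(X_test, y_test)
```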

To track the BOLD activity associated with the selection and memory phases of the task, we computed the time series of decoding performance. We segmented the z-scored preprocessed BOLD time series from 0 to 21 s relative to trial onset (i.e., through the end of the intertrial interval) and trained classifiers separately on the BOLD signal averaged across every 2 TRs. One set of classifiers was trained to decode the target location, indexing the selection process, while the other set was trained to decode the target orientation, indexing the maintenance process. Decoding of the target location used the leave-three-trials-out cross-validation scheme described above, while decoding of the target orientation used a leave-six-trials-out cross-validation scheme, corresponding to the six possible Gabor orientations.

Estimating population-level activity modulation

We tested whether external and internal selection both increase activity in voxels whose receptive fields match the selected item's location relative to voxels whose receptive fields match the unselected items' locations. To estimate the response of the neural populations at the selected location, we reconstructed a pRF-weighted map from the selection-related activity. This procedure essentially projects voxel activity in each ROI into visual space in screen coordinates (Kok and de Lange, 2014; Kwak and Curtis, 2022; Yoo et al., 2022). In each ROI, for every selected location (left, right, bottom) in each task condition (pre-cue, retro-cue), we created a reconstructed map as $\sum_i w_i\big((x,y), I\sigma^2\big)\, \beta_i$, where $w_i\big((x,y), I\sigma^2\big)$ is the weight associated with the $i$th voxel at location $(x,y)$, and $\beta_i$ is the GLM-derived selection-period β at voxel $i$, averaged across trials with the same selected location. We defined $w_i$ as the receptive field of voxel $i$: a two-dimensional Gaussian with mean $(\mu_x, \mu_y)$ and covariance $I\sigma^2$, where $I$ is a two-dimensional identity matrix, describing a circular, symmetric Gaussian (Fig. 3A). For each pRF-weighted selection activity map, we calculated the mean selection-related activity at the selected and nonselected locations and averaged across the three target-location conditions. The activity difference between the selected and nonselected locations was taken as the activity modulation attributable to selection.
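The reconstruction amounts to summing each voxel's Gaussian pRF over visual space, weighted by its selection β. A minimal sketch under these assumptions (array names are ours):

```python
import numpy as np

def reconstruct_map(betas, centers, sigmas, xs, ys):
    """betas: (n_vox,); centers: (n_vox, 2); sigmas: (n_vox,); xs/ys: (H, W)."""
    recon = np.zeros_like(xs)
    for beta, (x0, y0), s in zip(betas, centers, sigmas):
        w = np.exp(-((xs - x0)**2 + (ys - y0)**2) / (2 * s**2))  # voxel pRF
        recon += w * beta                   # pRF weighted by selection beta
    return recon

# Modulation = mean of the map inside the selected placeholder minus the
# mean inside the nonselected placeholders (location masks not shown here).
```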

Relationship between decoding results and population-level activity modulation

To investigate the relationship between our decoding results and the population reconstructions, we conducted a median split of trials based on the magnitude of the population-level modulation at the selected location and compared the relative classifier activation for the selected location between low-modulation and high-modulation trials. Classifier activation for each spatial location was calculated as the sum over voxels of the classifier training weights assigned to each voxel multiplied by that voxel's BOLD activity on a given trial, normalized to sum to 1 over the three classes. We used the relative classifier activation for the selected location (where higher activation can be taken as greater classifier evidence for that class) as a measure of the strength of classifier evidence for the selected location.
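A small sketch of the activation measure, assuming a softmax-style normalization so that activations are nonnegative and sum to 1 over the three classes (the exact normalization used by the SMLR implementation may differ):

```python
import numpy as np

def classifier_activation(weights, bold_trial):
    """weights: (3, n_vox) trained weights; bold_trial: (n_vox,) betas.
    Returns per-class evidence summing to 1 (softmax; our assumption)."""
    z = weights @ bold_trial               # weighted sum over voxels
    e = np.exp(z - z.max())                # subtract max for stability
    return e / e.sum()
```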

Distinguishing between possible mechanisms underlying enhanced population response

To explore whether the observed increases in population response were consistent with relative gain modulation, we used a simulation to compare our observed population-level modulations with the expected influence on the spatiotopic population response under two plausible mechanisms: multiplicative gain (Reynolds and Heeger, 2009; Herrmann et al., 2010) and pRF shifts (Vo et al., 2017). Each voxel's activity was simulated by the cumulative distribution function $\mathrm{Activity} = N(r-\mu) = \int_{-\infty}^{r-\mu} \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{x^2}{2\sigma^2}}\, dx$, where $r$ denotes the radius of the selected Gabor stimulus, $\mu$ denotes the voxel's estimated pRF center, and $\sigma$ denotes the voxel's estimated pRF size (given by the SD of the best-fit Gaussian; Fig. 4A). In the multiplicative gain model (Fig. 4B, left), the selection-related modulation $m$ of activity for voxels with pRF centers within the selected Gabor was given by $m = 1 + 0.6\,\frac{p(d)}{p(0)} = 1 + 0.6\, e^{-\frac{d^2 - 3.5^2}{2\sigma^2}}$, where $d$ denotes the distance between the voxel's pRF center and the center of the selected Gabor and $\sigma$ denotes the voxel's estimated pRF size (cf. Reynolds and Heeger, 2009, their Fig. 1; Herrmann et al., 2010, their Fig. 5). In the pRF shift model (Fig. 4B, right), we assumed that voxels' pRF centers shift a distance $s$ toward the center of the selected location according to $s = 0.2 \times (1 - e^{-1.5d})$, where $d$ denotes the distance between the voxel's pRF center and the center of the selected Gabor (cf. Vo et al., 2017, their Fig. 2e). To assess which model best described our data, we calculated the sum of squared error between the simulated and observed activity modulation.
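To make the two simulated mechanisms concrete, here is a sketch following the equations above; the restriction of the gain to voxels with pRF centers inside the selected Gabor is taken from the text, while the call signatures and the use of the Gabor radius r as the gating threshold are our assumptions:

```python
import numpy as np
from scipy.stats import norm

def baseline_activity(d, sigma, r):
    """Gaussian CDF evaluated at (r - d)/sigma, per the activity equation
    above (d = distance of the pRF center from the selected Gabor's center)."""
    return norm.cdf((r - d) / sigma)

def gain_model(d, sigma, r):
    """Multiplicative gain for voxels with pRF centers inside the Gabor."""
    m = 1 + 0.6 * np.exp(-(d**2 - 3.5**2) / (2 * sigma**2))
    return np.where(d <= r, m, 1.0) * baseline_activity(d, sigma, r)

def shift_model(d, sigma, r):
    """pRF centers move a distance s toward the selected location."""
    s = 0.2 * (1 - np.exp(-1.5 * d))
    return baseline_activity(d - s, sigma, r)
```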

Statistical analysis

For our behavioral analyses, we used paired-sample t tests to examine differences in performance across tasks. For our fMRI analyses, all statistics were calculated using a nonparametric permutation analysis (Rademaker et al., 2019). This method is appropriate here because there was no a priori reason to believe that the data would be normally distributed. Specifically, we repeated each analysis with shuffled trial labels (i.e., selected location: left, right, bottom) 1000 times to build an empirical null distribution of the test statistic of interest (e.g., decoding accuracy). For individual-level analyses, the percentage of the empirical null distribution equal to or larger than the real data was taken as the p-value. For group-level analyses, our test statistic was the t-value derived from a paired t test of the real data (vs zero or chance), and the empirical null distribution comprised the corresponding t-values for each of the 1000 iterations. The p-value was the percentage of the null distribution equal to or larger than the true t-value.
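The permutation scheme itself is generic and easy to sketch; in the illustration below, `decode` stands in for whatever analysis produces the test statistic:

```python
import numpy as np

def permutation_pvalue(decode, X, y, n_perm=1000, seed=0):
    """One-sided p: fraction of label-shuffled statistics >= observed."""
    rng = np.random.default_rng(seed)
    observed = decode(X, y)
    null = np.array([decode(X, rng.permutation(y)) for _ in range(n_perm)])
    return observed, np.mean(null >= observed)
```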

Data availability

Behavioral, eye-tracking, and MRI data, and analysis code are available at https://osf.io/jqu95/.

Results

Behavioral performance

The accuracy of memory judgments in the pre-cue and retro-cue tasks indicated that participants performed both tasks well (Fig. 1B). Each condition used a separate staircase with a target accuracy of 80%. As expected, we found no significant difference in accuracy between the pre-cue (81%) and retro-cue (78%) conditions, t(10) = 2.084, p = 0.064, Cohen's d = 0.628, BF10 = 1.421. At threshold, the angular differences between the sample and probe orientations in the pre-cue (13.64°) and retro-cue (15.76°) conditions did not differ significantly, t(10) = 1.856, p = 0.093, Cohen's d = 0.560, BF10 = 1.074, suggesting that performance was well matched across the two tasks.

Figure 1.

Methods, behavior, decoding analysis, and decoding results. A, Pre-cue task: fixation cued the task-relevant color. Then, three oriented Gabors appeared, each within a colored placeholder ring. Participants encoded the Gabor whose ring color matched the fixation. After a mask and delay, participants reported whether the orientation of the probe was tilted clockwise (CW; button 1) or counterclockwise (CCW; button 2) relative to the memorized target orientation. Retro-cue task: participants encoded the orientations of three Gabors, which were followed by a mask and a short ISI. Then a retro-cue indicated the task-relevant color. Participants were instructed to remember the orientation of the item at the location matching the fixation color and to later compare it to the probe orientation. B, Probe task accuracy and probe change magnitude in the pre-cue (blue) and retro-cue (red) conditions. Gray lines indicate individual subjects. Error bars denote the 95% bootstrap confidence interval (CI). C, Retinotopy for a representative subject. Color depicts voxels' preferred polar angle projected onto the cortical surface of the left hemisphere. D, Decoding performance across ROIs for within-condition decoding of the pre-cued location (upper left) and the retro-cued location (lower right), and for cross-condition decoding of location (gray backgrounds). The gray dashed line indicates chance accuracy (33.3%). Jittered dots indicate individual performance: filled dots indicate participants whose decoding performance was significantly higher than chance (p < 0.05, one-sided), and unfilled dots indicate participants with at-chance decoding performance. The horizontal line in each violin indicates mean accuracy. Stars at the bottom of each plot indicate that group-level performance was significantly higher than chance (***p < 0.001, **p < 0.01, *p < 0.05; one-sided, false discovery rate (FDR) corrected across all ROIs for each decoding condition independently).

Decoding target location from isolated selection-related BOLD activity

First, we estimated selection-related activity on each trial using a GLM that modeled each of the trial components for each cueing condition (Materials and Methods). Second, for each condition separately, we trained classifiers to predict the cued locations (i.e., left, right, bottom; Fig. 1C). For both pre-cue and retro-cue trials, most of the ROIs could successfully decode the selected location (Fig. 1D), confirming that our decoding procedures were robust. Third, to test our main hypothesis regarding a shared neural selection mechanism, we performed across-condition decoding. We trained classifiers using pre-cue data and attempted to decode the cued location from retro-cue data; similarly, we trained classifiers using retro-cue data and decoded the cued locations from pre-cue data. Critically, we found that most of the ROIs could successfully decode the selected location across cueing conditions (Fig. 1D). Across-condition decoding performance was ∼90% of within-condition decoding performance, suggesting that the neural activity patterns were nearly interchangeable across the two tasks. We ruled out the possibility that these results were driven by gaze shifts during selection (Theeuwes et al., 2005, 2009; Hedge and Leonards, 2013); classifiers trained on eye position were unable to predict the selected location (all FDR-corrected ps ≥ 0.05).

Temporal evolution of selection and memory

To confirm that our decoding results indeed reflected selection rather than incidental maintenance of the selected location, we examined the time course of decoding accuracy. This analysis was motivated by the fact that location information was essential during the selection process itself but ceased to be task-relevant once selection was complete (as the memory probe was presented at fixation). Thus, we predicted that the time course of decoding accuracy for location would rise following the onset of the selection cue and return to chance once selection was complete, whereas the time course of orientation decoding (indexing the maintenance, retrieval, and response processes) would peak after selection and remain above chance for the duration of the trial. To formalize this prediction, we separately convolved the selection regressor and the combined delay + response + feedback regressors from our GLM with the HRF to simulate the expected time courses of the selection and memory processes, respectively (Fig. 2A). We then compared these simulated time courses with the actual decoding time courses for location and orientation. Critically, the decoding time course analysis was completely independent of the GLM used to derive the simulated time courses (see Materials and Methods). The decoding accuracy time series for location and orientation (Fig. 2B) closely matched our simulated time courses. Specifically, the rapid rise and fall of location decoding in visual and parietal cortex matched the simulated time course of selection, whereas orientation decoding in visual cortex mirrored the simulated memory time course. Moreover, location decoding accuracy peaked earlier in the pre-cue than in the retro-cue condition, consistent with the relative timing of selection in the two tasks. Together, these observations support our claim that location decoding reflected the selection process while orientation decoding reflected memory processes.

Figure 2.

Temporal evolution of selection and memory decoding. A, Simulation of selection- and memory-related activity and the predicted performance for decoding target location and orientation. Selection- and memory-related neural activity was simulated by convolving each process's hypothetical duration with an HRF; predicted decoding performance is proportional to the amplitude of the simulated activity (arbitrary units). B, Time series of decoding performance for target location and orientation. The gray dashed lines indicate chance for decoding location (33.3%) and orientation (16.7%). Blue and red lines denote location decoding in the pre-cue and retro-cue conditions, respectively; yellow and green lines denote orientation decoding in the pre-cue and retro-cue conditions, respectively. The colored lines depict mean performance across subjects, and the shaded area around each line indicates the 95% bootstrap CI. The dots at the top and bottom indicate time points at which performance was significantly higher than chance (darkest color, p < 0.001; medium color, p < 0.01; lightest color, p < 0.05; one-sided, FDR corrected across all time points for each ROI and decoding condition independently).

Reconstructing maps of selection-related activity

Although these results derived from machine learning approaches provide strong evidence for interchangeable patterns of activity during attentional and WM selection, it is notoriously difficult to make direct inferences about neural mechanisms based on significant classification alone (Freeman et al., 2011; Naselaris et al., 2011; Serences and Saproo, 2012). Therefore, we used each voxel's pRF parameters (i.e., position and size) to project selection-related activity within each ROI into the screen coordinates of visual space (Fig. 3A). We used these reconstructions to visualize how selection affected the distribution of activity within the retinotopically organized maps. Notably, we found increased activity in the portions of the maps containing the selected target relative to the distractors for both the pre-cue and retro-cue conditions (Fig. 3B). To quantify these results, we compared the mean activity at the selected and nonselected locations. For both pre-cue and retro-cue conditions, the mean activity at the selected location was higher in V1–V3, V3AB, and IPS0/1 (Fig. 3C). To investigate whether this relative difference in activity explained the significant cross-decoding between pre-cued and retro-cued selection, we conducted a median split of trials for each subject based on the magnitude of population modulation at the selected location and compared classifier evidence (quantified as the classifier activation for the selected location relative to the activation for the nonselected locations) between low-modulation and high-modulation trials. Both within-task and across-task classifier evidence were significantly greater for high-modulation relative to low-modulation trials in V1–V3, V3AB, and IPS0/1 (Fig. 3D). This relative difference in spatiotopic activity modulation explains why we found significant cross-decoding between selection during the pre-cue and retro-cue tasks. Most provocatively, these findings are highly consistent with the effects of multiplicative gain enhancement that have been observed at the population (Corbetta et al., 1990; Sprague and Serences, 2013) and single-neuron (Connor et al., 1997; McAdams and Maunsell, 1999; Treue and Martínez Trujillo, 1999) levels; as such, our results point to a plausible neural mechanism underlying a shared process of external and internal selection.

Figure 3.

Reconstructing the topographic patterns of selection. A, Procedure for reconstructing the pRF-weighted selection activity maps. For each condition and ROI, we multiplied the GLM-derived selection-related β coefficient of each voxel, separately for each location (i.e., left, right, bottom), by its pRF model parameters (i.e., position, size) and summed across all voxels. This procedure projects the relative population activity within the ROI from brain space into the screen coordinates of visual space. B, The pRF-weighted reconstruction maps for each condition and ROI. The circles in each map mark the three placeholder rings shown during the experiment, and the red ring marks the selected location. C, The mean pRF-weighted selection-related activity at the selected and nonselected locations in each task and ROI for individual subjects (gray) and across subjects (blue for pre-cue, red for retro-cue). These results indicate that changes in population gain underlie attentional and WM selection. D, The relationship between the activity modulation shown in C and the decoding results. Within-task (left) and across-task (right) decoding strength, in the form of classifier activation, is shown separately for trials with low and high modulation. Error bars indicate the 95% bootstrap CI. Stars at the top indicate the significance of the difference between the two conditions indicated on the x-axis (***p < 0.001, **p < 0.01, *p < 0.05; one-sided, FDR corrected across all ROIs for each task condition independently).

To provide further support for the notion that gain enhancement is the mechanism underlying the observed changes in population response, we used simulations to directly compare two mechanisms of selection. We simulated the fMRI data expected under multiplicative gain (Fig. 4B, left) and under pRF shifts (Fig. 4B, right) and compared these simulations to the observed data in Figure 3C. Critically, while both multiplicative gain (Reynolds and Heeger, 2009; Herrmann et al., 2010) and RF shifts (Vo et al., 2017) have been observed with fMRI during spatial selection, these mechanisms make different predictions about the spatial pattern of modulation in voxels that represent the selected location. Specifically, the gain model predicts that the largest enhancement in response should occur in voxels whose RFs cover the center of the selected location, whereas the RF shift model predicts larger enhancement in voxels whose RFs cover the periphery of the selected location (i.e., voxels whose RF centers shift from outside the selected location to inside it). The simulated topographic patterns of activity shown in Figure 4C are consistent with the prediction of the multiplicative gain mechanism. The sum of squared error between observed and simulated activity modulation indicated that our observed data were significantly more consistent with the multiplicative gain model in V1–V3, V3AB, and IPS0/1 (all FDR-corrected ps ≤ 0.004), but not in IPS2/3 or PCS, where the two models did not significantly differ (all FDR-corrected ps ≥ 0.080; Fig. 4D). These findings bolster the claim that gain modulation supports a common selection operation.

Figure 4.

Simulating multiplicative gain and pRF shift mechanisms. A, Schematic illustrating the simulation of selection-related activity. B, Schematic illustrating the simulation of the two possible mechanisms. In the multiplicative gain model, activity enhancement was modeled by multiplicatively scaling the selection-related activity of voxels with pRF centers in the selected location. In the pRF shift model, voxels' pRF centers were shifted toward the center of the selected location. C, The simulated activity modulation maps for the two possible mechanisms. We reconstructed the pRF-weighted map from the simulated selection-related activity using the same procedure used to generate Figure 3B, isolated the activity modulation effect by subtracting the mean of the pRF-weighted maps for the two nonselected locations from the map for the selected location [e.g., middle − (left + right)/2], and collapsed the modulation maps across the three selected locations. The circles in each map mark the three placeholder rings shown during the experiment, and the red ring marks the selected location. D, Comparison between simulated and observed data. The modulation effect decays as a voxel's pRF center increases in distance from the selected location. The blue and red lines show the multiplicative gain and pRF shift mechanisms, respectively, and the black line shows the observed data. The shaded area around each line indicates the 95% bootstrap CI. Stars at the top of each plot indicate that the sum of squared error between the multiplicative gain simulation and the real data was significantly lower than that between the pRF shift simulation and the real data (***p < 0.001, **p < 0.01, *p < 0.05; one-sided, FDR corrected across all ROIs). The observed data were significantly more consistent with the multiplicative gain model in most ROIs.

Discussion

While both attention and WM involve the preferential processing of task-relevant information, here we addressed the extent to which they draw on shared mechanisms. Importantly, the intuitive and appealing notion that attentional and WM selection reflect a single underlying process (Chun et al., 2011; Kiyonaga and Egner, 2013) lacks direct evidence. Behavioral studies using dual-task paradigms report inconsistent effects of a concurrent attention-demanding task on WM selection (Janczyk and Berryhill, 2014; Lin et al., 2021; but see Hollingworth and Maxcey-Richard, 2013; Rerko et al., 2014; Makovski and Pertzov, 2015). Furthermore, while neural studies consistently observe that attentional and WM selection evoke activity in overlapping brain regions (Griffin and Nobre, 2003; Nobre et al., 2004; Kuo et al., 2009; Gazzaley and Nobre, 2012), this is a qualitative rather than a quantitative conclusion, and it crucially ignores the possibility that the two selection processes may draw on distinct neural mechanisms that coexist in the same brain regions. As an illustrative example, perception of first-order and second-order motion has been found to activate nearly identical regions in V1 (Nishida et al., 2003), but a quantitative comparison demonstrated that different populations were responsible for each (Ashida et al., 2007). Our findings establish a stronger case for the overlap of selection operations for perceptual and mnemonic information than prior work by identifying interchangeable patterns of activity in human visual, parietal, and to a lesser extent frontal cortices during attentional and WM selection. An important distinction between our work and existing MVPA studies comparing attention, perception, and WM is that those studies focused on overlap in perceptual and mnemonic representations (Serences et al., 2009; Jerde et al., 2012), as opposed to the operations that facilitate the use of these representations to guide behavior. In addition, our results represent an advance over previous studies that found task-generalized decoding only in prefrontal cortex (Panichello and Buschman, 2021) or not at all (Tamber-Rosenau et al., 2011) by providing evidence for a generalized selection mechanism that spans multiple cortical regions. An important goal for future studies will be to identify the specific contexts that engage this shared mechanism.

Another key theoretical advance of our work lies in identifying a putative mechanism underlying selection in attention and WM: enhancement of spatiotopic population-level responses at the selected location (i.e., gain enhancement). Enhancement of sensory responses during attentional selection is a well-established finding at the population level (Corbetta et al., 1990; Mangun et al., 1993; Sprague and Serences, 2013), most likely reflecting a multiplicative scaling of neuronal responses within the attended receptive field (Connor et al., 1997; McAdams and Maunsell, 1999; Treue and Martínez Trujillo, 1999; Williford and Maunsell, 2006) that drives preferential processing of perceptual information. In contrast, there is comparatively little evidence for a role of gain enhancement in WM selection beyond qualitative similarities between WM-induced modulations of visual processing and the enhancement of sensory processing during attentional selection (Awh et al., 2000; Sreenivasan and Jha, 2007; Sreenivasan et al., 2007; Merrikhi et al., 2017). Our findings suggest that the vast literature on multiplicative gain can be leveraged to better understand how we select information from within WM. According to theory, increases in neural gain enhance the signal-to-noise ratio, and therefore the precision, of neural representations (Zemel et al., 1998; Ma et al., 2006). Thus, the gain increases we observed in association with selection might control which items are prioritized in WM. Notably, while gain enhancement is the most plausible mechanism based on our simulation results (Fig. 4), we do not claim to have conclusively ruled out all other mechanisms of selection. Importantly, our results constrain theory by demonstrating that any plausible mechanism would need to produce equivalent relative activity modulation at the population level.

Do our decoding results reflect the selection process itself or the outcome of the selection process (i.e., the consequence of having selected a particular location; Myers et al., 2017)? We considered this question in two ways. First, we examined the time course of location decoding to distinguish between selection and memory for the selected information. We found that the strongest decoding of location was time-locked to the selection events in both tasks, with decoding accuracy quickly falling to chance once the location information was no longer relevant. In contrast, orientation decoding peaked after selection and remained above chance for the entire memory delay. This pattern of findings indicates a transient process by which the task-relevant location was selected, followed by prolonged maintenance of the target's orientation. Thus, location decoding in our data likely represents the selection process itself. The time-limited representation of selection that we observed may help explain important discrepancies in the behavioral literature: some studies find dual-task costs between attention and selection in WM (Janczyk and Berryhill, 2014; Lin et al., 2021) and others do not (Hollingworth and Maxcey-Richard, 2013; Rerko et al., 2014; Makovski and Pertzov, 2015). We argue that studies that have failed to observe interference generally assume that attention is continuously applied to maintain selection in WM (Hollingworth and Maxcey-Richard, 2013), whereas those that find interference generally place the secondary task temporally near the selection cue (Janczyk and Berryhill, 2014). Further elucidation of the temporal profile of selection in WM is an important area for future investigation.

Second, although our study was not specifically designed to distinguish the sources controlling selection from the effects of selection, our findings intriguingly point to potentially distinct roles for visual and association cortex. In visual cortex, not only could we decode the selected location, but at later time points we could decode the target orientation held in memory, suggesting a role in WM storage (Curtis and D'Esposito, 2003; Serences et al., 2009; Sreenivasan et al., 2014), potentially as a consequence of receiving top-down selection signals (Sprague et al., 2016; Rahmati et al., 2018; Yoo et al., 2022). In parietal and frontal cortex, on the other hand, we observed robust decoding of the selected location, whereas decoding of the memorized orientation was inconsistent across the delay and across ROIs (Fig. 2B), consistent with the idea that gain enhancement in topographically organized regions of parietal and frontal cortex reflects the sources of the top-down signals controlling which locations are selected. While we cannot completely rule out the possibility that unsuccessful orientation decoding was because of larger RF sizes or increased spatial heterogeneity in the representation of features in these regions (for a successful demonstration of orientation decoding in frontoparietal cortices, see Ester et al., 2015), our findings are reminiscent of the role of frontoparietal cortex in orienting attention and prioritization (Corbetta and Shulman, 2002; Serences and Yantis, 2006; Jerde et al., 2012), gating (Frank et al., 2001; Chatham et al., 2014), and spatial cognition more broadly (Heilman et al., 1985; Mesulam, 1999; Corbetta et al., 2000; Vandenberghe et al., 2001; Yantis et al., 2002; Srimal and Curtis, 2008; Szczepanski et al., 2010; Mackey et al., 2016). It is worth noting that a previous study that failed to find a common multivoxel pattern across attentional and WM selection used different features (location and object) for selection across tasks (Tamber-Rosenau et al., 2011). Given that the neural substrates of selection are sensitive to differences in the medium of selection (Giesbrecht et al., 2003), conclusions about a common selection mechanism ought to be drawn from comparisons that rigorously match the attention and WM tasks. Here, we compared tasks using the same selection medium (location), equating relevant features as well as behavioral performance. Consistent with the idea that WM selection relies on internally directed shifts of attention that highlight task-relevant information, our results suggest that a common mechanism underlies selection during attention and WM.

Footnotes

  • This work was supported by National Eye Institute (NEI) Grants R01 EY-016407 (to C.E.C.) and R01 EY-027925 (to C.E.C.), the Abu Dhabi Department of Education and Knowledge (ADEK) Abu Dhabi Award for Research Excellence (AARE) Grant AARE19-230 (to K.K.S.), and the New York University Abu Dhabi Research Enhancement Fund Grant RE177 (to D.F.). We thank New York University's Center for Brain Imaging for technical support.

  • The authors declare no competing financial interests.

  • Correspondence should be addressed to Daryl Fougnie at darylfougnie@nyu.edu or Kartik K. Sreenivasan at kartik.sreenivasan@nyu.edu

SfN exclusive license.

References

  1. Arco JE, González-García C, Díaz-Gutiérrez P, Ramírez J, Ruz M (2018) Influence of activation pattern estimates and statistical significance tests in fMRI decoding analysis. J Neurosci Methods 308:248–260. doi:10.1016/j.jneumeth.2018.06.017 pmid:30352691
  2. Ashida H, Lingnau A, Wall MB, Smith AT (2007) FMRI adaptation reveals separate mechanisms for first-order and second-order motion. J Neurophysiol 97:1319–1325. doi:10.1152/jn.00723.2006 pmid:17065251
  3. Awh E, Jonides J (2001) Overlapping mechanisms of attention and spatial working memory. Trends Cogn Sci 5:119–126. doi:10.1016/S1364-6613(00)01593-X pmid:11239812
  4. Awh E, Jonides J, Smith EE, Buxton RB, Frank LR, Love T, Wong EC, Gmeindl L (1999) Rehearsal in spatial working memory: evidence from neuroimaging. Psychol Sci 10:433–437. doi:10.1111/1467-9280.00182
  5. Awh E, Anllo-Vento L, Hillyard SA (2000) The role of spatial selective attention in working memory for locations: evidence from event-related potentials. J Cogn Neurosci 12:840–847. doi:10.1162/089892900562444 pmid:11054925
  6. Bahle B, Thayer DD, Mordkoff JT, Hollingworth A (2020) The architecture of working memory: features from multiple remembered objects produce parallel, coactive guidance of attention in visual search. J Exp Psychol Gen 149:967–983. doi:10.1037/xge0000694 pmid:31589068
  7. Bisley JW, Goldberg ME (2003) Neuronal activity in the lateral intraparietal area and spatial attention. Science 299:81–86. doi:10.1126/science.1077395 pmid:12511644
  8. Brainard DH (1997) The psychophysics toolbox. Spat Vis 10:433–436. pmid:9176952
  9. Bushnell MC, Goldberg ME, Robinson DL (1981) Behavioral enhancement of visual responses in monkey cerebral cortex. I. Modulation in posterior parietal cortex related to selective visual attention. J Neurophysiol 46:755–772. doi:10.1152/jn.1981.46.4.755 pmid:7288463
  10. Carrasco M (2011) Visual attention: the past 25 years. Vision Res 51:1484–1525. doi:10.1016/j.visres.2011.04.012 pmid:21549742
  11. Carrasco M, Penpeci-Talgar C, Eckstein M (2000) Spatial covert attention increases contrast sensitivity across the CSF: support for signal enhancement. Vision Res 40:1203–1215. doi:10.1016/S0042-6989(00)00024-9 pmid:10788636
  12. Chatham CH, Frank MJ, Badre D (2014) Corticostriatal output gating during selection from working memory. Neuron 81:930–942. doi:10.1016/j.neuron.2014.01.002 pmid:24559680
  13. Chun MM (2011) Visual working memory as visual attention sustained internally over time. Neuropsychologia 49:1407–1409. doi:10.1016/j.neuropsychologia.2011.01.029 pmid:21295047
  14. Chun MM, Golomb JD, Turk-Browne NB (2011) A taxonomy of external and internal attention. Annu Rev Psychol 62:73–101. doi:10.1146/annurev.psych.093008.100427 pmid:19575619
  15. Connor CE, Preddie DC, Gallant JL, Van Essen DC (1997) Spatial attention effects in macaque area V4. J Neurosci 17:3201–3214. pmid:9096154
  16. Corbetta M, Shulman GL (2002) Control of goal-directed and stimulus-driven attention in the brain. Nat Rev Neurosci 3:201–215. doi:10.1038/nrn755 pmid:11994752
  17. Corbetta M, Miezin FM, Dobmeyer S, Shulman GL, Petersen SE (1990) Attentional modulation of neural processing of shape, color, and velocity in humans. Science 248:1556–1559. doi:10.1126/science.2360050 pmid:2360050
  18. Corbetta M, Kincade JM, Ollinger JM, McAvoy MP, Shulman GL (2000) Voluntary orienting is dissociated from target detection in human posterior parietal cortex. Nat Neurosci 3:292–297. doi:10.1038/73009 pmid:10700263
  19. Cowan N (1998a) Attention and memory: an integrated framework. Oxford: Oxford University Press.
  20. Cowan N (1998b) Visual and auditory working memory capacity. Trends Cogn Sci 2:77. doi:10.1016/S1364-6613(98)01144-9 pmid:21227076
  21. Curtis CE, D'Esposito M (2003) Persistent activity in the prefrontal cortex during working memory. Trends Cogn Sci 7:415–423. doi:10.1016/S1364-6613(03)00197-9 pmid:12963473
  22. Eriksen CW, Yeh YY (1985) Allocation of attention in the visual field. J Exp Psychol Hum Percept Perform 11:583–597. doi:10.1037//0096-1523.11.5.583 pmid:2932532
  23. Ester EF, Sprague TC, Serences JT (2015) Parietal and frontal cortex encode stimulus-specific mnemonic representations during visual working memory. Neuron 87:893–905. doi:10.1016/j.neuron.2015.07.013 pmid:26257053
  24. Ester EF, Nouri A, Rodriguez L (2018) Retrospective cues mitigate information loss in human cortex during working memory storage. J Neurosci 38:8538–8548. doi:10.1523/JNEUROSCI.1566-18.2018 pmid:30126971
  25. Fougnie D, Marois R (2006) Distinct capacity limits for attention and working memory: evidence from attentive tracking and visual working memory paradigms. Psychol Sci 17:526–534. doi:10.1111/j.1467-9280.2006.01739.x pmid:16771804
  26. Frank MJ, Loughry B, O'Reilly RC (2001) Interactions between frontal cortex and basal ganglia in working memory: a computational model. Cogn Affect Behav Neurosci 1:137–160. doi:10.3758/cabn.1.2.137 pmid:12467110
  27. Freeman J, Brouwer GJ, Heeger DJ, Merriam EP (2011) Orientation decoding depends on maps, not columns. J Neurosci 31:4792–4804. doi:10.1523/JNEUROSCI.5160-10.2011 pmid:21451017
  28. Gandhi SP, Heeger DJ, Boynton GM (1999) Spatial attention affects brain activity in human primary visual cortex. Proc Natl Acad Sci U S A 96:3314–3319. doi:10.1073/pnas.96.6.3314 pmid:10077681
  29. Gayet S, Paffen CLE, Van der Stigchel S (2013) Information matching the content of visual working memory is prioritized for conscious access. Psychol Sci 24:2472–2480. doi:10.1177/0956797613495882 pmid:24121415
  30. Gazzaley A, Nobre AC (2012) Top-down modulation: bridging selective attention and working memory. Trends Cogn Sci 16:129–135. doi:10.1016/j.tics.2011.11.014 pmid:22209601
  31. Gazzaley A, Cooney JW, Rissman J, D'Esposito M (2005) Top-down suppression deficit underlies working memory impairment in normal aging. Nat Neurosci 8:1298–1300. doi:10.1038/nn1543 pmid:16158065
  32. Giesbrecht B, Woldorff MG, Song AW, Mangun GR (2003) Neural mechanisms of top-down control during spatial and feature attention. Neuroimage 19:496–512. doi:10.1016/S1053-8119(03)00162-9
  33. Griffin IC, Nobre AC (2003) Orienting attention to locations in internal representations. J Cogn Neurosci 15:1176–1194. doi:10.1162/089892903322598139 pmid:14709235
  34. Hedge C, Leonards U (2013) Using eye movements to explore switch costs in working memory. J Vis 13:18. doi:10.1167/13.4.18 pmid:23525134
  35. Heilman KM, Bowers D, Coslett HB, Whelan H, Watson RT (1985) Directional hypokinesia: prolonged reaction times for leftward movements in patients with right hemisphere lesions and neglect. Neurology 35:855–859. doi:10.1212/wnl.35.6.855 pmid:4000486
  36. Herrmann K, Montaser-Kouhsari L, Carrasco M, Heeger DJ (2010) When size matters: attention affects performance by contrast or response gain. Nat Neurosci 13:1554–1559. doi:10.1038/nn.2669 pmid:21057509
  37. Hollingworth A, Maxcey-Richard AM (2013) Selective maintenance in visual working memory does not require sustained visual attention. J Exp Psychol Hum Percept Perform 39:1047–1058. doi:10.1037/a0030238 pmid:23067118
  38. Hopfinger JB, Woldorff MG, Fletcher EM, Mangun GR (2001) Dissociating top-down attentional control from selective perception and action. Neuropsychologia 39:1277–1291. doi:10.1016/S0028-3932(01)00117-8 pmid:11566311
  39. Ikkai A, Curtis CE (2008) Cortical activity time locked to the shift and maintenance of spatial attention. Cereb Cortex 18:1384–1394. doi:10.1093/cercor/bhm171 pmid:17921456
  40. Ikkai A, Curtis CE (2011) Common neural mechanisms supporting spatial working memory, attention and motor intention. Neuropsychologia 49:1428–1434. doi:10.1016/j.neuropsychologia.2010.12.020 pmid:21182852
  41. Janczyk M, Berryhill ME (2014) Orienting attention in visual working memory requires central capacity: decreased retro-cue effects under dual-task conditions. Atten Percept Psychophys 76:715–724. doi:10.3758/s13414-013-0615-x pmid:24452383
  42. Jerde TA, Merriam EP, Riggall AC, Hedges JH, Curtis CE (2012) Prioritized maps of space in human frontoparietal cortex. J Neurosci 32:17382–17390. doi:10.1523/JNEUROSCI.3810-12.2012 pmid:23197729
  43. Jha AP (2002) Tracking the time-course of attentional involvement in spatial working memory: an event-related potential investigation. Brain Res Cogn Brain Res 15:61–69. doi:10.1016/S0926-6410(02)00216-1 pmid:12506934
  44. Katus T, Andersen SK, Müller MM (2014) Common mechanisms of spatial attention in memory and perception: a tactile dual-task study. Cereb Cortex 24:707–718. doi:10.1093/cercor/bhs350 pmid:23172773
  45. Kay KN, Winawer J, Mezer A, Wandell BA (2013) Compressive spatial summation in human visual cortex. J Neurophysiol 110:481–494. doi:10.1152/jn.00105.2013 pmid:23615546
  46. Kiyonaga A, Egner T (2013) Working memory as internal attention: toward an integrative account of internal and external selection processes. Psychon Bull Rev 20:228–242. doi:10.3758/s13423-012-0359-y pmid:23233157
  47. Kok P, de Lange FP (2014) Shape perception simultaneously up- and downregulates neural activity in the primary visual cortex. Curr Biol 24:1531–1535. doi:10.1016/j.cub.2014.05.042 pmid:24980501
  48. Krishnapuram B, Carin L, Figueiredo MAT, Hartemink AJ (2005) Sparse multinomial logistic regression: fast algorithms and generalization bounds. IEEE Trans Pattern Anal Mach Intell 27:957–968. doi:10.1109/TPAMI.2005.127 pmid:15943426
  49. Kuo BC, Rao A, Lepsien J, Nobre AC (2009) Searching for targets within the spatial layout of visual short-term memory. J Neurosci 29:8032–8038. doi:10.1523/JNEUROSCI.0952-09.2009 pmid:19553443
  50. Kwak Y, Curtis CE (2022) Unveiling the abstract format of mnemonic representations. Neuron 110:1822–1828.e5.
  51. LaBar KS, Gitelman DR, Parrish TB, Mesulam M (1999) Neuroanatomic overlap of working memory and spatial attention networks: a functional MRI comparison within subjects. Neuroimage 10:695–704. doi:10.1006/nimg.1999.0503 pmid:10600415
  52. Landman R, Spekreijse H, Lamme VAF (2003) Large capacity storage of integrated objects before change blindness. Vision Res 43:149–164. doi:10.1016/S0042-6989(02)00402-9 pmid:12536137
  53. Lepsien J, Thornton I, Nobre AC (2011) Modulation of working-memory maintenance by directed attention. Neuropsychologia 49:1569–1577. doi:10.1016/j.neuropsychologia.2011.03.011 pmid:21420421
  54. Levitt H (1971) Transformed up-down methods in psychoacoustics. J Acoust Soc Am 49:467–477. doi:10.1121/1.1912375
  55. Li Z, Tong M, Chen S, Qian J (2021) Effect of attentional selection on working memory for depth in a retro-cueing paradigm. Mem Cognit 49:747–757. doi:10.3758/s13421-020-01123-4 pmid:33415712
  56. Lin YT, Sasin E, Fougnie D (2021) Selection in working memory is resource-demanding: concurrent task effects on the retro-cue effect. Atten Percept Psychophys 83:1600–1612. doi:10.3758/s13414-020-02239-0 pmid:33608857
  57. Liu T, Pestilli F, Carrasco M (2005) Transient attention enhances perceptual performance and FMRI response in human visual cortex. Neuron 45:469–477. doi:10.1016/j.neuron.2004.12.039 pmid:15694332
  58. Luck SJ, Chelazzi L, Hillyard SA, Desimone R (1997) Neural mechanisms of spatial selective attention in areas V1, V2, and V4 of macaque visual cortex. J Neurophysiol 77:24–42. doi:10.1152/jn.1997.77.1.24 pmid:9120566
  59. Ma WJ, Beck JM, Latham PE, Pouget A (2006) Bayesian inference with probabilistic population codes. Nat Neurosci 9:1432–1438. doi:10.1038/nn1790 pmid:17057707
  60. Mackey WE, Devinsky O, Doyle WK, Golfinos JG, Curtis CE (2016) Human parietal cortex lesions impact the precision of spatial working memory. J Neurophysiol 116:1049–1054. doi:10.1152/jn.00380.2016 pmid:27306678
  61. Mackey WE, Winawer J, Curtis CE (2017) Visual field map clusters in human frontoparietal cortex. Elife 6:e22974. doi:10.7554/eLife.22974
  62. Makovski T, Pertzov Y (2015) Attention and memory protection: interactions between retrospective attention cueing and interference. Q J Exp Psychol (Hove) 68:1735–1743. doi:10.1080/17470218.2015.1049623 pmid:25980784
  63. Mangun GR, Hillyard SA, Luck SJ (1993) Electrocortical substrates of visual selective attention. In: Attention and performance XIV: synergies in experimental psychology, artificial intelligence, and cognitive neuroscience, pp 219–243. Cambridge: The MIT Press.
  64. Marois R, Ivanoff J (2005) Capacity limits of information processing in the brain. Trends Cogn Sci 9:296–305. doi:10.1016/j.tics.2005.04.010 pmid:15925809
  65. McAdams CJ, Maunsell JH (1999) Effects of attention on orientation-tuning functions of single neurons in macaque cortical area V4. J Neurosci 19:431–441. doi:10.1523/JNEUROSCI.19-01-00431.1999
  66. Merrikhi Y, Clark K, Albarran E, Parsa M, Zirnsak M, Moore T, Noudoost B (2017) Spatial working memory alters the efficacy of input to visual cortex. Nat Commun 8:15041. doi:10.1038/ncomms15041 pmid:28447609
  67. Mesulam MM (1999) Spatial attention and neglect: parietal, frontal and cingulate contributions to the mental representation and attentional targeting of salient extrapersonal events. Philos Trans R Soc Lond B Biol Sci 354:1325–1346. doi:10.1098/rstb.1999.0482 pmid:10466154
  68. Mumford JA, Turner BO, Ashby FG, Poldrack RA (2012) Deconvolving BOLD activation in event-related designs for multivoxel pattern classification analyses. Neuroimage 59:2636–2643. doi:10.1016/j.neuroimage.2011.08.076 pmid:21924359
  69. Murphy TD, Eriksen CW (1987) Temporal changes in the distribution of attention in the visual field in response to precues. Percept Psychophys 42:576–586. doi:10.3758/bf03207989 pmid:3696952
  70. Myers NE, Stokes MG, Nobre AC (2017) Prioritizing information during working memory: beyond sustained internal attention. Trends Cogn Sci 21:449–461. doi:10.1016/j.tics.2017.03.010 pmid:28454719
  71. Naselaris T, Kay KN, Nishimoto S, Gallant JL (2011) Encoding and decoding in fMRI. Neuroimage 56:400–410. doi:10.1016/j.neuroimage.2010.07.073 pmid:20691790
  72. Nishida S, Sasaki Y, Murakami I, Watanabe T, Tootell RB (2003) Neuroimaging of direction-selective mechanisms for second-order motion. J Neurophysiol 90:3242–3254. doi:10.1152/jn.00693.2003 pmid:12917391
  73. Nobre AC, Coull JT, Maquet P, Frith CD, Vandenberghe R, Mesulam MM (2004) Orienting attention to locations in perceptual versus mental representations. J Cogn Neurosci 16:363–373. doi:10.1162/089892904322926700 pmid:15072672
  74. Oberauer K (2002) Access to information in working memory: exploring the focus of attention. J Exp Psychol Learn Mem Cogn 28:411–421. pmid:12018494
  75. Olivers CNL, Meijer F, Theeuwes J (2006) Feature-based memory-driven attentional capture: visual working memory content affects visual attention. J Exp Psychol Hum Percept Perform 32:1243–1265. doi:10.1037/0096-1523.32.5.1243 pmid:17002535
  76. Panichello MF, Buschman TJ (2021) Shared mechanisms underlie the control of working memory and attention. Nature 592:601–605. doi:10.1038/s41586-021-03390-w pmid:33790467
  77. Pereira F, Mitchell T, Botvinick M (2009) Machine learning classifiers and fMRI: a tutorial overview. Neuroimage 45:S199–S209. doi:10.1016/j.neuroimage.2008.11.007 pmid:19070668
  78. Pestilli F, Carrasco M (2005) Attention enhances contrast sensitivity at cued and impairs it at uncued locations. Vision Res 45:1867–1875. doi:10.1016/j.visres.2005.01.019 pmid:15797776
  79. Pestilli F, Viera G, Carrasco M (2007) How do attention and adaptation affect contrast sensitivity? J Vis 7:9.1–12. doi:10.1167/7.7.9 pmid:17685805
  80. Pestilli F, Ling S, Carrasco M (2009) A population-coding model of attention's influence on contrast response: estimating neural effects from psychophysical data. Vision Res 49:1144–1153. doi:10.1016/j.visres.2008.09.018 pmid:18926845
  81. Pollmann S, von Cramon DY (2000) Object working memory and visuospatial processing: functional neuroanatomy analyzed by event-related fMRI. Exp Brain Res 133:12–22. doi:10.1007/s002210000396 pmid:10933206
  82. Posner MI (1980) Orienting of attention. Q J Exp Psychol 32:3–25. doi:10.1080/00335558008248231 pmid:7367577
  83. Postle BR, Awh E, Jonides J, Smith EE, D'Esposito M (2004) The where and how of attention-based rehearsal in spatial working memory. Brain Res Cogn Brain Res 20:194–205. doi:10.1016/j.cogbrainres.2004.02.008 pmid:15183391
  84. Rademaker RL, Chunharas C, Serences JT (2019) Coexisting representations of sensory and mnemonic information in human visual cortex. Nat Neurosci 22:1336–1344. doi:10.1038/s41593-019-0428-x
  85. Rahmati M, Saber GT, Curtis CE (2018) Population dynamics of early visual cortex during working memory. J Cogn Neurosci 30:219–233. doi:10.1162/jocn_a_01196 pmid:28984524
  86. Ranganath C, D'Esposito M (2005) Directing the mind's eye: prefrontal, inferior and medial temporal mechanisms for visual working memory. Curr Opin Neurobiol 15:175–182. doi:10.1016/j.conb.2005.03.017 pmid:15831399
  87. Rerko L, Souza AS, Oberauer K (2014) Retro-cue benefits in working memory without sustained focal attention. Mem Cognit 42:712–728. doi:10.3758/s13421-013-0392-8 pmid:24442551
  88. Reynolds JH, Heeger DJ (2009) The normalization model of attention. Neuron 61:168–185. doi:10.1016/j.neuron.2009.01.002 pmid:19186161
  89. Reynolds JH, Pasternak T, Desimone R (2000) Attention increases sensitivity of V4 neurons. Neuron 26:703–714. doi:10.1016/S0896-6273(00)81206-4 pmid:10896165
  90. Sasin E, Fougnie D (2020) Memory-driven capture occurs for individual features of an object. Sci Rep 10:19499. pmid:33177574
  91. Serences JT, Boynton GM (2007) Feature-based attentional modulations in the absence of direct visual stimulation. Neuron 55:301–312. doi:10.1016/j.neuron.2007.06.015 pmid:17640530
  92. Serences JT, Saproo S (2012) Computational advances towards linking BOLD and behavior. Neuropsychologia 50:435–446. doi:10.1016/j.neuropsychologia.2011.07.013 pmid:21840553
  93. Serences JT, Yantis S (2006) Selective visual attention and perceptual coherence. Trends Cogn Sci 10:38–45. doi:10.1016/j.tics.2005.11.008 pmid:16318922
  94. Serences JT, Ester EF, Vogel EK, Awh E (2009) Stimulus-specific delay activity in human primary visual cortex. Psychol Sci 20:207–214. doi:10.1111/j.1467-9280.2009.02276.x pmid:19170936
  95. Shen M, Huang X, Gao Z (2015) Object-based attention underlies the rehearsal of feature binding in visual working memory. J Exp Psychol Hum Percept Perform 41:479–493. doi:10.1037/xhp0000018 pmid:25602968
  96. Silver MA, Ress D, Heeger DJ (2007) Neural correlates of sustained spatial attention in human early visual cortex. J Neurophysiol 97:229–237. doi:10.1152/jn.00677.2006 pmid:16971677
  97. Souza AS, Rerko L, Oberauer K (2016) Getting more from visual working memory: retro-cues enhance retrieval and protect from visual interference. J Exp Psychol Hum Percept Perform 42:890–910. doi:10.1037/xhp0000192 pmid:26752731
  98. Souza AS, Czoschke S, Lange EB (2020) Gaze-based and attention-based rehearsal in spatial working memory. J Exp Psychol Learn Mem Cogn 46:980–1003. doi:10.1037/xlm0000771 pmid:31580120
  99. Sperling G (1960) The information available in brief visual presentations. Psychol Monogr 74:1–29. doi:10.1037/h0093759
  100. Sprague TC, Serences JT (2013) Attention modulates spatial priority maps in the human occipital, parietal and frontal cortices. Nat Neurosci 16:1879–1887. doi:10.1038/nn.3574 pmid:24212672
  101. Sprague TC, Ester EF, Serences JT (2016) Restoring latent visual working memory representations in human cortex. Neuron 91:694–707. doi:10.1016/j.neuron.2016.07.006 pmid:27497224
  102. Sreenivasan KK, Jha AP (2007) Selective attention supports working memory maintenance by modulating perceptual processing of distractors. J Cogn Neurosci 19:32–41. doi:10.1162/jocn.2007.19.1.32 pmid:17214561
  103. Sreenivasan KK, Katz J, Jha AP (2007) Temporal characteristics of top-down modulations during working memory maintenance: an event-related potential study of the N170 component. J Cogn Neurosci 19:1836–1844. doi:10.1162/jocn.2007.19.11.1836 pmid:17958486
  104. Sreenivasan KK, Curtis CE, D'Esposito M (2014) Revisiting the role of persistent neural activity during working memory. Trends Cogn Sci 18:82–89. doi:10.1016/j.tics.2013.12.001 pmid:24439529
  105. Srimal R, Curtis CE (2008) Persistent neural activity during the maintenance of spatial position in working memory. Neuroimage 39:455–468. doi:10.1016/j.neuroimage.2007.08.040 pmid:17920934
  106. Szczepanski SM, Konen CS, Kastner S (2010) Mechanisms of spatial attention control in frontal and parietal cortex. J Neurosci 30:148–160. doi:10.1523/JNEUROSCI.3862-09.2010 pmid:20053897
  107. Tamber-Rosenau BJ, Esterman M, Chiu YC, Yantis S (2011) Cortical mechanisms of cognitive control for shifting attention in vision and working memory. J Cogn Neurosci 23:2905–2919. doi:10.1162/jocn.2011.21608 pmid:21291314
  108. Theeuwes J, Olivers CNL, Chizk CL (2005) Remembering a location makes the eyes curve away. Psychol Sci 16:196–199. doi:10.1111/j.0956-7976.2005.00803.x pmid:15733199
  109. Theeuwes J, Belopolsky A, Olivers CNL (2009) Interactions between working memory, attention and eye movements. Acta Psychol (Amst) 132:106–114. doi:10.1016/j.actpsy.2009.01.005 pmid:19233340
  110. Treue S, Martínez Trujillo JC (1999) Feature-based attention influences motion processing gain in macaque visual cortex. Nature 399:575–579. doi:10.1038/21176 pmid:10376597
  111. Vandenberghe R, Gitelman DR, Parrish TB, Mesulam MM (2001) Functional specificity of superior parietal mediation of spatial shifting. Neuroimage 14:661–673. doi:10.1006/nimg.2001.0860 pmid:11506539
  112. Vo VA, Sprague TC, Serences JT (2017) Spatial tuning shifts increase the discriminability and fidelity of population codes in visual cortex. J Neurosci 37:3386–3401. doi:10.1523/JNEUROSCI.3484-16.2017 pmid:28242794
  113. Wandell BA, Brewer AA, Dougherty RF (2005) Visual field map clusters in human cortex. Philos Trans R Soc Lond B Biol Sci 360:693–707. doi:10.1098/rstb.2005.1628 pmid:15937008
  114. Wandell BA, Dumoulin SO, Brewer AA (2007) Visual field maps in human cortex. Neuron 56:366–383. doi:10.1016/j.neuron.2007.10.012 pmid:17964252
  115. Williams JR, Brady TF, Störmer VS (2022) Guidance of attention by working memory is a matter of representational fidelity. J Exp Psychol Hum Percept Perform 48:202–231. doi:10.1037/xhp0000985
  116. Williford T, Maunsell JHR (2006) Effects of spatial attention on contrast response functions in macaque area V4. J Neurophysiol 96:40–54. doi:10.1152/jn.01207.2005 pmid:16772516
  117. Woodman GF, Luck SJ (2007) Do the contents of visual working memory automatically influence attentional selection during visual search? J Exp Psychol Hum Percept Perform 33:363–377. doi:10.1037/0096-1523.33.2.363 pmid:17469973
  118. Yantis S, Schwarzbach J, Serences JT, Carlson RL, Steinmetz MA, Pekar JJ, Courtney SM (2002) Transient neural activity in human parietal cortex during spatial attention shifts. Nat Neurosci 5:995–1002. doi:10.1038/nn921 pmid:12219097
  119. Yoo AH, Bolaños A, Hallenbeck GE, Rahmati M, Sprague TC, Curtis CE (2022) Behavioral prioritization enhances working memory precision and neural population gain. J Cogn Neurosci 34:365–379. doi:10.1162/jocn_a_01804 pmid:34942647
  120. Zemel RS, Dayan P, Pouget A (1998) Probabilistic interpretation of population codes. Neural Comput 10:403–430. doi:10.1162/089976698300017818 pmid:9472488

Keywords

  • attention
  • decoding
  • fMRI
  • selection
  • working memory
