Journal of Neuroscience
Featured Article | Research Articles, Behavioral/Cognitive

Hierarchical Organization of Human Visual Feature Attention Control

Sreenivasan Meyyappan, Mingzhou Ding and George R. Mangun
Journal of Neuroscience 24 September 2025, 45 (39) e2073242025; https://doi.org/10.1523/JNEUROSCI.2073-24.2025
Sreenivasan Meyyappan
1 Center for Mind and Brain, University of California, Davis, California 95618

Mingzhou Ding
2 J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, Florida 32611

George R. Mangun
1 Center for Mind and Brain, University of California, Davis, California 95618
3 Department of Psychology and Neurology, University of California, Davis, California 95616

Abstract

Attention can be deployed in advance of visual stimuli based on features such as color or direction of motion. This anticipatory feature-based attention involves top-down neural control signals from the frontoparietal network that bias visual cortex to enhance attended information and suppress distraction. For example, anticipatory attention control can enable effective selection based on stimulus color while ignoring distracting information about stimulus motion. Anticipatory attention can also be focused more narrowly, for example, to select specific colors or motion directions that define task-relevant aspects of the stimuli. One important question that remains open is whether anticipatory attention control first biases broad feature dimensions such as color versus motion before biasing the specific feature attributes (e.g., blue vs green). To investigate this, we recorded EEG activity during a task where human participants of either sex were cued to either attend to a motion direction (up or down) or a color (blue or green) on a trial-by-trial basis. Applying multivariate decoding approaches to the EEG alpha band activity (8–12 Hz) during attention control (cue-target interval), we observed significant decoding for both the attended dimensions (motion vs color) and specific feature attributes (up vs down; blue vs green). Importantly, the temporal onset of the dimension-level biasing (motion vs color) preceded that of the attribute-level biasing (up vs down and blue vs green). These findings demonstrate that the top-down control of feature-based attention proceeds in a hierarchical fashion, first biasing the broad feature dimension, and then narrowing to the specific feature attribute.

  • attention control
  • EEG
  • feature-based attention
  • MVPA
  • vision

Significance Statement

During voluntary feature-based attention, electrophysiological and neuroimaging studies have highlighted the role of anticipatory (top-down) biasing of the sensory cortex in enhancing the selection of attended stimulus attributes, but little is known about how this is achieved. In particular, it is not clear whether attending to an attribute such as a color (blue vs green) or motion direction (up vs down) first biases all neural structures coding that dimension (color/motion) before biasing the specific attribute or if the top-down signals directly bias only the attended attribute. Using EEG and multivariate decoding, we report that top-down attention control follows a hierarchical organization: first, the broader attended feature dimension is biased, which is followed by the biasing of the specific feature attribute.

Introduction

Attention can be voluntarily deployed in anticipation of stimuli based on their location (spatial attention), features (feature-based attention), and object properties (object-based attention). Functional neuroimaging studies have shown that in the period following an attention-directing cue, but prior to the presentation of the target stimulus (the cue-target interval or anticipatory period), the dorsal attention network (DAN), principally comprising the intraparietal sulcus (IPS), frontal eye fields (FEF), and inferior frontal junction (IFJ), is involved in biasing sensory processing in favor of the attended information (Nobre et al., 1997, 2006; Kastner et al., 1999; Treue and Martínez Trujillo, 1999; Hopfinger et al., 2000; Saenz et al., 2002; Bisley and Goldberg, 2003; Giesbrecht et al., 2003; Kastner and Pinsk, 2004; Luks and Simpson, 2004; Woldorff et al., 2004; Fannon et al., 2007; Serences and Boynton, 2007; Szczepanski et al., 2013; Tamber-Rosenau et al., 2018; Meyyappan et al., 2021; Goddard et al., 2022; Gong et al., 2022). The temporal dynamics of sensory biasing by top-down control are, however, not well established, especially for feature-based attention. More specifically, it is not clear whether anticipatory attention control first biases broad feature dimensions such as color versus motion before biasing the specific feature attributes (e.g., blue vs green) or whether the specific to-be-attended feature attribute can be biased directly.

Insights can be gleaned from models of visual working memory. According to the dimensional feature bundle model (Töllner et al., 2015), the stimulus stored in working memory is represented by both the individual feature value and its feature dimension in a three-level hierarchy: the top level represents the stimulus identity, the second level represents the feature dimension such as color and shape, and the third level represents the specific feature attribute within each feature dimension. If, for example, the color of the remembered stimulus is to be retrieved from working memory, then in addition to the activation of the representation of the specific color, the representations of the entire color dimension are also activated in working memory (Niklaus et al., 2017).

We hypothesize that a similar hierarchical organization exists for the voluntary control of feature selective attention. That is, when instructed to attend a specific attribute within a feature dimension (e.g., attend-color blue, where color is the dimension and blue is the attribute), sensory enhancement of the feature dimension (e.g., color) will take place first (Stage I), followed by sensory enhancement of the specific attribute (e.g., blue; Stage II). Other feature dimensions (e.g., motion) will be suppressed in Stage I while the other attributes within the attended dimension (e.g., yellow) will be suppressed in Stage II. The alternative model is that sensory attributes are represented independently in the attention control networks and thus may be directly biased within visual cortex without following a hierarchical organization. In such a case, we would expect no temporal difference between the dimension-level selection (motion vs color) and the attribute-level selection (e.g., up vs down for motion or blue vs green for color).

Electroencephalograms (EEG) were recorded from participants performing a cued visual feature-based attention task. Participants were cued to attend to “blue” or “green” within the color dimension or to “up” or “down” within the motion dimension. Following a variable delay period, stimuli containing all these features were presented in two intermingled streams of moving dots, which moved in opposite directions and had different colors. The task was to report the size of the dots having the cued color or cued motion direction while ignoring all other stimuli. We analyzed the alpha band activity (8–12 Hz) using multivariate decoding approaches and generated the time courses of the decoding accuracy between feature dimensions (color vs motion) and between feature attributes (blue vs green; up vs down) within each feature dimension. By comparing the onset times of above-chance decoding, we assessed whether the stage of biasing dimensions preceded that of biasing individual attributes within each dimension.

Materials and Methods

Participants

The study was approved by the Institutional Review Board of the University of California, Davis. All participants were neurotypical volunteers with normal or corrected-to-normal vision and no history of color blindness or neurological disorders. They participated in the study after providing written informed consent.

Experimental design

Main experiment

EEG and eye-tracking data were recorded from 25 right-handed participants (20 female, 5 male; mean age, 25 years; range, 19–39 years). This sample size was chosen using power analyses based on our previously published work (Noah et al., 2020; Nadra et al., 2023).

The main stimuli involved two streams of centrally presented, spatially overlapping, and differently colored dots (blue or green) moving in opposite directions (up or down; Fig. 1A). In some trials, only one stream of dots appeared on the screen, as described in the next section. Within a stream (e.g., the up-moving stream), all the dots (n = 100) were of the same size (either large or small) and the same color (blue or green). However, between the two streams, the dots always had opposite colors and moved in opposite directions. For example, in a particular trial, if one stream of dots was green and moved upward, then the other stream of dots would be blue and move downward. The sizes of the dots across the two streams were pseudorandomly determined and independent of each other. In other words, in a trial, if the upward moving dots were large, then the dots in the downward moving stream could either be large or small. Given that the task was to discriminate whether the dots in the attended stream were large or small (see below), this design ensured that the participants had to attend to the cued feature attribute to perform the task.

Figure 1.

Experimental paradigms. A, Main experiment. An auditory cue directed participants to covertly attend either a color or a motion direction in order to detect the size of the dots (large vs small) in the attended stream. On 80% of trials, the cue was followed by a compound color-motion stimulus (dual-stream), such that task performance relied on using the cue information; the presence of the distractor (the un-cued attribute) provided competition that required attention. On 20% of trials, only the cued (valid) or un-cued (invalid) stimuli were presented (mono-stream) to provide behavioral measures of attention cueing (valid vs invalid). The different cue-target types were randomly interleaved within blocks. In the example shown, if the cue was "Up" or "Green," the correct button press on valid trials would be the one indicating that the dots were large. B, Control experiment. Participants were instructed to detect the size of the target circle (large vs small) when it appeared. Auditory cues, identical to the ones used in the main experiment, served as a warning signal to indicate the start of a trial. Following an anticipation period (cue-to-target) varying from 1,250 to 1,750 ms, a white circle with a radius of either 2.5 or 5° (equally probable) appeared, and the participants reported the size of the circle. The cues were equally probable and randomly interleaved within blocks but irrelevant to the size discrimination task.

Participants performed 15 runs of the cued feature-based attention task, each run lasting 3 min. They were seated in an electrically shielded, sound-attenuating chamber (ETS-Lindgren). As shown in Figure 1A, a fixation point was placed at the center of the monitor, and the participants were instructed to maintain fixation on the fixation dot during the experiment. At the beginning of each trial, one of the four auditory cues was presented (attend-motion cues: spoken words "up" or "down"; attend-color cues: spoken words "blue" or "green"), instructing the participants to direct attention to a stream of moving dots based on either their direction of motion (up or down) or their color (blue or green). From trial to trial, cues were pseudorandomly determined such that the number of trials per condition was identical, but their order of presentation in a block was fully randomized. The mean duration of the four auditory cues was 530 ± 34 ms. Targets appeared after a random delay (cue-target onset asynchrony interval) of 1,250–1,750 ms (time locked to cue onset). On 80% of the trials, referred to as the dual-stream condition, two streams of oppositely moving and differently colored dots appeared for 250 ms. The participants were instructed to discriminate the size of the dots possessing the cued feature attribute (target), while ignoring the size of dots of the un-cued attributes (distractor), and press a button to indicate whether they were large or small.

In the remaining 20% of trials, referred to as the mono-stream condition, the stimuli consisted of only one stream of dots. In these trials, the dots either had only the cued color or motion direction (valid mono-stream; 10% of total trials), with the distractor stream absent, or only the un-cued color or motion direction (invalid mono-stream; 10% of total trials). The task during the valid mono-stream trials stayed the same as in the dual-stream trials, namely, the participants had to discriminate the size of the dots. For invalid mono-stream trials, when no stimulus with the cued feature attribute was present, the participants were again required to press a button to report the size of the dots. For example, if the participants were cued "up" but only downward moving dots appeared, the participants were instructed to shift their attention to the un-cued stream and report the size of the dots (e.g., downward moving dots). The mono-stream conditions (valid and invalid) were included to obtain behavioral measures of the benefits of attentional cueing. Analogous to the behavioral effect of cue validity observed in standard spatial attention cueing paradigms (Posner et al., 1980), we predicted that participants would be slower to respond on invalidly cued mono-stream trials, namely, RTmono-invalid > RTmono-valid.

A pseudorandomly distributed intertrial interval (ITI) between 2,000 and 3,000 ms separated target offset from the cue onset of the next trial. A training session preceded the main experiment to familiarize the participants with the task.

Control experiment

In the main experiment, MVPA decoding approaches were used to distinguish between different attention control states induced by auditory spoken word cues. Because the physical features of the spoken word cues differ across attention conditions, it is important to test the influence of the different auditory cues on the decoding performance. To that end, we conducted an EEG experiment using a paradigm that involved the same cue words and timeline as the main experiment, but in which the cue words were task-irrelevant and therefore did not activate different attention control states.

The experimental procedure was approved by the Institutional Review Board at the University of California, Davis. Ten participants (5 female, 5 male; mean age, 23 years; range, 18–32 years) gave written informed consent and participated in the control experiment. As shown in Figure 1B, a fixation point was placed at the center of the monitor, and the participants were instructed to maintain fixation on the fixation dot during the experiment. At the beginning of each trial, one of the four spoken words was presented (motion words: "up" or "down"; color words: "blue" or "green"). The participants were instructed to use the cue words simply as a trial-initiating signal (warning cue) for the upcoming target and not to attribute any meaning to them or process the cue conditions in any specific way. Targets appeared after a random delay (cue-target onset asynchrony interval) of 1,250–1,750 ms (time locked to cue onset). The target was one of two unfilled white circles with a radius of either 2.5 or 5°, appearing centrally for 250 ms. The participants were instructed to discriminate the size of the circle by pressing a button to indicate whether the circle was large or small.

EEG recording

Continuous EEG data were acquired with a 64-channel Brain Products actiCAP snap active electrode system (Brain Products) and digitized using a Neuroscan SynAmps2 input board and amplifier (Compumedics USA). Signals were recorded with Curry 8 acquisition software with a sampling rate of 1,000 Hz and a bandpass filtering between 0 and 400 Hz online. Water-soluble electrolyte gel was used to maintain surface contact between the electrode and scalp; the electrode impedances were maintained at <25 kΩ.

EEG preprocessing

All data preprocessing procedures were completed with the EEGLAB toolbox written in MATLAB (Delorme and Makeig, 2004). For each participant, the EEG data files from the individual runs were merged into a single dataset before preprocessing. The data were filtered with a Hamming-windowed sinc FIR (finite impulse response) filter (1–83 Hz passband) with −6 dB roll-off at 0.5 and 93 Hz and then downsampled to 250 Hz. Data were rereferenced to the average of all electrodes (common average reference) and epoched from 1,000 ms before cue onset to 4,000 ms after cue onset to include both cue- and stimulus-evoked neural data. An independent component analysis (ICA) routine implemented in EEGLAB was used to remove blink, muscular, and oculomotor artifacts.
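For illustration, an equivalent pipeline could be sketched in Python with MNE (the authors used EEGLAB in MATLAB); the file name, event code, and ICA component indices below are placeholders, and the sketch simply mirrors the filtering, resampling, referencing, epoching, and ICA steps described above under those assumptions.

    import mne

    # Load the merged continuous EEG for one participant (placeholder file name)
    raw = mne.io.read_raw_fif("subject01_merged_raw.fif", preload=True)

    # Hamming-windowed FIR band-pass (1-83 Hz) and downsampling to 250 Hz
    raw.filter(l_freq=1.0, h_freq=83.0, fir_window="hamming", fir_design="firwin")
    raw.resample(250)

    # Common average reference
    raw.set_eeg_reference("average")

    # Epoch from -1.0 to +4.0 s around cue onset (event code 1 assumed to mark the cue)
    events = mne.find_events(raw)
    epochs = mne.Epochs(raw, events, event_id={"cue": 1}, tmin=-1.0, tmax=4.0,
                        baseline=None, preload=True)

    # ICA to remove blink, muscular, and oculomotor components (indices chosen by inspection)
    ica = mne.preprocessing.ICA(n_components=30, random_state=0)
    ica.fit(epochs)
    ica.exclude = [0, 1]
    epochs_clean = ica.apply(epochs.copy())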

A surface current source density (CSD) transform was applied to the epoched data by estimating the surface Laplacian to reduce the influence of volume conduction and common reference from the EEG data (Tenke and Kayser, 2012). To extract alpha band activity (8–12 Hz) as a function of time, the epoched CSD data was bandpass filtered between 8 and 12 Hz and Hilbert transformed to obtain the alpha band complex analytic signal, whose magnitude was then squared to yield the alpha power time course.
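As a minimal sketch of the alpha power extraction step (SciPy; not the authors' code), assume csd_data is an array of CSD-transformed epochs with shape (trials, channels, time points) sampled at 250 Hz:

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    FS = 250.0  # sampling rate after downsampling

    def alpha_power(csd_data, low=8.0, high=12.0, order=4):
        # Band-pass 8-12 Hz, Hilbert transform, and squared magnitude -> alpha power
        b, a = butter(order, [low / (FS / 2), high / (FS / 2)], btype="band")
        filtered = filtfilt(b, a, csd_data, axis=-1)   # zero-phase band-pass filter
        analytic = hilbert(filtered, axis=-1)          # complex analytic signal
        return np.abs(analytic) ** 2                   # alpha power time course

    # power = alpha_power(csd_data)  # e.g., (300 trials, 64 channels, 1,250 samples)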

Statistical analysis

Decoding analysis

The alpha power time course from the epoched CSD data was used for the decoding analysis. To further reduce noise, a moving average filter was applied, in which the epoched EEG (CSD) data was temporally smoothed by computing the average of five adjacent time points (20 ms time window). The decoding was done on the smoothed data at every time point. The 64 alpha power values from 64 channels were used as 64 input features to the classifiers. A linear support vector machine (SVM) was applied to classify the attention conditions by using the trial-averaged decoding approach (Bae and Luck, 2018). Specifically, for decoding two attention conditions (i.e., attend-up vs attend-down; attend-blue vs attend-green; attend-motion vs attend-color), the data within each attention condition were divided into three parts (threefold cross-validation). The trials in each part were averaged for a given time point to boost the signal-to-noise ratio (Bae and Luck, 2018). The classifier was trained using the averaged alpha power from two parts of each attention condition and tested on the remaining part. To avoid bias in grouping of the trials into training and testing set, we repeated the process of dividing the trials into two-part training and one-part testing sets 100 times. The decoding performance was measured in terms of average decoding accuracy across the one hundred partitions. Above-chance decoding accuracy is taken as evidence that the two attention control states differed in the spatial pattern of alpha oscillations, reflecting the differential biasing of visual cortex for the two contrasted attention conditions. The higher the decoding accuracy, the more different the patterns of alpha power, and thus the more distinct the two attention control states.
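A rough sketch of this trial-averaged decoding scheme using a linear SVM from scikit-learn is given below; the data layout (trials × 64 channels × time points per condition), the random three-part partitioning, and the variable names are assumptions based on the description above, not the authors' code.

    import numpy as np
    from sklearn.svm import SVC

    def decode_timecourse(alpha_a, alpha_b, n_folds=3, n_iter=100, seed=0):
        # Trial-averaged decoding of two attention conditions at each time point.
        # alpha_a, alpha_b: arrays of shape (trials, 64 channels, time points).
        rng = np.random.default_rng(seed)
        n_times = alpha_a.shape[-1]
        acc = np.zeros(n_times)
        for _ in range(n_iter):
            # Randomly assign trials of each condition to 3 parts and average within parts
            parts_a = np.array_split(rng.permutation(len(alpha_a)), n_folds)
            parts_b = np.array_split(rng.permutation(len(alpha_b)), n_folds)
            mean_a = np.stack([alpha_a[p].mean(axis=0) for p in parts_a])  # (3, 64, times)
            mean_b = np.stack([alpha_b[p].mean(axis=0) for p in parts_b])
            for t in range(n_times):
                correct = 0
                for test in range(n_folds):  # leave one averaged part out for testing
                    train = [k for k in range(n_folds) if k != test]
                    X_train = np.vstack([mean_a[train, :, t], mean_b[train, :, t]])
                    y_train = [0] * (n_folds - 1) + [1] * (n_folds - 1)
                    X_test = np.vstack([mean_a[test:test + 1, :, t],
                                        mean_b[test:test + 1, :, t]])
                    clf = SVC(kernel="linear").fit(X_train, y_train)
                    correct += int((clf.predict(X_test) == [0, 1]).sum())
                acc[t] += correct / (2 * n_folds)
        return acc / n_iter  # average accuracy over the random partitions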

To determine whether the decoding accuracy at a particular time point is above the chance level, we compared the actual decoding accuracy against the chance level accuracy (50%) using paired t tests. The decoding was deemed significantly above chance level if it was higher than the cluster-corrected threshold determined by requiring p < 0.05 (Fahrenfort et al., 2017). To avoid any potential bias exerted by the unequal numbers of trials in different types of comparisons (e.g., comparing attend-blue vs attend-green with attend-color vs attend-motion), we subsampled and randomly chose 50% of the trials from motion (up and down combined) and color (blue and green combined) conditions for motion versus color decoding analysis. The decoding analysis with subsampling was repeated 100 times for each participant and the resulting decoding accuracies were averaged.
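One way to implement the comparison against chance with cluster correction is MNE's cluster-based permutation test, sketched below under the assumption that acc is a (participants × time points) matrix of decoding accuracies; the authors' exact cluster procedure (following Fahrenfort et al., 2017) may differ in detail.

    import numpy as np
    from mne.stats import permutation_cluster_1samp_test

    # acc: (n_participants, n_times) decoding accuracies; test against chance (0.5)
    t_obs, clusters, cluster_pv, _ = permutation_cluster_1samp_test(
        acc - 0.5, n_permutations=1000, tail=1)

    significant = np.zeros(acc.shape[1], dtype=bool)
    for clu, p in zip(clusters, cluster_pv):
        if p < 0.05:
            significant[clu] = True  # time points inside significant clusters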

Estimating decoding onset latencies

A jack-knife approach (Miller et al., 1998) was implemented to estimate the time at which decoding accuracy rises above chance level. The following steps were performed to estimate the decoding onset latency: (1) The decoding accuracy time course from a subsample of N-1 participants (e.g., N = 25 for the main experiment) was extracted by excluding one participant. For example, in the first iteration, the decoding accuracy time course from subject 1 was excluded and the decoding accuracy time courses from subjects 2 to 25 (N = 24) were considered. (2) The decoding accuracy time courses from the subsample were compared against chance level, and significant decoding clusters (p < 0.05) were obtained. (3) Onset latency was determined by noting the first time point of the earliest significant cluster. (4) Steps 1–3 were repeated 25 times, leaving out each subject exactly once, to obtain 25 decoding onset latencies. The decoding onset latency distribution was compared across different attention control conditions.
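A compact sketch of the leave-one-out procedure in Python is shown below; first_significant_time is a hypothetical helper implementing the cluster test of step 2, and acc is assumed to be a (participants × time points) accuracy matrix.

    import numpy as np

    def jackknife_onsets(acc, times, first_significant_time):
        # Estimate decoding onset latency from each leave-one-out subsample
        # (Miller et al., 1998). Returns one onset latency per left-out participant.
        n = acc.shape[0]
        onsets = np.empty(n)
        for i in range(n):
            subsample = np.delete(acc, i, axis=0)                  # N-1 participants
            onsets[i] = first_significant_time(subsample, times)   # earliest significant cluster
        return onsets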

Eye-tracking data recording and preprocessing

An SR Research EyeLink 1000 eye-tracking system was used to record the eye movements and pupillary activity at a sampling rate of 1,000 Hz. During preprocessing, eyeblinks were detected and gaze positions during eyeblinks were determined by a cubic spline interpolation algorithm. The continuous eye movement data (x and y coordinates) was epoched from 200 ms before cue onset to 1,200 ms after cue onset (−200 to 1,200 ms).
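As an illustration of the blink interpolation step (SciPy), assuming gaze is a one-dimensional gaze coordinate trace and blink_mask marks blink samples detected beforehand; this is a sketch of the cubic-spline logic, not the EyeLink processing pipeline itself.

    import numpy as np
    from scipy.interpolate import CubicSpline

    def interpolate_blinks(gaze, blink_mask):
        # Replace gaze samples during blinks with cubic-spline interpolated values
        t = np.arange(len(gaze))
        good = ~blink_mask
        spline = CubicSpline(t[good], gaze[good])
        repaired = gaze.copy()
        repaired[blink_mask] = spline(t[blink_mask])
        return repaired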

Results

Main experiment

The participants were cued auditorily to attend to a specific color or motion. Following a variable cue-target interval, two streams of differently colored and oppositely moving dots appeared on the screen, and the participants discriminated the size of the moving dots in the stream defined by the cued feature. Mono-stream trials were also included to measure the effect of attention cueing, and we consider those first.

Behavioral analysis: mono-stream trials

In order to establish that the subjects were using the cue information to selectively control attention, we analyzed their performance during the mono-stream trials, which accounted for 20% of the total trials. For validly cued mono-stream trials (i.e., distractor absent; 10%), the average response times (RT) were 779.3 ± 135.4 ms for motion trials and 758.1 ± 122.9 ms for color trials. The response accuracy for motion and color trials was 89.8 ± 7.4% and 88.7 ± 9.0%, respectively. Performance in validly cued trials did not differ significantly between attended feature dimensions (motion vs color; see below for the main effects of attention type). For invalid mono-stream trials (i.e., cued target absent; 10%), where the participants had to shift attention to the un-cued stream first and then detect the size of the moving dots, the response times (RT) were 892.5 ± 188.2 ms for motion trials and 899.5 ± 188.7 ms for color trials. The response accuracy for motion and color trials was 83.1 ± 13.0% and 85.5 ± 14.1%, respectively. Performance on invalidly cued trials did not differ significantly between the two attended feature dimensions (motion vs color; see below for the main effects of attention type).

To assess the effect of cue validity, we compared validly cued and invalidly cued mono-stream trials using two-way ANOVAs: cue validity (valid vs invalid) by attention type (motion vs color). For reaction time (RT), we found a statistically significant main effect of cue validity [valid vs invalid; Fig. 2A; Fvalidity(1,24) = 15.52, p = 2 × 10^-4, η2 (effect size) = 0.14, Bayes factor (BF10) = 150.269]. There was no statistically significant main effect of attention type (motion vs color) on RT (Fattention type(1,24) = 0.05, p = 0.83, η2 = 3.42 × 10^-4, BF10 = 0.13), nor was there a significant interaction between cue validity and attention type (Fvalidity × attention type(1,24) = 0.19, p = 0.66, η2 = 1.71 × 10^-3, BF10 = 0.19). For accuracy, we found a statistically significant main effect of cue validity (valid vs invalid; Fig. 2B), with validly cued trials having higher accuracy than invalidly cued trials (Fvalidity(1,24) = 4.85, p = 0.03, η2 = 0.046, BF10 = 1.31). Note that while the p value suggested a significant difference between mean accuracies, the Bayes factor (BF10) of 1.31 only slightly exceeded 1, indicating anecdotal evidence (Wetzels et al., 2011) in favor of the alternative hypothesis (that mean accuracies differed between valid and invalid trials). There was no statistically significant main effect of attention type (motion vs color) on accuracy (Fattention type(1,24) = 0.08, p = 0.77, η2 = 7.81 × 10^-4, BF10 = 0.13), nor was there a significant interaction between cue validity and attention type (Fvalidity × attention type(1,24) = 0.61, p = 0.43, η2 = 5.46 × 10^-3, BF10 = 0.23).

Figure 2.

Behavioral analysis of the main experiment: mono-stream trials. A, RTs to the targets (collapsed over motion direction and color) were significantly faster when validly cued targets were compared with invalidly cued targets. B, Accuracy (% correct) was significantly higher for validly cued trials compared with invalidly cued trials. The error bars denote 95% confidence intervals and *p < 0.05.

Behavioral analysis: dual stream trials

Eighty percent of the trials in the main experiment were dual-stream trials where both cued and un-cued stimulus streams were present, and participants attended to the cued stream (target) while ignoring the un-cued stream (distractor). To assess whether there were any differences in arousal or task difficulty among the four different attention trial types (attend-up, attend-down, attend-blue, and attend-green), we conducted a one-way ANOVA with four levels (the four attention conditions) separately for RT and accuracy measures (Fig. 3). Neither RT nor accuracy differed significantly as a function of the four attention trial types: RT (FRT(3,96) = 0.87, p = 0.45, η2 = 0.02, BF10 = 0.05); accuracy (Faccuracy(3,96) = 1.3, p = 0.28, η2 = 0.04, BF10 = 0.09). These patterns for RT and accuracy suggest that the attention conditions did not differ in overall task difficulty or arousal. Taken together, these behavioral results show that the participants deployed covert attention to different feature attributes based on the information in the cues.

Figure 3.

Behavioral analysis of the main experiment: dual-stream trials (when both the target and distractor dot streams were present). Both reaction times (A) and accuracy (B) were similar across attention conditions (attend-up, attend-down, attend-blue, and attend-green). Error bars denote 95% confidence intervals.

Alpha topography analysis

For cue-evoked neural activity, we first examined the scalp distribution of alpha power as a function of time for each of the four attention conditions. A pairwise difference map in alpha power was then computed for different feature-based attention and different feature attribute conditions, and the averaged group-level difference maps were visualized in topographic head plots for different time periods in the cue-target interval (Fig. 4). In Figure 4A, we compared the scalp topography of attended feature dimensions during attend-motion (collapsed across up and down motion) versus attend-color (collapsed across blue and green) by subtracting the alpha power topography of one condition from that of the other. Beginning in the 400–600 ms interval, we observed a decreased alpha power (blue colors) for attend-motion in the parietal channels that continued to develop bilaterally over time during the cue-target interval for several hundred milliseconds. This decrease in alpha in the dorsal parietal channels for attend-motion relative to attend-color toward the later stages of the cue-target interval aligns with the findings of Snyder and Foxe (2010) who also reported differential sources in the dorsal and ventral stream for the modulation of alpha power following attend-motion and attend-color cues, respectively. Moreover, comparing attend-up versus attend-down (Fig. 4B) or attend-blue versus attend-green (Fig. 4C) also yielded different alpha power patterns, showing that alpha power distribution over the scalp also varies with the to-be-attended feature attributes within each attended feature dimension (up vs down in the motion dimension and blue vs green in the color dimension). To quantify these effects more precisely with higher temporal resolution, we turned to the MVPA decoding approach using alpha power from 64 scalp electrodes as input features, yielding decoding accuracy time courses depicting temporal dynamics of pattern differences of alpha power as a function of attention condition.

Figure 4.

Alpha power difference at different post-cue latencies displayed as topographic maps. The maps are views from behind the head, thus focusing on parietal and occipital scalp regions. A, Attend-motion minus attend-color. B, Attend-up minus attend-down. C, Attend-blue minus attend-green. Here, time zero denotes the onset of the attention-directing cues. The mean duration of the auditory cues was 530 ± 34 ms, and thus, the patterns in the maps developing after ∼400–600 ms represent the post-cue/pre-target anticipatory period.

Decoding the attended feature dimensions

Cue-evoked alpha patterns were used to distinguish the attended feature dimensions (motion vs color) and are shown in Figure 5A. The decoding accuracy was at chance level at the start of the cue-target interval and rose above chance level at ∼240 ms after cue onset, remaining significantly above the chance level for the analyzed cue-target period (until 1,200 ms). This period was chosen so that the trials with the shortest cue-target interval of 1,250 ms could be included in the analysis.

Figure 5.

Decoding alpha topographical patterns during the cue-target interval. A, Decoding accuracy time course for the attended dimensions (attend-up and attend-down vs attend-blue and attend-green). B, Decoding accuracy time course for the attended feature attributes, computed as the average of the attend-up versus attend-down and attend-blue versus attend-green decoding time courses. Cue onset is at 0 s, and the chance level accuracy was 0.5. The shaded area denotes time points where the classifier performance was above chance level (p < 0.05 cluster corrected). C, The decoding onset latencies for attend-motion versus attend-color (dimension decoding) were significantly earlier than those for decoding the attended feature attributes. Here time zero denotes the onset of the attention-directing cue. The error bars and shaded regions around the decoding time courses denote SEM; **p = 0.001.

Decoding the attended feature attributes within an attended dimension

To investigate whether the modulations of alpha oscillations encode specific attended feature attributes within an attended feature dimension, we next trained linear SVM models to classify attend-up versus attend-down within the attended motion dimension and attend-blue versus attend-green within the attended color dimension. Figure 5B shows the average time course of decoding accuracies for attend-blue versus attend-green and attend-up versus attend-down. The earliest time point that decoding accuracy for feature attributes reached significance was ∼400 ms after cue onset. The individual decoding time courses for attend-up versus attend-down and attend-blue versus attend-green are shown in Figure 6. The earliest time point that decoding accuracy for motion and color attributes reached significance was ∼300 and ∼600 ms after cue onset, respectively.

Figure 6.

Decoding motion attributes versus decoding color attributes during the cue-target interval. The decoding accuracy time courses from (A) decoding motion attributes (attend-up vs attend-down) and (B) decoding color attributes (attend-blue vs attend-green) with onset of cue at 0 ms. The chance level accuracy was 0.5. The shaded area denotes time points where the classifier performance was above chance level (p < 0.05 cluster corrected). The shaded regions around the decoding time courses denote SEM.

Comparison of decoding onset latencies

In the decoding approach, when the decoding accuracy between two attention conditions rises and remains above chance level for the first time, referred to here as the decoding onset latency, it signifies the beginning of the formation of distinct neural representations of the two attention control states. Inspection of Figure 5A,B suggests that the formation of the neural representations of the attended feature dimensions (motion vs color) and that of the specific attended feature attributes within an attended dimension start at different times. As shown in Figure 5C, the onset latency of decoding attend-motion versus attend-color significantly precedes the onset latency of decoding the attended feature attributes (attend-up vs attend-down and attend-blue vs attend-green collapsed together; p = 0.001, T(24) = 3.36, d = 0.67, BF10 = 30.56). To verify that these onset latency differences were present for both motion and color attention feature attributes compared with feature dimensions, we separately calculated the onset latency of significant decoding for attended motion attributes and attended color attributes and compared these with the onset latency of decoding the feature dimensions of motion versus color. In each case, the onset latency of the feature dimension decoding was significantly earlier than for the feature attributes (up vs down: p = 0.03, T(24) = 1.97, d = 0.39, BF10 = 2.14; blue vs green: p = 0.00001, T(24) = 23.78, d = 4.75, BF10 = 24.31 × 10^14). Interestingly, as can be observed in Figure 6, the onset latencies of the motion and color feature attributes differed, with motion preceding color in time (p = 0.001, T(24) = 40.01, d = 8.0, BF10 = 28.01 × 10^20).

It is worth noting that because attend-up and attend-down trials were combined as attend-motion trials, and attend-blue and attend-green trials were combined as attend-color trials, there is a possible concern that the latency effect in Figure 5C could have been driven by different numbers of trials used in the different analyses. We mitigated this concern by performing decoding analysis for attend-motion versus attend-color (dimension classification) using only 50% of the trials (random subsampling), thereby equalizing the numbers of trials used for decoding attended dimensions and that used for decoding attended attributes within an attended dimension. Such subsampling was performed 100 times, and the subsampled decoding accuracies were averaged to obtain the attend-motion versus attend-color decoding time course.

Hierarchical model of feature attention control

The results of our latency analysis suggest a hierarchical organization of feature-based attention control that consists of two stages (Fig. 7). In the first stage, the broad attended category is biased such that the attended feature dimension is relatively facilitated, and the unattended (ignored) feature dimension is suppressed. In the second (later) stage, the biasing narrows to the specific to-be-attended motion direction (up vs down) or color (blue vs green) and suppresses the unattended direction or color.

Figure 7.

Schematic representation of the hierarchical organization of feature attention control, in which the selection of feature dimensions temporally preceded the selection of feature attributes. The orange arrows indicate the example top-down control scenarios (left, attend-green; right, attend-up). The yellow and gray arrows indicate the facilitated and suppressed dimension (Stage I) and attribute (Stage II).

Patterns of eye movements

Previous work has shown that systematic eye movement patterns can affect decoding analysis from EEG (Hong et al., 2020), MEG (Cichy et al., 2015; Mostert et al., 2018), and fMRI (Thielen et al., 2019) data. To examine whether there were systematic eye movement pattern differences between attention conditions, we first analyzed the eye-tracking data by comparing the x and y gaze positions across time. The gaze positions (vertical and horizontal) were compared between attention conditions during the cue-target interval, and no significant differences among attention conditions were observed (Fig. 8A,B). To further test whether there were pattern-level differences in eye movements between attention conditions, we compared the pairs of attention conditions using linear support vector machines (Meyyappan et al., 2021; Rajan et al., 2021). The decoding accuracy failed to reach the above-chance threshold throughout the 0–1,200 ms cue-target interval for all pairs of attention conditions: attend-motion versus attend-color (Fig. 8C), attend-up versus attend-down (Fig. 8D), and attend-blue versus attend-green (Fig. 8E). These eye-tracking results demonstrate that the decoding results were not driven by systematic eye movements. It is worth noting that here the decoding accuracies tended to be below the chance level (<50%). Such results have been reported in the past (Halme and Parkkonen, 2018; Mei et al., 2022; González-España et al., 2023). Jamalabadi et al. (2016) simulated various scenarios and reported that for limited sample sizes, as the effect sizes between decoded classes decreased, classification accuracy tended to decline, and for very small effect sizes (e.g., when the means of the two classes of data are the same), it could even fall below chance level. In our data, given that the mean eye positions were not statistically different between conditions (Fig. 8A,B), the findings of Jamalabadi et al. (2016) provide a plausible explanation of our eye position decoding results.

Figure 8.

Comparison of eye movement data across attention conditions. A, B, Vertical and horizontal positions of eye gaze as functions of time expressed in visual angles. C, Decoding accuracy as a function of time for attend-motion versus attend-color (feature dimension classification). D, Decoding accuracy as a function of time for attend-up versus attend-down. E, Decoding accuracy as a function of time for attend-blue versus attend-green. Here, time zero refers to the onset of the cue, and the decoding accuracies were not above chance level throughout the cue-target interval for all three classifications. The shaded areas around the time courses denote SEM.

Control experiment

In the foregoing, patterns of alpha oscillations were used as features for MVPA classifiers to test the temporal formation of different attention states. Given that different attention states were induced by different auditory cues (spoken words), it is possible that what was being decoded reflected the processing of different spoken words, rather than different attention states. We conducted a control experiment to examine this issue using the same auditory words as in the main experiment; here, however, the words were not attention cues but simply warning signals indicating the start of a trial. Furthermore, the semantic meaning of the cues was unrelated to any aspect of the stimuli or the task; the task was merely to discriminate the size of a colorless, motionless gray circle, independent of the cue words.

Behavioral analysis

The different auditory words had no influence on the reaction times or accuracies to the targets. Reaction times were 458.8 ± 117.6 ms (word = up), 469.0 ± 125.7 ms (word = down), 458.1 ± 108.7 ms (word = blue), and 461.0 ± 107.6 ms (word = green). Accuracies were 96.9 ± 2.8% (word = up), 97.3 ± 3.0% (word = down), 96.6 ± 2.1% (word = blue), and 96.5 ± 3.6% (word = green). We conducted a one-way ANOVA, with four levels (the four cue words), separately for the RT and accuracy measures. Neither RT nor accuracy differed significantly among the four auditory word warning cue trial types: RT (FRT(3,36) = 0.02, p = 0.99, η2 = 0.001, BF10 = 0.05); accuracy (Faccuracy(3,36) = 0.14, p = 0.93, η2 = 0.01, BF10 = 0.06).

Decoding spoken cue words

The EEG data was preprocessed using the same preprocessing pipeline as for the main experiment. Cue-related alpha patterns were analyzed for the different types of auditory word stimuli. The following contrasts were tested. At the feature dimension level, the words up and down combined were compared with the words blue and green combined (Fig. 9A). At the feature attribute level, the words up versus down were compared (Fig. 9B), and the words blue versus green were compared (Fig. 9C). Decoding accuracy remained at chance level for motion words versus color words throughout the cue-target interval (0–1,200 ms). Up versus down and blue versus green showed brief periods of above-chance (p < 0.05 uncorrected) decoding at ∼700–850 and ∼975–1,150 ms, respectively, but otherwise remained at chance level during the cue-target interval. These results suggest that while linguistic processing of the cue words might elicit different patterns of alpha oscillations, the effects were brief rather than sustained and followed very different time courses from those observed in the main experiment.

Figure 9.

Decoding analysis for the control experiment. A, The decoding accuracy time course from decoding motion words versus color words. The decoding accuracy time courses from (B) decoding between motion words (up vs down) and (C) decoding between color words (blue vs green). The chance level accuracy was 0.5. The shaded area, in yellow, denotes time points where the classifier performance was above chance level (p < 0.05 uncorrected). The shaded regions around the decoding time courses denote SEM. D, Weight maps from the main experiment (left column) and the control experiment (center column); the difference weight maps (main experiment minus control experiment) are shown in the right column. Here warmer colors denote higher weights and increased contribution to the classification analysis. E, T-statistic estimates from a one-sample t test comparing the weight maps between the main experiment and the control experiment. *For the main experiment, weight maps from 10 participants were randomly subsampled 100 times and averaged.

To further examine whether the alpha patterns elicited by the cue words in the control experiment were comparable with those in the main experiment, we utilized a weight map analysis. Alpha power topographies from the time period 600–1,200 ms were considered. This time period was chosen because decoding accuracy was above chance during this period in the control experiment, and the weight maps are only meaningfully defined if the decoding accuracy is above chance level. The weight maps are shown in Figure 9D. Since the control experiment had a smaller sample size (N = 10) than the main experiment (N = 25), to control for this difference, we randomly selected 10 participants from the main experiment and obtained the weight maps. Repeating this process 100 times and averaging the results yielded the weight maps shown in Figure 9D (left panel). Figure 9D (middle panel) displays the weight maps for the control experiment. Subtracting the control experiment weight maps from the main experiment weight maps reveals the difference weight maps corresponding to the attention-related effects of the main experiment. As seen in Figure 9D (right panel), attention to visual features generated stronger weights in parietal and occipital electrodes. To test the significance of the weights, we compared the weights from the control experiment with the weights estimated from the data subsampled from the main experiment using a one-sample t test for each channel individually. The estimated p values from all 64 channels were then corrected for multiple comparisons using FDR (p < 0.05), and the corresponding T-statistic values are shown in Figure 9E. These results are consistent with the notion that preparatory attention to different feature attributes led to differential sensory biasing patterns in visual cortex, an effect that by design was absent in the control experiment.
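The per-channel statistical comparison could be sketched as below (SciPy and statsmodels), assuming diff_weights holds the channel-wise weight differences (main-experiment subsample minus control experiment) across subsamples; this illustrates one-sample t tests with Benjamini-Hochberg FDR correction rather than the authors' exact implementation.

    import numpy as np
    from scipy.stats import ttest_1samp
    from statsmodels.stats.multitest import multipletests

    # diff_weights: (n_subsamples, 64 channels) weight differences to test against zero
    t_vals, p_vals = ttest_1samp(diff_weights, popmean=0.0, axis=0)
    reject, p_fdr, _, _ = multipletests(p_vals, alpha=0.05, method="fdr_bh")
    significant_channels = np.where(reject)[0]  # channels surviving FDR (p < 0.05)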

Discussion

We applied multivariate decoding approaches to patterns of alpha oscillations to investigate the mechanisms of top-down feature-based attention control. In the period following attention-directing cues but before stimulus onset, we compared the time courses of decoding accuracies when anticipatory attention was paid to the broader category of feature dimensions as well as the specific feature attributes within a feature dimension. We found that attend-motion versus attend-color (contrasting attention to different feature dimensions) could be decoded above chance at ∼240 ± 50 ms latency after the cues, while attention to the specific direction of motion (up vs down) or color (blue vs green; feature attributes) did not exceed chance decoding until ∼400 ± 50 ms; here the inclusion of ±50 ms accounts for the temporal smearing due to trial averaging (Bae and Luck, 2019). We interpret this pattern of results as evidence that feature-based attention control is hierarchically organized, being first deployed selectively to bias sensory processing at the level of the task-relevant feature dimension (motion vs color) and subsequently narrowing to the specific feature attributes within the attended feature dimension. Across the two feature dimensions investigated here, decoding motion attributes (up vs down) had an earlier onset than decoding color attributes (blue vs green), suggesting that motion direction is prioritized over color (Ffytche et al., 1995; Bullier, 2001).

Alpha oscillations and feature-based attention control

The modulation of alpha oscillations by visual spatial attention is well established (Worden et al., 2000; Sauseng et al., 2005; Thut et al., 2006; Rihs et al., 2007; Samaha et al., 2016; Popov et al., 2019). Snyder and Foxe (2010) were among the first to report the involvement of alpha oscillations during nonspatial feature-based attention. In a cue-target paradigm similar to the one used here, they extracted color and motion-sensitive alpha components from the cue-target interval and identified the putative neural sources for the increased alpha power within the ventral or the dorsal visual pathways when color or motion was to be suppressed, in line with the Gating by Inhibition Model (Jensen and Mazaheri, 2010). Our univariate, as well as multivariate analyses, revealed different topographic patterns of alpha power modulation, depending on the feature dimension (motion vs color) being attended, consistent with the findings of Snyder and Foxe (2010). Additionally, though, we found that the topographic patterns of alpha oscillations also distinguished specific attended feature attributes (up vs down; blue vs green) within an attended feature dimension. Therefore, our results go beyond the findings of Snyder and Foxe (2010) with respect to the role of alpha in feature-based attention. This finding is in contrast, however, to Gundlach et al. (2023) who reported no alpha modulation when attending different feature attributes. Two factors may explain this discrepancy. First, Gundlach et al. (2023) used a univariate approach to index EEG alpha activity, which may lack the sensitivity of the multivariate approach employed here. Second, the visual stimuli in Gundlach et al. (2023) were presented throughout the experiment, while in our design, during the cue-target period, no visual stimuli were presented; it is known that visual stimulation suppresses alpha power and therefore may diminish the ability to detect different patterns of alpha power modulation across different anticipatory attention conditions.

Importantly, our work advances the understanding of feature-based attention by revealing the differential time courses of anticipatory attention control and biasing based on visual feature dimensions versus feature attributes. By analyzing the decoding accuracy time courses, we observed significant above-chance decoding for feature dimensions (motion vs color) beginning ∼240 ms after cue onset. In contrast, for specific feature attributes (up vs down; blue vs green), we found a delay in the onset of significant above-chance decoding, with significant classification not beginning until ∼400 ms latency. These onset latencies suggest a hierarchical model of feature attention control where the selection of the attended feature dimensions precedes that of the specific attended feature attributes.

Relation with other relevant works in the literature

Hierarchical control of feature-based attention has been proposed before based on the functional-anatomical patterns of activity in frontoparietal cortex. Liu and Hou (2013) used fMRI and found that the spatial distribution of neural activity within the frontoparietal cortex was more similar for within-feature attributes (e.g., blue vs green) and across feature dimensions (e.g., blue vs upward motion) compared with between feature and spatial attention contrasts (e.g., blue vs left visual field), which evoked more distinct neural patterns. That is, the functional-anatomical patterns suggested a hierarchical grouping based on task goals (spatial vs feature attention). Such a functional-anatomical hierarchy could also give rise to a temporal hierarchy, as demonstrated in our EEG findings.

Our results also align with the models of feature-based attention selection in visual working memory (Brady et al., 2011; Töllner et al., 2015; Hajonides et al., 2020) and visual search experiments (Müller et al., 1995, 2003; Found and Müller, 1996; Krummenacher and Müller, 2012; Nako et al., 2014). Niklaus et al. (2017), in a series of retroactive cuing object-based attention experiments, found that when participants were retroactively cued to shift their focus of attention to an un-cued object (which was deemed irrelevant by an earlier cue), the cost of switching (error rates) to the new object was minimal if that object shared the same dimension (intradimensional switch) as the initially cued object as opposed to a condition where the participants were cued to report the characteristics of an un-cued dimension (cross-dimensional switch). These findings support the presence of active representations of both the feature dimension and the attended feature attribute, as opposed to just representing the specific feature attributes, because if the feature attributes alone were represented in the working memory as an independent unit, then the switching costs should have been identical.

The feature dimensional weighting model makes similar predictions (Müller et al., 1995; Found and Müller, 1996; Krummenacher and Müller, 2012; Liesefeld and Müller, 2019). Specifically, Müller et al. (2003) found that during a cued feature pop-out search task, cueing participants to the color of the target (e.g., red) enhanced search performance even if the target had an un-cued attribute but within the same dimension as the cue (e.g., color blue), compared with trials with targets containing different dimensions (e.g., left or right orientation), suggesting that attending to a feature attribute activates the representation of the entire feature dimension. Nako et al. (2014), during an alphanumeric visual search task, reported that both trials with targets and trials without targets (but with category-matched item; i.e., foils) evoked robust N2pc components initially during search (0–200 ms), but after 200 ms, target trials evoked stronger N2pc components, suggesting an early dimension-specific (category) processing before item-specific attentional processing. Our proposed hierarchical control model of feature attention extends these models to the mechanisms of top-down anticipatory attention control by demonstrating a temporal difference in the formation of dimensional-level attentional biasing (earlier) versus attribute-level attentional biasing (later).

Our findings can be described in terms of the Gating by Inhibition Model of attention control (Jensen and Mazaheri, 2010). In line with this model, our data show that top-down feature attention control occurs in two critical sequential stages involving EEG alpha mechanisms. The first stage would involve suppression of the unattended feature dimension (increased alpha) and enhancement of the attended feature dimension (decreased alpha), giving rise to decodable differences in the alpha power topography. For our data, this stage corresponds to attention being directed to the color of the moving dots while ignoring the direction of motion, and vice versa. In the second stage, top-down control selectively modulates the patterns sensitive to the attended attribute while suppressing the unattended attribute within the attended dimension, again leading to decodable differences in the alpha power topography. For our data, this stage corresponds to attention being directed to green moving dots while ignoring blue moving dots or up moving dots while ignoring down moving dots.

Our findings also align with the feature similarity gain model of attention (Treue and Martínez Trujillo, 1999) which suggests that attention enhances the gain of the neurons based on the preference of the neurons for the attended feature. For example, during the attend-blue condition, neurons tuned to the color dimension and also to the blue stimulus attribute are enhanced while the neurons preferring the opposite motion dimension are suppressed. The opposite is true when attention is directed to up or down moving dots where the color dimension is suppressed (Chen et al., 2012).

Limitations

We note the following limitations in our study. First, the decoding approach requires multiple trials for model construction and testing, and this will not capture the timing information at the individual trial level. Second, the auditory cues used in the task varied in duration (530 ± 34 ms), which could contribute to temporal smearing. However, we argue that this is unlikely because the variation in cue lengths (±34 ms) is small relative to the robust ∼160 ms (240 vs 400 ms) difference in the latencies between decoding feature dimensions and feature attributes.

Footnotes

  • This work was supported by National Institutes of Health Grant MH117991 and National Science Foundation Grant BCS-2318886 to G.R.M. and M.D., and by the University of California, Davis. We thank Sai Katta for help with data collection and our colleagues in the Center for Mind and Brain for helpful discussions.

  • The authors declare no competing financial interests.

  • Correspondence should be addressed to George R. Mangun at mangun{at}ucdavis.edu.

SfN exclusive license.

References

  1. Bae GY, Luck SJ (2018) Dissociable decoding of spatial attention and working memory from EEG oscillations and sustained potentials. J Neurosci 38:409–422. https://doi.org/10.1523/JNEUROSCI.2860-17.2017
  2. Bae GY, Luck SJ (2019) Decoding motion direction using the topography of sustained ERPs and alpha oscillations. Neuroimage 184:242–255. https://doi.org/10.1016/j.neuroimage.2018.09.029
  3. Bisley JW, Goldberg ME (2003) Neuronal activity in the lateral intraparietal area and spatial attention. Science 299:81–86. https://doi.org/10.1126/science.1077395
  4. Brady TF, Konkle T, Alvarez GA (2011) A review of visual memory capacity: beyond individual items and toward structured representations. J Vis 11:4. https://doi.org/10.1167/11.5.4
  5. Bullier J (2001) Integrated model of visual processing. Brain Res Rev 36:96–107. https://doi.org/10.1016/S0165-0173(01)00085-6
  6. Chen X, Hoffmann KP, Albright TD, Thiele A (2012) Effect of feature-selective attention on neuronal responses in macaque area MT. J Neurophysiol 107:1530–1543. https://doi.org/10.1152/jn.01042.2010
  7. Cichy RM, Ramirez FM, Pantazis D (2015) Can visual information encoded in cortical columns be decoded from magnetoencephalography data in humans? Neuroimage 121:193–204. https://doi.org/10.1016/j.neuroimage.2015.07.011
  8. Delorme A, Makeig S (2004) EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J Neurosci Methods 134:9–21. https://doi.org/10.1016/j.jneumeth.2003.10.009
  9. Fahrenfort JJ, Grubert A, Olivers CNL, Eimer M (2017) Multivariate EEG analyses support high-resolution tracking of feature-based attentional selection. Sci Rep 7:1886. https://doi.org/10.1038/s41598-017-01911-0
  10. Fannon SP, Saron CD, Mangun GR (2007) Baseline shifts do not predict attentional modulation of target processing during feature-based visual attention. Front Hum Neurosci 1:7. https://doi.org/10.3389/neuro.01.1.1.001.2007
  11. Ffytche DH, Guy CN, Zeki S (1995) The parallel visual motion inputs into areas V1 and V5 of human cerebral cortex. Brain 118:1375–1394. https://doi.org/10.1093/brain/118.6.1375
  12. Found A, Müller HJ (1996) Searching for unknown feature targets on more than one dimension: investigating a "dimension-weighting" account. Percept Psychophys 58:88–101. https://doi.org/10.3758/BF03205479
  13. Giesbrecht B, Woldorff MG, Song AW, Mangun GR (2003) Neural mechanisms of top-down control during spatial and feature attention. Neuroimage 19:496–512. https://doi.org/10.1016/S1053-8119(03)00162-9
  14. Goddard E, Carlson TA, Woolgar A (2022) Spatial and feature-selective attention have distinct, interacting effects on population-level tuning. J Cogn Neurosci 34:290–312. https://doi.org/10.1162/jocn_a_01796
  15. Gong M, Chen Y, Liu T (2022) Preparatory attention to visual features primarily relies on non-sensory representation. Sci Rep 12:21726. https://doi.org/10.1038/s41598-022-26104-2
  16. González-España JJ, Back KJ, Reynolds D, Contreras-Vidal JL (2023) Decoding taste from EEG: gustatory evoked potentials during wine tasting. In: 2023 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp 4253–4258.
  17. Gundlach C, Forschack N, Müller MM (2023) Global attentional selection of visual features is not associated with selective modulation of posterior alpha-band activity. Psychophysiology 60:e14244. https://doi.org/10.1111/psyp.14244
  18. Hajonides JE, van Ede F, Stokes MG, Nobre AC (2020) Comparing the prioritization of items and feature-dimensions in visual working memory. J Vis 20:25. https://doi.org/10.1167/jov.20.8.25
  19. Halme HL, Parkkonen L (2018) Across-subject offline decoding of motor imagery from MEG and EEG. Sci Rep 8:10087. https://doi.org/10.1038/s41598-018-28295-z
  20. Hong X, Bo K, Meyyappan S, Tong S, Ding M (2020) Decoding attention control and selection in visual spatial attention. Hum Brain Mapp 41:3900–3921. https://doi.org/10.1002/hbm.25094
  21. Hopfinger JB, Buonocore MH, Mangun GR (2000) The neural mechanisms of top-down attentional control. Nat Neurosci 3:284–291. https://doi.org/10.1038/72999
  22. Jamalabadi H, Alizadeh S, Schönauer M, Leibold C, Gais S (2016) Classification based hypothesis testing in neuroscience: below-chance level classification rates and overlooked statistical properties of linear parametric classifiers. Hum Brain Mapp 37:1842–1855. https://doi.org/10.1002/hbm.23140
  23. Jensen O, Mazaheri A (2010) Shaping functional architecture by oscillatory alpha activity: gating by inhibition. Front Hum Neurosci 4:186. https://doi.org/10.3389/fnhum.2010.00186
  24. Kastner S, Pinsk MA (2004) Visual attention as a multilevel selection process. Cogn Affect Behav Neurosci 4:483–500. https://doi.org/10.3758/CABN.4.4.483
  25. Kastner S, Pinsk MA, De Weerd P, Desimone R, Ungerleider LG (1999) Increased activity in human visual cortex during directed attention in the absence of visual stimulation. Neuron 22:751–761. https://doi.org/10.1016/S0896-6273(00)80734-5
  26. Krummenacher J, Müller HJ (2012) Dynamic weighting of feature dimensions in visual search: behavioral and psychophysiological evidence. Front Psychol 3:221. https://doi.org/10.3389/fpsyg.2012.00221
  27. Liesefeld HR, Müller HJ (2019) Distractor handling via dimension weighting. Curr Opin Psychol 29:160–167. https://doi.org/10.1016/j.copsyc.2019.03.003
  28. Liu T, Hou Y (2013) A hierarchy of attentional priority signals in human frontoparietal cortex. J Neurosci 33:16606–16616. https://doi.org/10.1523/JNEUROSCI.1780-13.2013
  29. Luks TL, Simpson GV (2004) Preparatory deployment of attention to motion activates higher-order motion-processing brain regions. Neuroimage 22:1515–1522. https://doi.org/10.1016/j.neuroimage.2004.04.008
  30. Mei N, Santana R, Soto D (2022) Informative neural representations of unseen contents during higher-order processing in human brains and deep artificial networks. Nat Hum Behav 6:720–731. https://doi.org/10.1038/s41562-021-01274-7
  31. Meyyappan S, Rajan A, Mangun GR, Ding M (2021) Role of inferior frontal junction (IFJ) in the control of feature versus spatial attention. J Neurosci 41:8065–8074. https://doi.org/10.1523/JNEUROSCI.2883-20.2021
  32. Miller J, Patterson T, Ulrich R (1998) Jackknife-based method for measuring LRP onset latency differences. Psychophysiology 35:99–115. https://doi.org/10.1111/1469-8986.3510099
  33. Mostert P, Albers AM, Brinkman L, Todorova L, Kok P, de Lange FP (2018) Eye movement-related confounds in neural decoding of visual working memory representations. eNeuro 5:ENEURO.0401-17.2018. https://doi.org/10.1523/ENEURO.0401-17.2018
  34. Müller HJ, Heller D, Ziegler J (1995) Visual search for singleton feature targets within and across feature dimensions. Percept Psychophys 57:1–17. https://doi.org/10.3758/BF03211845
  35. Müller HJ, Reimann B, Krummenacher J (2003) Visual search for singleton feature targets across dimensions: stimulus- and expectancy-driven effects in dimensional weighting. J Exp Psychol Hum Percept Perform 29:1021–1035. https://doi.org/10.1037/0096-1523.29.5.1021
  36. Nadra JG, Bengson JJ, Morales AB, Mangun GR (2023) Attention without constraint: alpha lateralization in uncued willed attention. eNeuro 10:ENEURO.0258-22.2023. https://doi.org/10.1523/ENEURO.0258-22.2023
  37. Nako R, Wu R, Eimer M (2014) Rapid guidance of visual search by object categories. J Exp Psychol Hum Percept Perform 40:50–60. https://doi.org/10.1037/a0033228
  38. Niklaus M, Nobre AC, van Ede F (2017) Feature-based attentional weighting and spreading in visual working memory. Sci Rep 7:42384. https://doi.org/10.1038/srep42384
  39. Noah S, Powell T, Khodayari N, Olivan D, Ding M, Mangun GR (2020) Neural mechanisms of attentional control for objects: decoding EEG alpha when anticipating faces, scenes, and tools. J Neurosci 40:4913–4924. https://doi.org/10.1523/JNEUROSCI.2685-19.2020
  40. Nobre AC, Sebestyen GN, Gitelman DR, Mesulam MM, Frackowiak RS, Frith CD (1997) Functional localization of the system for visuospatial attention using positron emission tomography. Brain 120:515–533. https://doi.org/10.1093/brain/120.3.515
  41. Nobre AC, Rao A, Chelazzi L (2006) Selective attention to specific features within objects: behavioral and electrophysiological evidence. J Cogn Neurosci 18:539–561. https://doi.org/10.1162/jocn.2006.18.4.539
  42. Popov T, Gips B, Kastner S, Jensen O (2019) Spatial specificity of alpha oscillations in the human visual system. Hum Brain Mapp 40:4432–4440. https://doi.org/10.1002/hbm.24712
  43. Posner MI, Snyder CR, Davidson BJ (1980) Attention and the detection of signals. J Exp Psychol 109:160–174. https://doi.org/10.1037/0096-3445.109.2.160
  44. Rajan A, Meyyappan S, Liu Y, Samuel IBH, Nandi B, Mangun GR, Ding M (2021) The microstructure of attentional control in the dorsal attention network. J Cogn Neurosci 33:965–983. https://doi.org/10.1162/jocn_a_01710
  45. Rihs TA, Michel CM, Thut G (2007) Mechanisms of selective inhibition in visual spatial attention are indexed by alpha-band EEG synchronization. Eur J Neurosci 25:603–610. https://doi.org/10.1111/j.1460-9568.2007.05278.x
  46. Saenz M, Buracas GT, Boynton GM (2002) Global effects of feature-based attention in human visual cortex. Nat Neurosci 5:631–632. https://doi.org/10.1038/nn876
  47. Samaha J, Sprague TC, Postle BR (2016) Decoding and reconstructing the focus of spatial attention from the topography of alpha-band oscillations. J Cogn Neurosci 28:1090–1097. https://doi.org/10.1162/jocn_a_00955
  48. Sauseng P, Klimesch W, Stadler W, Schabus M, Doppelmayr M, Hanslmayr S, Gruber WR, Birbaumer N (2005) A shift of visual spatial attention is selectively associated with human EEG alpha activity. Eur J Neurosci 22:2917–2926. https://doi.org/10.1111/j.1460-9568.2005.04482.x
  49. Serences JT, Boynton GM (2007) Feature-based attentional modulations in the absence of direct visual stimulation. Neuron 55:301–312. https://doi.org/10.1016/j.neuron.2007.06.015
  50. Snyder AC, Foxe JJ (2010) Anticipatory attentional suppression of visual features indexed by oscillatory alpha-band power increases: a high-density electrical mapping study. J Neurosci 30:4024–4032. https://doi.org/10.1523/JNEUROSCI.5684-09.2010
  51. Szczepanski SM, Pinsk MA, Douglas MM, Kastner S, Saalmann YB (2013) Functional and structural architecture of the human dorsal frontoparietal attention network. Proc Natl Acad Sci U S A 110:15806–15811. https://doi.org/10.1073/pnas.1313903110
  52. Tamber-Rosenau BJ, Asplund CL, Marois R (2018) Functional dissociation of the inferior frontal junction from the dorsal attention network in top-down attentional control. J Neurophysiol 120:2498–2512. https://doi.org/10.1152/jn.00506.2018
  53. Tenke CE, Kayser J (2012) Generator localization by current source density (CSD): implications of volume conduction and field closure at intracranial and scalp resolutions. Clin Neurophysiol 123:2328–2345. https://doi.org/10.1016/j.clinph.2012.06.005
  54. Thielen J, Bosch SE, van Leeuwen TM, van Gerven MAJ, van Lier R (2019) Evidence for confounding eye movements under attempted fixation and active viewing in cognitive neuroscience. Sci Rep 9:17456. https://doi.org/10.1038/s41598-019-54018-z
  55. Thut G, Nietzel A, Brandt SA, Pascual-Leone A (2006) Alpha-band electroencephalographic activity over occipital cortex indexes visuospatial attention bias and predicts visual target detection. J Neurosci 26:9494–9502. https://doi.org/10.1523/JNEUROSCI.0875-06.2006
  56. Töllner T, Mink M, Müller HJ (2015) Searching for targets in visual working memory: investigating a dimensional feature bundle (DFB) model. Ann N Y Acad Sci 1339:32–44. https://doi.org/10.1111/nyas.12703
  57. Treue S, Martínez Trujillo JC (1999) Feature-based attention influences motion processing gain in macaque visual cortex. Nature 399:575–579. https://doi.org/10.1038/21176
  58. Wetzels R, Matzke D, Lee MD, Rouder JN, Iverson GJ, Wagenmakers EJ (2011) Statistical evidence in experimental psychology: an empirical comparison using 855 t tests. Perspect Psychol Sci 6:291–298. https://doi.org/10.1177/1745691611406923
  59. Woldorff MG, Hazlett CJ, Fichtenholtz HM, Weissman DH, Dale AM, Song AW (2004) Functional parcellation of attentional control regions of the brain. J Cogn Neurosci 16:149–165. https://doi.org/10.1162/089892904322755638
  60. Worden MS, Foxe JJ, Wang N, Simpson GV (2000) Anticipatory biasing of visuospatial attention indexed by retinotopically specific alpha-band electroencephalography increases over occipital cortex. J Neurosci 20:RC63. https://doi.org/10.1523/JNEUROSCI.20-06-j0002.2000
Keywords

  • attention control
  • EEG
  • feature-based attention
  • MVPA
  • vision
