The visual system has a limited capacity, so visual inputs must compete for representation in visual cortex. Attentional mechanisms resolve this competition by selecting a subset of behaviorally relevant information for processing on the basis of location or nonspatial features, such as color (for review, see Carrasco, 2011). One mechanism for attentional selection is modulation of neural gain such that the neuronal response to attended stimuli is amplified relative to unattended inputs (Hillyard et al., 1998). Numerous studies suggest that feature-based attention operates globally, such that neural gain for an attended feature (e.g., the color red or upward motion) is amplified across the entire visual field. For example, Treue and Martínez Trujillo (1999) demonstrated that attending to a feature (direction of motion) in one visual hemifield modulated the response of visual neurons with receptive fields in the opposite hemifield. Additional evidence for global feature-based attention has subsequently come from studies using a range of methodologies, including single-unit recordings in monkeys (for review, see Maunsell and Treue, 2006), psychophysics (Liu and Mance, 2011), functional magnetic resonance imaging (Liu et al., 2007; Serences and Boynton, 2007), and electroencephalography (EEG; Andersen et al., 2013). Although there is strong evidence that global feature-based attention occurs in most experimental settings, it is less clear whether spatial attention can narrow the scope of feature-based attention. If relevant stimuli only appear in one portion of the visual field (e.g., near fixation), it may be advantageous to restrict feature-based attention to the relevant location.
A recent study in The Journal of Neuroscience by Painter et al. (2014) examined whether task demands influence the scope of feature-based attention. Most studies of feature-based attention have used tasks in which participants monitor a single stimulus for a subtle change in a target feature (e.g., a decrement in luminance). Instead of such a simple attentional monitoring task, Painter and colleagues (2014) used a visual search task to examine the effects of feature-based attention. In this task, participants searched sequential arrays of items presented at fixation for targets with a defined feature (a specific color), while task-irrelevant stimuli flickered in the periphery. Given the privileged status of central vision in directing attention within displays (Hollingworth et al., 2001), a fixated central display with task-irrelevant peripheral stimuli may be a prime situation in which feature-based attention can be restricted to a relevant location.
In Painter and colleagues' (2014) Experiment 1, participants performed two different search tasks. During unique-feature search, each array included one item in the target color (e.g., an upright red “T”) among uniformly colored (e.g., green) distractors. During conjunction search, half of the items were in the target color and half were in a distractor color so that participants had to inspect all items in the target color to identify targets. Steady-state visual-evoked potentials (SSVEPs), measured using EEG, were used to assess whether neural responses to the attended color were enhanced globally during visual search. SSVEPs are measured as an enhanced oscillatory response at the frequency of a repetitive, flickering visual stimulus. The SSVEP response is further enhanced when the flickering stimulus is attended. This attentional enhancement is thought to reflect a relative increase in gain for the neurons representing the attended stimulus (Morgan et al., 1996). In Painter and colleagues' (2014) experiments, the central search array was surrounded by checkerboards containing both the target and distractor colors. The target and distractor colors flickered at different frequencies, allowing the authors to examine the SSVEP response to each color separately. If feature-based attention affects processing outside the search array (i.e., is spatially global), the SSVEP response to the target color in the checkerboard surround should be greater than that to the distractor color. Such evidence for global feature selection was found during conjunction search, but not during unique-feature search.
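To make the frequency-tagging logic concrete, the following is a minimal, illustrative Python sketch (using numpy) of how an SSVEP amplitude can be read out at each flicker frequency from an occipital EEG signal. The sampling rate, flicker frequencies, and simulated data are hypothetical placeholders, not the parameters or analysis pipeline used by Painter and colleagues (2014); the sketch only shows why tagging the two colors at different frequencies allows their responses to be separated.

```python
import numpy as np

def ssvep_amplitude(eeg, srate, freq):
    """Amplitude of the EEG spectrum at a single tagging frequency.

    eeg   : 1-D array, a single (or averaged) occipital channel
    srate : sampling rate in Hz
    freq  : flicker frequency of one checkerboard color, in Hz
    """
    n = len(eeg)
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(n))) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / srate)
    return spectrum[np.argmin(np.abs(freqs - freq))]

# Illustrative values only: these frequencies and this sampling rate are
# placeholders, not those used in the actual experiments.
srate = 500                                 # Hz
t = np.arange(0, 10, 1.0 / srate)           # 10 s of data
target_freq, distractor_freq = 7.5, 12.0    # hypothetical flicker rates

# Simulate a recording in which the response to the target color is enhanced.
eeg = (1.5 * np.sin(2 * np.pi * target_freq * t)        # tagged response, target color
       + 1.0 * np.sin(2 * np.pi * distractor_freq * t)  # tagged response, distractor color
       + np.random.randn(t.size))                       # broadband noise

amp_target = ssvep_amplitude(eeg, srate, target_freq)
amp_distractor = ssvep_amplitude(eeg, srate, distractor_freq)
print(f"SSVEP amplitude, target color:     {amp_target:.3f}")
print(f"SSVEP amplitude, distractor color: {amp_distractor:.3f}")
```

In this framing, evidence for global feature-based attention amounts to a larger amplitude at the target color's tagging frequency than at the distractor color's frequency for the task-irrelevant peripheral checkerboard.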
In Experiment 2, Painter and colleagues (2014) examined whether the global effect of feature-based attention seen during conjunction search was the result of a gain increase for the target color (target enhancement), a gain decrease for the distractor color (distractor suppression), or a combination of the two. Participants performed the same conjunction search as in Experiment 1, with the exception that a third color was introduced to the peripheral checkerboard. The three colors in the checkerboard flickered at different frequencies, allowing the authors to examine the SSVEP response to each color separately. Importantly, only two colors were included in the search array (one as the target color and another as the distractor color). The third color therefore provided a neutral baseline. Relative to this baseline, Painter and colleagues (2014) observed a potentiated SSVEP response to the target color, but no attenuation of the SSVEP response to the distractor color, suggesting that feature-based selection was mediated exclusively by target enhancement during conjunction search.
Painter and colleagues' (2014) experiments are among the first to test whether feature-based attention is global during visual search rather than during more typical attentional monitoring tasks (see also Bichot et al., 2005). Prominent models of visual search, such as guided search theory (Wolfe, 1994), have long proposed that global feature templates direct attention to potential target features during visual search. During conjunction search in both experiments, Painter and colleagues (2014) found evidence for global enhancement of the SSVEP response to the attended feature. These results suggest that searching for a target color (in a conjunction search task) enhances the response to that color across the entire visual field, even at completely irrelevant locations. This finding provides direct evidence for the role of global feature-based attention during visual search.
Although the conjunction search condition provided strong evidence for global feature-based attention, Painter and colleagues (2014) found no modulation of SSVEP responses to the peripheral checkerboard during unique-feature search. This finding could suggest that feature-based attention can be restricted to the central search array during unique-feature search. However, a second possibility is that participants simply did not use a feature-search strategy (i.e., they did not search for the red item when the target color was red). Instead, participants may have searched for the item that was the “odd color out” in the unique-feature search condition. This strategy, known as singleton search, may be used during visual search when the target is defined by a single unique feature (e.g., the odd color out or the odd shape out; Bacon and Egeth, 1994). If participants used a singleton-search strategy, one would expect that the feature-specific response to the target color would not be enhanced.
Painter and colleagues (2014) acknowledged that singleton search is the most parsimonious explanation for the lack of global feature enhancement during unique-feature search. Nevertheless, they ultimately concluded that their results suggest an interaction between spatial attention and feature-based attention when search demands vary. In other words, feature-based attention is global in some circumstances but is spatially restricted in others. However, this conclusion is undermined by the possibility that participants were not using feature-based attention at all during unique-feature search. Given that participants may have adopted a singleton search strategy in the unique-feature search condition, future research should test whether feature-based attention is global during single-feature search when the possibility of singleton search is precluded. Straightforward modifications of Painter and colleagues' (2014) paradigm should prove useful in this respect. For example, presenting a color-defined target item among heterogeneous-colored distractor items would prevent participants from using a singleton-search strategy and promote search for the target color itself.
Painter and colleagues' (2014) conclusion that feature-based attention can be restricted to a relevant location under certain circumstances is also difficult to reconcile with another recent SSVEP study. Andersen and colleagues (2013) found that participants could not restrict feature-based attention to a visual hemifield when instructed to do so, even though failure to restrict feature-based attention was detrimental to task performance. In a critical condition, subjects were instructed to attend a color in one hemifield and to ignore the same color in the opposite hemifield. The SSVEP response to the attended color was not greater in the relevant hemifield than in the ignored hemifield, indicating that subjects were unable to restrict feature-based attention to the relevant hemifield. In light of these results, the most likely explanation of the lack of global feature enhancement during Painter and colleagues' (2014) unique-feature search is the use of a singleton-search strategy. A singleton search interpretation is consistent both with the visual search literature and with the existing studies on the obligatory global nature of feature-based attention.
Despite ambiguity in the interpretation of the unique-feature search results, the global effect of feature-based attention during conjunction search reported by Painter and colleagues (2014) sheds light on how feature-based attention guides visual search. By recording SSVEPs elicited by checkerboards outside the search array during a visual search task, Painter and colleagues (2014) provide compelling new evidence for models of visual search, which assert that global feature templates direct attention to potential targets. The novel application of SSVEPs to a visual search task highlights the contribution of the basic mechanism of feature-based attention to visual search.
Footnotes
Editor's Note: These short, critical reviews of recent papers in the Journal, written exclusively by graduate students or postdoctoral fellows, are intended to summarize the important findings of the paper and provide additional insight and commentary. For more information on the format and purpose of the Journal Club, please see http://www.jneurosci.org/misc/ifa_features.shtml.
J.J.F. and K.C.S.A. are supported by National Institute of Mental Health Grant R01-MH087214 awarded to Edward Awh and Edward K. Vogel. J.J.F. is also supported by a Fulbright Science and Innovation Graduate Award awarded by Fulbright New Zealand.
The authors declare no competing financial interests.
Correspondence should be addressed to either Joshua J. Foster or Kirsten C. S. Adam, Department of Psychology, 1227 University of Oregon, Eugene, Oregon 97403-1227. jfoster2@uoregon.edu or kadam@uoregon.edu