Research Articles, Behavioral/Cognitive

Perceptual Difficulty Regulates Attentional Gain Modulations in Human Visual Cortex

Prapasiri Sawetsuttipan, Phond Phunchongharn, Kajornvut Ounjai, Annalisa Salazar, Sarigga Pongsuwan, Singh Intrachooto, John T. Serences and Sirawaj Itthipuripat
Journal of Neuroscience 3 May 2023, 43 (18) 3312-3330; DOI: https://doi.org/10.1523/JNEUROSCI.0519-22.2023
Author Affiliations

Prapasiri Sawetsuttipan,1,2,3 Phond Phunchongharn,2,3 Kajornvut Ounjai,1,4 Annalisa Salazar,5 Sarigga Pongsuwan,6 Singh Intrachooto,6 John T. Serences,5,7 and Sirawaj Itthipuripat1,3

1Neuroscience Center for Research and Innovation, Learning Institute, King Mongkut's University of Technology Thonburi, Bangkok 10140, Thailand
2Computer Engineering Department, Faculty of Engineering, King Mongkut's University of Technology Thonburi, Bangkok 10140, Thailand
3Big Data Experience Center, King Mongkut's University of Technology Thonburi, Bangkok 10140, Thailand
4Biological Engineering Program, Faculty of Engineering, King Mongkut's University of Technology Thonburi, Bangkok 10140, Thailand
5Department of Psychology, University of California, San Diego, La Jolla, California 92093-1090
6Happiness Science Hub, Research & Innovation for Sustainability Center (RISC), Bangkok 10260, Thailand
7Neurosciences Graduate Program and Kavli Foundation for the Brain and Mind, University of California, San Diego, La Jolla, California 92093-1090

Abstract

Perceptual difficulty is sometimes used to manipulate selective attention. However, these two factors are logically distinct. Selective attention is defined by the priority given to specific stimuli based on their behavioral relevance, whereas perceptual difficulty is often determined by the perceptual demands required to discriminate relevant stimuli. That said, both perceptual difficulty and selective attention are thought to modulate the gain of neural responses in early sensory areas. Previous studies found that selectively attending to a stimulus or increasing perceptual difficulty enhanced the gain of neurons in visual cortex. However, other studies suggest that perceptual difficulty can have either a null or even a reversed effect on gain modulations in visual cortex. According to the Yerkes–Dodson law, it is possible that this discrepancy arises from an interaction between perceptual difficulty and attentional gain modulations that follows a nonlinear inverted-U function. Here, we used EEG to measure neural gain modulations in the visual cortex of male and female human participants performing an attention-cueing task in which we systematically manipulated perceptual difficulty across blocks of trials. The behavioral and neural data implicate a nonlinear inverted-U relationship between selective attention and perceptual difficulty: a focused-attention cue led to larger response gain in both neural and behavioral data at intermediate difficulty levels compared with when the task was more or less difficult. Moreover, difficulty-related changes in attentional gain positively correlated with those predicted by quantitative modeling of the behavioral data. These findings suggest that perceptual difficulty mediates attention-related changes in perceptual performance via selective neural modulations in human visual cortex.

SIGNIFICANCE STATEMENT Both perceptual difficulty and selective attention are thought to influence perceptual performance by modulating response gain in early sensory areas. That said, less is known about how selective attention interacts with perceptual difficulty. Here, we measured neural gain modulations in the visual cortex of human participants performing an attention-cueing task in which perceptual difficulty was systematically manipulated. Consistent with the Yerkes–Dodson law, our behavioral and neural data implicate a nonlinear inverted-U relationship between selective attention and perceptual difficulty. These results suggest that perceptual difficulty mediates attention-related changes in perceptual performance via selective neural modulations in visual cortex, extending our understanding of how attention operates under different levels of perceptual demand.

  • attention
  • EEG
  • gain
  • perceptual difficulty
  • signal detection theory
  • SSVEP

Introduction

In a wide range of behavioral tasks, performance is modulated by how much priority is placed on processing information about a relevant stimulus and by how hard it is to detect and discriminate relevant stimulus features (Urbach and Spitzer, 1995; Ahissar and Hochstein, 1997; McDowd and Craik, 1988; Lavie et al., 2004; Prinzmetal et al., 2009; Carrasco, 2011; Giesbrecht et al., 2014). These two factors are known as selective attention and perceptual difficulty, respectively. While the effects of attention on neural gain in early visual cortex (Moran and Desimone, 1985; Mangun and Hillyard, 1987; Desimone and Duncan, 1995; Johannes et al., 1995; Connor et al., 1997; Hillyard et al., 1998; Mangun and Buck, 1998; McAdams and Maunsell, 1999; Reynolds et al., 2000; Martínez-Trujillo and Treue, 2002; Reynolds and Chelazzi, 2004; Maunsell and Treue, 2006; Lee and Maunsell, 2009; Reynolds and Heeger, 2009; Lee and Maunsell, 2010; Anton-Erxleben and Carrasco, 2013; Buschman and Kastner, 2015; Luo and Maunsell, 2015; Foster et al., 2021), and the concurrent attention-related benefits in perceptual performance, have been extensively studied (Mangun and Hillyard, 1988, 1990; Störmer et al., 2009; Pestilli et al., 2011; Itthipuripat et al., 2014a, b, 2017; Itthipuripat and Serences, 2016), it is still unclear how perceptual difficulty mediates neural responses in visual cortex and how it interacts with selective attention.

First, prior studies that manipulated perceptual difficulty without systematically manipulating attention have found mixed effects on neural activity in visual cortex. Some early studies in nonhuman primates found that increasing perceptual difficulty increased neural firing rates in early visual areas and sharpened neural tuning functions in mid-level visual areas (Spitzer et al., 1988; Spitzer and Richmond, 1991). In contrast, a more recent study reported that increasing perceptual difficulty led to increased firing rates in some neurons and decreased firing rates in others, resulting in null effects of perceptual difficulty on population-level activity in early visual cortex (Ruff and Cohen, 2014a). We hypothesized that these divergent effects of perceptual difficulty on neural activity arise partly from a nonlinear interaction between perceptual difficulty and the effects of selective attention on gain modulations in visual cortex.

This hypothesis is hard to evaluate based on prior work because of differences in task design and because most studies did not independently manipulate perceptual difficulty and attention. In the few studies that did jointly manipulate these factors, perceptual difficulty was varied over a narrow range, limiting the ability to observe nonmonotonic changes in attentional gain with increasing perceptual difficulty (Chen et al., 2008; Dodson, 1915; also see Motter, 1993; Handy and Mangun, 2000; Handy et al., 2001; Boudreau et al., 2006; Prinzmetal et al., 2009). Moreover, data linking difficulty-related changes in attentional modulations of neural activity and behavioral performance are still lacking.

According to the influential Yerkes–Dodson law, arousal state should mediate behavioral performance and the underlying neural processes in line with a nonlinear inverted-U function (Yerkes and Dodson, 1908; Dodson, 1915; see also Hull, 1943; Broadhurst, 1957; Duffy, 1957; Anderson et al., 1989; Diamond et al., 2007). Therefore, increasing perceptual difficulty from an easy to an intermediate level could enhance overall arousal, leading to larger attentional gain modulations. On the other hand, when a behavioral task is too difficult, arousal might be too high, leading to no further increase or even a drop in attentional gain. Under this framework, it is possible that some behavioral tasks were already challenging for the subjects to perform, resulting in no gain modulation with increasing difficulty (e.g., from ∼80% to ∼63% accuracy in Ruff and Cohen, 2014a). However, if baseline perceptual difficulty was low, then increasing difficulty might lead to large gain modulations (e.g., from ∼93% to ∼73% accuracy in Spitzer et al., 1988). In either case, the unique contribution of each of these two factors can only be assessed by parametrically varying the perceptual demand of behavioral tasks across the entire range of difficulty levels while independently manipulating the focus of attention.

Here, we designed a paradigm that manipulates perceptual difficulty to assess possible nonmonotonic effects on attentional gain in human visual cortex. We concurrently measured behavioral performance and the amplitudes of steady-state visually evoked potentials (SSVEPs) while human participants performed a contrast discrimination task (Fig. 1). Moreover, we employed a linking model based on signal detection theory (SDT) to describe the interaction between attention and perceptual difficulty in terms of brain-behavior modulations (Tanner and Swets, 1954; Pestilli et al., 2011; Itthipuripat et al., 2014a, 2017; Itthipuripat and Serences, 2016).

Materials and Methods

Subjects

We recruited seven neurologically healthy male and female human subjects (19–23 years old) from the community at University of California San Diego (UCSD). All subjects had normal or corrected-to-normal vision and participated in 6 d of experiments, including 1 d of behavioral training and 5 d of EEG recording. After the first day of training, one subject decided not to continue, resulting in complete datasets from six subjects in the final analysis (three female, three male, all right-handed, mean age ± SD = 20.5 ± 1.217 years). Before their participation, each subject signed an informed consent form approved by the Institutional Review Board at UCSD. Subjects were compensated $10 per hour for the behavioral experiment and $15 per hour for the EEG experiments. The use of six subjects is within the typical range for studies using similar multisession approaches to measure behavioral and neural contrast response functions (CRFs) in human subjects (Di Russo et al., 2001; Morrone et al., 2002, 2004; Carrasco et al., 2004; Pestilli and Carrasco, 2005; Ling and Carrasco, 2006; Pestilli et al., 2007, 2009, 2011; Herrmann et al., 2010; Itthipuripat et al., 2014a, b, 2017).

Experimental design and statistical analysis

Stimuli and tasks

Stimuli and behavioral tasks were controlled by a PC running Windows XP using MATLAB (The MathWorks) and the Psychophysics Toolbox (version 3.0.8; Brainard, 1997; Pelli, 1997). Subjects sat 60 cm from a CRT monitor with a gray background of 34.51 cd/m2 and a refresh rate of 120 Hz. The behavioral training sessions were conducted in a dark, quiet experimental room, and the EEG recording sessions in a dark, sound-attenuated, and electromagnetically shielded room (ETS Lindgren).

In the main EEG experiment, subjects fixated at the center of the screen while performing different variants of the attention-cued contrast discrimination task with differing levels of task difficulty (easy, medium, and hard; Fig. 1). In our tasks, each trial began with either a focused-attention or a divided-attention cue, followed by an array of two checkerboard wedge stimuli (i.e., a target and a nontarget stimulus) appearing 400–600 ms after the cue onset for a duration of 1500 ms. The focused-attention cue pointed directly to the target stimulus appearing on either the left or the right lower visual quadrant. The divided attention cue pointed to both quadrants and the target was equally likely to appear in one of these locations.

The target stimulus contained a contrast increment in either the parafoveal or peripheral segments of the stimulus. Subjects reported which of these segments had the contrast increment as accurately and as quickly as possible by pressing one of the two corresponding buttons on the keyboard with their right index and middle fingers, respectively. Note that the contrast increment was present for the entire 1500 ms stimulus duration. The nontarget stimulus had the same contrast value throughout the entire stimulus region. The spatial frequency of the wedge stimuli was adjusted along the foveal-to-peripheral axis to emulate the cortical magnification factor measured in human V1 (Cowey and Rolls, 1974; Sereno et al., 1995; Engel et al., 1997; Duncan and Boynton, 2003). The inner and outer edges of the wedge stimuli were placed 7.03° and 13.46° of visual angle from fixation, respectively. The two lateral edges of these wedge stimuli were 7.944° apart.

To measure behavioral and neural data as a function of stimulus contrast, we pseudo-randomized the pedestal contrasts (or baseline contrasts) of the target and nontarget stimuli independently from six contrast levels: 0%, 3.75%, 7.5%, 15%, 30%, and 60% Michelson contrasts. To simultaneously monitor SSVEPs evoked by individual stimuli, the visual stimuli on the lower left and lower right quadrants were flickered on-off at 24 and 20 Hz, respectively. Then, placeholders came up 300 ms after stimulus offset on both sides of the contrast increment segment of the target stimulus. These placeholders would change from black to blue, red, or yellow to provide feedback about whether the subject responded correctly, incorrectly, or too slowly (i.e., slower than 1500 ms after stimulus onset), respectively. The intertrial interval was pseudo-randomly drawn from a uniform distribution spanning 300–500 ms.

In order to study the effect of perceptual difficulty on the attentional modulations of behavioral and EEG data, we adjusted the contrast thresholds for each of the attention conditions and pedestal contrast levels across blocks of trials so that accuracy was maintained at ∼91%, ∼76%, and ∼65% for the easy, medium, and hard blocks, respectively (see behavioral results in Fig. 2A). On the first day, subjects underwent a 2.5 h behavioral training session, where we implemented the method of constant stimuli to estimate the initial contrast discrimination thresholds used for the easy, medium, and hard blocks on the first day of EEG recording. On each of the 5 d of the EEG experiment, subjects underwent six easy blocks, six medium blocks, and six hard blocks. Block types changed every three blocks and the order of block types was pseudo-randomized across subjects. Collectively, the entire EEG experiment contained 4320 trials in which all experimental conditions were counterbalanced: two attention conditions × two target locations × six pedestal contrast levels of the target × two contrast increment locations × three difficulty levels × 30 repeats. Trial order was also pseudo-randomized so that trial types were not predictable.

Behavioral analyses

The contrast increment values averaged across all EEG sessions were plotted as a function of pedestal contrast, yielding threshold versus contrast (TvC) functions for all difficulty levels (easy, medium, and hard) and attention conditions (focused and divided attention). The within-subject SEMs associated with these contrast increment values were computed and plotted for each level of stimulus contrast (Loftus and Masson, 1994). There were 120 trials in total for each of these 36 experimental conditions. Three-way repeated-measures ANOVAs were used to examine the effects of perceptual difficulty (easy/medium/hard), attention (focused-attention/divided-attention), and contrast (0–60%) on hit rates, mean reaction times on correct trials (correct RTs), and contrast discrimination thresholds (Fig. 2). For each difficulty level, we also performed two-way repeated-measures ANOVAs to test the main effects of attention and interactions between attention and contrast on the perceptual threshold data. For each ANOVA that yielded a significant main effect of attention or a significant interaction between attention and contrast, we performed subsequent pairwise t tests to examine the effects of attention on the contrast thresholds at individual contrast levels (one-tailed because of the predicted direction of attentional modulations, i.e., attention decreasing contrast thresholds). Multiple comparisons across the six individual contrast levels were corrected using the Holm–Bonferroni method (Holm, 1979).
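
For concreteness, the following is a minimal MATLAB sketch of the Holm–Bonferroni step-down procedure applied to a set of p values; the p values below are illustrative placeholders rather than values from the present dataset.

  % Holm-Bonferroni step-down correction (Holm, 1979): a minimal sketch
  p = [0.020 0.001 0.006 0.011 0.011 0.027];   % illustrative uncorrected p values
  alphaLevel = 0.05;
  m = numel(p);
  [pSorted, order] = sort(p);                  % test the smallest p first
  reject = false(1, m);
  for k = 1:m
      if pSorted(k) <= alphaLevel / (m - k + 1)  % step-down threshold alpha/(m-k+1)
          reject(order(k)) = true;
      else
          break;                               % stop at the first nonsignificant test
      end
  end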

In addition, we used a combination of d′ and Naka–Rushton equations (see Fig. 3 and Eqs. 1–4 below) to model the TvC data of individual subjects (Fig. 4). According to signal detection theory (SDT), perceptual contrast sensitivity (d′) is limited by the difference in neural responses evoked by the pedestal and increment stimuli (ΔR) divided by the magnitude of sensory noise (in this case, the intertrial variability of the neuronal activity, denoted σ; Tanner and Swets, 1954; Legge and Foley, 1980; Boynton et al., 1999; Gorea and Sagi, 2001; Huang and Dobkins, 2005; Pestilli et al., 2011; Hara and Gardner, 2014; Itthipuripat et al., 2014a, 2017; Itthipuripat and Serences, 2016) as follows:

$$d' = \frac{\Delta R(c)}{\sigma} = \frac{R\big(c + \Delta c(c)\big) - R(c)}{\sigma}. \tag{1}$$

Here, c is stimulus contrast and Δc(c) is the contrast discrimination threshold (or contrast increment) at each contrast level required to maintain accuracy levels of ∼91%, ∼76%, and ∼65% for the easy, medium, and hard blocks, respectively (Fig. 3, left). R(c) is a hypothetical contrast response function (CRF; Fig. 3, right) that could be described by a Naka–Rushton equation as follows:

$$R(c) = \frac{G_r\,c^q}{c^q + G_c^q} + b. \tag{2}$$

Here, Gr and Gc are response gain and contrast gain factors, which control the vertical shift and the horizontal position of the CRF, respectively. b is the baseline offset and q is the exponent that controls the rate at which the CRF rises and reaches asymptote. Using the combination of the d′ and Naka–Rushton equations (Eqs. 1, 2), we estimated the contrast threshold (Δc) based on the first derivative (i.e., slope) of the hypothetical CRF (Boynton et al., 1999) as follows:

$$\Delta c(c) = \frac{\Delta R(c)}{dR/dc}. \tag{3}$$

In this equation, dR/dc is the derivative of the hypothetical CRF. Because Gr and σ jointly control the vertical shift of the TvC function, σ was fixed at 1. Because changes in the additive factor b do not impact the slope of the TvC function, b was fixed at 0. In addition, the d′ values of the two-alternative forced-choice (2AFC) tasks for all difficulty levels of individual subjects were computed using the following equation:

$$d' = \frac{Z(\text{hit}) - Z(\text{fa})}{\sqrt{2}}. \tag{4}$$

Here, Z is the inverse cumulative distribution function of the standard normal distribution. For each subject, we fit the TvC data separately for individual difficulty levels and attention conditions with Equations 1–4 using MATLAB's fminsearch function (Nelder–Mead method; nonlinear least squares) with Gr, Gc, and q as free parameters (the initial seed values for Gr, Gc, and q were 1, 20%, and 1, respectively). Since the unit of Gr for the hypothetical CRF is arbitrary, we rescaled the data for individual subjects across all difficulty levels and attention conditions using the following formula:

$$\text{normalized } G_r = \frac{G_r - \min(G_r)}{\max(G_r) - \min(G_r)}, \tag{5}$$

where min(Gr) is the lowest Gr value and max(Gr) is the highest Gr value across all difficulty levels and attention conditions for each subject. Lastly, we used resampling statistics to determine the significance of the attention effects on the Gr parameter as well as the interaction between perceptual difficulty and attention on this parameter (Fig. 5). First, we resampled subject labels with replacement 10,000 times, and then computed the probability that the resampled Gr in the focused-attention condition was more or less than that in the divided-attention condition for each difficulty level (two-tailed). Here, multiple comparisons were corrected using the Holm–Bonferroni method (Holm, 1979). Next, we tested whether the degree of attentional modulation of Gr significantly differed across difficulty levels. To do so, we computed the probability that the attentional modulation of the resampled Gr in the medium blocks was lower than those in the easy and the hard blocks (one-tailed). We also computed the probability that the attentional modulation of the resampled Gr in the hard blocks was lower than that in the easy blocks (one-tailed). We used one-tailed statistics here because of the predicted direction of the interactions between attention and perceptual difficulty in the observed dataset. Again, multiple comparisons were corrected using the Holm–Bonferroni method (Holm, 1979). We then ran the same statistical analyses on Gc.
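
To make the fitting procedure concrete, here is a minimal MATLAB sketch of the linking-model fit for a single TvC function (Eqs. 1–4), with σ fixed at 1 and b at 0 as described above. The threshold data and the accuracy value are hypothetical placeholders, and norminv requires the Statistics and Machine Learning Toolbox.

  % A minimal sketch of fitting one TvC function with the SDT linking model (Eqs. 1-4)
  c     = [3.75 7.5 15 30 60];                 % nonzero pedestal contrasts (%)
  dcObs = [4 5 8 14 25];                       % illustrative threshold data (%)
  % Eq. 4 at ~76% accuracy, assuming hit ~= accuracy and fa ~= 1 - accuracy (2AFC)
  dp    = (norminv(0.76) - norminv(0.24)) / sqrt(2);
  % Analytic slope of the Naka-Rushton CRF (Eq. 2 with b = 0); p = [Gr Gc q]
  dRdc  = @(p, c) p(1) .* p(3) .* c.^(p(3)-1) .* p(2).^p(3) ./ (c.^p(3) + p(2).^p(3)).^2;
  dcHat = @(p, c) dp ./ dRdc(p, c);            % predicted thresholds (Eq. 3, sigma = 1)
  % Least-squares fit with fminsearch (Nelder-Mead); abs() keeps parameters positive
  err   = @(p) sum((dcHat(abs(p), c) - dcObs).^2);
  pBest = abs(fminsearch(err, [1 20 1]));      % seeds: Gr = 1, Gc = 20%, q = 1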

EEG data acquisition, preprocessing, and analyses

EEG data were recorded using a 64-channel ActiveTwo system (Biosemi Instrumentation) at a sampling rate of 512 Hz. There were eight additional external electrodes: a pair of reference electrodes placed on the left and right mastoids, a pair of electrodes affixed near the outer canthi of the left and right eyes for monitoring horizontal eye movements, and two pairs of electrodes affixed above and below the left and right eyes for monitoring blinks and vertical eye movements. The EEG data were online-referenced to the CMS-DRL electrode, and the data offset in all electrodes was maintained below 20 mV, which is a standard criterion for this active electrode system.

We used custom MATLAB scripts and EEGLab 11.0.3.1b (Delorme and Makeig, 2004) to preprocess and analyze the EEG data. First, the EEG data from all electrodes were re-referenced to the algebraic mean of the data recorded from the left and right mastoid electrodes. Then, we applied third-order Butterworth filters (0.25 Hz high-pass and 55 Hz low-pass). The continuous EEG data were segmented into epochs extending from 500 ms before to 3000 ms after cue onset. Independent component analysis was performed to remove prominent eye-blink artifacts (Makeig et al., 1996). Next, we used visual inspection and threshold rejection to further discard trials confounded by residual eye movements, muscle activity, drifts, and other artifacts. This step led to the removal of 7.38 ± 4.43% (mean ± SD) of trials across the six subjects.

Next, the epoched data were time-locked to stimulus onset and baseline activity from −100 to 0 ms relative to stimulus onset was subtracted. The data were then sorted into the following experimental bins: focused-attention target stimuli, focused-attention nontarget stimuli (i.e., ignored stimuli), and divided-attention stimuli, separately for the easy, medium, and hard blocks. For each of these bins, trials were further sorted into 12 subbins depending on the contrast and the location of the stimulus of interest (six contrast levels × two target locations). Following this sorting step, the EEG data in all subbins were averaged to obtain event-related potentials (ERPs) for each condition. Next, the ERPs were filtered with a Gaussian wavelet function with a 0.1 fractional bandwidth and center frequencies of 20 and 24 Hz, yielding frequency-domain coefficients at the stimulus driving frequencies. Given this small fractional bandwidth, the frequency-domain standard deviation of the SSVEP signals was 0.8493 and 1.0192 Hz for the 20- and 24-Hz flickers, respectively. We then obtained the amplitudes of the SSVEPs evoked by the individual stimulus-flicker frequencies by computing the absolute value of the coefficients. For each subject, the SSVEP amplitudes in individual subbins were averaged across the five contralateral posterior occipital electrodes where the SSVEP amplitudes, collapsed across contrast levels, stimulus types, difficulty levels, and attention conditions, were maximal. The data were then averaged across a 0–500 ms poststimulus window to avoid potential confounds from including data after a response had been made, which occurred around 550–850 ms poststimulus across all conditions (Fig. 2B). Here, the within-subject SEMs associated with the averaged SSVEP were computed for each level of stimulus contrast (Loftus and Masson, 1994). The mean SSVEP amplitudes and their SEMs were then plotted as a function of stimulus contrast, yielding SSVEP-based CRFs for each of the difficulty levels and attention conditions. On the contrast axis of the CRFs, the stimulus contrast values for the ignored stimulus were fixed at 0%, 3.75%, 7.5%, 15%, 30%, and 60% Michelson contrast. However, because the target stimuli contained both pedestal and contrast increment segments, the contrast values in the focused-attention and divided-attention conditions were obtained from the average contrast values across the pedestal and increment segments. Since the fit parameters were obtained from individual CRFs with the true mean physical contrasts of all stimulus segments (i.e., instead of directly comparing SSVEP amplitudes for each pedestal contrast), any task-related and attention-related modulations of the fit parameters associated with the neural CRFs should be minimally affected by low-level physical differences in the visual stimuli.
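
As an illustration of this step, here is a minimal MATLAB sketch of the Gaussian (Morlet-style) wavelet filtering used to estimate SSVEP amplitude, assuming the 0.1 fractional bandwidth is defined as the full-width at half-maximum of the frequency-domain Gaussian (this assumption reproduces the 0.8493 Hz standard deviation quoted above for the 20-Hz flicker). The erp waveform and the analysis-window index tIdx are hypothetical placeholders.

  % Complex Gaussian wavelet at the flicker frequency, then abs() for amplitude
  fs   = 512;  f0 = 20;                         % sampling rate and flicker rate (Hz)
  sigF = 0.1 * f0 / (2 * sqrt(2 * log(2)));     % frequency-domain SD (~0.8493 Hz at 20 Hz)
  sigT = 1 / (2 * pi * sigF);                   % corresponding time-domain SD (s)
  t    = -3 * sigT : 1/fs : 3 * sigT;           % wavelet support (+/- 3 SD)
  w    = exp(2i * pi * f0 * t) .* exp(-t.^2 / (2 * sigT^2));
  w    = w / sum(abs(w));                       % illustrative amplitude normalization
  ssvep    = abs(conv(erp, w, 'same'));         % SSVEP amplitude envelope at f0
  ssvepAmp = mean(ssvep(tIdx));                 % mean over the 0-500 ms window (tIdx)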

Next, we fit the SSVEP-based CRFs for each difficulty level and attention condition of individual subjects with the Naka–Rushton equation (Eq. 2), where R(c) is the SSVEP amplitude as a function of stimulus contrast. First, we exhaustively searched for the value of the exponent q from 1, 2, 3, 4, and 5 that provided the best fit to the SSVEP-based CRFs. From this step, we selected q = 3 because it yielded the highest goodness of fit (R2) averaged across all experimental conditions and all subjects. Since past EEG studies have consistently reported no changes in the response baseline of CRFs based on early visually evoked responses, we subtracted the response baseline from all SSVEP-based CRFs and fixed b at 0 (cf. Di Russo et al., 2001; Kim et al., 2007; Lauritzen et al., 2010; Wang and Wade, 2011; Itthipuripat et al., 2014a, b, 2017). We used five initial seed values for Gc (i.e., 1%, 10%, 20%, 50%, and 100% contrast) and five initial seed values for Gr (i.e., the difference between maximum and minimum responses divided by 0.5, 1, 2, 4, and 8). For individual subjects, we chose the seed values for Gc and Gr that yielded the best fit for each difficulty level and attention condition. The fitting procedure was constrained so that 0% ≤ Gc ≤ 100% contrast and was optimized using the least-squares error estimation method (fminsearch function in MATLAB). This method yielded estimates of the maximum response, G′r (i.e., the SSVEP response at 100% contrast minus baseline), and the half-maximum contrast, G′c (i.e., the contrast at which the response reached half-maximum), which we used as proxies for the response gain and contrast gain of the SSVEP-based CRFs, respectively. Next, we ran resampling statistics to determine the significance of the attention and perceptual difficulty effects on the G′r and G′c parameters as well as the interaction between the two factors on these fit parameters. First, we resampled subject labels with replacement 10,000 times, and then computed the SSVEP responses across the resampled datasets (i.e., bootstrapping). For each iteration, we fit the bootstrapped SSVEP data using the Naka–Rushton equation (Eq. 2) to obtain the G′r and G′c parameters, and then examined the main effects of attention and perceptual difficulty as well as their interaction on these fit parameters. We resampled this way, instead of directly resampling the fit parameters of individual subjects, to prevent potential spurious fits of the CRFs at the individual-subject level from influencing the statistics. We also ran an auxiliary analysis in which we did not subtract the response baseline from the SSVEP-based CRFs. Here, we estimated the b parameter by finding the minimum value of the SSVEP data across all contrast bins separately for each attention condition, each difficulty level, and each subject (see Fig. 9B).
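
A minimal MATLAB sketch of this constrained, multiseed Naka–Rushton fit might look as follows; the CRF vector crf below is an illustrative placeholder for one subject's baseline-subtracted SSVEP amplitudes at the six contrast levels.

  % Naka-Rushton fit (Eq. 2) with q = 3 and b = 0; Gc constrained to 0-100% contrast
  c   = [0 3.75 7.5 15 30 60];  q = 3;          % pedestal contrasts (%)
  crf = [0.2 0.4 0.8 1.4 1.9 2.1];              % illustrative SSVEP amplitudes (uV)
  nak = @(p, c) p(1) .* c.^q ./ (c.^q + abs(p(2)).^q);            % p = [Gr Gc]
  err = @(p) sum((nak(p, c) - crf).^2) + 1e6 * (abs(p(2)) > 100); % penalty keeps Gc <= 100%
  bestErr = inf;
  for gcSeed = [1 10 20 50 100]                 % Gc seeds (% contrast)
      for div = [0.5 1 2 4 8]                   % Gr seeds: response range / div
          p0 = [(max(crf) - min(crf)) / div, gcSeed];
          [p, e] = fminsearch(err, p0);
          if e < bestErr, bestErr = e; pBest = [p(1), abs(p(2))]; end
      end
  end
  GrHat = pBest(1);  GcHat = pBest(2);          % proxies for response and contrast gain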

There were three main sets of statistical comparisons used to test the effects of attention and their interactions with perceptual difficulty. First, we tested the overall differences across the focused-attention target, divided-attention, and focused-attention nontarget (or ignored) stimuli using the data collapsed across all difficulty levels. To do so, we computed the probability that the resampled G′r, G′c, and b of the focused-attention target stimuli were more or less than those of the focused-attention nontarget (or ignored) stimuli and the divided-attention stimuli for the data collapsed across all difficulty levels (two-tailed). The same method was applied to determine differences between the divided-attention and ignored conditions. Multiple comparisons were corrected using the Holm–Bonferroni method (Holm, 1979). Second, we examined these differences separately for each of the difficulty levels using the same resampling methods. Finally, we tested for an interaction between attention and perceptual difficulty by assessing whether the degree of attentional modulation of G′r, G′c, and b significantly differed across difficulty levels. To do so, we computed the probability that the attentional modulations of the resampled G′r, G′c, and b in the medium blocks were lower than those in the easy and the hard blocks (one-tailed). We used one-tailed statistics here because of the predicted direction of the interactions between attention and perceptual difficulty in the observed dataset. We also computed the probability that the attentional modulations of the resampled G′r, G′c, and b in the hard blocks were lower than those in the easy blocks (one-tailed). Here, multiple comparisons were corrected using the Holm–Bonferroni method (Holm, 1979).
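
The subject-level bootstrap described above can be sketched in MATLAB as follows; crfFoc and crfDiv are hypothetical nSubjects × nContrasts arrays of SSVEP amplitudes, and fitCRF stands for a function wrapping the Naka–Rushton fit sketched earlier (returning G′r).

  % Bootstrap over subject labels: resample, refit, and compute a two-tailed p value
  nSubj = 6;  nBoot = 10000;
  grFoc = zeros(nBoot, 1);  grDiv = zeros(nBoot, 1);
  for i = 1:nBoot
      idx = randi(nSubj, nSubj, 1);                % resample subject labels with replacement
      grFoc(i) = fitCRF(mean(crfFoc(idx, :), 1));  % refit the resampled mean CRF
      grDiv(i) = fitCRF(mean(crfDiv(idx, :), 1));
  end
  attnEffect = grFoc - grDiv;                      % attentional modulation of G'r
  pTwoTailed = 2 * min(mean(attnEffect <= 0), mean(attnEffect >= 0));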

Note that in the current study, the stimulus duration of 1500 ms and the time window of the main analysis (0–500 ms) are not perfectly aligned. We chose this analysis window based on the protocol from a recent SSVEP study from our group that used a similar contrast discrimination task (Itthipuripat et al., 2018). In that study, subjects made a behavioral response ∼500 ms after stimulus onset, which motivated the choice of the analysis window. That said, we opted to keep the stimulus duration at 1500 ms because we also manipulated perceptual difficulty, which could result in longer RTs. As it turned out, responses were made ∼550–850 ms after stimulus onset in the present study. Restricting the SSVEP window to 0–500 ms therefore helped minimize confounds from motor-related activity and ensured that we only examined attentional modulations while subjects were still actively attending to the stimuli. That said, we also conducted similar analyses for the data averaged across 500–1000 ms and across 1000–1500 ms to investigate the effects of attention and perceptual difficulty in these later time windows.

Although we chose stimulus flicker rates (20 and 24 Hz) above the α range (∼10 Hz) to avoid contamination from α-band oscillations, one of our stimulus flickers (i.e., 20 Hz) still overlapped with the second harmonic of endogenous α-band activity. In order to rule out this potential confound, we conducted an additional analysis in which we examined the effects of perceptual difficulty and attention on CRFs based on the mean amplitude of induced EEG activity at 20 Hz. We first applied a wavelet filter at 20 Hz to the trial-by-trial EEG data and then computed the amplitude of the wavelet coefficients. We then averaged the amplitude data across trials to compute the induced EEG activity at 20 Hz as a function of stimulus contrast, separately for different difficulty levels and attention conditions. Note that this method differs from the analysis performed to obtain the SSVEP, where we averaged the trial-by-trial EEG data first to isolate the 20-Hz steady-state response before applying the wavelet filter. We then bootstrapped the data using the same resampling and fitting procedures as in the main SSVEP analysis.
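
The difference between the two analyses is only the order of averaging and filtering, as in this minimal MATLAB sketch; eeg is a hypothetical nTrials × nSamples single-trial array and waveletAmp stands for the wavelet-filter-plus-abs() step sketched earlier.

  % Evoked (SSVEP) vs induced 20-Hz amplitude: average-then-filter vs filter-then-average
  evoked  = waveletAmp(mean(eeg, 1));            % phase-locked: average trials first
  induced = zeros(1, size(eeg, 2));
  for tr = 1:size(eeg, 1)
      induced = induced + waveletAmp(eeg(tr, :)) / size(eeg, 1);  % per-trial amplitude
  end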

Correlation analysis

We examined how difficulty-related changes in the attention-induced response gain modulations observed in the SSVEP data related to those predicted by modeling the TvC data at the intersubject level. To do so, we first computed differences in attention-induced changes in the rescaled Gr predicted by the TvC data (focused minus divided attention) between the medium and easy blocks, between the medium and hard blocks, and between the hard and easy blocks. Next, we performed the same analysis for the G′r obtained from fitting the SSVEP data measured from 0 to 500, 500 to 1000, and 1000 to 1500 ms poststimulus, respectively. Last, we correlated these difficulty-related differences in attention-induced changes in the behavioral and SSVEP data using the repeated-measures correlation method, with paired measures assessed across the different pairs of task comparisons (medium vs easy, medium vs hard, and hard vs easy), using the "rmcorr" function in R (R Core Team, 2020). Multiple comparisons were corrected using the Holm–Bonferroni method (Holm, 1979).
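
For readers working in MATLAB, an approximation of the repeated-measures correlation (the common-slope model implemented by the rmcorr package) can be computed by removing each subject's mean before pooling; x and y below are hypothetical nSubjects × nComparisons arrays of the behavioral and SSVEP gain-modulation differences, and corr/tcdf require the Statistics and Machine Learning Toolbox.

  % Repeated-measures correlation sketch: within-subject centering, pooled correlation
  xc = x - mean(x, 2);  yc = y - mean(y, 2);    % remove each subject's mean (R2016b+)
  r  = corr(xc(:), yc(:));                      % pooled within-subject correlation
  df = numel(x) - size(x, 1) - 1;               % error df: N*(k-1) - 1
  tStat = r * sqrt(df / (1 - r^2));
  p = 2 * tcdf(-abs(tStat), df);                % two-tailed p value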

Data sharing

The stimulus presentation and analysis code as well as the behavioral and EEG data have been made available on The Open Science Framework (see the following links: https://osf.io/bc3hj/?view_only=326b8c4ffeb842dcbb81673f7f3a852c and https://osf.io/rbnm6/?view_only=aee48f6f9fae44d6a4e917af99aa709d).

Results

The present study used the attention-cueing contrast discrimination task shown in Figure 1 to simultaneously monitor attentional modulations of perceptual thresholds and of SSVEPs measured as a function of stimulus contrast (the threshold versus contrast functions, or TvCs, and the SSVEP-based contrast response functions, or CRFs, respectively). Each trial started with a focused-attention cue pointing to the target stimulus or a divided-attention cue pointing to both the target and nontarget stimuli. Subjects determined whether the target contained a contrast increment in the more foveal or the more peripheral region of the target stimulus as accurately and quickly as possible using a button-press response. The pedestal (or baseline) contrasts for the visual stimuli were pseudo-randomly selected from 0%, 3.75%, 7.5%, 15%, 30%, and 60% Michelson contrasts. The left and right stimuli were flickered at 24 and 20 Hz, respectively, so that we could simultaneously monitor the SSVEPs evoked by both stimuli. To manipulate perceptual difficulty, we adjusted the magnitude of the contrast increments every three blocks to control hit rates in the easy, medium, and hard blocks.

Figure 1.

The attention-cueing contrast discrimination task. Each trial started with a focused-attention or a divided-attention cue. The focused-attention cue pointed directly to the target stimulus, whereas the divided-attention cue pointed to both the target and nontarget stimuli. Subjects had to determine whether the target contained a contrast increment at the inner or outer region of the stimulus. The contrast increment was present for the entire stimulus presentation period. Feedback, which indicated the exact position of the contrast increment, was presented in blue for correct responses and in red for incorrect responses. The pedestal contrasts were independently and pseudo-randomly selected from 0%, 3.75%, 7.5%, 15%, 30%, and 60% Michelson contrasts. The left and right stimuli were flickered at 24 and 20 Hz, respectively, to allow the simultaneous monitoring of SSVEPs evoked by both stimuli. Perceptual difficulty was systematically manipulated across blocks of trials by adjusting the magnitude of the contrast increments so that hit rates were maintained at ∼91%, ∼76%, and ∼65% for easy, medium, and hard blocks, respectively.

Behavioral results

Perceptual difficulty for the easy, medium, and hard blocks was successfully controlled at ∼91% (mean = 91.34 ± 1.60% SEM), ∼76% (mean = 76.27 ± 0.97% SEM), and ∼65% accuracy (mean = 65.27 ± 2.95% SEM), respectively (Fig. 2A). A repeated-measures ANOVA showed a significant main effect of perceptual difficulty on accuracy (F(2,10) = 174.88, p < 0.001). Since perceptual difficulty was intentionally equated across attention conditions and contrast levels, there was no main effect of attention (F(1,5) = 0.22, p = 0.659), no main effect of contrast (F(5,25) = 2.04, p = 0.108), and no interaction between attention and perceptual difficulty on hit rates (F(5,25) = 1.99, p = 0.115). Increasing perceptual difficulty significantly increased correct RTs (F(2,10) = 57.46, p < 0.001), and divided attention increased correct RTs compared with focused attention (F(1,5) = 34.87, p = 0.002; Fig. 2B). In addition, we found no main effect of contrast (F(5,25) = 2.33, p = 0.072) and no interactions between attention, perceptual difficulty, and contrast on correct RTs (ps ≥ 0.305).

Figure 2.

Hit rates, response times on correct trials (correct RTs), and perceptual thresholds in the contrast discrimination task, plotted as a function of pedestal contrast. A, Accuracy was successfully controlled at ∼91%, ∼76%, and ∼65% for the easy, medium, and hard blocks, respectively. Hit rates were also equated across the focused-attention and divided-attention conditions. B, Correct RTs increased with increasing perceptual difficulty and in the divided-attention compared with the focused-attention condition. C, The threshold versus contrast (TvC) functions for the easy, medium, and hard blocks. The TvC data were fit using a combination of Naka–Rushton and d′ equations (Eqs. 1–4) that describes the link between behavior and the hypothetical neural contrast response functions (CRFs; Fig. 3). Error bars indicate ±1 within-subject SEM. Diff*** labels in A and B indicate significant main effects of perceptual difficulty on hit rates and correct RTs with ps < 0.001. Att** in B shows a significant main effect of attention on correct RTs with p < 0.01. Att*** and Att x Ct* labels in C indicate a significant main effect of attention and a significant interaction between attention and contrast on the perceptual thresholds with ps < 0.001 and < 0.05, respectively.

The manipulation of perceptual difficulty across blocks of trials led to significant decreases in contrast discrimination thresholds with increasing perceptual difficulty (Fig. 2C; main effect of perceptual difficulty on contrast discrimination thresholds: F(2,10) = 21.30, p < 0.001). Compared with divided attention, focused attention significantly reduced contrast discrimination thresholds (main effect of attention on contrast discrimination thresholds: F(1,5) = 10.83, p = 0.023), consistent with previous results (Huang and Dobkins, 2005; Pestilli et al., 2011; Hara and Gardner, 2014; Itthipuripat et al., 2014a, 2017, 2019). Note that these changes in contrast thresholds were not because of differences in perceptual difficulty across attention conditions, because accuracy levels were successfully equated across the focused-attention and divided-attention conditions and across the six pedestal contrasts (see the statistics in the previous paragraph).

The attention-induced decreases in contrast thresholds were mainly driven by a significant main effect of attention in the medium difficulty blocks (F(1,5) = 23.14, p = 0.005). At this difficulty level, there was no interaction between attention and contrast (F(1,5) = 2.26, p = 0.080). Post hoc paired t tests revealed significant decreases in contrast discrimination thresholds in the focused-attention compared with the divided-attention condition for all six pedestal contrast levels at this difficulty level (t(5)s = −2.76, −5.75, −3.78, −3.28, −3.31, and −2.50 with ps = 0.020, 0.001, 0.006, 0.011, 0.011, and 0.027 for the 0–60% contrast levels, respectively; one-tailed because of the known direction of the main effect of attention without an interaction between attention and contrast, i.e., reduced thresholds with focused attention; passing the Holm–Bonferroni corrected threshold of 0.05; Fig. 2C). This finding is consistent with data reported in previous studies where perceptual difficulty was controlled at a similar accuracy level (Huang and Dobkins, 2005; Pestilli et al., 2011; Hara and Gardner, 2014; Itthipuripat et al., 2014a, 2017, 2019). In contrast, we observed no significant main effects of attention on contrast thresholds in the easy or the hard blocks (F(1,5) = 4.17, p = 0.097, and F(1,5) = 3.88, p = 0.106, respectively). That said, in the hard blocks, there was a significant interaction between attention and pedestal contrast on the contrast threshold data, which was driven by a significant decrease in threshold at the highest contrast level, i.e., 60% contrast (t(5) = −4.02, p = 0.005, one-tailed, Holm–Bonferroni corrected), with only marginal or nonsignificant attentional modulations at the lower contrast levels (t(5)s = 0.33, −2.66, −2.15, −1.25, and 0.54 with ps = 0.377, 0.022, 0.042, 0.134, and 0.205 for the 0–30% contrast levels, respectively; one-tailed, not passing the Holm–Bonferroni corrected threshold of 0.0083).

To estimate how much neural attentional gain is needed to account for the attention-induced changes in contrast thresholds, we employed a linking model based on signal detection theory (SDT; Tanner and Swets, 1954; Legge and Foley, 1980; Boynton et al., 1999; Gorea and Sagi, 2001; Huang and Dobkins, 2005; Pestilli et al., 2011; Hara and Gardner, 2014; Itthipuripat et al., 2014a, 2017; Itthipuripat and Serences, 2016; Fig. 3). Under this modeling framework, the perceptual contrast threshold (Δc) derived from the TvC function (Fig. 3, left) is predicted from the difference in neural responses evoked by the pedestal and incremental stimuli (ΔR) divided by the derivative (dR/dc) of the hypothetical neural contrast response function (CRF; Fig. 3, right; see Materials and Methods, Behavioral analyses). Accordingly, we predicted that the lower contrast thresholds observed in the focused-attention compared with the divided-attention conditions at the medium difficulty level (Fig. 2C) should reflect an increase in response gain (Gr; compare black and blue curves in Fig. 3). While past studies have shown that attention could also increase contrast gain (Gc), or shift neural CRFs leftward, this would lead to an increase instead of a decrease in perceptual thresholds at the highest contrast levels (compare black and red curves in Fig. 3), which is inconsistent with the observed behavioral data (Fig. 2C).

Figure 3.

Quantitative linking model based on signal detection theory (SDT; Tanner and Swets, 1954; Boynton et al., 1999; Pestilli et al., 2011; Itthipuripat et al., 2014a, 2017; Itthipuripat and Serences, 2016). The perceptual contrast threshold (Δc) in the threshold versus contrast (TvC) function (left) was estimated from the hypothetical neural responses evoked by the pedestal and incremental stimuli (ΔR) divided by the derivative (dR/dc) of the neural contrast response function (CRF). Response gain (Gr) is an increase in the slope of the neural CRF and predicts an overall reduction in contrast thresholds across the entire range of stimulus contrasts because of an increase in the differential mean response between the pedestal and incremental stimuli (ΔR; blue compared with black curves). Under the same level of task difficulty, a smaller difference between the pedestal and incremental stimuli, i.e., a lower perceptual contrast threshold (Δc), is needed to discriminate the contrast difference. On the other hand, contrast gain (Gc), or a leftward shift of the neural CRF, predicts a selective reduction in contrast thresholds at low contrast levels but an increase in contrast thresholds at higher contrast levels (red compared with black curves).

Based on the model fitting procedure, we obtained estimates of the response gain (Gr) and contrast gain (Gc) parameters that best predict each subject's TvC function (Fig. 4). Overall, the linking model fit the TvC data from individual subjects very well (R2 ± SEM = 0.9291 ± 0.0099). In the medium difficulty blocks, lower perceptual thresholds in the focused-attention compared with the divided-attention conditions were highly consistent across all subjects. In contrast, changes in perceptual threshold in the easy and hard blocks were less robust at the individual-subject level. Next, bootstrapping (i.e., resampling with replacement) was employed to assess the significance of the main effect of attention and the interaction between attention and perceptual difficulty on each of the model parameters (see Materials and Methods, Behavioral analyses). Figure 5 illustrates the mean values and the 68% confidence intervals (error bars) from the bootstrapped distributions of the response gain (Gr) and contrast gain (Gc) parameters, as well as the fit values of individual subjects (colored dots).

Figure 4.

Individual fits for the threshold versus contrast (TvC) functions across all subjects.

Figure 5.

The corresponding best fit parameters of the TvC functions (see Figs. 2, 4). The first column shows the mean values (empty symbols) and 68% confidence intervals (error bars) from the bootstrapped distributions of the response gain (Gr) and contrast gain (Gc) parameters shown as violin plots in the second, third, and fourth columns. The filled circles with different colors indicate best fit values for individual subjects. Top row, The response gain (Gr) parameter that describes the slope of the hypothetical neural CRFs obtained by fitting the TvC functions. Middle row, The normalized Gr parameter. Overall, attention increased the Gr parameter, and the attentional modulations of Gr were relatively larger in the medium compared with the easy and hard blocks. Bottom row, The contrast gain (Gc) parameter that describes the horizontal shift of the hypothetical neural CRFs obtained from fitting the TvC functions. Att labels with ††† signs show significant attention effects (focused vs divided attention) for the data collapsed across difficulty levels with ps < 0.001. x and xx signs indicate significant differences in attentional modulations between difficulty levels with ps < 0.05 and < 0.01, respectively. ** and *** signs show significant differences between attention conditions for each difficulty level with ps < 0.01 and < 0.001, respectively. n.s. labels indicate nonsignificant results.

For the behavioral data collected in the medium and hard blocks, the model predicted an increase in Gr in the focused-attention compared with the divided-attention conditions (ps = 0 and 0.004 for medium and hard, respectively, two-tailed with a Holm–Bonferroni corrected threshold of p < 0.025). However, no significant modulation of Gr was predicted in the easy blocks (p = 0.794, two-tailed). The degree of attention-induced increase in Gr in the medium difficulty blocks was also significantly higher than the modulations predicted in the easy and hard blocks (ps = 0.036 and 0.008, respectively, one-tailed because of the predicted direction of the interactions).

The response gain parameters predicted from the TvC data were expressed in arbitrary units. Thus, we rescaled them so that the predicted response gain values across individual subjects spanned a range of 0–1. Since this rescaling step was performed using the minimum and maximum values of the response gain predicted by the behavioral data across all task and attention conditions (not within individual conditions), it did not produce substantial effects on, or interact with, task-related and attention-related modulations. Specifically, the normalized Gr data yielded qualitatively similar results: there was a significant increase in response gain in the focused-attention compared with the divided-attention conditions in the medium and hard blocks (ps = 0 and 0.002 for medium and hard, respectively, two-tailed with a Holm–Bonferroni corrected threshold of p < 0.025), and no significant modulation of Gr in the easy blocks (p = 0.794, two-tailed). In addition, the degree of attention-induced increase in response gain in the medium difficulty blocks was significantly higher than the modulations found in the easy and hard blocks (ps = 0.024 and 0.007, respectively, one-tailed because of the predicted direction of the interactions).

Unlike Gr, we found no attention-induced changes in Gc at any difficulty level (ps = 0.083, 0.329, and 0.720 for easy, medium, and hard, respectively, two-tailed). In addition, Gc did not change as a function of perceptual difficulty (ps = 0.114–0.827, two-tailed). Overall, the modeling of the behavioral data is consistent with a nonlinear inverted-U relationship between perceptual difficulty and attention-induced response gain modulations of the hypothetical neural CRFs.

SSVEP results

To determine whether the actual pattern of attentional modulations in neural responses followed the pattern predicted by the SDT model (Fig. 3), we measured SSVEP responses over human visual cortex. SSVEPs are the phase-locked EEG responses generated in early visual areas by rhythmically presented stimuli (Regan, 1989; Andersen et al., 2012; Norcia et al., 2015). Many past studies have shown that attention can increase either the response gain or the contrast gain of SSVEP-based CRFs (Hillyard and Anllo-Vento, 1998; Di Russo et al., 2001; Lauritzen et al., 2010; Wang and Wade, 2011; Itthipuripat et al., 2014a, b, 2018), making the SSVEP a sensitive measure for testing this specific prediction. In the current study, we computed the SSVEP amplitudes at the driving stimulus frequencies of 24 and 20 Hz for the left and right stimuli, respectively (see Materials and Methods, EEG data acquisition, preprocessing, and analyses). In the full frequency spectrum of the EEG data from the contralateral posterior occipital electrodes, our stimulus flicker rates of 20 and 24 Hz induced robust SSVEP signals that peaked at the driving frequencies throughout the entire stimulus presentation period (i.e., 0–1500 ms; Fig. 6). The amplitudes of the SSVEP responses scaled with stimulus contrast. Importantly, the amplitude at the SSVEP frequencies was far higher than the amplitude at the frequencies of the early visually evoked potentials (VEPs; 8–9 Hz) that appeared ∼0–500 ms poststimulus onset. This ensured that our SSVEP results were minimally confounded by modulations of slower VEPs and α oscillations.

Figure 6.

The full frequency spectrum of the EEG data from the posterior occipital electrodes contralateral to the stimulus flickers of 20 and 24 Hz, shown in cold and hot colors, respectively. The SSVEP responses were centered at the flicker frequencies of 20 and 24 Hz for the duration of the stimulus presentation (0–1500 ms). There was also early stimulus-evoked activity at low frequencies (8–9 Hz) from 0 to 500 ms poststimulus onset. The amplitude of the SSVEP responses scaled with stimulus contrast. Importantly, the SSVEP frequencies were far above the VEP and α frequencies and thus did not overlap with them.

We then averaged the SSVEP amplitudes across contralateral posterior occipital electrodes where the responses, collapsed across contrast levels, stimulus types, difficulty levels, and attention conditions, were maximal. Finally, we plotted the mean amplitude of the SSVEP responses as a function of stimulus contrast, producing SSVEP-based CRFs for all difficulty levels (easy, medium, and hard) as well as attention conditions [i.e., the focused-attention target, the focused-attention nontarget (or ignored), and the divided-attention stimuli; see Fig. 7].

Figure 7.

A, The SSVEP-based CRFs from 0 to 500 ms poststimulus across different attention conditions and difficulty levels (the top and bottom panels show the same data organized in different layouts). Error bars indicate within-subject SEM. Focused attention produced the most robust effects on SSVEP-based response gain in the medium, compared with the easy and hard, blocks (see corresponding fit parameters in Fig. 9). Note that in the focused-attention and divided-attention conditions, the stimulus contrast values on the x-axis reflect the mean values computed across all segments of the target stimulus (both pedestal and incremental contrasts). In contrast, the stimulus contrast values in the ignored condition were based solely on the pedestal contrast because the ignored nontarget stimuli contained no contrast increments. B, Topographical maps of the SSVEP data elicited by the left and right stimuli (24- and 20-Hz flicker frequencies, respectively), plotted separately for different contrast levels, attention conditions, and difficulty levels. The data were averaged from 0 to 500 ms poststimulus onset. For both left and right stimuli, focused attention produced the largest effects on SSVEP amplitudes at high contrast levels in the medium difficulty compared with the easy and hard blocks.

Overall, we found a nonmonotonic inverted-U relationship between perceptual difficulty and attentional gain modulations in the SSVEP-based CRFs. In the medium difficulty blocks, there was a robust increase in response gain in the SSVEP-based CRFs in the focused-attention target compared with the divided-attention and the focused-attention nontarget (ignored) stimuli (see Figs. 7, 8 for the grand-averaged SSVEP data and individual subjects' data, respectively). However, attention-related response gain modulations of the SSVEP-based CRFs were smaller in the easy and hard blocks. This pattern of results was also consistent at the individual-subject level (Figs. 8, 9).

Figure 8.

Individual fits for the SSVEP-based contrast response functions (CRFs) across all subjects.

Figure 9.

A, Response gain (G′r) and contrast gain (G′c) parameters obtained from fitting the SSVEP-based CRFs shown in Figures 7, 8. The first column shows the mean values (empty symbols) and 68% confidence intervals (error bars) from the bootstrapped distributions (resampling subject labels with replacement) shown as violin plots in the second, third, and fourth columns. The filled circles with different colors indicate fit values of individual subjects, whose SSVEP data were randomly drawn and fit to produce the bootstrapped distributions. B, Same as A, except that the response baseline (b) of the SSVEP-based CRF (the last row) was not subtracted from the CRFs before the curve-fitting procedures. The statistical results for G′r and G′c were qualitatively similar to those in A. In addition, there was no significant change in the response baseline parameter. Att labels with ††† signs indicate significant increases in response gain (i.e., increased G′r) induced by focused compared with divided attention for the data collapsed across all difficulty levels with ps < 0.001. Foc labels with †† and ††† signs indicate significant increases in response gain induced by the focused-attention target compared with ignored stimuli with ps < 0.01 and 0.001, respectively. Div with † indicates a significant increase in response gain induced by the divided-attention compared with ignored stimuli with p < 0.05. Foc labels with inverted †† signs indicate significant decreases in contrast gain (i.e., increased G′c) induced by the focused-attention target compared with ignored stimuli with ps < 0.01. Div labels with inverted ††† signs indicate significant decreases in contrast gain induced by the divided-attention compared with ignored stimuli with ps < 0.001. x and xx signs indicate significant differences in attentional modulations between different difficulty levels with ps < 0.05 and 0.01, respectively. *, **, and *** signs indicate significant differences between attention conditions for each difficulty level with ps < 0.05, 0.01, and 0.001, respectively. n.s. indicates a nonsignificant result.

To more precisely characterize the impact of perceptual difficulty and attention on response and contrast gain modulations of these SSVEP-based CRFs, we next fit the data with a Naka–Rushton equation (see Materials and Methods, EEG data acquisition, preprocessing, and analyses; and Eq. 2). We used the G′r and G′c parameters as proxies for response and contrast gain modulations, respectively (see Itthipuripat et al., 2014b, 2017, 2018, 2019). The G′r (or Rmax) parameter is the level of the SSVEP response at the maximal contrast level (i.e., 100% contrast), and G′c (or C50) is the contrast value at which the SSVEP response reaches half its maximum. Overall, the Naka–Rushton equation fit the SSVEP-based CRF data of individual subjects well (R2 ± SEM = 0.8506 ± 0.1162). Next, bootstrapping (i.e., resampling with replacement) was used to assess the main effect of attention at different difficulty levels and to test for any interactions between attention and perceptual difficulty. Figure 9A shows the corresponding means and 68% confidence intervals of the G′r and G′c values associated with the SSVEP-based CRF data across different attention conditions and difficulty levels.
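As a sketch of this fitting step, the snippet below fits one common four-parameter form of the Naka–Rushton function with SciPy. The contrast levels and amplitudes are fabricated purely for illustration, and the exact parameterization of the authors' Eq. 2 (e.g., the slope exponent q and the bound choices) is an assumption rather than a detail taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def naka_rushton(c, g_r, g_c, q, b):
    """R(c) = G'r * c**q / (c**q + G'c**q) + b  (one common parameterization)."""
    return g_r * c**q / (c**q + g_c**q) + b

# Illustrative (fabricated) CRF data: contrast in % and mean SSVEP amplitude.
contrast = np.array([0.0, 2.24, 5.0, 11.2, 25.0, 55.9, 100.0])
amplitude = np.array([0.21, 0.25, 0.33, 0.52, 0.78, 0.95, 1.02])

# Fit with loose positivity bounds on the gain and slope parameters.
p0 = [amplitude.max() - amplitude.min(), 20.0, 2.0, amplitude.min()]
bounds = ([0.0, 0.1, 0.1, -np.inf], [np.inf, 100.0, 10.0, np.inf])
params, _ = curve_fit(naka_rushton, contrast, amplitude, p0=p0, bounds=bounds)
g_r, g_c, q, b = params

# Goodness of fit (R^2), analogous to the fit quality reported above.
pred = naka_rushton(contrast, g_r, g_c, q, b)
r2 = 1.0 - np.sum((amplitude - pred) ** 2) / np.sum((amplitude - amplitude.mean()) ** 2)
print(f"G'r={g_r:.3f}, G'c={g_c:.2f}, q={q:.2f}, b={b:.3f}, R^2={r2:.3f}")
```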

There were three main sets of statistical comparisons we employed to test the effects of attention and possible interactions with perceptual difficulty. First, we tested for overall differences across the focused-attention target, divided-attention, and focused-attention nontarget (or ignored) stimuli using data collapsed across all difficulty levels. Second, we examined these differences separately for each of the difficulty levels. Finally, we tested for an interaction between attention and perceptual difficulty level. Specifically, we examined whether the attentional modulations (focused minus divided attention) differed across the medium and easy blocks, across the medium and hard blocks, and across the hard and easy blocks, respectively.
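A minimal sketch of the resampling test and the Holm–Bonferroni correction used for these comparisons is shown below. Note one simplification: the authors resampled subjects' SSVEP data and refit the CRFs on each iteration, whereas for brevity this sketch resamples per-subject fit values directly.

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_p(cond_a, cond_b, n_boot=10000):
    """Two-tailed bootstrap p-value for a difference between two conditions.

    cond_a, cond_b : ndarrays of per-subject fit parameters (e.g., G'r),
    one value per subject. Subject labels are resampled with replacement.
    """
    n = cond_a.size
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)      # resample subject labels
        diffs[i] = cond_a[idx].mean() - cond_b[idx].mean()
    # Proportion of the bootstrapped distribution on either side of zero.
    tail = min((diffs <= 0).mean(), (diffs >= 0).mean())
    return min(2.0 * tail, 1.0)

def holm_bonferroni(pvals, alpha=0.05):
    """Boolean array marking p-values significant under Holm's step-down procedure."""
    pvals = np.asarray(pvals)
    order = np.argsort(pvals)
    m = pvals.size
    significant = np.zeros(m, dtype=bool)
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):    # step-down threshold
            significant[i] = True
        else:
            break                             # stop at the first failure
    return significant
```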

Collapsed across all difficulty levels, we observed significant increases in G′r for the focused-attention target compared with the divided-attention stimuli, for the focused-attention target compared with the focused-attention nontarget (or ignored) stimuli, and for the divided-attention stimuli compared with the ignored stimuli (ps = 0.003, 0.001, and 0.016, respectively, two-tailed, Holm–Bonferroni corrected). When examining these differences at individual difficulty levels, we found that they were driven predominantly by changes in the medium difficulty condition.

Specifically, we observed significant increases in G′r for the focused-attention target stimuli compared with the divided-attention stimuli only in the medium and hard blocks, but not in the easy block (Fig. 9A; ps = 0.501, 0.004, and 0.027 for the easy, medium, and hard blocks, respectively, two-tailed, Holm–Bonferroni corrected). Similarly, when comparing the focused-attention target to the ignored stimuli, we observed a significant increase in the G′r for the medium and the hard blocks but not in the easy block (ps = 0.438, 0, and 0.009 for the easy, medium, and hard blocks, respectively, two-tailed, Holm–Bonferroni corrected). That said, when comparing the divided-attention to the ignored stimuli, we also found significant increases in G′r in the medium and easy blocks, but not in the hard block (ps = 0.010, 0.001, and 0.644 for the easy, medium, and hard blocks, respectively, two-tailed, Holm–Bonferroni corrected).

Importantly, when examining the interaction between attention and perceptual difficulty, we found significant differences in attentional modulations (focused minus divided attention) between the medium and easy blocks and between the hard and easy blocks but not between the medium and hard blocks (ps = 0.010, 0.014, and 0.573, respectively, one-tailed because of the known direction of the modulations, Holm–Bonferroni corrected). Taken together, the observed response gain patterns suggest nonlinear changes in attentional modulation of response gain as a function of perceptual difficulty.

For the contrast gain data, we found a decrease rather than an increase in contrast gain: focused attention shifted the CRFs rightward rather than leftward. There were significant increases in the G′c values for the focused-attention target and the divided-attention stimuli compared with the ignored stimuli (ps = 0.008 and 0, respectively, two-tailed, Holm–Bonferroni corrected). However, there was no difference between the focused-attention target and the divided-attention stimuli themselves (p = 0.166, two-tailed).

An auxiliary analysis that examined changes in the baseline (b) of the CRFs showed no significant differences between the focused-attention target and the divided-attention stimuli, between the focused-attention target and the ignored stimuli, or between the divided-attention stimuli and the ignored stimuli (ps = 0.042–0.910; Fig. 9B, bottom panels). That said, the patterns of response and contrast gain modulations were highly similar to those observed with the baseline-subtraction method (compare Fig. 9A and B).

Notably, nonlinear changes in attentional modulations of response gain as a function of perceptual difficulty in the SSVEP data (Fig. 9A,B, top panels) were only observed in the time period before subjects made responses (from 0 to 500 ms poststimulus), but not during or after (from 500 to 1500 ms poststimulus; Fig. 10). Specifically, from 500 to 1000 ms poststimulus, there was a significant difference in the G′r values between the focused-attention target and ignored stimuli for the data collapsed across all difficulty levels (p = 0.001, two-tailed, Holm–Bonferroni corrected), but there were no differences between the focused-attention target and divided-attention stimuli or between the divided-attention and ignored stimuli (ps = 0.243 and 0.852, two-tailed). Importantly, there was no significant difference in attentional modulations (focused minus divided attention) between different levels of perceptual difficulty (ps = 0.059–0.460, one-tailed). From 1000 to 1500 ms poststimulus, there was no significant difference in the G′r values between attention conditions for the data collapsed across all difficulty levels (ps = 0.693–0.854, two-tailed) and no significant difference in attentional modulations in any of the difficulty levels (ps = 0.777–0.939, one-tailed).

Figure 10.

A, The SSVEP-based CRFs from 0 to 1500 ms poststimulus across different attention conditions and difficulty levels. Error bars indicate within-subject SEM. Focused attention produced the most robust effects on the SSVEP-based response gain in the medium difficulty block, compared with the easy and hard blocks, only before (0–500 ms, left column), but not during (500–1000 ms, middle column) or after subjects responded (1000–1500 ms, right column). B, Corresponding response gain (G′r) and contrast gain (G′c) parameters obtained from fitting the SSVEP-based CRFs shown in A. The empty symbols and error bars represent the mean values and 68% confidence intervals from the bootstrapped distributions of the fit parameters (resampling subject labels with replacement). Att with ††† indicates a significant increase in response gain (i.e., increased G′r) induced by focused compared with divided attention for the data collapsed across all difficulty levels with p < 0.001. Foc labels with ††† signs indicate significant increases in response gain induced by the focused-attention target compared with ignored stimuli with ps < 0.001. Div with † indicates a significant increase in response gain induced by the divided-attention compared with ignored stimuli with p < 0.05. Foc with inverted †† indicates a significant decrease in contrast gain (i.e., increased G′c) induced by the focused-attention target compared with ignored stimuli with p < 0.01. Div with inverted ††† indicates a significant decrease in contrast gain induced by the divided-attention compared with ignored stimuli with p < 0.001. x and xx signs indicate significant differences in attentional modulations between different difficulty levels with ps < 0.05 and 0.01, respectively.

For contrast gain, there was no significant difference in the G′c values between attention conditions for the data collapsed across all difficulty levels (ps = 0.028–0.638, two-tailed, not passing the Holm–Bonferroni corrected threshold of 0.0167) and no significant difference in attentional modulations in any of the difficulty levels from 500 to 1000 ms poststimulus (ps = 0.017–0.829, one-tailed, not passing the Holm–Bonferroni corrected threshold of 0.0167). Similarly, from 1000 to 1500 ms, contrast gain did not differ across attention conditions (ps = 0.073–0.870, two-tailed) and attentional modulations in contrast gain did not vary across difficulty levels (ps = 0.081–0.486, one-tailed).

Although we chose to flicker our stimuli at frequencies (i.e., 20 and 24 Hz) far above the α range (∼10 Hz), the amplitude of the SSVEP response at 20 Hz could in principle be contaminated by the second harmonic of endogenous α-band oscillations. To test this possibility, we conducted an additional analysis in which we examined the effects of perceptual difficulty and attention on CRFs based on the mean amplitude of induced (total) EEG activity at 20 Hz. Unlike the SSVEP results, we found no significant effects of perceptual difficulty (ps = 0.941–0.969) or attention (ps = 0.228–0.918), and no interactions between these two factors, on the response gain parameters describing the neural CRFs based on the induced EEG oscillations at 20 Hz (ps = 0.384–0.667, all ps two-tailed; Fig. 11). Accordingly, it is unlikely that the inverted-U-shaped SSVEP results were contaminated by modulations of the second harmonic of endogenous α oscillations.
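The distinction between the phase-locked (SSVEP) and induced (total) amplitude at 20 Hz comes down to the order of averaging. A minimal sketch, assuming single-electrode epochs and the helper defined here for illustration:

```python
import numpy as np

def amplitude_at(x, sfreq, freq):
    """Amplitude-spectrum value at `freq` (Hz), computed along the last axis."""
    n = x.shape[-1]
    spectrum = np.fft.rfft(x, axis=-1)
    freqs = np.fft.rfftfreq(n, d=1.0 / sfreq)
    k = np.argmin(np.abs(freqs - freq))
    return 2.0 * np.abs(spectrum[..., k]) / n

def evoked_and_total(trials, sfreq, freq=20.0):
    """trials : ndarray, shape (n_trials, n_times), epochs from one electrode."""
    # Evoked (SSVEP) amplitude: average across trials first, so activity
    # that is not phase-locked to stimulus onset cancels out.
    evoked = amplitude_at(trials.mean(axis=0), sfreq, freq)
    # Induced/total amplitude: transform each trial, then average the
    # amplitudes, retaining activity regardless of its phase across trials.
    total = amplitude_at(trials, sfreq, freq).mean()
    return evoked, total
```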

Figure 11.

A, The induced, or total, EEG amplitude at 20 Hz as a function of task difficulty and attention. Error bars indicate within-subject SEM. There was no main effect of perceptual difficulty, no main effect of attention, and no interaction between the two factors on the response gain parameters describing the shapes of the neural CRFs based on the amplitudes of the induced EEG oscillations at 20 Hz. B, The corresponding fit parameters of the neural CRFs. The empty symbols and error bars represent the mean values and 68% confidence intervals from the bootstrapped distributions of the fit parameters (resampling subject labels with replacement).

Correlations between behavioral and SSVEP data

Overall, the effect of attention and its interaction with perceptual difficulty on the SSVEP-based CRFs was consistent with the modulatory patterns of response gain predicted by the behavioral data. This suggests a robust connection between the effects of perceptual difficulty on attentional gain modulations and changes in behavioral performance. To evaluate this relationship more formally, we tested whether difficulty-related changes in attention-related response gain observed in the SSVEP data correlated positively, across subjects, with those predicted by the SDT-based modeling of the TvC data. We first calculated the magnitude of the response gain modulations predicted by the modeling of the TvC data (focused minus divided attention) for individual subjects (same as above), and then computed the differences between the medium and easy blocks (Fig. 12, triangles), between the medium and hard blocks (Fig. 12, diamonds), and between the hard and easy blocks (Fig. 12, circles). Next, we repeated these steps for the normalized response gain modulations observed in the SSVEP data separately for 0–500, 500–1000, and 1000–1500 ms poststimulus. Finally, we correlated these difficulty-related differences in attention-induced response gain modulations obtained from the behavioral and SSVEP data using a repeated-measures correlation method. We observed a significant positive correlation between the difficulty-related differences in response gain modulations in the behavioral and SSVEP data only in the time window before the response period (0–500 ms poststimulus; Rho = 0.689, p = 0.009; Fig. 12). That said, no significant correlations were found between the behavioral and SSVEP data measured during and after the response period (Rhos = 0.115 and 0.103, ps = 0.708 and 0.737 for 500–1000 and 1000–1500 ms poststimulus, respectively). Note that the correlation results reported here should be interpreted with caution because of the small sample size.
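For readers unfamiliar with repeated-measures correlation, the call pattern below shows one convenient implementation (pingouin's rm_corr); the paper does not specify the software used, and the data frame is fabricated purely to illustrate the input format.

```python
import pandas as pd
import pingouin as pg

# Fabricated example: one row per subject and difficulty contrast, pairing the
# behavioral (TvC-model) and SSVEP differences in attentional response gain.
df = pd.DataFrame({
    "subject":  ["s1", "s1", "s1", "s2", "s2", "s2", "s3", "s3", "s3"],
    "behavior": [0.12, 0.30, 0.18, 0.05, 0.22, 0.17, 0.09, 0.28, 0.21],
    "ssvep":    [0.10, 0.28, 0.20, 0.02, 0.25, 0.15, 0.08, 0.26, 0.19],
})

# Repeated-measures correlation: the common within-subject association
# between the two measures, after removing between-subject offsets.
print(pg.rm_corr(data=df, x="behavior", y="ssvep", subject="subject"))
```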

Figure 12.

Repeated-measures correlations between difficulty-related differences in attentional modulations of the response gain parameters obtained from fitting the SSVEP-based CRFs and the behavioral TvCs. There was a significant correlation only in the time window before response execution (0–500 ms poststimulus onset; Rho = 0.689, p = 0.009) but not after (500–1500 ms; Rhos = 0.103–0.115, ps = 0.708–0.737). Different symbols represent differences in attentional modulations between the medium and easy blocks (triangles), between the medium and hard blocks (diamonds), and between the hard and easy blocks (circles) for individual subjects. The data from the same subject are assigned the same color, with corresponding lines showing the repeated-measures correlation fit for each subject.

Together, the data suggest a robust link between neural and behavioral attentional modulations across changes in perceptual difficulty, and that the interactive effects between attention and perceptual difficulty on selective sensory information processing occur only during the time period relevant to behavior.

Discussion

The present study tested how perceptual difficulty influences attentional modulations of population-based neural activity in human visual cortex. We found that perceptual difficulty had a nonlinear, inverted-U-shaped impact on response gain modulations of the behavioral TvC functions and of the neural CRFs. Importantly, this nonlinear relationship could be accounted for by SDT-based quantitative modeling of perceptual contrast discrimination thresholds. Furthermore, positive correlations between these metrics suggest a close relationship between difficulty-related differences in attention-related response gain in the behavioral and neural data. Together, our results suggest that perceptual difficulty regulates behavioral performance in part by mediating the magnitude of attentional gain modulations in visual cortex.

Past behavioral studies have investigated the influence of perceptual difficulty on selective attention (Dosher and Lu, 2000; Lu and Dosher, 2000; Prinzmetal et al., 2009). One study compared the effects of perceptual difficulty on endogenous and exogenous attention (Prinzmetal et al., 2009) and found that increasing perceptual difficulty increased the beneficial effect of endogenous attention on behavioral performance but produced the opposite effect on exogenous attention. Our study used endogenous attention cues and showed similar behavioral effects: increasing perceptual difficulty from easy to intermediate levels improved perceptual performance. That said, making the task harder did not lead to better performance and actually led to smaller attentional effects on behavior.

To assess attentional gain of population-level activity in visual cortex, we measured the effect of attention on SSVEPs. SSVEPs reflect activity in visual cortex that oscillates in synchrony with periodic sensory inputs (for review, see Andersen et al., 2012; Norcia et al., 2015; also see Regan, 1989; Müller et al., 1997; Srinivasan et al., 2006; Di Russo et al., 2007; Andersen et al., 2008, 2012; Fuchs et al., 2008; Andersen and Müller, 2010; Störmer et al., 2013; Störmer and Alvarez, 2014). Note that the flicker frequencies of the driving stimuli (20 and 24 Hz) in the present study were far above those of the slow VEPs and α power. The full frequency spectrum of the EEG data also showed robust SSVEP signals that peaked at the flicker frequencies. Thus, the task-related attentional modulations of SSVEP activity reported here should be minimally influenced by those of the VEPs and α activity. Indeed, an analysis of induced power at 20 Hz (i.e., power not phase-locked to the driving stimulus) revealed no attentional modulations. This suggests that our data were not contaminated by modulations of the second harmonic of endogenous α-band oscillations at ∼10 Hz.

Consistent with the single-unit data reported by Chen and colleagues (Chen et al., 2008), we found that increasing perceptual difficulty from an accuracy level of 91% to 76% significantly increased response gain of SSVEP-based CRFs. However, when perceptual difficulty was increased further to 65% accuracy, there was no further increase in response gain. Instead, response gain was modestly attenuated, consistent with an inverted-U relationship between perceptual difficulty and attentional modulations. We speculate that we observed an inverted-U relationship, in contrast to a monotonic increase, because the difficulty range in our experiment was broader compared with previous studies (65–91% accuracy vs 86–98% accuracy in Chen et al., 2008).

Since contrast increments were independently adjusted for the focused-attention and divided-attention conditions in the present study, there were low-level differences in the target stimuli across these two conditions. However, it is unlikely that these low-level differences substantively contributed to the observed task-related attentional modulations of the SSVEPs. First, instead of using the pedestal contrast as contrast values to plot neural CRFs, we used the mean contrast values computed across all segments of the visual stimulus (both pedestal and incremental contrasts). Since the fit parameters were obtained from individual CRFs with true mean physical contrasts of all stimulus segments (i.e., instead of directly comparing SSVEP amplitudes for each pedestal contrast), we believe that any task-related and attention-related modulations of fit parameters associated with neural CRFs were minimally affected by low-level physical differences in visual stimuli. Second, overall mean stimulus contrast in the divided attention condition was higher than in the focused attention condition and contrasts in the easy block were higher than in the medium and hard blocks. If differences in our SSVEP data were because of these low-level physical differences, we should have observed task-related and attention-related modulations of the SSVEP data in opposite directions. For example, focused attention should have reduced the overall SSVEP amplitudes compared with divided attention. Moreover, the easy block should have generally produced the highest response amplitudes, compared with the medium and hard blocks. However, these were not the patterns of results we observed.

The nonlinear inverted-U relationships between perceptual difficulty and attentional gain modulations observed in our study are consistent with the Yerkes–Dodson law, which describes how global cognitive factors and arousal states regulate behavioral performance (Yerkes and Dodson, 1908; Dodson, 1915; see also Broadhurst, 1957; Duffy, 1957; Anderson et al., 1989; Diamond et al., 2007). Indeed, a study conducted by Fischer and colleagues (2008) found that increasing task demands while human participants were racing in a driving simulator impacted overall cortical excitability measured in the frontal cortex following an inverted-U function (Loveless and Sanford, 1975; Elbert and Rockstroh, 1987; Birbaumer et al., 1990; Aeschbach et al., 1999; Nagai et al., 2004). That said, it is unclear how changes in perceptual difficulty, known to mediate arousal levels and cortical excitability, affect changes in selective information processing in early sensory areas. The present study provides evidence that perceptual difficulty indeed regulates attention-induced response gain modulations at the earliest stages of visual cortical information processing.

Similar to the mixed effects of perceptual difficulty on gain modulations reported in previous studies, other global cognitive factors such as stress, anxiety, emotion, aversive stimuli, and expected rewards have been shown to either up-regulate or down-regulate the level of early sensory responses and in turn lead to either an improvement or an impairment in performance (Gray et al., 2003; Keil et al., 2005; Phelps et al., 2006; Bakardjian et al., 2011; Hajcak et al., 2013; Schönwald and Müller, 2014; Song and Keil, 2013, 2014; Wieser et al., 2014; Baruni et al., 2015; Bekhtereva et al., 2015; Bekhtereva and Müller, 2015; Paczynski et al., 2015; Zhang et al., 2016; Chang et al., 2020). It is likely that some of these mixed results could be explained in part by the Yerkes–Dodson law (Yerkes and Dodson, 1908; Dodson, 1915; see also Broadhurst, 1957; Duffy, 1957; Anderson et al., 1989; Diamond et al., 2007; Hull, 1943). However, there is still a paucity of studies that parametrically manipulate the strength of these cognitive factors. Future studies could adopt our approach to determine whether similar inverted-U relationships generalize to the influence that other cognitive factors may have on selective information processing and perceptual performance.

Recent studies have proposed that heightened arousal states could desynchronize local neural activity and thus reduce correlated variability between neurons. In turn, reducing noise correlations may increase the efficiency of information processing in visual cortex (Ruff and Cohen, 2014b; Beaman et al., 2017). The difficulty-related changes in attentional gain observed here could in part reflect arousal-dependent changes in the synchrony of local neural activity, which could disrupt the SSVEP response (cf. Ruff and Cohen, 2014a; Kohn et al., 2016; Cohen and Kohn, 2011; Cohen and Maunsell, 2009; Mitchell et al., 2007, 2009). At the systems and neurotransmitter levels, arousal is also thought to increase local glutamate levels, which interact with norepinephrine and acetylcholine to amplify high priority sensory representations and inhibit low priority sensory representations (for review, see Mather et al., 2016). These neurotransmitters have been found to regulate receptor pathways that independently mediate neural gain amplification and neural noise modulations (Kirkwood et al., 1999; Herrero et al., 2008, 2013; Polack et al., 2013). That said, the links between neural mechanisms that support arousal-dependent attention at the neuromolecular, systems, physiological, and behavioral levels are not clear. This is a challenging research question that will likely require combining multimodal methods (e.g., pharmacological intervention, single-unit recording, local field potential, and EEG) and using computational models to integrate results across different levels of analysis (Wyler et al., 1982; Herrero et al., 2008, 2013; Purcell et al., 2013; Cosman et al., 2018; for review, see Itthipuripat and Serences, 2016).

In conclusion, we found that perceptual difficulty regulates attentional gain measured at the population level in human visual cortex: attentional gain was highest when perceptual difficulty was at an intermediate level and lower when the task was easier or harder. Moreover, the interaction between perceptual difficulty and attentional gain in visual cortex was correlated with difficulty-related modulations of perceptual performance. Taken together, our data suggest an inverted-U relationship between perceptual difficulty and the selective processing of early sensory information, partly reconciling contradictory findings on the effects of perceptual difficulty on attentional gain modulations in early visual cortex.

Footnotes

  • This work was supported by the National Institutes of Health Grant R01-EY025872 and a James S. McDonnell Foundation award (to J.T.S.). This work was also supported by the National Research Council of Thailand grant (fiscal years 2021-2024, under project numbers 102976 and 118711); the Thailand Science Research and Innovation (TSRI) Basic Research Fund: fiscal year 2022 under project number FRB650048/0164, fiscal year 2021 under project number FRB640008, and fiscal year 2020 under project number 62W1501; the Program Management Unit (PMU) grant (fiscal year 2023); the Asahi Glass Foundation grant; the research grant from the Research & Innovation for Sustainability Center, Magnolia Quality Development Corporation Limited, Thailand; the KMUTT Partnering initiative grant (fiscal year 2021) and the startup fund for junior researchers at King Mongkut's University of Technology Thonburi (KMUTT); and the KMUTT's Frontier Research Unit Grant for Neuroscience Center for Research and Innovation (to S.It.).

  • The authors declare no competing financial interests.

  • Correspondence should be addressed to Sirawaj Itthipuripat at itthipuripat.sirawaj@gmail.com

SfN exclusive license.

References

  1. Aeschbach D, Matthews JR, Postolache TT, Jackson MA, Giesen HA, Wehr TA (1999) Two circadian rhythms in the human electroencephalogram during wakefulness. Am J Physiol 277:R1771–R1779.
  2. Ahissar M, Hochstein S (1997) Task difficulty and the specificity of perceptual learning. Nature 387:401–406. https://doi.org/10.1038/387401a0
  3. Andersen SK, Müller MM (2010) Behavioral performance follows the time course of neural facilitation and suppression during cued shifts of feature-selective attention. Proc Natl Acad Sci U S A 107:13878–13882.
  4. Andersen SK, Hillyard SA, Müller MM (2008) Attention facilitates multiple stimulus features in parallel in human visual cortex. Curr Biol 18:1006–1009.
  5. Andersen SK, Müller MM, Martinovic J (2012) Bottom-up biases in feature-selective attention. J Neurosci 32:16953–16958.
  6. Anderson KJ, Revelle W, Lynch MJ (1989) Caffeine, impulsivity, and memory scanning: a comparison of two explanations for the Yerkes-Dodson effect. Motiv Emot 13:1–20. https://doi.org/10.1007/BF00995541
  7. Anton-Erxleben K, Carrasco M (2013) Attentional enhancement of spatial resolution: linking behavioural and neurophysiological evidence. Nat Rev Neurosci 14:188–200. https://doi.org/10.1038/nrn3443
  8. Bakardjian H, Tanaka T, Cichocki A (2011) Emotional faces boost up steady-state visual responses for brain-computer interface. Neuroreport 22:121–125. https://doi.org/10.1097/WNR.0b013e32834308b0
  9. Baruni JK, Lau B, Salzman CD (2015) Reward expectation differentially modulates attentional behavior and activity in visual area V4. Nat Neurosci 18:1656–1663. https://doi.org/10.1038/nn.4141
  10. Beaman CB, Eagleman SL, Dragoi V (2017) Sensory coding accuracy and perceptual performance are improved during the desynchronized cortical state. Nat Commun 8:1308. https://doi.org/10.1038/s41467-017-01030-4
  11. Bekhtereva V, Müller MM (2015) Affective facilitation of early visual cortex during rapid picture presentation at 6 and 15 Hz. Soc Cogn Affect Neurosci 10:1623–1633. https://doi.org/10.1093/scan/nsv058
  12. Bekhtereva V, Craddock M, Müller MM (2015) Attentional bias to affective faces and complex IAPS images in early visual cortex follows emotional cue extraction. Neuroimage 112:254–266. https://doi.org/10.1016/j.neuroimage.2015.03.052
  13. Birbaumer N, Elbert T, Canavan AG, Rockstroh B (1990) Slow potentials of the cerebral cortex and behavior. Physiol Rev 70:1–41. https://doi.org/10.1152/physrev.1990.70.1.1
  14. Boudreau CE, Williford TH, Maunsell JHR (2006) Effects of task difficulty and target likelihood in area V4 of macaque monkeys. J Neurophysiol 96:2377–2387. pmid:16855106
  15. Boynton GM, Demb JB, Glover GH, Heeger DJ (1999) Neuronal basis of contrast discrimination. Vision Res 39:257–269. pmid:10326134
  16. Brainard DH (1997) The psychophysics toolbox. Spatial Vis 10:433–436. https://doi.org/10.1163/156856897X00357
  17. Broadhurst PL (1957) Emotionality and the Yerkes-Dodson law. J Exp Psychol 54:345–352. https://doi.org/10.1037/h0049114
  18. Buschman TJ, Kastner S (2015) From behavior to neural dynamics: an integrated theory of attention. Neuron 88:127–144.
  19. Carrasco M (2011) Visual attention: the past 25 years. Vision Res 51:1484–1525. https://doi.org/10.1016/j.visres.2011.04.012
  20. Carrasco M, Ling S, Read S (2004) Attention alters appearance. Nat Neurosci 7:308–313. https://doi.org/10.1038/nn1194
  21. Chang TY, Doudlah R, Kim B, Sunkara A, Thompson LW, Lowe ME, Rosenberg A (2020) Functional links between sensory representations, choice activity, and sensorimotor associations in parietal cortex. Elife 9:e57968. https://doi.org/10.7554/eLife.57968
  22. Chen Y, Martinez-Conde S, Macknik SL, Bereshpolova Y, Swadlow HA, Alonso J-M (2008) Task difficulty modulates the activity of specific neuronal populations in primary visual cortex. Nat Neurosci 11:974–982. https://doi.org/10.1038/nn.2147
  23. Cohen MR, Kohn A (2011) Measuring and interpreting neuronal correlations. Nat Neurosci 14:811–819. https://doi.org/10.1038/nn.2842
  24. Cohen MR, Maunsell JHR (2009) Attention improves performance primarily by reducing interneuronal correlations. Nat Neurosci 12:1594–1600. https://doi.org/10.1038/nn.2439
  25. Connor CE, Preddie DC, Gallant JL, Van Essen DC (1997) Spatial attention effects in macaque area V4. J Neurosci 17:3201–3214. https://doi.org/10.1523/JNEUROSCI.17-09-03201.1997
  26. Cosman JD, Lowe KA, Zinke W, Woodman GF, Schall JD (2018) Prefrontal control of visual distraction. Curr Biol 28:414–420.e3. https://doi.org/10.1016/j.cub.2017.12.023
  27. Cowey A, Rolls ET (1974) Human cortical magnification factor and its relation to visual acuity. Exp Brain Res 21:447–454. https://doi.org/10.1007/BF00237163
  28. Delorme A, Makeig S (2004) EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J Neurosci Methods 134:9–21. https://doi.org/10.1016/j.jneumeth.2003.10.009
  29. Desimone R, Duncan J (1995) Neural mechanisms of selective visual attention. Annu Rev Neurosci 18:193–222. https://doi.org/10.1146/annurev.ne.18.030195.001205
  30. Diamond DM, Campbell AM, Park CR, Halonen J, Zoladz PR (2007) The temporal dynamics model of emotional memory processing: a synthesis on the neurobiological basis of stress-induced amnesia, flashbulb and traumatic memories, and the Yerkes-Dodson law. Neural Plast 2007:60803. https://doi.org/10.1155/2007/60803
  31. Di Russo F, Spinelli D, Morrone MC (2001) Automatic gain control contrast mechanisms are modulated by attention in humans: evidence from visual evoked potentials. Vision Res 41:2435–2447. https://doi.org/10.1016/S0042-6989(01)00134-1
  32. Di Russo F, Pitzalis S, Aprile T, Spitoni G, Patria F, Stella A, Spinelli D, Hillyard SA (2007) Spatiotemporal analysis of the cortical sources of the steady-state visual evoked potential. Hum Brain Mapp 28:323–334. https://doi.org/10.1002/hbm.20276
  33. Dosher B, Lu Z-L (2000) Noise exclusion in spatial cuing of attention. Psychol Sci 11:139–146.
  34. Dodson JD (1915) The relation of strength of stimulus to rapidity of habit-formation in the kitten. J Anim Behav 5:330–336. https://doi.org/10.1037/h0073415
  35. Duffy E (1957) The psychological significance of the concept of "arousal" or "activation." Psychol Rev 64:265–275. https://doi.org/10.1037/h0048837 pmid:13494613
  36. Duncan RO, Boynton GM (2003) Cortical magnification within human primary visual cortex correlates with acuity thresholds. Neuron 38:659–671. https://doi.org/10.1016/S0896-6273(03)00265-4
  37. Elbert T, Rockstroh B (1987) Threshold regulation: a key to the understanding of the combined dynamics of EEG and event-related potentials. J Psychophysiol 1:317–333.
  38. Engel SA, Glover GH, Wandell BA (1997) Retinotopic organization in human visual cortex and the spatial precision of functional MRI. Cereb Cortex 7:181–192.
  39. Fischer T, Langner R, Birbaumer N, Brocke B (2008) Arousal and attention: self-chosen stimulation optimizes cortical excitability and minimizes compensatory effort. J Cogn Neurosci 20:1443–1453.
  40. Foster JJ, Thyer W, Wennberg JW, Awh E (2021) Covert attention increases the gain of stimulus-evoked population codes. J Neurosci 41:1802–1815. https://doi.org/10.1523/JNEUROSCI.2186-20.2020
  41. Fuchs S, Andersen SK, Gruber T, Müller MM (2008) Attentional bias of competitive interactions in neuronal networks of early visual processing in the human brain. Neuroimage 41:1086–1101. https://doi.org/10.1016/j.neuroimage.2008.02.040
  42. Giesbrecht B, Sy J, Bundesen C, Kyllingsbæk S (2014) A new perspective on the perceptual selectivity of attention under load. Ann N Y Acad Sci 1316:71–86. pmid:24716751
  43. Gorea A, Sagi D (2001) Disentangling signal from noise in visual contrast discrimination. Nat Neurosci 4:1146–1150. https://doi.org/10.1038/nn741
  44. Gray JR, Chabris CF, Braver TS (2003) Neural mechanisms of general fluid intelligence. Nat Neurosci 6:316–322. https://doi.org/10.1038/nn1014
  45. Hajcak G, MacNamara A, Foti D, Ferri J, Keil A (2013) The dynamic allocation of attention to emotion: simultaneous and independent evidence from the late positive potential and steady state visual evoked potentials. Biol Psychol 92:447–455. https://doi.org/10.1016/j.biopsycho.2011.11.012
  46. Handy TC, Mangun GR (2000) Attention and spatial selection: electrophysiological evidence for modulation by perceptual load. Percept Psychophys 62:175–186. https://doi.org/10.3758/BF03212070
  47. Handy TC, Soltani M, Mangun GR (2001) Perceptual load and visuocortical processing: event-related potentials reveal sensory-level selection. Psychol Sci 12:213–218. https://doi.org/10.1111/1467-9280.00338
  48. Hara Y, Gardner JL (2014) Encoding of graded changes in spatial specificity of prior cues in human visual cortex. J Neurophysiol 112:2834–2849. https://doi.org/10.1152/jn.00729.2013
  49. Herrero JL, Roberts MJ, Delicato LS, Gieselmann MA, Dayan P, Thiele A (2008) Acetylcholine contributes through muscarinic receptors to attentional modulation in V1. Nature 454:1110–1114. https://doi.org/10.1038/nature07141
  50. Herrero JL, Gieselmann MA, Sanayei M, Thiele A (2013) Attention-induced variance and noise correlation reduction in macaque V1 is mediated by NMDA receptors. Neuron 78:729–739. https://doi.org/10.1016/j.neuron.2013.03.029
  51. Herrmann K, Montaser-Kouhsari L, Carrasco M, Heeger DJ (2010) When size matters: attention affects performance by contrast or response gain. Nat Neurosci 13:1554–1559. https://doi.org/10.1038/nn.2669 pmid:21057509
  52. Hillyard SA, Anllo-Vento L (1998) Event-related brain potentials in the study of visual selective attention. Proc Natl Acad Sci U S A 95:781–787. https://doi.org/10.1073/pnas.95.3.781
  53. Hillyard SA, Vogel EK, Luck SJ (1998) Sensory gain control (amplification) as a mechanism of selective attention: electrophysiological and neuroimaging evidence. Phil Trans R Soc Lond B 353:1257–1270. https://doi.org/10.1098/rstb.1998.0281
  54. Holm S (1979) A simple sequentially rejective multiple test procedure. Scand J Statist 6:65–70.
  55. Huang L, Dobkins KR (2005) Attentional effects on contrast discrimination in humans: evidence for both contrast gain and response gain. Vision Res 45:1201–1212. pmid:15707928
  56. Hull CL (1943) The problem of intervening variables in molar behavior theory. Psychol Rev 50:273–291. https://doi.org/10.1037/h0057518
  57. Itthipuripat S, Serences JT (2016) Integrating levels of analysis in systems and cognitive neurosciences: selective attention as a case study. Neuroscientist 22:225–237. https://doi.org/10.1177/1073858415603312
  58. Itthipuripat S, Cha K, Byers A, Serences JT (2017) Two different mechanisms support selective attention at different phases of training. PLoS Biol 15:e2001724.
  59. Itthipuripat S, Ester EF, Deering S, Serences JT (2014a) Sensory gain outperforms efficient readout mechanisms in predicting attention-related improvements in behavior. J Neurosci 34:13384–13398. https://doi.org/10.1523/JNEUROSCI.2277-14.2014
  60. Itthipuripat S, Garcia JO, Rungratsameetaweemana N, Sprague TC, Serences JT (2014b) Changing the spatial scope of attention alters patterns of neural gain in human cortex. J Neurosci 34:112–123. https://doi.org/10.1523/JNEUROSCI.3943-13.2014
  61. Itthipuripat S, Cha K, Deering S, Salazar AM, Serences JT (2018) Having more choices changes how human observers weight stable sensory evidence. J Neurosci 38:8635–8649. https://doi.org/10.1523/JNEUROSCI.0440-18.2018
  62. Itthipuripat S, Sprague TC, Serences JT (2019) Functional MRI and EEG index complementary attentional modulations. J Neurosci 39:6162–6179. https://doi.org/10.1523/JNEUROSCI.2519-18.2019
  63. Johannes S, Münte TF, Heinze HJ, Mangun GR (1995) Luminance and spatial attention effects on early visual processing. Cogn Brain Res 2:189–205. https://doi.org/10.1016/0926-6410(95)90008-X
  64. Keil A, Moratti S, Sabatinelli D, Bradley MM, Lang PJ (2005) Additive effects of emotional content and spatial selective attention on electrocortical facilitation. Cereb Cortex 15:1187–1197.
  65. Kim YJ, Grabowecky M, Paller KA, Muthu K, Suzuki S (2007) Attention induces synchronization-based response gain in steady-state visual evoked potentials. Nat Neurosci 10:117–125.
  66. Kirkwood A, Rozas C, Kirkwood J, Perez F, Bear MF (1999) Modulation of long-term synaptic depression in visual cortex by acetylcholine and norepinephrine. J Neurosci 19:1599–1609. https://doi.org/10.1523/JNEUROSCI.19-05-01599.1999
  67. Kohn A, Coen-Cagli R, Kanitscheider I, Pouget A (2016) Correlations and neuronal population information. Annu Rev Neurosci 39:237–256. https://doi.org/10.1146/annurev-neuro-070815-013851
  68. Lauritzen TZ, Ales JM, Wade AR (2010) The effects of visuospatial attention measured across visual cortex using source-imaged, steady-state EEG. J Vis 10:39. https://doi.org/10.1167/10.14.39
  69. Lavie N, Hirst A, De Fockert JW, Viding E (2004) Load theory of selective attention and cognitive control. J Exp Psychol General 133:339–354. https://doi.org/10.1037/0096-3445.133.3.339
  70. Lee J, Maunsell JHR (2009) A normalization model of attentional modulation of single unit responses. PLoS One 4:e4651.
  71. Lee J, Maunsell JHR (2010) The effect of attention on neuronal responses to high and low contrast stimuli. J Neurophysiol 104:960–971. https://doi.org/10.1152/jn.01019.2009
  72. Legge GE, Foley JM (1980) Contrast masking in human vision. J Opt Soc Am 70:1458–1471. pmid:7463185
  73. Ling S, Carrasco M (2006) Sustained and transient covert attention enhance the signal via different contrast response functions. Vision Res 46:1210–1220. pmid:16005931
  74. Loftus GR, Masson ME (1994) Using confidence intervals in within-subject designs. Psychon Bull Rev 1:476–490.
  75. Loveless NE, Sanford AJ (1975) The impact of warning signal intensity on reaction time and components of the contingent negative variation. Biol Psychol 2:217–226. https://doi.org/10.1016/0301-0511(75)90021-6
  76. Lu ZL, Dosher BA (2000) Mechanisms of perceptual attention in precuing of location. Vision Res 40:1269–1292.
  77. Luo TZ, Maunsell JHR (2015) Neuronal modulations in visual cortex are associated with only one of multiple components of attention. Neuron 86:1182–1188. https://doi.org/10.1016/j.neuron.2015.05.007
  78. Makeig S, Bell AJ, Jung TP, Sejnowski TJ (1996) Independent component analysis of electroencephalographic data. Adv Neural Inf Process Syst 145–151.
  79. Mangun GR, Buck LA (1998) Sustained visual spatial attention produces costs and benefits in response time and evoked neural activity. Neuropsychologia 36:189–200. https://doi.org/10.1016/S0028-3932(97)00123-1
  80. Mangun GR, Hillyard SA (1987) The spatial allocation of visual attention as indexed by event-related brain potentials. Hum Factors 29:195–211. https://doi.org/10.1177/001872088702900207
  81. Mangun GR, Hillyard SA (1988) Spatial gradients of visual attention: behavioral and electrophysiological evidence. Electroencephalogr Clin Neurophysiol 70:417–428. pmid:2460315
  82. Mangun GR, Hillyard SA (1990) Allocation of visual attention to spatial locations: tradeoff functions for event-related brain potentials and detection performance. Percept Psychophys 47:532–550. https://doi.org/10.3758/BF03203106
  83. Martínez-Trujillo JC, Treue S (2002) Attentional modulation strength in cortical area MT depends on stimulus contrast. Neuron 35:365–370. https://doi.org/10.1016/S0896-6273(02)00778-X
  84. Mather M, Clewett D, Sakaki M, Harley CW (2016) Norepinephrine ignites local hotspots of neuronal excitation: how arousal amplifies selectivity in perception and memory. Behav Brain Sci 39:e200. https://doi.org/10.1017/S0140525X15000667
  85. Maunsell JHR, Treue S (2006) Feature-based attention in visual cortex. Trends Neurosci 29:317–322. https://doi.org/10.1016/j.tins.2006.04.001
  86. McAdams CJ, Maunsell JHR (1999) Effects of attention on orientation-tuning functions of single neurons in macaque cortical area V4. J Neurosci 19:431–441. https://doi.org/10.1523/JNEUROSCI.19-01-00431.1999
  87. McDowd JM, Craik FIM (1988) Effects of aging and task difficulty on divided attention performance. J Exp Psychol Hum Percept Perform 14:267–280. https://doi.org/10.1037/0096-1523.14.2.267
  88. Mitchell AS, Baxter MG, Gaffan D (2007) Dissociable performance on scene learning and strategy implementation after lesions to magnocellular mediodorsal thalamic nucleus. J Neurosci 27:11888–11895. https://doi.org/10.1523/JNEUROSCI.1835-07.2007
  89. Mitchell JF, Sundberg KA, Reynolds JH (2009) Spatial attention decorrelates intrinsic activity fluctuations in macaque area V4. Neuron 63:879–888. https://doi.org/10.1016/j.neuron.2009.09.013
  90. Moran J, Desimone R (1985) Selective attention gates visual processing in the extrastriate cortex. Science 229:782–784. pmid:4023713
  91. Morrone MC, Denti V, Spinelli D (2002) Color and luminance contrasts attract independent attention. Curr Biol 12:1134–1137. https://doi.org/10.1016/S0960-9822(02)00921-1
  92. Morrone MC, Denti V, Spinelli D (2004) Different attentional resources modulate the gain mechanisms for color and luminance contrast. Vision Res 44:1389–1401. https://doi.org/10.1016/j.visres.2003.10.014
  93. Motter BC (1993) Focal attention produces spatially selective processing in visual cortical areas V1, V2, and V4 in the presence of competing stimuli. J Neurophysiol 70:909–919. pmid:8229178
  94. Müller MM, Elbert T, Rockstroh B (1997) Visually-induced gamma band responses in human EEG: expression of cortical stimulus representation? Z Exp Psychol 44:186–212.
  95. Nagai Y, Critchley HD, Featherstone E, Fenwick PBC, Trimble MR, Dolan RJ (2004) Brain activity relating to the contingent negative variation: an fMRI investigation. Neuroimage 21:1232–1241. https://doi.org/10.1016/j.neuroimage.2003.10.036
  96. Norcia AM, Gregory Appelbaum L, Ales JM, Cottereau BR, Rossion B (2015) The steady-state visual evoked potential in vision research: a review. J Vis 15:4–46. https://doi.org/10.1167/15.6.4
  97. Paczynski M, Burton AM, Jha AP (2015) Brief exposure to aversive stimuli impairs visual selective attention. J Cogn Neurosci 27:1172–1179. https://doi.org/10.1162/jocn_a_00768
  98. Pelli DG (1997) The VideoToolbox software for visual psychophysics: transforming numbers into movies. Spatial Vis 10:437–442. https://doi.org/10.1163/156856897X00366
  99. Pestilli F, Carrasco M (2005) Attention enhances contrast sensitivity at cued and impairs it at uncued locations. Vision Res 45:1867–1875. pmid:15797776
  100. Pestilli F, Viera G, Carrasco M (2007) How do attention and adaptation affect contrast sensitivity? J Vis 7:9. https://doi.org/10.1167/7.7.9
  101. Pestilli F, Ling S, Carrasco M (2009) A population-coding model of attention's influence on contrast response: estimating neural effects from psychophysical data. Vision Res 49:1144–1153. https://doi.org/10.1016/j.visres.2008.09.018
  102. Pestilli F, Carrasco M, Heeger DJ, Gardner JL (2011) Attentional enhancement via selection and pooling of early sensory responses in human visual cortex. Neuron 72:832–846. https://doi.org/10.1016/j.neuron.2011.09.025
  103. Phelps EA, Ling S, Carrasco M (2006) Emotion facilitates perception and potentiates the perceptual benefits of attention. Psychol Sci 17:292–299. https://doi.org/10.1111/j.1467-9280.2006.01701.x
  104. Polack PO, Friedman J, Golshani P (2013) Cellular mechanisms of brain state-dependent gain modulation in visual cortex. Nat Neurosci 16:1331–1339. https://doi.org/10.1038/nn.3464
  105. Prinzmetal W, Zvinyatskovskiy A, Gutierrez P, Dilem L (2009) Voluntary and involuntary attention have different consequences: the effect of perceptual difficulty. Q J Exp Psychol 62:352–369. https://doi.org/10.1080/17470210801954892
  106. Purcell BA, Schall JD, Woodman GF (2013) On the origin of event-related potentials indexing covert attentional selection during visual search: timing of selection by macaque frontal eye field and event-related potentials during pop-out search. J Neurophysiol 109:557–569. https://doi.org/10.1152/jn.00549.2012
  107. Regan D (1989) Human brain electrophysiology: evoked potentials and evoked magnetic fields in science and medicine. Br J Ophthalmol 74:255.
  108. Reynolds JH, Chelazzi L (2004) Attentional modulation of visual processing. Annu Rev Neurosci 27:611–647. https://doi.org/10.1146/annurev.neuro.26.041002.131039
  109. Reynolds JH, Heeger DJ (2009) The normalization model of attention. Neuron 61:168–185. https://doi.org/10.1016/j.neuron.2009.01.002
  110. Reynolds JH, Pasternak T, Desimone R (2000) Attention increases sensitivity of V4 neurons. Neuron 26:703–714. https://doi.org/10.1016/S0896-6273(00)81206-4
  111. Ruff DA, Cohen MR (2014a) Global cognitive factors modulate correlated response variability between V4 neurons. J Neurosci 34:16408–16416. https://doi.org/10.1523/JNEUROSCI.2750-14.2014
  112. Ruff DA, Cohen MR (2014b) Attention can either increase or decrease spike count correlations in visual cortex. Nat Neurosci 17:1591–1597. https://doi.org/10.1038/nn.3835
  113. Schönwald LI, Müller MM (2014) Slow biasing of processing resources in early visual cortex is preceded by emotional cue extraction in emotion-attention competition. Hum Brain Mapp 35:1477–1490. https://doi.org/10.1002/hbm.22267
  114. Sereno MI, Dale AM, Reppas JB, Kwong KK, Belliveau JW, Brady TJ, Rosen BR, Tootell RBH (1995) Borders of multiple visual areas in humans revealed by functional magnetic resonance imaging. Science 268:889–893. https://doi.org/10.1126/science.7754376
  115. Song I, Keil A (2013) Affective engagement and subsequent visual processing: effects of contrast and spatial frequency. Emotion 13:748–757. https://doi.org/10.1037/a0031553
  116. Song I, Keil A (2014) Differential classical conditioning selectively heightens response gain of neural population activity in human visual cortex. Psychophysiology 51:1185–1194. https://doi.org/10.1111/psyp.12260
  117. Spitzer H, Richmond BJ (1991) Task difficulty: ignoring, attending to, and discriminating a visual stimulus yield progressively more activity in inferior temporal neurons. Exp Brain Res 83:340–348. https://doi.org/10.1007/BF00231157
  118. Spitzer H, Desimone R, Moran J (1988) Increased attention enhances both behavioral and neuronal performance. Science 240:338–340. https://doi.org/10.1126/science.3353728
  119. Srinivasan R, Bibi FA, Nunez PL (2006) Steady-state visual evoked potentials: distributed local sources and wave-like dynamics are sensitive to flicker frequency. Brain Topogr 18:167–187. https://doi.org/10.1007/s10548-006-0267-4
  120. Störmer VS, Alvarez GA (2014) Feature-based attention elicits surround suppression in feature space. Curr Biol 24:1985–1988.
  121. Störmer VS, McDonald JJ, Hillyard SA (2009) Cross-modal cueing of attention alters appearance and early cortical processing of visual stimuli. Proc Natl Acad Sci U S A 106:22456–22461.
  122. Störmer VS, Winther GN, Li SC, Andersen SK (2013) Sustained multifocal attentional enhancement of stimulus processing in early visual areas predicts tracking performance. J Neurosci 33:5346–5351. https://doi.org/10.1523/JNEUROSCI.4015-12.2013
  123. Tanner WPJ, Swets JA (1954) A decision-making theory of visual detection. Psychol Rev 61:401–409. https://doi.org/10.1037/h0058700
  124. R Core Team (2020) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria.
  125. Urbach D, Spitzer H (1995) Attentional effort modulated by task difficulty. Vision Res 35:2169–2177. https://doi.org/10.1016/0042-6989(94)00305-X
  126. Wang J, Wade AR (2011) Differential attentional modulation of cortical responses to S-cone and luminance stimuli. J Vis 11:1. https://doi.org/10.1167/11.6.1
  127. Wieser MJ, Miskovic V, Rausch S, Keil A (2014) Different time course of visuocortical signal changes to fear-conditioned faces with direct or averted gaze: a ssVEP study with single-trial analysis. Neuropsychologia 62:101–110. https://doi.org/10.1016/j.neuropsychologia.2014.07.009 pmid:25050854
  128. Wyler AR, Ojemann GA, Ward AAJ (1982) Neurons in human epileptic cortex: correlation between unit and EEG activity. Ann Neurol 11:301–308. https://doi.org/10.1002/ana.410110311
  129. Yerkes RM, Dodson JD (1908) The relation of strength of stimulus to rapidity of habit-formation. J Comp Neurol Psychol 18:459–482. https://doi.org/10.1002/cne.920180503
  130. Zhang J, Tan L, Ren Y, Liang J, Lin R, Feng Q, Zhou J, Hu F, Ren J, Wei C, Yu T, Zhuang Y, Bettler B, Wang F, Luo M (2016) Presynaptic excitation via GABAB receptors in habenula cholinergic neurons regulates fear memory expression. Cell 166:716–728. https://doi.org/10.1016/j.cell.2016.06.026
Keywords

  • attention
  • EEG
  • gain
  • perceptual difficulty
  • signal detection theory
  • SSVEP

Responses to this article

No eLetters have been published for this article.

Copyright © 2023 by the Society for Neuroscience.
JNeurosci Online ISSN: 1529-2401
