Research Articles, Behavioral/Cognitive

Bottom-Up and Top-Down Factors Differentially Influence Stimulus Representations Across Large-Scale Attentional Networks

Nicole M. Long and Brice A. Kuhl
Journal of Neuroscience 7 March 2018, 38 (10) 2495-2504; DOI: https://doi.org/10.1523/JNEUROSCI.2724-17.2018
Department of Psychology, University of Oregon, Eugene, Oregon 97402

Abstract

Visual attention is thought to be supported by three large-scale frontoparietal networks: the frontoparietal control network (FPCN), the dorsal attention network (DAN), and the ventral attention network (VAN). The traditional view is that these networks support visual attention by biasing and evaluating sensory representations in visual cortical regions. However, recent evidence suggests that frontoparietal regions actively represent perceptual stimuli. Here, we assessed how perceptual stimuli are represented across large-scale frontoparietal and visual networks. Specifically, we tested whether representations of stimulus features across these networks are differentially sensitive to bottom-up and top-down factors. In a pair of pattern-based fMRI studies, male and female human subjects made perceptual decisions about face images that varied along two independent dimensions: gender and affect. Across studies, we interrupted bottom-up visual input using backward masks. Within studies, we manipulated which stimulus features were goal relevant (i.e., whether gender or affect was relevant) and task switching (i.e., whether the goal on the current trial matched the goal on the prior trial). We found that stimulus features could be reliably decoded from all four networks and, importantly, that subregions within each attentional network maintained coherent representations. Critically, the different attentional manipulations (interruption, goal relevance, and task switching) differentially influenced feature representations across networks. Whereas visual interruption had a relatively greater influence on representations in visual regions, goal relevance and task switching had a relatively greater influence on representations in frontoparietal networks. Therefore, large-scale brain networks can be dissociated according to how attention influences the feature representations that they maintain.

SIGNIFICANCE STATEMENT Visual attention is supported by multiple frontoparietal attentional networks. However, it remains unclear how stimulus features are represented within these networks and how they are influenced by attention. Here, we assessed feature representations in four large-scale networks using a perceptual decision-making paradigm in which we manipulated top-down and bottom-up factors. We found that top-down manipulations such as goal relevance and task switching modulated feature representations in attentional networks, whereas bottom-up manipulations such as interruption of visual processing had a relatively stronger influence on feature representations in visual regions. Together, these findings indicate that attentional networks actively represent stimulus features and that representations within different large-scale networks are influenced by different forms of attention.

  • attention
  • cognitive control
  • decoding
  • parietal cortex
  • prefrontal cortex
  • resting-state networks

Introduction

Visual attention is thought to be supported by several frontoparietal networks (Posner and Petersen, 1990; Corbetta and Shulman, 2002; Dosenbach et al., 2008). The idea that the brain is composed of multiple functional networks has been inspired and elaborated by resting-state analyses of human fMRI data (Yeo et al., 2011), which reveal three networks of particular importance to attentional control: the frontoparietal control network (FPCN), the dorsal attention network (DAN), and the ventral attention network (VAN). Traditionally, these networks have been thought to support visual attention by biasing and evaluating sensory representations within visual cortical areas (Desimone and Duncan, 1995; Egner and Hirsch, 2005; Serences and Yantis, 2006; Gazzaley and Nobre, 2012). However, recent evidence from pattern-based fMRI studies has blurred the distinction between sensory representations in visual cortical areas and control processes in frontoparietal regions. Namely, there is accumulating evidence that frontoparietal regions actively represent stimulus features during visual attention and working memory (Ester et al., 2015; Lee and Kuhl, 2016; Xu, 2017). These findings suggest a potentially transformative approach for understanding the functional role of frontoparietal networks in visual attention: that frontoparietal networks can be characterized, and dissociated from visual cortical regions, in terms of how they represent stimuli in relation to bottom-up and top-down factors.

Attention manipulations that may dissociate stimulus representations in frontal and parietal cortices from those in visual cortex include robustness to interruption of visual processing, goal relevance, and task switching. For example, working memory representations in the intraparietal sulcus (Bettencourt and Xu, 2016) and prefrontal cortex (Miller et al., 1996) are more robust to distraction than are representations in visual cortical areas. Likewise, multiple frontal and parietal regions preferentially represent goal-relevant stimulus information, as shown via electrophysiological recordings in monkeys (Rainer et al., 1998; Swaminathan and Freedman, 2012; Roy et al., 2014; Sarma et al., 2016) and pattern-based fMRI in humans (Kuhl et al., 2013; Sreenivasan et al., 2014; Ester et al., 2015; Bracci et al., 2017). Finally, dorsal frontal and parietal regions show increased univariate activation on trials when goals change (task switching) (Braver et al., 2003; Monsell, 2003; Yeung et al., 2006; Ravizza and Carter, 2008; Bode and Haynes, 2009; Esterman et al., 2009). Collectively, these findings suggest that different forms of attention potentially differentiate stimulus representations across frontoparietal and visual regions. However, there has been limited application of “representation-based” analyses to large-scale networks. Do frontoparietal networks actively and coherently represent stimulus information? Are representations in different networks influenced by different forms of attention?

Here, we conducted a pair of pattern-based fMRI studies to determine how various attention-related manipulations (interruption of visual processing, goal relevance, and task switching) influence feature representations in frontoparietal attentional networks (FPCN, DAN, and VAN) and, as a comparison, within a network of visual regions (VisN). In both studies, subjects viewed faces that varied along two independent dimensions: gender (male vs female) and affect (happy vs grumpy). On each trial, subjects made a perceptual decision related to a cued face feature (e.g., “Male?”). Using pattern classification analyses, we first tested for representation of stimulus features within each network and, importantly, tested whether frontoparietal regions within a common network maintained “coherent” representations. Next, we determined how each attention manipulation influenced feature representations across networks. Visual interruption was manipulated across experiments: in Experiment 1, stimuli were followed by a visual mask; in Experiment 2, there was a longer stimulus duration and no mask. Goal relevance was manipulated by varying the dimension (gender/affect) that was currently relevant. Finally, task switching was manipulated by alternating between goals on a trial-by-trial basis, resulting in trials in which goals repeated (stay trials) and trials in which goals changed (switch trials). We predicted that frontoparietal networks would actively and coherently represent stimulus features and that frontoparietal representations would be relatively more sensitive to top-down manipulations (goals, task switching), whereas representations in VisN would be relatively more sensitive to bottom-up manipulations (interruption).

Materials and Methods

Subjects

Thirty-two (19 female; mean age = 22 years) right-handed, native English speakers from the University of Oregon community participated in the fMRI studies. Sixteen subjects participated in Experiment 1 and 16 participated in Experiment 2. A total of four subjects were excluded. In Experiment 1, one subject was excluded for poor task performance (65% accuracy, which was >4 SDs away from the average performance of 95%) and one was excluded for excessive head motion. In Experiment 2, two subjects were excluded for exiting the scanner before task completion (one subject complained of nausea and the other began coughing repeatedly; each missing two of six runs). Therefore, for each experiment, there was a final set of 14 subjects included for analyses. All subjects had normal or corrected-to-normal vision. Informed consent was obtained in accordance with the University of Oregon Institutional Review Board. The raw, de-identified data and the associated experimental and analysis codes used in this study can be accessed via the Kuhl laboratory website (http://kuhllab.com/publications/).

An additional 14 (5 female; mean age = 19) right-handed, native English speakers from the University of Oregon community participated in a follow-up behavioral study. All subjects had normal or corrected-to-normal vision. Informed consent was obtained in accordance with the University of Oregon Institutional Review Board and subjects received course credit for participating.

Materials

Stimuli consisted of 88 face images drawn from various internet sources. From this pool, 16 faces were designated as “target” faces, 24 as “filler” faces, and 48 as “localizer” faces. Within each set (target, filler, localizer) gender and affect were balanced: 1/4 of the faces were “happy, males”; 1/4 were “happy, females”; 1/4 were “grumpy, males”; and 1/4 were “grumpy, females.” Gender and affect were determined, with unanimous agreement, by two independent raters.

Experimental design and statistical analysis

Procedure and design.

Each trial began with the presentation of one of four “goal cues” or questions: “Male?,” “Female?,” “Happy?,” or “Grumpy?” The goal was presented for 1400 ms (Fig. 1A) and was immediately followed by a face stimulus. In Experiment 1, the face was presented for 100 ms and immediately followed by a visual mask composed of scrambled face parts for 500 ms. In Experiment 2, the face was presented for 600 ms with no mask. After the mask (Experiment 1) or face (Experiment 2), there was a 6 s interstimulus interval (ISI) during which a fixation cross was shown. On each trial, the subject's task was to respond “Yes” or “No” via button box as to whether the face matched the goal/question. Subjects could respond at any point after the onset of the face, including during the ISI.

Figure 1.

Experimental design. A, On a given trial, a goal cue was presented for 1400 ms. There were four possible goal cues (“Male?,” “Female?,” “Happy?,” or “Grumpy?”). After the goal, a face was presented for 100 ms. In Experiment 1, the face was followed by a mask image for 500 ms. In Experiment 2, the face was presented for an additional 500 ms. A fixation cross was then presented for 6000 ms. Subjects responded “Yes” or “No” as to whether the face matched the goal. B, Faces varied along two dimensions: gender and affect. Note that the faces shown here were not those used in the experiment. Permission was given for these faces to be published. C, Each block followed the same general structure. At the start of each block, subjects were cued as to which goals would be relevant for that block. Only two goals were relevant in each block (one from each dimension) and these goals alternated in a fixed A-A-B-B schedule. This enforced a constant alternation between switch and stay trials. D, Four independently defined resting-state networks of a priori interest (Yeo et al., 2011).

Trials were organized into blocks. Each block contained 17 trials, with the first trial in each block presenting a “filler” face. Designating the first trial as a filler trial was particularly important for analyses of “stay” versus “switch” trials because the first trial in a block cannot be classified as either a stay or a switch trial (see below). The stimuli for the remaining 16 trials in each block were target faces. Each of the 16 target faces appeared exactly once per block. Within each block, only two of the four goals were presented (one from each dimension; e.g., “Male?” and “Happy?”). The goals alternated in an A-A-B-B manner (Fig. 1C). Therefore, there was a constant alternation between switch trials (goal change) and stay trials (goal repeat). A 3 s “get ready” screen at the start of each block informed subjects which two goals would occur within that block.
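
To make the alternation concrete, the following MATLAB sketch generates one block's goal schedule under the A-A-B-B design; the variable names are hypothetical and the two goals shown are one example combination:

```matlab
% Minimal sketch of one block's goal schedule (hypothetical variable names).
% With 17 trials per block (1 filler + 16 targets), every target trial after
% the filler alternates between "stay" (goal repeats) and "switch" (goal changes).
goals   = {'Male?', 'Happy?'};            % the two goals cued for this block
pattern = repmat([1 1 2 2], 1, 5);        % A-A-B-B-A-A-B-B ...
pattern = pattern(1:17);                  % 17 trials: trial 1 is the filler
blockGoals = goals(pattern);              % goal cue shown on each trial
isSwitch   = [NaN, diff(pattern) ~= 0];   % NaN: the filler is neither stay nor switch
```

With this schedule, each of the two goals is cued on exactly 8 of the 16 target trials, and stay and switch trials strictly alternate.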

Each of the six scan runs contained four blocks. The four blocks in each run comprised each of the four possible combinations of goals (Male/Grumpy, Female/Grumpy, Male/Happy, and Female/Happy). Block order was randomized across runs and across subjects. The order of the 16 target faces was also randomized in each block. Across the six scan runs, there was a grand total of 408 trials (including filler trials). To familiarize subjects with the task, before entering the scanner, subjects completed a 24-trial practice round. Subjects practiced each goal combination (e.g., “Male?/Happy?”) and the stimuli used were the 24 filler faces.

After the main experiment, subjects completed two localizer runs. However, because data from the localizer scan are not reported here, details of the task are not included.

The design of the behavioral follow-up study was identical to Experiment 1, with the exception of the ISI (3 s instead of 6 s) and the inclusion of a post-task questionnaire, in which subjects were asked increasingly specific questions as to whether they noticed the task switching structure. We first asked subjects if they noticed any patterns throughout the task. We then noted that there was a pattern to the goal cues and asked if they were aware of such a pattern. Finally, we explicitly asked subjects if they noticed the A-A-B-B-A-A-B-B structure of the goal cues.

fMRI data acquisition

Imaging data were collected on a Siemens 3 T Skyra scanner at the Robert and Beverly Lewis Center for NeuroImaging at the University of Oregon. Before the functional imaging, a whole-brain high-resolution anatomical image was collected for each subject using a T1-weighted protocol (grid size 256 × 256; 176 sagittal slices; voxel size 1 × 1 × 1 mm). Whole-brain functional images were collected using a T2*-weighted multiband accelerated EPI sequence (TR = 2 s; TE = 25 ms; flip angle = 90°; 72 horizontal slices; grid size 104 × 104; voxel size 2 × 2 × 2 mm). For the main experiment, six functional scans were collected, each consisting of 280 volumes. For the localizer task, two functional scans were collected, each consisting of 225 volumes.

fMRI data preprocessing

Preprocessing of the functional data was conducted using FSL 5.0 (FMRIB Software Library, http://www.fmrib.ox.ac.uk/fsl; Smith et al., 2004) and custom scripts. Images were first corrected for head motion using MCFLIRT (Jenkinson et al., 2002). Motion-corrected images were smoothed with a Gaussian kernel with 1.7 mm SD (∼4 mm FWHM).
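
For illustration, the sketch below shows one way these two steps could be scripted by calling FSL from MATLAB; the filenames are hypothetical, and FSL is assumed to be installed and on the system path:

```matlab
% Minimal sketch of the two preprocessing steps (hypothetical filenames).
% fslmaths' -s flag takes the Gaussian kernel SD in mm, matching the
% 1.7 mm SD (~4 mm FWHM) reported above.
for run = 1:6
    rawImg = sprintf('run%d.nii.gz', run);
    mcImg  = sprintf('run%d_mc', run);
    smImg  = sprintf('run%d_mc_sm', run);
    system(sprintf('mcflirt -in %s -out %s -plots', rawImg, mcImg));  % motion correction
    system(sprintf('fslmaths %s -s 1.7 %s', mcImg, smImg));           % Gaussian smoothing
end
```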

Network selection

We assessed feature representations in four resting-state networks defined from a large, independent sample of subjects (Yeo et al., 2011): the FPCN, DAN, VAN, and VisN (Fig. 1D). The resting-state networks were generated for each subject using their high-resolution anatomical image and the FreeSurfer cortical parcellation scheme (http://surfer.nmr.mgh.harvard.edu). The networks were then coregistered to the functional data.
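
The paper does not list the exact commands used for this step, so the following is only a plausible sketch: it resamples FreeSurfer's fsaverage copy of the Yeo et al. (2011) 7-network annotation onto an individual subject's surface via mri_surf2surf, called from MATLAB. The annotation filename (as shipped with recent FreeSurfer releases) and the surrounding usage are assumptions.

```matlab
% Hedged sketch of bringing the Yeo networks into subject space; the
% annotation filename and command usage are assumptions, not taken from the
% paper. FreeSurfer must be configured and SUBJECTS_DIR set.
subj = 'sub01';                      % hypothetical subject ID
for hemi = {'lh', 'rh'}
    h = hemi{1};
    cmd = sprintf(['mri_surf2surf --srcsubject fsaverage --trgsubject %s ' ...
                   '--hemi %s --sval-annot %s.Yeo2011_7Networks_N1000.annot ' ...
                   '--tval %s.Yeo7.annot'], subj, h, h, h);
    system(cmd);                     % resample annotation to the subject's surface
end
% The subject-space annotation would then be converted to a volume and
% coregistered to the functional data for use as network masks.
```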

Univariate analyses

Univariate data analyses were conducted under the assumptions of the general linear model (GLM) using SPM12 (http://www.fil.ion.ucl.ac.uk/spm). To test for univariate effects of switch versus stay trials, we defined a model with separate regressors for switch and stay trials. The model also included regressors for scan run and six motion parameters for each run. Switch versus stay trials were contrasted using paired-samples t tests resulting in subject-specific statistical parametric maps. These t values were then averaged within network, resulting in a single mean t statistic per network and subject. For each network, one-sample t tests were then applied across subjects to test for effects of switch versus stay trials at the group level.
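
A minimal MATLAB sketch of the network-averaging step follows; the subject-specific t maps and network masks are hypothetical precomputed variables:

```matlab
% Minimal sketch of the network-level univariate test (hypothetical variables).
% Subject-specific switch-vs-stay t maps from the GLM contrast are averaged
% within a network mask, then tested against zero across subjects.
nSubj = 28;
meanT = nan(nSubj, 1);
for s = 1:nSubj
    tMap = subjectTMaps{s};          % voxelwise switch > stay t values
    mask = networkMasks{s};          % logical mask, e.g., FPCN voxels
    meanT(s) = mean(tMap(mask));     % one mean t statistic per network and subject
end
[~, pValue] = ttest(meanT);          % group-level one-sample t test vs 0
```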

Pattern classification analyses

For pattern classification analyses, functional data were detrended, high-pass filtered (0.01 Hz), and z-scored within scan (mean response of each voxel within each scan = 0). Next, data were temporally compressed via a weighted average of TRs 3, 4, and 5 (40%, 40%, and 20%, respectively) relative to trial onset (representing 4–10 s after the goal was presented). TRs 3 and 4 were given greater weight because the hemodynamic response tends to peak at these TRs. The temporally compressed data resulted in a single spatial pattern of activity for each trial. Before pattern classification analyses were performed, an additional round of z-scoring was applied across voxels to the trial-specific spatial patterns. This final round of z-scoring resulted in each trial-specific spatial volume having a mean activation equal to 0. Therefore, mean univariate activity was matched precisely across all conditions and trial types (Kuhl and Chun, 2014; Long et al., 2016). Pattern classification analyses were performed using penalized (L2) logistic regression (penalty parameter = 1) implemented via the Liblinear toolbox (Fan et al., 2008) and custom MATLAB (RRID:SCR_001622) code. Classifier performance was assessed in two ways. “Classification accuracy” represented a binary coding of whether the classifier successfully guessed the queried feature of the face. We used classification accuracy for general assessment of classifier performance (i.e., whether features could be decoded). “Classifier evidence” was a continuous value reflecting the logit-transformed probability that the classifier assigned to the correct feature on each trial. Classifier evidence was used as a trial-specific, continuous measure of feature information, which allowed us to assess trial-level correlations in feature representations within and between networks (see below).
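
The following MATLAB sketch illustrates the core of this pipeline, from temporal compression through classifier training; variable names are hypothetical, the high-pass filtering step is omitted for brevity, and the Liblinear MATLAB interface is assumed to be on the path:

```matlab
% Minimal sketch of trial-pattern construction and classifier training
% (hypothetical variables; high-pass filtering omitted for brevity).
% Liblinear's options '-s 0 -c 1' request L2-regularized logistic
% regression with penalty parameter 1, as described above.
runData = zscore(detrend(runData));           % detrend, z-score each voxel within scan
w = [0.4, 0.4, 0.2];                          % weights for TRs 3, 4, and 5
nTrials  = numel(onsetTRs);                   % onsetTRs: trial-onset TR indices
patterns = zeros(nTrials, size(runData, 2));  % runData is TR-by-voxel
for t = 1:nTrials
    trs = onsetTRs(t) + (2:4);                % TRs 3-5 relative to trial onset
    patterns(t, :) = w * runData(trs, :);     % weighted temporal compression
end
patterns = zscore(patterns, 0, 2);            % z-score across voxels (mean = 0 per trial)
model = train(labels, sparse(patterns), '-s 0 -c 1');  % labels: double column vector
```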

For each subject, four separate classifiers were trained to decode stimulus features. A given classifier was trained to discriminate either male from female faces (gender classifier) or happy from grumpy faces (affect classifier). Additionally, separate gender classifiers were applied for trials in which the goal was either “Male?” or “Female?” (i.e., trials in which gender was relevant) versus trials in which the goal was either “Happy?” or “Grumpy?” (i.e., trials in which affect was relevant). Likewise, separate affect classifiers were applied for trials in which affect was the relevant dimension versus trials in which gender was the relevant dimension. Goal-relevant feature representations were indexed by performance of the gender classifier on gender-relevant trials and performance of the affect classifier on affect-relevant trials. Goal-irrelevant feature representations were indexed by performance of the gender classifier on affect-relevant trials and performance of the affect classifier on gender-relevant trials. All classification analyses were performed using leave-one-run-out cross-validation. A critical element of our design and implementation of classification analyses is that we deliberately orthogonalized feature information from behavioral responses. In other words, there was no consistent mapping between feature information and motor response. As an example, during trials in which gender was relevant, 50% of the time, a female face would require a “Yes” response (i.e., on “Female?” trials) and, 50% of the time, a female face would require a “No” response (i.e., on “Male?” trials). Therefore, successful decoding of goal-relevant feature information cannot be attributed to decoding of the planned or executed motor responses. A second critical element of our design is that, across trials, goals and features were not always matched. In fact, they were independent. That is, when presented with a goal cue of “Male?,” the subsequently presented stimulus was equally likely to be a male or female face. This design feature critically allowed us to deconfound goal and feature information. In other words, decoding of stimulus features cannot be driven exclusively by goal information.
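
A minimal sketch of the leave-one-run-out procedure for one such feature classifier follows; patterns, labels, and runIdx are hypothetical precomputed variables, and Liblinear's '-b 1' option returns the class probabilities from which trialwise classifier evidence can be computed:

```matlab
% Minimal sketch of leave-one-run-out cross-validation for one feature
% classifier (e.g., gender decoding on gender-relevant trials).
nRuns = 6;
acc   = nan(nRuns, 1);
for heldOut = 1:nRuns
    trainIdx = runIdx ~= heldOut;             % train on five runs
    testIdx  = runIdx == heldOut;             % test on the held-out run
    model = train(labels(trainIdx), sparse(patterns(trainIdx, :)), '-s 0 -c 1');
    [pred, ~, prob] = predict(labels(testIdx), sparse(patterns(testIdx, :)), ...
        model, '-b 1');                       % '-b 1' returns class probabilities
    acc(heldOut) = mean(pred == labels(testIdx));
    % Logit-transforming the probability assigned to the correct feature
    % yields the trialwise "classifier evidence" used in later analyses.
end
featureAccuracy = mean(acc);                  % one accuracy value per classifier
```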

To decode top-down goals, an additional four classifiers were applied. The four classifiers corresponded to the four different possible combinations of goals in a given block: “Male?” versus “Grumpy?,” “Male?” versus “Happy?,” “Female?” versus “Grumpy?,” and “Female?” versus “Happy?” Classification was performed using leave-one-run-out cross-validation. Classification accuracy was averaged across all four classifiers to provide a single measure of goal decoding accuracy. Because goal decoding could potentially be driven by low-level information such as visual word form or subvocal articulation, we also tested whether goal classifiers generalized to different goals corresponding to the same dimensions. For example, a classifier trained to discriminate between “Male?” versus “Happy?” faces can also be thought of as a classifier that is discriminating between the gender versus affect dimensions. If so, then a classifier trained on “Male?” versus “Happy?” may successfully transfer to the discrimination of “Female?” versus “Grumpy?” Successful transfer would suggest that the goal representation is, at least in part, a representation of the relevant dimension as opposed to the specific word cue per se. Therefore, for each of the four goal classifiers described above, we tested for transfer to the “complementary” pair of goals; i.e., goal pairs that corresponded to discrimination between the same dimensions. This dimension decoding analysis was again performed using leave-one-run-out cross-validation so as to match the goal decoding analyses in terms of statistical power. Classification accuracy was averaged across all four transfer tests to provide a single aggregate value of dimension decoding accuracy.
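
A sketch of one such transfer test follows, with the leave-one-run-out structure omitted for brevity and hypothetical variable names; dimLabels codes the relevant dimension (1 = gender goal, 2 = affect goal):

```matlab
% Minimal sketch of the dimension-transfer test: a classifier trained on one
% goal pair is tested on the complementary pair spanning the same dimensions.
trainIdx = ismember(goalCue, {'Male?', 'Grumpy?'});    % training goal pair
testIdx  = ismember(goalCue, {'Female?', 'Happy?'});   % complementary pair
model = train(dimLabels(trainIdx), sparse(patterns(trainIdx, :)), '-s 0 -c 1');
pred  = predict(dimLabels(testIdx), sparse(patterns(testIdx, :)), model);
dimensionAccuracy = mean(pred == dimLabels(testIdx));
% Above-chance transfer implies the decoded signal reflects the relevant
% dimension rather than only the specific word cue.
```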

Representational coherence analysis

To assess the coherence of representations within and between networks, we decomposed each of the three attentional networks into separate frontal and parietal ROIs: frontal-FPCN, parietal-FPCN, frontal-DAN, parietal-DAN, frontal-VAN, and parietal-VAN. None of the voxels from the frontal ROIs were contiguous with voxels from the parietal ROIs. These ROIs were generated using the average subject brain in FreeSurfer. The ROIs were then projected from this volume space to surface space and then converted from average surface space to subject space. Because the number of voxels in these regions varied both within and across subjects and differences in ROI size are likely to influence classifier performance, classification analyses were performed by randomly subsampling 500 voxels from each of the six frontal and parietal ROIs. This process was repeated for 100 iterations for each ROI and subject, with each iteration involving a different random sample of 500 voxels. Classification of goal-relevant and goal-irrelevant features was performed using the same approach described above. Here, however, classifier evidence (a continuous value reflecting the strength of classifier information) was the critical dependent measure. For each subject, trial-by-trial measures of classifier evidence were correlated within network (e.g., a correlation between classifier evidence from frontal-FPCN and parietal-FPCN) and between network (e.g., frontal-FPCN and parietal-DAN). The correlation analyses were performed separately for relevant and irrelevant feature evidence. In total, we obtained 50,400 ρ values: 100 iterations × 9 correlations × 28 subjects × 2 relevance conditions. All ρ values were Fisher-z transformed before averaging across iterations and goal relevance. Finally, z-ρ values were averaged across the three “within” network correlations (e.g., correlations between frontal-FPCN and parietal-FPCN) and the six “between” network correlations (e.g., correlations between frontal-FPCN and parietal-DAN), resulting in two z-ρ values for each subject: a mean within-network correlation and a mean between-network correlation.
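
A minimal sketch of the subsampling and correlation loop for a single frontal-parietal pair follows; decodeEvidence is a hypothetical helper standing in for the full decoding pipeline described above, other variable names are also hypothetical, and a Pearson correlation is assumed:

```matlab
% Minimal sketch of the coherence analysis for one ROI pair (hypothetical
% variables; decodeEvidence is a stand-in for the decoding pipeline above
% and returns a trial-by-1 vector of classifier evidence).
nIter = 100;
zRho  = nan(nIter, 1);
for it = 1:nIter
    fVox = randsample(size(frontalData, 2), 500);   % random 500-voxel subsample
    pVox = randsample(size(parietalData, 2), 500);
    fEvid = decodeEvidence(frontalData(:, fVox), labels, runIdx);
    pEvid = decodeEvidence(parietalData(:, pVox), labels, runIdx);
    zRho(it) = atanh(corr(fEvid, pEvid));           % Fisher z-transformed correlation
end
pairCoherence = mean(zRho);   % contributes to the within- or between-network mean
```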

Statistical analyses

One-sample t tests were used to compare representational coherence measures (z-ρ values) to zero. Paired-sample t tests were used to compare classification accuracy across subjects to chance decoding accuracy, as determined by permutation procedures. Namely, for each subject and network, we shuffled the condition labels of interest (e.g., “male” and “female” for the gender feature classifier) and then calculated classification accuracy. We repeated this procedure 1000 times for each network and subject and then averaged the 1000 shuffled accuracy values for each network and subject. These mean values were used as network- and subject-specific empirically derived measures of chance accuracy. Paired-samples t tests compared the true (unshuffled) accuracy values to the shuffled accuracy values. For these paired-sample t tests, we report uncorrected p-values; however, all of the p-values remained significant after Bonferroni correction; that is, after adjusting for the fact that we tested effects across four networks (i.e., a threshold of p = 0.0125). Mixed-effects ANOVAs were used to compare conditions and/or networks; experiment was always included as a factor when data from both experiments were included.
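
A minimal sketch of the permutation procedure for one subject and network follows; runClassifier is a hypothetical helper wrapping the leave-one-run-out pipeline described above:

```matlab
% Minimal sketch of the permutation-based chance estimate (runClassifier is
% a hypothetical helper returning mean leave-one-run-out decoding accuracy).
nPerm   = 1000;
permAcc = nan(nPerm, 1);
for p = 1:nPerm
    shuffled   = labels(randperm(numel(labels)));  % shuffle condition labels
    permAcc(p) = runClassifier(patterns, shuffled, runIdx);
end
chanceAcc = mean(permAcc);   % subject- and network-specific empirical chance
trueAcc   = runClassifier(patterns, labels, runIdx);
% At the group level, true and chance accuracies are compared with a
% paired-samples t test across subjects.
```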

Results

Large-scale networks represent stimulus features

Motivated by recent evidence that activity patterns in frontoparietal regions represent stimulus features (Swaminathan and Freedman, 2012; Ester et al., 2015; Lee and Kuhl, 2016; Bracci et al., 2017), we first tested for representations of face features (gender, affect) within each of the three attentional networks (FPCN, DAN, VAN) and, for comparison, within VisN. Importantly, and as described in the Materials and Methods, our procedure deliberately deconfounded feature information from behavioral responses, so decoding accuracy cannot be explained by decoding of response preparation or execution. Averaging across the gender and affect classifiers (producing a single value per subject reflecting feature decoding), accuracy was above chance in each of the four networks (FPCN, t(27) = 5.82, p < 0.001; DAN, t(27) = 5.37, p < 0.001; VAN, t(27) = 5.42, p < 0.001; VisN, t(27) = 4.34, p < 0.001; Figure 2A), confirming the sensitivity of these large-scale networks to feature information. There were no significant differences in decoding accuracy for the gender versus affect classifiers for any of the four networks (t < 1, p > 0.35). A mixed-effects ANOVA with factors of network (FPCN, DAN, VAN, VisN) and experiment (1, 2) did not reveal a main effect of network (F(3,78) = 0.49, p = 0.69), indicating that overall feature decoding was comparable across networks. The effects of experiment are considered below.

Figure 2.

Feature decoding across networks. A, Feature decoding averaged across experiments and goal relevance. Feature decoding was reliably above chance in all networks. B–D, Representational coherence within and between networks. B, Each attentional network was divided into discontiguous frontal and parietal subregions (here shown for a single subject). C, Correlations were computed for trial-level measures of goal-relevant feature evidence for each pair of frontal and parietal subregions. Within-network correlations (dark gray) corresponded to correlations between pairs of subregions from the same network (e.g., frontal-FPCN and parietal-FPCN). Between-network correlations (light gray) corresponded to correlations between pairs of subregions from different networks (e.g., frontal-FPCN and parietal-DAN). This analysis was then repeated for goal-irrelevant feature evidence and Fisher Z-transformed ρ values were then averaged across the goal-relevant and goal-irrelevant correlation analyses. D, Within-network correlations were reliably stronger than between-network correlations. ***p < 0.001.

Representational coherence within networks

The preceding results demonstrate that activity patterns in resting-state networks represent stimulus features. However, the use of large-scale networks as ROIs is predicated on the notion that individual components (brain regions) within these networks are acting together in a functionally relevant way to support behavior. To test for coherence of representations within the attentional networks, we divided each of the three attentional networks into frontal and parietal subregions (see Materials and Methods and ROIs from a sample subject in Fig. 2B) and correlated trial-by-trial feature evidence within and between each network (Fig. 2C). Correlations were significantly >0 both within networks (M = 0.15, SD = 0.03; t(27) = 24.7, p < 0.001) and between networks (M = 0.14, SD = 0.03; t(27) = 27.4, p < 0.001; Fig. 2D). A 2 × 2 mixed-effects ANOVA with factors of network pairing (within, between) and experiment (1, 2) revealed a main effect of network pairing (F(1,26) = 23.9, p < 0.001), with stronger correlations within networks than between networks. This main effect of network pairing did not interact with experiment (F(1,26) = 0.03, p = 0.86). Therefore, although feature representations were present across all of the attentional networks, there was greater representational coherence within networks than between networks, validating the use of these networks as large-scale ROIs for pattern-based analyses.

Robustness of feature representations to visual interruption

We next tested the relative sensitivity of feature representations in each network to the interruption of visual processing by comparing decoding accuracy across experiments. Sensitivity to visual interruption would be reflected by relatively lower decoding accuracy in Experiment 1 (100 ms stimulus + 500 ms visual mask) than in Experiment 2 (600 ms stimulus + no visual mask; Fig. 3). A mixed-effects ANOVA with factors of experiment (1, 2) and network (FPCN, DAN, VAN, VisN) revealed a main effect of experiment (F(1,26) = 26.5, p < 0.001), with lower decoding accuracy in Experiment 1 than Experiment 2, consistent with a disruptive influence of interruption. Relative sensitivity to visual interruption varied markedly across networks, as reflected by a significant experiment-by-network interaction (F(3,78) = 6.2, p < 0.001).

Figure 3.

Feature decoding as a function of interruption. Interruption of visual processing was manipulated across experiments. In Experiment 1 (clear circles), stimuli were followed by a visual mask. In Experiment 2 (filled circles), there was a longer stimulus duration and no mask. There was a significant network-by-experiment interaction (p < 0.001). Significant differences between experiments were observed in VAN and VisN. Error bars indicate SEM. ∼p < 0.10, **p < 0.01, ***p < 0.001.

Post hoc independent-samples t tests revealed no reliable differences between experiments in FPCN (t(26) = 1.1, p = 0.29), a trend in DAN (t(26) = 2.0, p = 0.06), and reliably greater feature decoding in Experiment 2 than Experiment 1 in VAN (t(26) = 3.2, p = 0.003) and VisN (t(26) = 5.8, p < 0.001). Therefore, the experimental manipulation of a brief masked presentation (Experiment 1) versus an extended, unmasked presentation (Experiment 2), the only difference between the two experiments, robustly diminished feature representations in VisN and VAN, but had relatively less influence on feature representations in DAN and FPCN.

Goal relevance influences feature representations

We next tested the relative sensitivity of feature representations in each network to trial-specific behavioral goals; that is, whether goal-relevant feature information was stronger than goal-irrelevant feature information (Kuhl et al., 2013; Roy et al., 2014; Sarma et al., 2016; Bracci et al., 2017). For example, on trials in which gender was the relevant dimension (“Male?” or “Female?” trials), goal-relevant feature information was indexed by accuracy of the gender classifier and goal-irrelevant feature information was indexed by accuracy of the affect classifier. To be clear, for this analysis, decoding accuracy refers only to the feature information in the actual face image. Therefore, if gender was the relevant dimension and the face image was a female face, then the classifier was accurate if it guessed “Female” regardless of whether the current goal was “Male?” or “Female?” This approach ensured that feature information was not confounded with goal information. Separate classifiers were trained/tested for relevant and irrelevant feature representations. That is, one set of classifiers was specifically trained and tested on goal-relevant feature information and a separate set was trained and tested on goal-irrelevant feature information. Separate classifiers were used for goal-relevant and goal-irrelevant feature decoding so that there was no assumption that goal-relevant and goal-irrelevant feature representations are encoded in a common (generalizable) way.

A mixed-effects ANOVA with factors of experiment, goal relevance (relevant, irrelevant), and network (FPCN, DAN, VAN, VisN) revealed a marginally significant main effect of relevance (F(1,26) = 3.93, p = 0.06), reflecting a trend for stronger decoding of goal-relevant than goal-irrelevant feature information (Fig. 4). The effect of relevance did not vary by experiment (F(1,26) = 0.07, p = 0.80). More critically, there was a significant interaction between network and goal relevance (F(3,78) = 3.12, p = 0.03), indicating that the sensitivity of feature representations to top-down goals varied across the networks. This interaction between network and relevance did not vary by experiment (F(3,78) = 1.24, p = 0.30).

Figure 4.

Feature decoding as a function of goal relevance. Orange indicates goal-relevant feature decoding (e.g., decoding whether a face was male vs female when the goal was “Male?” or “Female?”). Blue indicates goal-irrelevant feature decoding (e.g., decoding whether a face was male vs female when the goal was “Happy?” or “Grumpy?”). There was a significant interaction between network and goal relevance (p = 0.03). Goal-relevant feature decoding was reliably greater than goal-irrelevant feature decoding in VAN, with a similar trend in FPCN. ∼p < 0.10, ***p < 0.001.

For each network, we ran follow-up mixed-effects ANOVAs with factors of goal relevance (relevant, irrelevant) and experiment to test for effects of goal relevance within each network. A significant main effect of goal relevance (relevant > irrelevant) was observed in VAN (F(1,26) = 13.9, p < 0.001), with a similar trend in FPCN (F(1,26) = 3.7, p = 0.07). There was no effect of relevance in DAN (F(1,26) = 0.03, p = 0.87) or VisN (F(1,26) = 0.12, p = 0.73). None of the main effects of relevance interacted with experiment (F < 2.5, p > 0.1).

Task switches influence feature representations

In the preceding section, we considered whether goal-relevant features were more strongly represented than goal-irrelevant features. However, goal cues (and, therefore, the relevant feature dimension) changed every two trials, creating both switch (goal change) and stay (goal repeat) trials. To measure the influence of goal cue switches, we first probed the behavioral data for effects of task switching. Reaction times (RTs) were compared as a function of trial type (switch, stay) and experiment in a mixed-effects ANOVA. As expected, RTs were significantly greater for switch trials (M = 1423 ms, SD = 259 ms) than stay trials (M = 1398 ms, SD = 269 ms; F(1,26) = 12.7, p = 0.001; Fig. 5A). This effect did not interact with experiment (F(1,26) = 3.2, p = 0.08). Accuracy did not significantly differ across switch and stay trials (switch, M = 96.92%, SD = 3.17%; stay, M = 97.52%, SD = 2.17%; t(27) = −1.5, p = 0.13). There was a trend toward an interaction between switch/stay and experiment (F(1,26) = 4.03, p = 0.06), reflecting a relatively greater accuracy advantage for stay over switch trials in Experiment 2.

Figure 5.

Effects of task switching. A, Difference in RTs (ms) between switch and stay trials. RTs were significantly slower for switch trials than stay trials. B, Univariate contrast of switch versus stay trials. Activation was significantly greater for switch than stay trials in both FPCN and DAN. C, Goal-relevant feature decoding separated by switch and stay trials. There was a significant interaction between switch/stay and network, reflecting greater goal-relevant feature decoding on stay than switch trials in FPCN; the opposite trends were true for DAN and VAN. ∼p < 0.10, *p < 0.05, **p < 0.01, ***p < 0.001.

To determine whether the predictable nature of goal cues (i.e., the A-A-B-B-A-A-B-B alternation) was detected by subjects and/or influenced switch costs (Rogers and Monsell, 1995; Monsell, 2003), we ran an independent behavioral study in which 14 subjects completed an experiment virtually identical to Experiment 1 (see Materials and Methods). Critically, however, this behavioral experiment also included a postexperiment questionnaire to assess whether subjects became aware of the alternating pattern of goal cues. Across all subjects, RTs were significantly greater for switch trials (M = 1298 ms, SD = 227 ms) than stay trials (M = 1257 ms, SD = 204 ms; t(13) = 2.58, p = 0.02), consistent with the data from Experiments 1 and 2. Of the 14 subjects, four were able to explicitly describe the task structure (i.e., they were aware of the pattern of alternation). Among these four subjects, switch costs were numerically greater (M = 59 ms) relative to subjects who were unaware of the task structure (M = 33 ms). These data indicate that, despite the highly structured task alternation, most subjects did not become aware of this structure. In addition, at least in this small sample of subjects, there was no evidence that awareness of the structure reduced switch costs. Therefore, it is unlikely that explicit awareness of the task structure had a major influence on fMRI-based effects of task switching.

Prior univariate fMRI studies of task switching have consistently reported increased activation during switch versus stay trials in dorsal frontoparietal regions (overlapping with FPCN and DAN), whereas more ventral regions (overlapping with VAN) are generally insensitive to switch effects (Kimberg et al., 2000; Rushworth et al., 2002; Braver et al., 2003; Brass and Von Cramon, 2004; Corbetta et al., 2008). In light of these prior studies, and as a comparison point for our decoding analyses, we tested for univariate effects of task switching in each of the four resting-state networks. Consistent with prior findings, we found reliably greater activation for switch than stay trials in FPCN (t(27) = 4.0, p < 0.001) and DAN (t(27) = 2.5, p = 0.02) and no effect in VAN (t(27) = 1.2, p = 0.23) or VisN (t(27) = 0.23, p = 0.82; Fig. 5B). Post hoc independent-samples t tests within each network did not reveal any between-experiment differences in switch effects in FPCN, DAN, or VAN (t < 1.5, p > 0.1). In VisN, there was a reliable difference between experiments (t(26) = −2.3, p = 0.03), although neither experiment alone showed a reliable difference between switch and stay trials. In Experiment 1, activation was relatively greater for stay than switch trials (t(13) = −1.7, p = 0.12), whereas in Experiment 2, activation was relatively greater for switch than stay trials (t(13) = 1.6, p = 0.14).

Next, we considered the novel question of whether task switches influenced the strength of feature representations. For this analysis, we used the same classifiers described above and then back-sorted trial-specific accuracy values as a function of trial type (switch vs stay). Decoding accuracy was entered into a mixed-effects ANOVA with factors of task switch (switch, stay), relevance (relevant, irrelevant), network, and experiment. The main effect of task switch was not significant (F(1,26) = 0.09, p = 0.77). However, there was a robust 3-way interaction between the effects of task switch, relevance, and network (F(3,78) = 4.50, p = 0.006). Considering goal-relevant feature information alone, a mixed-effects ANOVA with factors of task switch, network, and experiment revealed a significant interaction between task switch and network (F(3,78) = 4.87, p = 0.004). For goal-irrelevant feature information, a similar mixed-effects ANOVA did not reveal an interaction between task switch and network (F(3,78) = 0.60, p = 0.62). Therefore, the influence of task switches on feature representations varied across the networks, but only when considering goal-relevant feature representations.

The interaction between task switch and network for goal-relevant feature representations was driven by relatively stronger representations on switch versus stay trials in DAN and VAN and an opposite pattern in FPCN. This interaction did not significantly vary by experiment (F(3,78) = 2.25, p = 0.09). Follow-up mixed-effects ANOVAs with factors of task switch (switch, stay) and experiment revealed that, within FPCN, there was a significant effect of task switch (F(1,26) = 6.3, p = 0.02), reflecting relatively lower feature information on switch than stay trials. In contrast, there were opposite trends (greater feature information on switch than stay trials) in DAN (F(1,26) = 4.2, p = 0.05) and VAN (F(1,26) = 4.2, p = 0.05). In VisN, there was no effect of task switch (F(1,26) = 1.1, p = 0.31). The effect of task switch interacted with experiment in FPCN (F(1,26) = 6.6, p = 0.02), but not in DAN, VAN, or VisN (F < 2.2, p > 0.15). For FPCN, the interaction between experiment and task switch reflected a greater decoding advantage on stay versus switch trials in Experiment 2 compared with Experiment 1. Collectively, these data indicate that task switches had an effect on goal-relevant feature information, but this effect varied across networks and was most evident in the attentional networks.

Network representations of task goals

Although our central aim was to assess feature representations across networks, a complementary question is how/whether these networks represent top-down behavioral goals. Previous research has revealed top-down goal effects within multiple frontoparietal regions (Esterman et al., 2009; Greenberg et al., 2010; Liu and Hou, 2013; Waskom et al., 2014; Waskom and Wagner, 2017). Here, to test for goal representations, we trained and tested four separate pairwise classifiers to distinguish each combination of gender and affect goals. For example, one classifier was trained to dissociate “Male?” versus “Grumpy?” goal cues. Averaging across the pairwise classifiers, goal decoding was significantly above chance in each of the four networks (FPCN, t(27) = 8.8, p < 0.001; DAN, t(27) = 10.6, p < 0.001; VAN, t(27) = 5.9, p < 0.001; VisN, t(27) = 10.8, p < 0.001; Fig. 6). A mixed-effects ANOVA with factors of experiment and network revealed no main effect of experiment (F(1,26) = 0.76, p = 0.39), a main effect of network (F(3,78) = 8.53, p < 0.001), and no interaction between network and experiment (F(3,78) = 0.31, p = 0.81). Adding task switch (switch, stay) as a factor (Waskom et al., 2014) revealed no main effect of switch on goal decoding (F(1,26) = 0.01, p = 0.92), nor was there an interaction between the factors of task switch and network (F(3,78) = 0.70, p = 0.56).

Figure 6.

Goal and dimension decoding. For goal decoding (teal), four different classifiers were trained to dissociate each of the possible pairs of goals that appeared in a given block. The goal pairs always consisted of one goal from each dimension (e.g., “Male?” vs “Grumpy?” goals). For dimension decoding (lavender), the classifier training was identical to goal, but the classifiers were tested on different goal pairs corresponding to the same dimensions. For example, a classifier trained to dissociate Male/Grumpy goals would be tested on Female/Happy goals. All networks showed reliable goal and dimension decoding. *p < 0.05, **p < 0.01, ***p < 0.001.

Because goals were communicated to subjects via word cues, successful goal decoding potentially reflects multiple influences: (1) “true” abstract goal representations, (2) representations of the visual word form of each goal cue, and/or (3) subvocal rehearsal of the word cues. Therefore, as a complementary measure, we also assessed “dimension” decoding; that is, decoding of the dimension (gender or affect) that was relevant on each trial. Dimension decoding was assessed by testing for transfer of classifiers across goal cues but within dimensions. For example, a classifier trained to discriminate “Male?” versus “Grumpy?” goals would be tested on “Female?” versus “Happy?” goals. Successful transfer would occur if the representation of the “Male?” goal is similar to the representation of the “Female?” goal and, likewise, if “Grumpy?” is similar to “Happy?” Although this analysis does not entirely rule out effects of visual word form or subvocal rehearsal, it at least reduces these concerns. One-sample t tests revealed reliable dimension decoding in all four networks (FPCN, t(27) = 7.9, p < 0.001; DAN, t(27) = 7.5, p < 0.001; VAN, t(27) = 5.1, p < 0.001; VisN, t(27) = 10.0, p < 0.001). A mixed-effects ANOVA with factors of experiment and network revealed no main effect of experiment (F(1,26) = 1.09, p = 0.31), a main effect of network (F(3,78) = 5.53, p = 0.002), and no reliable interaction between network and experiment (F(3,78) = 0.17, p = 0.92).

Finally, to compare goal decoding and dimension decoding directly, we ran a mixed-effects ANOVA with factors of goal level (goal decoding vs dimension decoding), network, and experiment. There was a significant main effect of goal level (F(1,26) = 25.86, p < 0.001), reflecting lower dimension decoding accuracy than goal decoding accuracy, which is not surprising given that dimension decoding requires transfer of the classifier to nonidentical goal cues. There was a significant main effect of network (F(3,78) = 8.32, p < 0.001), reflecting (as with the separate ANOVAs for goal decoding and dimension decoding) relatively lower decoding accuracy in VAN than in the other networks (Fig. 6). The interaction between goal level and network was also significant (F(3,78) = 2.95, p = 0.04), indicating that the “cost” of transferring across goal cues (but within dimensions) varied across networks. Namely, the difference between goal decoding and dimension decoding was most pronounced in VisN (Fig. 6).

Discussion

Here, using pattern-based fMRI methods and a perceptual decision-making task, we compared representations of stimulus features across multiple resting-state networks. We specifically targeted networks that are thought to contribute to attention and cognitive control (FPCN, DAN, and VAN; Dosenbach et al., 2008; Cole et al., 2013; Sestieri et al., 2017) and, as an important comparison, visual cortical networks. Although stimulus features were reliably decoded from each network, of critical interest was how feature representations in each network were influenced by three attentional manipulations: (1) interruption of visual processing, (2) goal relevance, and (3) task switching. Whereas bottom-up manipulations (interruption) had a relatively greater influence on feature representations within VisN, top-down manipulations (goals and task switches) had a relatively greater influence on representations in attentional networks. These findings reveal an important dissociation between feature representations in higher-level attentional networks and feature representations in visual cortex.

Whereas most decoding studies use spatially contiguous ROIs or searchlight analyses, we decoded from large, spatially discontiguous networks identified from independent studies of resting-state connectivity (Vincent et al., 2008; Yeo et al., 2011). We used these resting-state networks because they have been linked specifically to attentional processes (Fox et al., 2006; Corbetta et al., 2008; Gratton et al., 2017). That said, these networks have been defined based on correlations in univariate responses during rest and it was an open question whether regions within these networks would exhibit correlated feature representations, particularly after mean univariate responses were removed (see Materials and Methods). When specifically considering correlations between frontal and parietal regions from each attentional network, we found that decoded feature representations were more coherent within networks than between networks. Beyond validating our approach, this finding provides novel evidence that brain regions within attentional networks represent common stimulus information.

The traditional view of frontoparietal regions is that they bias and/or evaluate stimulus representations held in perceptual regions (Kastner et al., 1999; Zanto et al., 2010; Liu et al., 2011; Gazzaley and Nobre, 2012). However, our findings, along with other recent evidence (Kuhl et al., 2013; Bettencourt and Xu, 2016; Ester et al., 2016; Bracci et al., 2017), challenge this view by demonstrating active stimulus representations within frontoparietal regions. This raises the critical question of how frontoparietal representations differ from those in perceptual regions. We show that frontoparietal representations were sensitive to different forms of attention than representations in visual cortical areas. The fact that frontoparietal representations were relatively more sensitive to top-down factors helps to reconcile evidence that frontoparietal regions represent stimulus information with the putative role of these regions in top-down processing. Specifically, our results suggest a transformation of information from perceptual regions to frontoparietal regions that selectively represent and/or evaluate stimulus features. We next briefly consider the pattern of results for each attentional manipulation.

Previous work has shown that visual distraction disrupts working memory representations in visual cortex to a greater degree than representations in frontoparietal regions (Miller et al., 1996; Suzuki and Gottlieb, 2013; Woolgar et al., 2015; Bettencourt and Xu, 2016). Based on these findings, we predicted that feature representations in the attentional networks would be relatively less influenced by the across-experiment manipulation of visual interruption. Indeed, the effect of interruption (Experiment 1 vs Experiment 2) significantly differed across networks, with feature representations in VisN most strongly “suffering” from the interruption in bottom-up visual input (Fig. 3). Among the attentional networks, VAN showed a significant effect of interruption and there was a trend for DAN, whereas feature representations in FPCN were not influenced by interruption. The fact that VAN feature representations were influenced by interruption is consistent with the idea that VAN is involved in bottom-up attentional capture (Corbetta and Shulman, 2002).

One reason why frontoparietal regions may actively represent stimulus features is so that behaviorally relevant decisions can be made. Indeed, several recent studies have found that stimulus representations in frontoparietal regions are biased by task demands (Swaminathan and Freedman, 2012; Kuhl et al., 2013; Ester et al., 2016; Sarma et al., 2016; Bracci et al., 2017). We specifically designed our stimuli to be multidimensional so that we could test for flexible representation of individual features. As with visual interruption, we found that the influence of behavioral goals varied across networks (Fig. 4). In VAN, there was significantly greater representation of goal-relevant than goal-irrelevant features, with a similar trend in FPCN. At first pass, the effect of goal relevance in VAN seems at odds with the putative role of VAN in bottom-up attentional orienting (Corbetta and Shulman, 2002). For example, univariate responses in VAN increase for oddball stimuli or targets that appear at invalid locations (Bledowski et al., 2004; Kincade et al., 2005). However, more recent evidence suggests that VAN plays a role in comparing bottom-up input to top-down goals (Vossel et al., 2014; Gratton et al., 2017). In fact, a recent meta-analysis found greater VAN responses to task-relevant than task-irrelevant oddballs (Kim, 2014), suggesting that VAN's response to exogenous stimuli is moderated by top-down goals. Therefore, the present finding of preferential decoding of goal-relevant features in VAN is consistent with the view that VAN plays a role in orienting attention to task-relevant perceptual input.

VisN feature representations were unaffected by goals. Although other studies have clearly found that top-down factors can influence stimulus representations in early visual areas (Jehee et al., 2011; Sprague and Serences, 2013; Ester et al., 2016), the present findings suggest that frontoparietal regions can impose their own attentional biases to favor goal-relevant features as opposed to simply inheriting biases imposed in visual cortical regions. Potentially, the lack of attentional bias in VisN in the present study reflects the fact that we used stimuli (faces) that are processed holistically. With different stimulus types and/or attentional manipulations, it may be easier to gate processing at earlier stages.

A large body of previous research indicates that switching between tasks (goals) is associated with increased univariate activity in dorsal frontoparietal regions (Corbetta et al., 2008). Our finding of greater univariate activation for switch versus stay trials in FPCN and DAN, but not VAN or VisN, is consistent with this literature. However, we are not aware of prior pattern-based fMRI studies that have compared frontoparietal feature representations across switch versus stay trials, so our analysis of task switching effects on feature representations was necessarily more exploratory. We found that the influence of task switching varied across networks (Fig. 5) and, in particular, that switching effects were more apparent in the attentional networks than VisN. Given that task switching requires reconfiguration of top-down attention, this finding is consistent with the argument that feature representations in attentional networks are relatively more sensitive to top-down attention. Among the attentional networks, FPCN showed greater feature decoding on stay than switch trials, whereas DAN and VAN showed opposite trends. Although we did not predict this pattern a priori, the tendency for feature representations to be stronger on switch than stay trials (in DAN and VAN) is reminiscent of recent evidence for greater decoding of top-down goals on switch than stay trials (Waskom et al., 2014). For FPCN, it is notable that switch trials were associated with relatively greater univariate activity but relatively lower feature decoding, raising the possibility that these measures reflect opposing processes. Given the exploratory nature of this analysis, we believe this question requires additional investigation.

Although our primary focus was on frontoparietal representations of stimulus features, several prior studies have reported frontoparietal representations of top-down goals (Waskom et al., 2014; Hanson and Chrysikou, 2017; Loose et al., 2017; Qiao et al., 2017; Waskom and Wagner, 2017). Consistent with these findings, we observed significant goal decoding in all four networks. One open question is whether the strength of goal representations is influenced by task switches. Although there is some evidence that goal representations are relatively stronger on switch than stay trials (Waskom et al., 2014), others have failed to observe switch-related effects in task representations (Loose et al., 2017). Like Loose et al. (2017), we did not observe a significant difference in goal decoding as a function of task switching; however, we did find task-switching effects in the decoding of goal-relevant features. Therefore, additional research will be needed to better understand how and when task switching influences the strength of top-down goals and/or goal-relevant feature representations.

Classifiers trained on one pair of goals (e.g., “Male” vs “Grumpy” goals) reliably transferred to nonidentical goals that shared the same dimensions (e.g., “Female” vs “Happy” goals), suggesting that goal representations reflected, at least in part, information about, or attention to, the goal-relevant stimulus dimension (gender or affect). The fact that the “transfer cost” (goal vs dimension decoding) was greatest in VisN is consistent with the idea that goal decoding in VisN was more closely related to low-level properties of the goals (e.g., the visual word form of the cue). Considering goal-decoding performance across networks also provides a useful comparison point for the feature-decoding results. For example, comparing overall goal decoding versus decoding of goal-relevant features across the three attentional networks (a 2 × 3 ANOVA) revealed a highly significant interaction (F(2,52) = 7.40, p = 0.001), reflecting a dissociation between the networks that best represented goal-relevant features (VAN) versus the goals themselves (FPCN/DAN). Therefore, theoretical accounts of how these networks contribute to attention will benefit from considering not only how feature representations vary across networks but also the hierarchical organization of feature and goal representations (Koechlin and Summerfield, 2007; Badre, 2008).
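
To illustrate the transfer analysis, the sketch below trains a classifier on one pair of goal cues and tests it on the complementary pair, scoring predictions by the shared stimulus dimension; the data and variable names are hypothetical placeholders. A subject × network × measure table of such accuracies could then be submitted to a repeated-measures ANOVA (e.g., via statsmodels' AnovaRM) to test interactions like the 2 × 3 effect reported above.

```python
# Cross-goal transfer sketch: train on "Male" vs "Grumpy" goal trials, test on
# "Female" vs "Happy" trials, scoring by dimension (gender vs affect).
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
patterns = rng.standard_normal((240, 500))                   # trial x voxel
goal = rng.choice(["Male", "Grumpy", "Female", "Happy"], 240)
dimension = np.where(np.isin(goal, ["Male", "Female"]), "gender", "affect")

train = np.isin(goal, ["Male", "Grumpy"])                    # one goal pair
test = np.isin(goal, ["Female", "Happy"])                    # nonidentical pair

clf = LinearSVC(C=1.0).fit(patterns[train], dimension[train])
transfer_acc = (clf.predict(patterns[test]) == dimension[test]).mean()
print(f"cross-goal dimension decoding accuracy: {transfer_acc:.3f}")
```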

In summary, we show that resting-state networks implicated in attentional control actively and coherently represent stimulus features and that network-based feature representations can be dissociated in terms of their sensitivity to various forms of attention (interruption of visual processing, goal relevance, and task switching). Whereas feature representations in visual cortical areas are sensitive to low-level manipulations (visual interruption), feature representations in attentional networks are sensitive to higher-level manipulations (goal relevance and task switching). At a broad level, these findings indicate that multiple networks actively represent stimulus features, with the nature of these feature representations providing insight into each network's functional role.

Footnotes

  • This work was supported by the Lewis Family Endowment to the University of Oregon, which supports the Robert and Beverly Lewis Center for NeuroImaging, and by the National Institute of Neurological Disorders and Stroke–National Institutes of Health (Grant 1R01NS089729 to B.A.K.). We thank Rosalie Samide and Sarah Sweigart for assistance with data collection.

  • The authors declare no competing financial interests.

  • Correspondence should be addressed to either Nicole Long or Brice Kuhl, Department of Psychology, 1227 University of Oregon, Eugene, OR 97403. niclong@uoregon.edu or bkuhl@uoregon.edu

References

  1. Badre D (2008) Cognitive control, hierarchy, and the rostro-caudal organization of the frontal lobes. Trends Cogn Sci 12:193–200. doi:10.1016/j.tics.2008.02.004 pmid:18403252
  2. Bettencourt KC, Xu Y (2016) Decoding the content of visual short-term memory under distraction in occipital and parietal areas. Nat Neurosci 19:150–157. doi:10.1038/nn.4174 pmid:26595654
  3. Bledowski C, Prvulovic D, Goebel R, Zanella FE, Linden DE (2004) Attentional systems in target and distractor processing: a combined ERP and fMRI study. Neuroimage 22:530–540. doi:10.1016/j.neuroimage.2003.12.034 pmid:15193581
  4. Bode S, Haynes JD (2009) Decoding sequential stages of task preparation in the human brain. Neuroimage 45:606–613. doi:10.1016/j.neuroimage.2008.11.031 pmid:19111624
  5. Bracci S, Daniels N, Op de Beeck H (2017) Task context overrules object- and category-related representational content in the human parietal cortex. Cereb Cortex 27:310–321. doi:10.1093/cercor/bhw419 pmid:28108492
  6. Brass M, von Cramon DY (2004) Decomposing components of task preparation with functional magnetic resonance imaging. J Cogn Neurosci 16:609–620. doi:10.1162/089892904323057335 pmid:15165351
  7. Braver TS, Reynolds JR, Donaldson DI (2003) Neural mechanisms of transient and sustained cognitive control during task switching. Neuron 39:713–726. doi:10.1016/S0896-6273(03)00466-5 pmid:12925284
  8. Cole MW, Reynolds JR, Power JD, Repovs G, Anticevic A, Braver TS (2013) Multi-task connectivity reveals flexible hubs for adaptive task control. Nat Neurosci 16:1348–1355. doi:10.1038/nn.3470 pmid:23892552
  9. Corbetta M, Shulman GL (2002) Control of goal-directed and stimulus-driven attention in the brain. Nat Rev Neurosci 3:201–215. doi:10.1038/nrn755 pmid:11994752
  10. Corbetta M, Patel G, Shulman GL (2008) The reorienting system of the human brain: from environment to theory of mind. Neuron 58:306–324. doi:10.1016/j.neuron.2008.04.017 pmid:18466742
  11. Desimone R, Duncan J (1995) Neural mechanisms of selective visual attention. Annu Rev Neurosci 18:193–222. doi:10.1146/annurev.ne.18.030195.001205 pmid:7605061
  12. Dosenbach NU, Fair DA, Cohen AL, Schlaggar BL, Petersen SE (2008) A dual-networks architecture of top-down control. Trends Cogn Sci 12:99–105. doi:10.1016/j.tics.2008.01.001 pmid:18262825
  13. Egner T, Hirsch J (2005) Cognitive control mechanisms resolve conflict through cortical amplification of task-relevant information. Nat Neurosci 8:1784–1790. doi:10.1038/nn1594 pmid:16286928
  14. Ester EF, Sprague TC, Serences JT (2015) Parietal and frontal cortex encode stimulus-specific mnemonic representations during visual working memory. Neuron 87:893–905. doi:10.1016/j.neuron.2015.07.013 pmid:26257053
  15. Ester EF, Sutterer DW, Serences JT, Awh E (2016) Feature-selective attentional modulations in human frontoparietal cortex. J Neurosci 36:8188–8199. doi:10.1523/JNEUROSCI.3935-15.2016 pmid:27488638
  16. Esterman M, Chiu YC, Tamber-Rosenau BJ, Yantis S (2009) Decoding cognitive control in human parietal cortex. Proc Natl Acad Sci U S A 106:17974–17979. doi:10.1073/pnas.0903593106 pmid:19805050
  17. Fan RE, Chang KW, Hsieh CJ, Wang XR, Lin CJ (2008) LIBLINEAR: a library for large linear classification. J Mach Learn Res 9:1871–1874.
  18. Fox MD, Corbetta M, Snyder AZ, Vincent JL, Raichle ME (2006) Spontaneous neuronal activity distinguishes human dorsal and ventral attention systems. Proc Natl Acad Sci U S A 103:10046–10051. doi:10.1073/pnas.0604187103 pmid:16788060
  19. Gazzaley A, Nobre AC (2012) Top-down modulation: bridging selective attention and working memory. Trends Cogn Sci 16:129–135. doi:10.1016/j.tics.2011.11.014 pmid:22209601
  20. Gratton C, Neta M, Sun H, Ploran EJ, Schlaggar BL, Wheeler ME, Petersen SE, Nelson SM (2017) Distinct stages of moment-to-moment processing in the cinguloopercular and frontoparietal networks. Cereb Cortex 27:2403–2417. doi:10.1093/cercor/bhw092 pmid:27095824
  21. Greenberg AS, Esterman M, Wilson D, Serences JT, Yantis S (2010) Control of spatial and feature-based attention in frontoparietal cortex. J Neurosci 30:14330–14339. doi:10.1523/JNEUROSCI.4248-09.2010 pmid:20980588
  22. Hanson GK, Chrysikou EG (2017) Attention to distinct goal-relevant features differentially guides semantic knowledge retrieval. J Cogn Neurosci 29:1178–1193. doi:10.1162/jocn_a_01121 pmid:28294713
  23. Jehee JF, Brady DK, Tong F (2011) Attention improves encoding of task-relevant features in the human visual cortex. J Neurosci 31:8210–8219. doi:10.1523/JNEUROSCI.6153-09.2011 pmid:21632942
  24. Jenkinson M, Bannister P, Brady M, Smith S (2002) Improved optimization for the robust and accurate linear registration and motion correction of brain images. Neuroimage 17:825–841. doi:10.1006/nimg.2002.1132 pmid:12377157
  25. Kastner S, Pinsk MA, De Weerd P, Desimone R, Ungerleider LG (1999) Increased activity in human visual cortex during directed attention in the absence of visual stimulation. Neuron 22:751–761. doi:10.1016/S0896-6273(00)80734-5 pmid:10230795
  26. Kim H (2014) Involvement of the dorsal and ventral attention networks in oddball stimulus processing: a meta-analysis. Hum Brain Mapp 35:2265–2284. doi:10.1002/hbm.22326 pmid:23900833
  27. Kimberg DY, Aguirre GK, D'Esposito M (2000) Modulation of task-related neural activity in task-switching: an fMRI study. Brain Res Cogn Brain Res 10:189–196. doi:10.1016/S0926-6410(00)00016-1 pmid:10978708
  28. Kincade JM, Abrams RA, Astafiev SV, Shulman GL, Corbetta M (2005) An event-related functional magnetic resonance imaging study of voluntary and stimulus-driven orienting of attention. J Neurosci 25:4593–4604. doi:10.1523/JNEUROSCI.0236-05.2005 pmid:15872107
  29. Koechlin E, Summerfield C (2007) An information theoretical approach to prefrontal executive function. Trends Cogn Sci 11:229–235. doi:10.1016/j.tics.2007.04.005 pmid:17475536
  30. Kuhl BA, Chun MM (2014) Successful remembering elicits event-specific activity patterns in lateral parietal cortex. J Neurosci 34:8051–8060. doi:10.1523/JNEUROSCI.4328-13.2014 pmid:24899726
  31. Kuhl BA, Johnson MK, Chun MM (2013) Dissociable neural mechanisms for goal-directed versus incidental memory reactivation. J Neurosci 33:16099–16109. doi:10.1523/JNEUROSCI.0207-13.2013 pmid:24107943
  32. Lee H, Kuhl BA (2016) Reconstructing perceived and retrieved faces from activity patterns in lateral parietal cortex. J Neurosci 36:6069–6082. doi:10.1523/JNEUROSCI.4286-15.2016 pmid:27251627
  33. Liu T, Hou Y (2013) A hierarchy of attentional priority signals in human frontoparietal cortex. J Neurosci 33:16606–16616. doi:10.1523/JNEUROSCI.1780-13.2013 pmid:24133264
  34. Liu T, Hospadaruk L, Zhu DC, Gardner JL (2011) Feature-specific attentional priority signals in human cortex. J Neurosci 31:4484–4495. doi:10.1523/JNEUROSCI.5745-10.2011 pmid:21430149
  35. Long NM, Lee H, Kuhl BA (2016) Hippocampal mismatch signals are modulated by the strength of neural predictions and their similarity to outcomes. J Neurosci 36:12677–12687. doi:10.1523/JNEUROSCI.1850-16.2016 pmid:27821577
  36. Loose LS, Wisniewski D, Rusconi M, Goschke T, Haynes JD (2017) Switch-independent task representations in frontal and parietal cortex. J Neurosci 37:8033–8042. doi:10.1523/JNEUROSCI.3656-16.2017 pmid:28729441
  37. Miller EK, Erickson CA, Desimone R (1996) Neural mechanisms of visual working memory in prefrontal cortex of the macaque. J Neurosci 16:5154–5167. pmid:8756444
  38. Monsell S (2003) Task switching. Trends Cogn Sci 7:134–140. doi:10.1016/S1364-6613(03)00028-7 pmid:12639695
  39. Posner MI, Petersen SE (1990) The attention system of the human brain. Annu Rev Neurosci 13:25–42. doi:10.1146/annurev.ne.13.030190.000325 pmid:2183676
  40. Qiao L, Zhang L, Chen A, Egner T (2017) Dynamic trial-by-trial recoding of task-set representations in the frontoparietal cortex mediates behavioral flexibility. J Neurosci 37:11037–11050. pmid:28972126
  41. Rainer G, Asaad WF, Miller EK (1998) Selective representation of relevant information by neurons in the primate prefrontal cortex. Nature 393:577–579. doi:10.1038/31235 pmid:9634233
  42. Ravizza SM, Carter CS (2008) Shifting set about task switching: behavioral and neural evidence for distinct forms of cognitive flexibility. Neuropsychologia 46:2924–2935. doi:10.1016/j.neuropsychologia.2008.06.006 pmid:18601941
  43. Rogers RD, Monsell S (1995) Costs of a predictable switch between simple cognitive tasks. J Exp Psychol Gen 124:207. doi:10.1037/0096-3445.124.2.207
  44. Roy JE, Buschman TJ, Miller EK (2014) PFC neurons reflect categorical decisions about ambiguous stimuli. J Cogn Neurosci 26:1283–1291. doi:10.1162/jocn_a_00568 pmid:24405188
  45. Rushworth MF, Hadland KA, Paus T, Sipila PK (2002) Role of the human medial frontal cortex in task switching: a combined fMRI and TMS study. J Neurophysiol 87:2577–2592. doi:10.1152/jn.2002.87.5.2577 pmid:11976394
  46. Sarma A, Masse NY, Wang XJ, Freedman DJ (2016) Task-specific versus generalized mnemonic representations in parietal and prefrontal cortices. Nat Neurosci 19:143–149. doi:10.1038/nn.4168 pmid:26595652
  47. Serences JT, Yantis S (2006) Selective visual attention and perceptual coherence. Trends Cogn Sci 10:38–45. doi:10.1016/j.tics.2005.11.008 pmid:16318922
  48. Sestieri C, Shulman GL, Corbetta M (2017) The contribution of the human posterior parietal cortex to episodic memory. Nat Rev Neurosci 18:183–192. doi:10.1038/nrn.2017.6 pmid:28209980
  49. Smith SM, Jenkinson M, Woolrich MW, Beckmann CF, Behrens TE, Johansen-Berg H, Bannister PR, De Luca M, Drobnjak I, Flitney DE, Niazy RK, Saunders J, Vickers J, Zhang Y, De Stefano N, Brady JM, Matthews PM (2004) Advances in functional and structural MR image analysis and implementation as FSL. Neuroimage 23:S208–S219. doi:10.1016/j.neuroimage.2004.07.051 pmid:15501092
  50. Sprague TC, Serences JT (2013) Attention modulates spatial priority maps in the human occipital, parietal and frontal cortices. Nat Neurosci 16:1879–1887. doi:10.1038/nn.3574 pmid:24212672
  51. Sreenivasan KK, Vytlacil J, D'Esposito M (2014) Distributed and dynamic storage of working memory stimulus information in extrastriate cortex. J Cogn Neurosci 26:1141–1153. doi:10.1162/jocn_a_00556 pmid:24392897
  52. Suzuki M, Gottlieb J (2013) Distinct neural mechanisms of distractor suppression in the frontal and parietal lobe. Nat Neurosci 16:98–104. doi:10.1038/nn.3282 pmid:23242309
  53. Swaminathan SK, Freedman DJ (2012) Preferential encoding of visual categories in parietal cortex compared with prefrontal cortex. Nat Neurosci 15:315–320. doi:10.1038/nn.3016 pmid:22246435
  54. Vincent JL, Kahn I, Snyder AZ, Raichle ME, Buckner RL (2008) Evidence for a frontoparietal control system revealed by intrinsic functional connectivity. J Neurophysiol 100:3328–3342. doi:10.1152/jn.90355.2008 pmid:18799601
  55. Vossel S, Geng JJ, Fink GR (2014) Dorsal and ventral attention systems: distinct neural circuits but collaborative roles. Neuroscientist 20:150–159. doi:10.1177/1073858413494269 pmid:23835449
  56. Waskom ML, Kumaran D, Gordon AM, Rissman J, Wagner AD (2014) Frontoparietal representations of task context support the flexible control of goal-directed cognition. J Neurosci 34:10743–10755. doi:10.1523/JNEUROSCI.5282-13.2014 pmid:25100605
  57. Waskom ML, Wagner AD (2017) Distributed representation of context by intrinsic subnetworks in prefrontal cortex. Proc Natl Acad Sci U S A 114:2030–2035. doi:10.1073/pnas.1615269114 pmid:28174269
  58. Woolgar A, Williams MA, Rich AN (2015) Attention enhances multi-voxel representation of novel objects in frontal, parietal and visual cortices. Neuroimage 109:429–437. doi:10.1016/j.neuroimage.2014.12.083 pmid:25583612
  59. Xu Y (2017) Reevaluating the sensory account of visual working memory storage. Trends Cogn Sci 21:794–815. doi:10.1016/j.tics.2017.06.013 pmid:28774684
  60. Yeo BT, Krienen FM, Sepulcre J, Sabuncu MR, Lashkari D, Hollinshead M, Roffman JL, Smoller JW, Zöllei L, Polimeni JR, Fischl B, Liu H, Buckner RL (2011) The organization of the human cerebral cortex estimated by intrinsic functional connectivity. J Neurophysiol 106:1125–1165. doi:10.1152/jn.00338.2011 pmid:21653723
  61. Yeung N, Nystrom LE, Aronson JA, Cohen JD (2006) Between-task competition and cognitive control in task switching. J Neurosci 26:1429–1438. doi:10.1523/JNEUROSCI.3109-05.2006 pmid:16452666
  62. Zanto TP, Rubens MT, Bollinger J, Gazzaley A (2010) Top-down modulation of visual feature processing: the role of the inferior frontal junction. Neuroimage 53:736–745. doi:10.1016/j.neuroimage.2010.06.012 pmid:20600999
Keywords

  • attention
  • cognitive control
  • decoding
  • parietal cortex
  • prefrontal cortex
  • resting-state networks
