Research Articles, Systems/Circuits

Clutter Substantially Reduces Selectivity for Peripheral Faces in the Macaque Brain

Jessica Taubert, Susan G. Wardle, Clarissa T. Tardiff, Amanda Patterson, David Yu and Chris I. Baker
Journal of Neuroscience 31 August 2022, 42 (35) 6739-6750; DOI: https://doi.org/10.1523/JNEUROSCI.0232-22.2022
Jessica Taubert (1,2), Susan G. Wardle (1), Clarissa T. Tardiff (1), Amanda Patterson (1), David Yu (3), and Chris I. Baker (1)

1 Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, Maryland 20814
2 School of Psychology, The University of Queensland, Brisbane, Queensland 4072, Australia
3 Neurophysiology Imaging Facility, National Institutes of Health, Bethesda, Maryland 20814

Abstract

According to a prominent view in neuroscience, visual stimuli are coded by discrete cortical networks that respond preferentially to specific categories, such as faces or objects. However, it remains unclear how these category-selective networks respond when viewing conditions are cluttered, i.e., when there is more than one stimulus in the visual field. Here, we asked three questions: (1) Does clutter reduce the response and selectivity for faces as a function of retinal location? (2) Is the preferential response to faces uniform across the visual field? And (3) Does the ventral visual pathway encode information about the location of cluttered faces? We used fMRI to measure the response of the face-selective network in awake, fixating macaques (two female, five male). Across a series of four experiments, we manipulated the presence and absence of clutter, as well as the location of the faces relative to the fovea. We found that clutter reduces the response to peripheral faces. When presented in isolation, without clutter, the selectivity for faces is fairly uniform across the visual field, but, when clutter is present, there is a marked decrease in the selectivity for peripheral faces. We also found no evidence of a contralateral visual field bias when faces were presented in clutter. Nonetheless, multivariate analyses revealed that the location of cluttered faces could be decoded from the multivoxel response of the face-selective network. Collectively, these findings demonstrate that clutter blunts the selectivity of the face-selective network to peripheral faces, although information about their retinal location is retained.

SIGNIFICANCE STATEMENT Numerous studies that have measured brain activity in macaques have found visual regions that respond preferentially to faces. Although these regions are thought to be essential for social behavior, their responses have typically been measured while faces were presented in isolation, a situation atypical of the real world. How do these regions respond when faces are presented alongside other stimuli? We report that, when clutter is present, the preferential response to foveated faces is spared, but the preferential response to peripheral faces is reduced. Our results indicate that the presence of clutter changes the response of the face-selective network.

  • clutter
  • face perception
  • face-selective
  • macaque fMRI
  • multiple objects
  • periphery

Introduction

Making sense of the world around us from the dense visual input collected on the retina is a computationally daunting task and one of the brain's greatest accomplishments. The network of face-selective regions identified in the primate visual system (Kanwisher et al., 1997; Tsao et al., 2003; Rossion et al., 2012; Hung et al., 2015) has proven to be a valuable model for understanding how the brain builds meaningful representations of visual stimuli (Orban et al., 2014; Hesse and Tsao, 2020); however, our understanding of this network is constrained by several experimental limitations. For example, most studies that have examined the responsivity of the face-selective network have done so by presenting isolated face stimuli at fixation (Kanwisher et al., 1997; Leopold et al., 2006; Freiwald and Tsao, 2010; Bell et al., 2011; Popivanov et al., 2012; Taubert et al., 2015, 2018b; Premereur et al., 2016; Wardle et al., 2020), yet this situation is atypical of the real world, where faces appear in clutter and potentially anywhere across the visual field. Therefore, what is needed is a better understanding of how the face-selective network operates under more natural viewing conditions (Leopold and Park, 2020; Wardle and Baker, 2020; Fan et al., 2021).

Previous studies have identified the face-selective network in the macaque brain (Tsao et al., 2003; Freiwald, 2020; Taubert et al., 2020a). It has been reported previously that when two stimuli are presented simultaneously with a fixation point in between, face-selective neurons continue to respond to the presence of a face (Zoccolan et al., 2005; Bao and Tsao, 2018). However, in these studies the stimuli were presented in peripheral vision, equidistant from a central fixation point. We would argue that, under normal viewing conditions, when a face is present but not fixated, it is likely because the subject is focused on another object in the visual field. Indeed, when a stimulus occupies foveal vision, its representation may dominate the ventral visual pathway (Ishai et al., 1999; Haxby et al., 2001). Further, we know that the ventral visual pathway is strongly biased toward foveal inputs (Frisén and Glansholm, 1975). Thus, an open question is whether the face-selective network continues to respond preferentially to faces when another stimulus is being fixated.

Here, we ask three fundamental questions about the responsivity of the face-selective network in the macaque brain. First, does clutter reduce the response to faces as a function of retinal location? To address this question, we devised a series of functional imaging (fMRI) experiments in which we presented faces to awake macaques in one of three retinal locations (at fixation, 8° to the left, or 8° to the right) while nonface stimuli occupied the two remaining retinal locations. Although previous studies have demonstrated that the responses of face-selective populations are impervious to clutter (Zoccolan et al., 2005; Reddy and Kanwisher, 2007; Agam et al., 2010; Bao and Tsao, 2018), in the current study we presented three-item horizontal displays and required the subject to fixate the central item. Thus, our expectation was that clutter would decrease the response to peripheral faces, while the response to foveal faces would be spared (Rolls et al., 2003). Second, is the preferential response to faces, at the population level, uniform across the visual field? To address this question, we scanned subjects while presenting isolated faces and objects and manipulating their retinal location. Given the size of receptive fields in the posterior face patch (Issa and DiCarlo, 2012), we expected face-responsivity to be lower in the peripheral locations than in the central foveal location. Third, does the ventral visual pathway encode information about the location of cluttered faces? We reasoned that even if the magnitude of the response to faces was attenuated by clutter, activity in the face-selective network might still encode the location of faces. To explore this possibility, we examined the pattern of activity across voxels in response to face stimuli using multivariate analyses.

Materials and Methods

Subjects

We tested seven rhesus macaques (Macaca mulatta, 6–13 years of age, 7–12.1 kg at time of testing); two were female (subjects S and R). Because previous reports of similar experiments in this species have demonstrated that sample sizes of two to four are sufficient for scientific inference (Hadj-Bouziane et al., 2008; Fisher and Freiwald, 2015; Russ and Leopold, 2015; Taubert et al., 2020a, 2022; Zhang et al., 2020), we recruited two to four subjects for each experiment. All subjects were acquired from the same primate breeding facility in the United States, where they had social group histories and group housing experience until their transfer to the National Institute of Mental Health (NIMH) for quarantine at approximately four years of age. They were then housed in a large colony room with auditory and visual contact with conspecifics.

Subjects were surgically implanted with a headpost under sterile conditions using isoflurane anesthesia. For all subjects, the location of the headpost was planned to optimize coverage of the inferior temporal lobe in fMRI experiments, at the expense of coverage of the parietal and frontal lobes. After recovery, the subjects were slowly acclimated to the experimental procedure; first, they were trained to sit calmly in a restraint chair and fixate a small [0.4 degrees of visual angle (dva)] red central dot for long durations (∼8 min). During acclimation and training, fixation within a circular window (radius = 2 dva) centered over the fixation dot resulted in juice delivery. All procedures were in accordance with the Guide for the Care and Use of Laboratory Animals and were approved by the NIMH Animal Care and Use Committee.

Data acquisition

Before each scanning session, an exogenous contrast agent [monocrystalline iron oxide nanocolloid (MION)] was injected into the femoral vein to increase the signal-to-noise ratio (Vanduffel et al., 2001; Taubert et al., 2020a). MION doses were determined independently for each subject (∼8–10 mg/kg).

Structural and functional data were acquired in a 4.7T, 60 cm vertical scanner (Bruker Biospec) equipped with a Bruker S380 gradient coil. Subjects viewed the visual stimuli projected onto a screen above their head through a mirror positioned in front of their eyes. We collected whole-brain images with a four-channel transmit and receive radio frequency coil system (Rapid MR International). A low-resolution anatomic scan was also acquired in the same session to serve as an anatomic reference [modified driven equilibrium Fourier transform (MDEFT) sequence, voxel size: 1.5 × 0.5 × 0.5 mm, FOV: 96 × 48 mm, matrix size: 192 × 96, echo time (TE): 3.95 ms, repetition time (TR): 11.25 ms]. Functional echoplanar imaging (EPI) scans were collected as 42 sagittal slices with an in-plane resolution of 1.5 × 1.5 mm and a slice thickness of 1.5 mm. The TR was 2.2 s and the TE was 16 ms (FOV: 96 × 54 mm, matrix size: 64 × 36, flip angle: 75°). Eye position was recorded using an MR-compatible infrared camera (MRC Systems) fed into MATLAB (MathWorks, version R2018b) via a DATApixx hub (VPixx Technologies, Vision Science Solutions).

Localization data

First, we identified the regions of inferior temporal cortex (ITC) that responded preferentially to faces in all seven subjects using a standard face localization procedure (Tsao et al., 2003; Premereur et al., 2016; Taubert et al., 2020a, b). While the subjects were awake and fixating, we presented images (30 per category) from six different object categories (human faces, monkey faces, scenes, objects, phase-scrambled human faces, and phase-scrambled monkey faces). Stimuli were cropped images presented on a square canvas that was 12 dva in height. All six categories were presented in each run in a standard on/off block design (12 blocks in total). Each block lasted 16.5 s. During a “stimulus on” block, 15 images were presented one at a time for 900 ms, each followed by a 200-ms interstimulus interval (ISI). We removed from the analysis any run in which the monkey did not fixate within a 4° window for >60% of the time.

Face-selective voxels in ITC were identified in all seven subjects using the following contrast: activations evoked by (human faces + monkey faces) > activations evoked by (scenes + objects; Fig. 1A). For every subject we used the same statistical threshold [q = 0.0001; controlled for multiple comparisons using the false discovery rate (FDR)]; this yielded a different number of voxels for each subject [experiment 1: subject F = 577 voxels (lh = 309, rh = 268); subject K = 426 voxels (lh = 197, rh = 229); experiments 2–4: subject A = 443 voxels (lh = 236, rh = 207); subject H = 593 voxels (lh = 331, rh = 262); subject J = 523 voxels (lh = 253, rh = 270); subject S = 773 voxels (lh = 387, rh = 386); subject R = 519 voxels (lh = 279, rh = 240)]. In each of the seven subjects, this procedure identified all of the core face-selective regions that have been previously described, specifically the face patches known as AM, AL, AF, ML, MF, and PL (Tsao et al., 2006; Taubert et al., 2020a, 2022; Fig. 1A).
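
As a rough illustration of this voxel-selection step, the following MATLAB sketch applies the Benjamini-Hochberg false discovery rate procedure to a vector of voxel-wise p-values from the localizer contrast. The variable names (contrast_p, faceSelective) are illustrative and are not taken from the analysis scripts used in the study.

% Minimal sketch: select face-selective voxels by FDR-thresholding a
% voxel-wise p-value map from the (faces > scenes + objects) contrast.
% contrast_p is assumed to be an [nVoxels x 1] vector of p-values.
q_threshold = 0.0001;                         % FDR level used in the study
[p_sorted, order] = sort(contrast_p(:));      % Benjamini-Hochberg step-up procedure
nVox = numel(p_sorted);
crit = (1:nVox)' ./ nVox .* q_threshold;      % BH critical values
k = find(p_sorted <= crit, 1, 'last');        % largest index passing the criterion
faceSelective = false(nVox, 1);
if ~isempty(k)
    faceSelective(order(1:k)) = true;         % logical mask of face-selective voxels
end
fprintf('%d face-selective voxels survive q = %g (FDR)\n', nnz(faceSelective), q_threshold);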

Visual stimulation

Four different fMRI experiments were conducted. In every experiment we presented visual stimuli in on/off block-designs. Every run began with two dummy pulses, and then 4.4 s of fixation before the onset of the experiment. This brief fixation period was included to help get the subject settled and was excluded from further analysis. Experimental runs included a number of stimulation blocks that were each followed by a fixation block of equal length. The exact number of stimulation blocks differed across experiments, always matching the number of experimental conditions. During stimulation blocks we presented stimulus displays containing single items or three items. Each stimulus display was presented for 900 ms, followed by a 200-ms ISI.

All stimuli (faces, objects, scenes, scrambled faces) were cropped to a square shape (6 dva in height), with color information preserved. Any given stimulus was presented in one of three horizontal screen locations: at fixation, left of fixation, or right of fixation. The center of a stimulus at fixation was the exact center of the screen (0,0). The center of a stimulus presented left of fixation was 8° from the center of the screen (−8,0), and the center of a stimulus presented right of fixation was likewise 8° from the center of the screen (+8,0).

Experiment 1

Stimuli were shown in triplets, with one stimulus at central fixation, flanked by a peripheral stimulus on each side (Fig. 1B). The stimuli were 15 macaque faces, 15 inanimate objects, 15 scenes, and 15 scrambled macaque faces. We paired each of the 15 macaque faces with a particular object and scene, creating 15 stimulus triplets in total. Using these triplets, we devised 6 unique conditions in a 2 × 3 repeated measures design; two levels of facial structure (intact or scrambled) and three levels of the “stimulus at fixation” manipulation (face, object, or scene). Across the levels of “stimulus at fixation” we controlled the stimulus that was presented in the central fixation location. The remaining two stimuli in each triplet were allocated to left or right locations at random. The only difference between the trials in the intact face conditions and those in the scrambled face conditions was the face stimulus used. Thus, when we subtract the response to a scrambled face condition from the corresponding intact face condition, the only difference between those conditions was the presence of a face. The specific object and scene were the same, and the relative location of all three stimuli was also the same. In experiment 1, every block lasted for 16.5 s. The order of the conditions, and the triplets within each condition, was determined at random at the beginning of each run. Each run lasted 202.4 s, during which we collected 92 volumes of data.

Experiment 2

In this experiment, only one stimulus was on screen at a time (Fig. 2A). The stimuli were photographs of 15 macaque faces and 15 objects. These were not the same stimuli used in experiment 1. The repeated measures design included six conditions; two levels of stimulus (faces or objects) and three levels of retinal location (at fixation, left, right; Fig. 2A and Fig. 3). All timing parameters were the same as those described for experiment 1.

Experiment 3

The stimuli were presented either alone at central fixation, or in triplets (Fig. 4C). The stimuli were 16 macaque faces, 16 objects, and 32 scenes (16 laboratory scenes and 16 natural scenes; Fig. 4A). None of these stimuli were used in experiment 1 or 2. Each laboratory scene was paired with a natural scene. These 16 unique pairs of scenes were each assigned to a particular object and a particular face (for an illustrative example, see Fig. 4C). In this factorial design, there were two levels of stimulus (face or object) and four levels of presentation (isolated at fixation, cluttered at fixation, cluttered left and cluttered right; Fig. 4C). We included face and object isolated conditions to verify that the new stimulus set was comparable to the one used for experiment 2 (Fig. 4C). Block duration was 17.6 s in experiment 3, and every run was 286 s in length.

Experiment 4

Stimuli were presented either alone at any one of the three visual field locations, or in triplets (Fig. 5A). The stimuli were the 15 faces, 15 objects, and 15 scenes used in experiment 1. The repeated measures design included six conditions; two levels of clutter (isolated or cluttered) and three levels of retinal location (at fixation, left, right; Fig. 5A). All timing parameters were the same as those described for experiment 1.

Fixation behavior

During every experiment, subjects were rewarded for maintaining fixation at random intervals. The average time between rewards was typically 2 s (±400 ms), but these parameters were adjusted depending on the subject's behavior. Subjects had to fixate for at least 85% of the total run length (excluding the initial fixation period) for any given run to be included in the final analysis. We noted, however, that while the subjects were generally compliant during stimulation blocks, they tended to rest during the fixation blocks by either looking down or briefly closing their eyes. Since resting during fixation blocks was not problematic for the analysis, we placed a second behavioral criterion on each run: subjects also had to fixate for at least 95% of the total length of the condition blocks (concatenated).
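
For concreteness, the two-step run-inclusion rule can be expressed in a few lines of MATLAB; this is a minimal sketch assuming the eye-tracking samples have already been coded as fixating or not (the variable names isFixating and inStimBlock are ours, not part of the original analysis code).

% Two-step behavioral criterion for including a run in the analysis.
% isFixating:  logical vector, one entry per eye-tracking sample
%              (whole run, excluding the initial fixation period)
% inStimBlock: logical vector of the same length, true during stimulation blocks
runCriterion   = 0.85;   % at least 85% fixation over the whole run
blockCriterion = 0.95;   % at least 95% fixation within the concatenated stimulation blocks
fixWholeRun  = mean(isFixating);
fixStimBlock = mean(isFixating(inStimBlock));
includeRun   = (fixWholeRun >= runCriterion) && (fixStimBlock >= blockCriterion);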

Experiment 1 was conducted on two males (F and K). Subject F successfully completed 22 runs across two scan sessions (1980 volumes) and subject K completed 18 runs across two scan sessions (1620 volumes). Experiment 2 was conducted on two subjects (H and S; one female). Subject H successfully completed 70 runs (6300 volumes) over three sessions and subject S completed 31 runs (2790 volumes) over two sessions. We recruited three subjects for experiment 3 (J, S, and R; two female). Subject J successfully completed 23 runs across two sessions (2944 volumes), subject S completed 30 runs across two sessions (3840 volumes) and subject R completed 11 runs in just one session (1408 volumes). Finally, four subjects participated in experiment 4 (A, H, S, and R; two female). After applying the two-step behavioral criteria to remove poor quality runs, subject A completed 24 runs in one scan session (2160 volumes), subject H completed 40 runs (3600 volumes) across three sessions, subject S completed 15 runs in just one session (1350 volumes), and subject R completed 20 runs (1800 volumes) across two sessions.

We analyzed the eye-tracking data in experiments 3 and 4 to confirm that fixation behavior was similar across clutter absent and clutter present conditions. To do this we computed the proportion of time spent fixating for each unique condition (i.e., block) in every valid run that was included in the fMRI analyses. In experiment 3, the subjects completed 64 valid runs in total. We compared fixation behavior across the eight unique conditions in a one-way analysis of variance for repeated measures and found no evidence that fixation behavior varied across conditions (F(7,441) = 0.65, p = 0.71, ηp2 = 0.01). In experiment 4, the subjects completed 99 valid runs in total. We compared fixation behavior across the six unique conditions in a one-way analysis of variance for repeated measures and found no evidence that fixation behavior varied across conditions (F(5,490) = 0.37, p = 0.87, ηp2 = 0.004).

fMRI data analysis

To facilitate cortical surface alignments, we acquired high-resolution T1-weighted whole-brain anatomic scans in a 4.7T Bruker scanner with an MDEFT sequence. Imaging parameters were as follows: voxel size: 0.5 × 0.5 × 0.5 mm, TE: 4.9 ms, TR: 13.6 ms, flip angle: 14°.

All EPI data were analyzed using AFNI software (Cox, 1996; http://afni.nimh.nih.gov/afni). Raw images were first converted from Bruker into AFNI data file format. The data collected in a single session were first corrected for static magnetic field inhomogeneities using the PLACE algorithm (Xiang and Ye, 2007). The time series data were then slice-time corrected and realigned to the last volume of the last run. All the data for a given subject were registered to the corresponding high-resolution template for that subject, allowing for the combination of data across multiple sessions. Thus, all data were analyzed in individual subject space. The first two volumes of data in each EPI sequence were disregarded. The volume registered data were then despiked and spatially smoothed with a 3-mm Gaussian kernel and rescaled to reflect percent signal change from baseline.

We convolved the hemodynamic response function for MION exposure with the stimulus timecourses to form the regressors of interest and estimated their coefficients using ordinary least squares regression (executed with the AFNI function '3dDeconvolve' with 'MIONN' as the response function). The regressors of no interest included in the model were six motion regressors (movement parameters obtained from the volume registration) and AFNI's baseline estimates and signal drifts (linear and quadratic).
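
Conceptually, this step amounts to convolving each condition's boxcar with the MION impulse response and fitting an ordinary least squares model that also includes the nuisance regressors. The MATLAB sketch below illustrates the idea for a single voxel and run; it assumes a variable mionIRF containing the MION impulse response sampled at the TR, and it is not a substitute for the actual AFNI 3dDeconvolve call.

% Illustrative GLM for one voxel in one run (the study used AFNI's 3dDeconvolve).
% boxcars: [nVolumes x nConditions] matrix of 0/1 condition timecourses
% mionIRF: MION impulse response sampled at the TR (assumed to be available)
% motion:  [nVolumes x 6] motion parameters from volume registration
% y:       [nVolumes x 1] voxel timecourse (percent signal change)
nVol = size(boxcars, 1);
X = zeros(nVol, size(boxcars, 2));
for c = 1:size(boxcars, 2)
    r = conv(boxcars(:, c), mionIRF);     % convolve boxcar with the MION response
    X(:, c) = r(1:nVol);
end
t = (1:nVol)';
nuisance = [ones(nVol, 1), t, t.^2, motion];   % baseline, linear and quadratic drift, motion
beta = [X, nuisance] \ y;                      % ordinary least squares fit
condBetas = beta(1:size(boxcars, 2));          % beta coefficients of interest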

When comparing β coefficients, we wanted to ensure that results were not influenced by small numbers of voxels with extreme β coefficients and, thus, we normalized the data using the min-max method within each subject (zi = {xi – min(x)}/{max(x) – min(x)}). All statistical comparisons were performed using custom scripts written in MATLAB (MathWorks, version R2018b).
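
For reference, the min-max normalization described above corresponds to the following one-line MATLAB operation applied within each subject, where x is that subject's vector of β coefficients across face-selective voxels.

% Min-max normalization within a subject: rescales beta coefficients to [0, 1]
z = (x - min(x)) ./ (max(x) - min(x));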

Multivariate pattern analyses (experiment 4)

In experiment 4, we used a whole-brain decoding searchlight analysis implemented with The Decoding Toolbox (TDT; Hebart et al., 2014). Decoding was performed in each subject's native space using a Newton linear SVM classifier and a searchlight radius of 3 voxels. For cross-validation we used a leave-one-run-out procedure, and decoding accuracy was averaged across all cross-validation folds. To locate the areas that decoded face location when clutter was absent (C–), we performed three pairwise classifications: (1) Fixation_C– versus Left_C–, (2) Fixation_C– versus Right_C–, and (3) Left_C– versus Right_C–, and then averaged decoding accuracy across the three classifications. Similarly, to locate the areas that decoded face location when clutter was present (C+), we performed three pairwise classifications: (1) Fixation_C+ versus Left_C+, (2) Fixation_C+ versus Right_C+, and (3) Left_C+ versus Right_C+, and again averaged decoding accuracy across the three classifications. For this analysis, numerical chance was 50%.
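
The decoding itself was run with TDT; the MATLAB sketch below conveys the logic of a single pairwise classification (e.g., Fixation_C+ versus Left_C+) within one searchlight sphere, using leave-one-run-out cross-validation and fitcsvm as a stand-in linear classifier. The variable names (patterns, labels, runIdx) are illustrative assumptions, not TDT syntax.

% Leave-one-run-out pairwise decoding for one searchlight sphere (sketch only).
% patterns: [nBlocks x nVoxelsInSphere] response patterns for the two conditions
% labels:   [nBlocks x 1] condition labels (1 or 2)
% runIdx:   [nBlocks x 1] run index for each block
runs = unique(runIdx);
acc = zeros(numel(runs), 1);
for i = 1:numel(runs)
    testMask  = (runIdx == runs(i));       % hold out one run
    trainMask = ~testMask;
    mdl  = fitcsvm(patterns(trainMask, :), labels(trainMask));   % linear SVM
    pred = predict(mdl, patterns(testMask, :));
    acc(i) = mean(pred == labels(testMask));
end
decodingAccuracy = 100 * mean(acc);        % averaged across folds; chance = 50%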

Results

Loss of responsivity to peripheral faces when clutter is present (experiment 1)

Our first experiment was designed to measure the response of the face-selective network (Fig. 1A) to faces when they were presented in three-item cluttered displays. In every condition, a face (scrambled or intact) was presented alongside an object and a scene. Across conditions, we manipulated the identity of the stimulus that was presented at fixation (faces, objects, or scenes; Fig. 1B). We measured “face-responsivity,” defined as the response to intact faces relative to phase-scrambled faces (effectively noise but with similar low-level visual statistics to actual faces) in the same configuration. Thus, for every face-selective voxel we calculated the difference between the response to intact and scrambled faces (i.e., face-responsivity = β[intact faces] − β[scrambled faces]). In Figure 1C, we plot face-responsivity as a function of the stimulus at fixation. Overall, we found that face-responsivity was statistically higher when faces were being fixated than when objects (N = 1003, Z = −17.75, p < 0.0001; Wilcoxon signed-rank test with Bonferroni correction for multiple comparisons) or scenes (N = 1003, Z = −14.67, p < 0.0001) were fixated.
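
As a minimal sketch of this voxel-wise comparison, assuming betaIntact_* and betaScrambled_* are [nVoxels x 1] vectors of β coefficients for the corresponding conditions, the face-responsivity scores and the paired Wilcoxon test could be computed in MATLAB as follows (signrank is part of the Statistics and Machine Learning Toolbox).

% Face-responsivity per voxel and a paired Wilcoxon signed-rank test comparing
% two "stimulus at fixation" conditions (Bonferroni-corrected for two comparisons).
faceResp_facesFix   = betaIntact_facesFix   - betaScrambled_facesFix;
faceResp_objectsFix = betaIntact_objectsFix - betaScrambled_objectsFix;
[p, ~, stats] = signrank(faceResp_facesFix, faceResp_objectsFix, ...
                         'method', 'approximate');   % approximate method returns a z statistic
nComparisons = 2;
pCorrected = min(p * nComparisons, 1);
fprintf('Z = %.2f, corrected p = %.4g\n', stats.zval, pCorrected);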

Figure 1.

Reduced responses to peripheral faces in the presence of clutter. A, The face-selective network identified in subject H's data using a statistical threshold of q = 0.0001 False Discovery Rate (FDR). B, Illustrative examples of the stimulus arrays used across the six conditions in experiment 1. C, Face responsivity scores for all face-selective voxels as a function of the stimulus at fixation manipulation. The median difference scores were: faces at fixation = 0.39, objects at fixation = 0.03, scenes at fixation = 0.12.

To show that these findings replicated across subjects, we performed the same analysis separately for each subject. For subject K, we calculated face-responsivity across the three conditions (faces, objects, or scenes) for every voxel (median face-responsivity: faces at fixation = 0.33, objects at fixation = 0.10, scenes at fixation = −0.009). Using related-samples Wilcoxon signed-rank tests, we found that face-responsivity was statistically higher when faces were being fixated than when objects (N = 426, Z = −7.07, p < 0.001) or scenes (N = 426, Z = −8.87, p < 0.0001) were fixated. When calculating face-responsivity for subject F (median: faces at fixation = 0.44, objects at fixation = −0.01, scenes at fixation = 0.21), we found evidence of the same differences between conditions (faces > objects, N = 577, Z = −17.51, p < 0.0001; faces > scenes, N = 577, Z = −11.71, p < 0.0001). Therefore, our findings replicated across subjects.

These findings suggest that the face-selective network responds less to peripheral faces than foveal faces when clutter is present. However, it is not clear from these results whether the reduction in the response to face stimuli was driven by the presence of clutter per se or simply by the use of peripheral screen locations. Thus, we next measured the response of the face-selective voxels to isolated faces and objects presented in the same three screen locations used in experiment 1 to determine whether the face-selective network responds preferentially to peripheral faces compared with other stimuli, when clutter is not present.

The face-selective network responds more to peripheral faces than peripheral objects (experiment 2)

The goal of experiment 2 was to determine whether activity in the putative face-selective network is driven more by peripheral faces than peripheral objects in the absence of clutter. To this end, we ran a block-design fMRI experiment with six conditions, manipulating both the visual field location of the single stimulus presented on each trial (at fixation vs left hemifield vs right hemifield) and the stimulus category (faces vs objects; Fig. 2A). We included both faces and objects in this design so that we could measure face-selectivity (i.e., the preferential response to faces over objects) rather than face-responsivity. For the purposes of this analysis, we computed the responses in terms of the contralateral and ipsilateral responses (Fig. 2B), and we normalized the data within subject using the min-max method.

Figure 2.

The face-selective network responds more to peripheral face stimuli than peripheral object stimuli. A, Left, Examples of the face and object stimuli used in experiment 2. Right, Examples of the stimulus arrays used in experiment 2 (2 stimulus categories × 3 visual field locations factorial design). B, A schematic illustrating the relationship between the three visual field locations and the hemispheres (axial view of whole brains). C, A box plot showing the normalized response of face-selective voxels as a function of hemisphere. D, A box plot showing the FSI values of face-selective voxels as a function of hemisphere.

Using a set of three related samples Wilcoxon signed-rank tests we found that the response to isolated faces was maximal when presented at fixation (fixation vs contralateral, N = 1366, Z = −22.94, p < 0.0001; fixation vs ipsilateral, N = 1366, Z = −31.83, p < 0.0001; Fig. 2C). These results are consistent with the overrepresentation of foveal vision in temporal cortex (Dow et al., 1981; Van Essen et al., 1984). Further, we also found evidence that the magnitude of the response to faces in the contralateral hemifield was greater than the magnitude of the response to faces in the ipsilateral hemifield (N = 1366, Z = −28.34, p < 0.0001; Fig. 2C). The observed contralateral bias is also consistent with the current models of visuospatial encoding in the ventral visual pathway (Silson et al., 2021; Groen et al., 2022).

Next, we measured the preferential response to faces by computing a selectivity index for each voxel using the following equation: face selectivity index (FSI) = {βfaces – βobjects}/{|βfaces| + |βobjects|} (Tsao et al., 2006; Taubert et al., 2020a). We used three one-sample Wilcoxon signed-rank tests to test the null hypothesis that the median FSI value was 0 (i.e., there was no preference toward faces over objects in any of the location conditions). In doing so, we discovered that face-selective voxels continue to respond more to faces than objects, regardless of their location (fixation, median = 0.18, N = 1366, Z = 25, p < 0.0001; contralateral, median = 0.16, N = 1366, Z = 21.83, p < 0.0001; ipsilateral, median = 0.14, N = 1366, Z = 16.77, p < 0.0001). Nonetheless, FSI values were higher in the foveal condition than the two peripheral conditions (fixation vs contralateral, N = 1366, Z = −4.27, p < 0.0001; fixation vs ipsilateral, N = 1366, Z = −2.94, p = 0.003; Fig. 2D). Further, there was no evidence of a contralateral bias (contralateral vs ipsilateral, N = 1366, Z = −1, p = 0.31; Fig. 2D). Collectively, these findings indicate that, while the selectivity for face stimuli is not abolished by peripheral shifts of 8 dva, it is slightly reduced.
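
The selectivity index and the one-sample test against zero can be sketched in MATLAB as follows, assuming betaFaces and betaObjects are [nVoxels x 1] vectors of responses to faces and objects in a given location condition (again, the variable names are illustrative).

% Face selectivity index (FSI) per voxel and a one-sample Wilcoxon signed-rank
% test of the null hypothesis that the median FSI is zero.
FSI = (betaFaces - betaObjects) ./ (abs(betaFaces) + abs(betaObjects));
[p, ~, stats] = signrank(FSI, 0, 'method', 'approximate');
fprintf('median FSI = %.2f, Z = %.2f, p = %.4g\n', median(FSI), stats.zval, p);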

For subject S, we confirmed that the face-selective voxels responded more to faces than objects, regardless of their screen location (fixation, median = 0.10, N = 773, Z = 14.28, p < 0.0001; contralateral, median = 0.08, N = 773, Z = 11.03, p < 0.0001; ipsilateral, median = 0.09, N = 773, Z = 8.46, p < 0.0001). This was also true for subject H (fixation, median = 0.27, N = 593, Z = 20.03, p < 0.0001; contralateral, median = 0.22, N = 593, Z = 19.13, p < 0.0001; ipsilateral, median = 0.23, N = 593, Z = 14.98, p < 0.0001).

To validate the region-of-interest approach, we visualized the whole-brain contrast between faces and objects separately as a function of visual field location in the anatomic volume for individual subjects (Fig. 3A). These whole-brain contrasts indicated that the location of the face-selective voxels in ITC was stable across the three location conditions. We also examined the strength of the relationship between the three face conditions (fixation, left, and right conditions). We found that there were significant positive relationships between all pairs of conditions (fixation vs left, Spearman's ρ(1366) = 0.9, p < 0.0001; fixation vs right, Spearman's ρ(1366) = 0.83, p < 0.0001; left vs right, Spearman's ρ(1366) = 0.71, p < 0.0001). Thus, we are confident that the reductions in selectivity that we observed for peripheral faces were not driven by the fact that presenting faces in different retinotopic locations recruits different neural populations. In sum, the results of experiment 2 confirm that the face-selective network responds more to peripheral faces than peripheral objects when clutter is not present. In the next experiment, we used the same experimental design, except we added visual clutter to all stimulus displays. We compare the results to those of experiment 2 to determine whether the tolerance of peripheral shifts survives the addition of clutter.
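
This stability check is simply a rank correlation between the voxel-wise contrasts obtained in each pair of location conditions; a minimal MATLAB sketch follows, where contrastFix, contrastLeft, and contrastRight are assumed to hold the [nVoxels x 1] vectors of β[faces] − β[objects] values for the three conditions.

% Spearman correlations between voxel-wise (faces - objects) contrasts
% measured at the three visual field locations.
[rhoFL, pFL] = corr(contrastFix,  contrastLeft,  'Type', 'Spearman');
[rhoFR, pFR] = corr(contrastFix,  contrastRight, 'Type', 'Spearman');
[rhoLR, pLR] = corr(contrastLeft, contrastRight, 'Type', 'Spearman');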

Figure 3.

The contrast between faces and objects as a function of the screen location. Whole-brain contrast (β[faces] – β[objects]) in native space as a function of screen location (from left to right; left, fixation, right). For both subjects the statistical threshold was set at q = 0.001 (FDR). For subjects S and H, representative axial and coronal slices were selected based on the anatomic location of the voxel in the right hemisphere with the highest differential response to faces over objects in the “at fixation” condition.

The face-selective network responds less selectively to peripheral faces when clutter is present (experiment 3)

In experiment 3, our goal was to measure the selectivity for face stimuli across the visual field, as in experiment 2, but in the presence of clutter. We designed an experiment with eight conditions in which we manipulated the relative location of faces and objects in three-item cluttered displays (Fig. 4C). To determine whether face-selectivity is impervious to clutter, we compared face-selectivity when faces/objects were presented at fixation, to face-selectivity when faces/objects were presented away from fixation, either in the contralateral or ipsilateral hemifield. We performed these analyses, separately, for each subject.

Figure 4.

Clutter substantially reduces the selectivity for peripheral faces. A, Examples of the stimuli used in experiment 3. Face and object stimuli were paired with two scenes. B, Box plot showing the response to face and object stimuli plotted as a function of experiment (experiment 2, blue; experiment 3, red). For this plot the data were normalized and, thus, range from 0 to 1. C, Illustrative examples of the eight unique conditions used in experiment 3. D, A box plot showing the results of experiment 3 (i.e., FSI values as a function of location condition when stimuli are presented in three item displays). FSI values range from −1 to 1. When an FSI value equals 0, the voxel responded equally to both faces and objects.

For subject S, we found that face-selectivity (i.e., FSI values) was significantly higher when faces/objects were presented at fixation compared with when they were presented in the contralateral (N = 773, Z = −5.37, p < 0.001) or ipsilateral hemifield (N = 773, Z = −15.17, p < 0.0001). We found the same pattern of results for subject R (fixation vs contralateral, N = 519, Z = −18.63, p < 0.0001; fixation vs ipsilateral, N = 519, Z = −19.09, p < 0.0001) and subject J (fixation vs contralateral, N = 523, Z = −19.14, p < 0.0001; fixation vs ipsilateral, N = 523, Z = −18.88, p < 0.0001). These results indicate that selectivity is reduced for peripheral faces, compared with foveal faces, when clutter is present. As such, these results appear at odds with the results of experiment 2. Next, we compared the results of experiments 2 and 3 more directly.

Clutter reduces the selectivity for peripheral faces (comparison of experiments 2 and 3)

Since experiments 2 and 3 employed different stimuli, we used the isolated face and object conditions, which were present in both experiments, to determine whether the stimulus sets evoked the same response from the face-selective voxels. After normalizing the data within subject, we used two Mann–Whitney U tests (two-tailed, controlled for multiple comparisons using the Bonferroni rule) to test for differences in the normalized fMRI signal across the two experiments. We found no evidence that the face stimuli elicited a differential response between experiments (N = 3181, Z = −0.27, p = 0.78; Fig. 4B). Similarly, we found no evidence that the object stimuli elicited a differential response between experiments (N = 3181, Z = −0.09, p = 0.93; Fig. 4B). Therefore, we moved forward with a comparison between experiments 2 and 3.

To determine whether the preferential response to faces over objects was impacted by the addition of clutter, we used a set of three Mann–Whitney U tests to compare face-selectivity (FSI) across experiments 2 and 3 (two-tailed, controlled for multiple comparisons using the Bonferroni rule). Again, the only difference between the fixation, contralateral and ipsilateral conditions across the two experiments was that in experiment 2 the stimuli were presented in isolation, whereas in experiment 3 the stimuli were presented in clutter. In the fixation condition, we discovered that the distribution of FSI values measured in experiment 2 (median FSI = 0.18; Fig. 2D) was no different from the distribution of FSI values measured in experiment 3 (median FSI = 0.18, N = 3181, Z = 1.25, p = 0.21; Fig. 4D) indicating that, when faces and objects are being foveated, the preferential response to faces is impervious to clutter.

In contrast, when peripheral faces were presented in the contralateral hemifield, we found that the FSI values measured in experiment 2 (median FSI = 0.16; Fig. 2D) were much higher than those measured in experiment 3 (median FSI = 0.02, N = 3181, Z = −18.64, p < 0.0001; Fig. 4D). This was also true when peripheral faces were presented in the ipsilateral hemifield [median FSI (experiment 2) = 0.14 (Fig. 2D); median FSI (experiment 3) = −0.0007, N = 3181, Z = −16.94, p < 0.0001 (Fig. 4D)]. These results indicate that the preference for peripheral, but not foveal, faces is significantly reduced by clutter.

However, because the observations in experiments 2 and 3 were based on different numbers of subjects, and each subject completed a different number of runs, we could not compare the isolated and cluttered conditions using a more powerful within-subjects analysis. This approach also precluded the use of multivariate analyses to examine the pattern of activity across face-selective voxels. Thus, the final experiment (experiment 4) was designed so that brain activity in response to isolated faces and cluttered faces could be compared more directly within subjects.

Does selectivity for peripheral faces increase toward the anterior pole?

It has been suggested that the size of neuronal receptive fields increases systematically toward the anterior regions of the ventral visual pathway (Desimone and Gross, 1979; Ito et al., 1995; also see Silson et al., 2021). Thus, it is possible that the anterior regions of the face-selective network respond more selectively to peripheral face stimuli than the posterior regions. To test this idea, we examined the relationship between a voxel's relative location in the brain, along the posterior-anterior (P-A) axis, and its FSI across our experimental conditions (experiment 2: fixation, contralateral, ipsilateral; experiment 3: fixation, contralateral, ipsilateral). To do this, we took the P-A coordinates of all the face-selective voxels in an individual subject's mask and standardized them with reference to the most posterior voxel (i.e., P-A coordinates; 0 = the most posterior face-selective voxel, values >0 = relative distance, in 1.5-mm increments, from the most posterior face-selective voxel). Next, we used Spearman correlations to assess the relationship between relative anatomic location (relative P-A coordinates; Fig. 5) and FSI, expecting negative relationships because single-unit recordings have indicated that the posterior regions are generally more face-selective than the anterior regions for foveal stimuli, perhaps owing to changes in how discrete identities are represented (Freiwald and Tsao, 2010; Bell et al., 2011; Taubert et al., 2015, 2018b). The results revealed significant negative relationships across all conditions in experiment 2, when clutter was absent (Fig. 5). This was also true when clutter was present and the face stimuli were presented away from fixation (Fig. 5); however, when clutter was present and the face stimuli were presented at fixation, the direction of the relationship between relative anatomic location and FSI was reversed (Fig. 5). Fisher's z-transformations were used to compare the correlations across the clutter absent and clutter present conditions. This analysis revealed that, when faces were presented at fixation, the correlation was stronger when clutter was present than when clutter was absent, with a change in the sign of the correlation (z = −12.74, p < 0.001, two-tailed; the observed p-value has been corrected for multiple comparisons using the Bonferroni rule). Similarly, when faces were presented in the contralateral hemifield, there was a stronger negative correlation when clutter was present than when clutter was absent (z = 2.45, p = 0.03, two-tailed, p-value corrected for multiple comparisons). In contrast, when faces were presented in the ipsilateral hemifield, we found no evidence that the negative correlations differed (z = −2.04, p = 0.12, two-tailed, p-value corrected for multiple comparisons).
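
The comparison of correlations can be sketched with a standard Fisher z-test, as below; note that this form treats the two correlations as independent, which is only an approximation here because the same face-selective voxels contribute to both (rho1 and rho2 are the Spearman correlations being compared, and n1 and n2 the corresponding numbers of voxels).

% Fisher z-test comparing two correlation coefficients (sketch).
z1 = atanh(rho1);                        % Fisher z-transformation
z2 = atanh(rho2);
se = sqrt(1 / (n1 - 3) + 1 / (n2 - 3));  % standard error of the difference
z  = (z1 - z2) / se;
p  = 2 * (1 - normcdf(abs(z)));          % two-tailed p-value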

Figure 5.

Clutter changes the relationship between anatomic location and face-selectivity. Left, Side view of a partially inflated cortical surface. The arrow indicates the relative location of the voxels in the analysis (i.e., P-A coordinates). Right, Scatterplots showing the correlation between relative anatomic location (x-axis/color = P-A coordinates) of a voxel and its FSI (y-axis = FSI). The solid red lines reflect the best-fitting linear relationships (y = mx + b). Spearman's ρ values are provided (**p < 0.001, *p = 0.01).

Three-item displays evoke more activity than isolated faces (experiment 4)

In the fourth and final experiment, we employed a powerful repeated measures design to examine the response to faces in the three retinal locations while also manipulating the presence of clutter (Fig. 6A). First, we subtracted the normalized response to the cluttered condition from the corresponding isolated condition for each of the three levels of location (Fig. 6B). This analysis revealed that, in general, the face-selective voxels responded more to the cluttered conditions than the isolated conditions [one-sample tests against a median of 0: median diff (fixation) = −0.06, N = 2328, Z = −26.48, p < 0.0001; median diff (contralateral) = −0.02, N = 2328, Z = −9.62, p < 0.0001; median diff (ipsilateral) = −0.1, N = 2328, Z = −30.33, p < 0.0001]. These results indicate that, in general, face-selective voxels respond more to complex scenes comprising multiple items (only one being a face) than to single isolated faces presented on uniform gray backgrounds.

Figure 6.

There is no contralateral bias when clutter is present. A, Illustrative examples of the conditions used in experiment 4. Colored outlines provide a guide for the comparisons presented in B, C. B, Box plot indicating that face-selective voxels tended to respond more in the clutter present conditions than in the clutter absent conditions. Yellow, faces presented at fixation; green, faces presented in the contralateral hemifield; purple, faces presented in the ipsilateral hemifield. C, Box plot visualizing the contralateral bias in the clutter absent condition and the clutter present condition (blue).

The contralateral bias is eliminated by clutter (experiment 4)

We computed the contralateral bias for every face-selective voxel by subtracting the response to faces presented in the ipsilateral hemifield from the response to faces presented in the contralateral hemifield. We repeated this procedure for the isolated face conditions (i.e., clutter absent) and the cluttered face conditions (i.e., clutter present; see Fig. 6C). Then we compared the contralateral bias across the two clutter conditions using a related-samples Wilcoxon signed-rank test (two-tailed). We found a significant difference between the contralateral bias measured when clutter was absent (median bias = 0.07) and when clutter was present (median bias = −0.001, N = 2328, Z = −26.43, p < 0.0001). Next, we used two one-sample Wilcoxon signed-rank tests (two-tailed) to determine whether the contralateral bias in either condition was significantly different from zero. This revealed a contralateral bias in the isolated face conditions, when clutter was absent (N = 2328, Z = 37.1, p < 0.0001; Fig. 6C). In contrast, when clutter was present, there was evidence of a slight ipsilateral bias (N = 2328, Z = −3.33, p = 0.001; Fig. 6C).
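
A minimal MATLAB sketch of the contralateral bias computation and the three tests reported above, assuming betaContra_* and betaIpsi_* hold each face-selective voxel's response to faces in the contralateral and ipsilateral hemifield for the two clutter conditions.

% Contralateral bias per voxel, compared across clutter conditions and against zero.
biasAbsent  = betaContra_absent  - betaIpsi_absent;    % clutter absent
biasPresent = betaContra_present - betaIpsi_present;   % clutter present
pDiff  = signrank(biasAbsent, biasPresent);            % paired comparison across conditions
pZeroA = signrank(biasAbsent);                         % bias vs zero, clutter absent
pZeroP = signrank(biasPresent);                        % bias vs zero, clutter present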

To determine whether this finding was reliable across subjects, we compared the contralateral bias across clutter conditions for each of the four subjects separately. Consistent with the overall finding, every subject had a significantly larger contralateral bias in the clutter absent condition than in the clutter present condition [subject H, median bias (absent) = 0.04 > median bias (present) = 0.01, N = 593, Z = −8.02, p < 0.001; subject A, 0.1 > 0.02, N = 443, Z = −15.7, p < 0.0001; subject S, 0.07 > −0.03, N = 773, Z = −22.26, p < 0.0001; subject R, 0.09 > 0.02, N = 519, Z = −7.29, p < 0.001].

One interpretation of these results is that when multiple items are present in the visual field, as there often are in complex natural scenes, the magnitude of the response of the face-selective network does not reflect the horizontal location of a face. This raises the question: does the ventral visual pathway encode information about the location of visual stimuli at all, or does activity in the ventral visual pathway primarily reflect the form and identity of the foveated stimulus (Ishai et al., 1999; Haxby et al., 2001)?

Information about face location can be decoded from ITC (experiment 4)

Thus far, the results of this study (experiments 1–4) have repeatedly demonstrated that when clutter is present, and a nonface stimulus is being foveated, there is a reliable reduction in the response of face-selective voxels to peripheral faces (i.e., face responsivity and selectivity were significantly reduced, and the contralateral bias was eliminated). Yet, behavioral observations have indicated that human and nonhuman primates detect and orient their gaze toward faces that are initially presented away from fixation, prioritizing them over other targets and task objectives (Crouzet et al., 2010; Landman et al., 2014; Sadagopan et al., 2017; Taubert et al., 2017, 2018a; Keys et al., 2021). How are peripheral faces detected?

To determine whether the face-selective network encodes information about the location of peripheral faces we employed multivariate analyses to examine the data from experiment 4 (n = 4). We performed two decoding searchlight analyses in each subject's native space to compare decoding of the face's location in the visual field (at fixation, left periphery, or right periphery) separately for the clutter absent and the clutter present conditions (Fig. 7). These analyses revealed that when faces were presented alone (in the clutter absent conditions), the regions with the highest location decoding performance included early visual cortex (EVC). This is not surprising given the retinotopic nature of stimulus-evoked activity in early visual areas and strong contralateral biases that are known to influence activity in early visual areas (Silson et al., 2021; Groen et al., 2022). We note that successful location decoding is also possible further along the ventral visual pathway, which is consistent with the contralateral bias reported in experiment 2 (Fig. 7).

Figure 7.

Information about the location of cluttered faces is retained by the ventral visual pathway. Results of a whole-brain searchlight analysis decoding face location (left, clutter absent; right, clutter present). The results are presented separately for each subject (chance = 50%). The hot colors indicate the brain regions with the highest decoding performance, which for visualization is thresholded at >70% decoding accuracy for all subjects except subject H (threshold >65%), as classifier performance was lower for this subject.

In contrast, when clutter was present, the results of the searchlight analysis changed; most notably, the regions with the highest location decoding performance no longer included EVC (Fig. 7). Instead, we observed the highest decoding performance for the visual field position of faces in regions of the ventral visual pathway situated on the lower bank of the STS, the fundus of the STS, and in the lateral convexity (i.e., locations approximating the face-selective network). In Figure 8A, we compare the searchlight results across EVC and face-selective cortex (in the ventral visual pathway) after combining the data across subjects. For voxels in EVC, we found a decrease in classifier performance following the addition of visual clutter (median accuracy: clutter absent = 80.42% > clutter present = 57.08%, N = 7560, Z = −74.74, p < 0.001; Fig. 8A). However, for voxels in the face-selective network, when we compared classifier performance in the clutter absent (median = 64.44%) and clutter present (median = 65.55%) conditions, we found evidence that the classifier performed reliably better following the addition of visual clutter (absent < present, N = 2329, Z = 2.74, p = 0.006; Fig. 8A). These results confirm the observations in Figure 7. We also investigated the relationship between a face-selective voxel's relative anatomic location and classifier performance. The results show that the posterior regions of the face-selective network typically performed better than the anterior regions (Fig. 8B), and this was true across both clutter conditions (Fig. 8B). Overall, these results indicate that the face-selective regions of the ventral visual pathway retain information about the location of faces in three-item displays, even in the presence of clutter.

Figure 8.

Information about the location of cluttered faces decreases in EVC. A, Box plot showing average classifier performance (output of the searchlight analysis) for two regions of interest: EVC and the face-selective network. The EVC region of interest was defined for each subject based on localizer data as voxels that responded significantly more to the scrambled face conditions than to the implicit baseline (voxel-wise statistical threshold, p = 1.0 × 10−35; cluster threshold, 200 voxels). B, Scatterplot showing the correlation between the relative anatomic location of a face-selective voxel (P-A coordinates) and classifier performance. The solid lines reflect the best-fitting linear relationships (y = mx + b) for the clutter absent (red) and clutter present (blue) conditions. Spearman's ρ values are provided (***p < 0.0001).

Discussion

Our motivation for the present study was to characterize how the putative face-selective network in the macaque brain responds to faces under cluttered viewing conditions. Across a series of four experiments, we showed that the face-selective network does respond preferentially to peripheral faces relative to peripheral objects; however, this preferential response is quenched when another stimulus occupies foveal vision. In experiment 1, we found that face-selective voxels in ITC were less responsive to peripheral faces in clutter (Fig. 1C), and in experiment 3, we found that face-selective voxels in ITC exhibited reduced selectivity for peripheral faces in clutter (Fig. 4D). Taken together, these findings provide much needed evidence that when faces are present in the periphery, but something else is being foveated, there is a significant reduction in the characteristic response profile of the face-selective network. Therefore, the tolerance of visual clutter that the face-selective network has exhibited in the past (Zoccolan et al., 2005; Reddy and Kanwisher, 2007; Bao and Tsao, 2018) is dependent on context.

These findings are consistent with the previous observation that ITC neurons have receptive fields that shrink in size following the addition of visual clutter (Rolls et al., 2003). It was argued that this is a physiological response enabling ITC to provide an unambiguous representation of the stimulus at fixation. Indeed, the ventral visual pathway is optimized for processing foveal inputs (Frisén and Glansholm, 1975) and for representing the fine-grained details of the faces and objects we are looking at directly, to the exclusion of all else (Ishai et al., 1999; Haxby et al., 2001; Rolls et al., 2003). It follows that, when clutter is present, the ventral visual pathway will prioritize the foveated stimulus and lose information about peripheral stimuli. However, the results of experiment 4 reveal that the classically defined face-selective network retains information about the location of peripheral faces even when clutter is present (Fig. 7). Where this information is inherited from remains an open question that will need to be addressed by future research.

Interestingly, when we investigated the relationship between face selectivity and anatomic location in the brain, we found what we expected: face selectivity was higher in the posterior regions of the network than in the anterior regions, except for one condition (Fig. 5). When foveated faces were presented in clutter, we found that FSI was greater in the anterior regions than in the posterior regions (Fig. 5). These observations further indicate that the presence of clutter changes how the face-selective network operates, although exactly how increases and decreases in face-selectivity facilitate processing remains poorly understood. Again, these results highlight important gaps in our knowledge regarding how faces and other objects are processed by the visual system under more naturalistic demands.

A key feature of the current study is that we were able to distinguish between foveated and peripheral stimuli in clutter by using triplet displays with a single stimulus at fixation and two equidistant peripheral stimuli. Previous human fMRI studies investigating the representation of multiple objects have used two-item displays, with the object pairs equidistant from a central fixation point. These studies have reported that the responses to these two-item displays are well predicted by an averaging of the responses to the same objects presented in isolation (Macevoy and Epstein, 2009; Reddy et al., 2009; MacEvoy and Epstein, 2011; Baeck et al., 2013; Song et al., 2013). Further, in some areas of the ventral stream, the relationship between brain activity for isolated versus multiple objects is modulated by whether the object pairs are presented in a meaningful spatial configuration, such as a bottle positioned over a glass (Baeck et al., 2013; Quek and Peelen, 2020) or a person interacting with an object (Baldassano et al., 2017). The present study demonstrates that in addition to effects of spatial context, the response of the ventral visual pathway to multiple objects is also modulated by which item is fixated.

An outstanding question is how the visual system codes the details of simultaneously viewed faces and objects. In the present study we found a marked reduction in the face-selective response when a nonface stimulus was being foveated. Previous studies using two-item peripheral displays, in which neither item was foveated, reported that the category-selective response is preserved under conditions of clutter (Reddy and Kanwisher, 2007; Reddy et al., 2009); the contrast with our results suggests that the distinction between foveated and peripheral stimuli is important for understanding the coding of multiple objects. There is evidence from human intracranial recordings in temporal cortex for “robustness” to clutter even in the early part of the response profile (Agam et al., 2010). Robustness to clutter has been argued to be computationally advantageous for processing multiple stimuli (Cox and Riesenhuber, 2015), and the linear mixing of stimulus representations has been calculated to entail a significant cost to encoding accuracy (Orhan and Ma, 2015). Attention to a particular stimulus in multi-object displays modulates its representation (Kastner et al., 1998; Reddy et al., 2009), and thus attention may make an important contribution to the untangling of competing stimulus representations (Orhan and Ma, 2015).

Together, these results highlight the importance of increasing ecological validity when studying the visual system. We will only discover how vision is accomplished by the primate brain by placing realistic demands on the visual system (Leopold and Park, 2020; Fan et al., 2021). For example, in the present study we found no contralateral bias in face responsivity under conditions of clutter in three-item displays (Fig. 5C), although we did replicate the well-known contralateral bias for isolated face stimuli (Fig. 2C; Silson et al., 2021; Groen et al., 2022). This observation is a reminder that anything we learn about the response profile of the face-selective network, or about the processing of signals in ITC, under the typical experimental condition of fixating a single stimulus presented in isolation must be tested for contextual tolerance (Wardle and Baker, 2020) and for generalization to more naturalistic viewing conditions.

Footnotes

  • This work was supported by the Intramural Research Program of the National Institute of Mental Health (NIMH) Grants ZIAMH002918 (to Leslie G. Ungerleider) and ZIAMH002909 (to C.I.B.) and by the Australian Research Council Grant FT200100843 (to J.T.). Dr. Leslie G. Ungerleider supervised the early stages of this project (experiments 1 and 2); however, she passed away before the final results were available and could not assist with the analysis or the design of the follow-up experiments (experiments 3 and 4). Thus, the manuscript in no way reflects her interpretation of the data. We thank the Neurophysiology Imaging Facility Core (NIMH, National Institute of Neurological Disorders and Stroke, National Eye Institute) for functional and anatomical MRI scanning, with special thanks to Aidan Murphy, Charles Zhu, and Frank Ye for technical assistance.

  • The authors declare no competing financial interests.

  • Correspondence should be addressed to Jessica Taubert at j.taubert@uq.edu.au

SfN exclusive license.

References

  1. Agam Y, Liu H, Papanastassiou A, Buia C, Golby AJ, Madsen JR, Kreiman G (2010) Robust selectivity to two-object images in human visual cortex. Curr Biol 20:872–879. doi:10.1016/j.cub.2010.03.050 pmid:20417105
  2. Baeck A, Wagemans J, Op de Beeck HP (2013) The distributed representation of random and meaningful object pairs in human occipitotemporal cortex: the weighted average as a general rule. Neuroimage 70:37–47. doi:10.1016/j.neuroimage.2012.12.023 pmid:23266747
  3. Baldassano C, Beck DM, Fei-Fei L (2017) Human-object interactions are more than the sum of their parts. Cereb Cortex 27:2276–2288.
  4. Bao P, Tsao DY (2018) Representation of multiple objects in macaque category-selective areas. Nat Commun 9:1774. doi:10.1038/s41467-018-04126-7 pmid:29720645
  5. Bell AH, Malecek NJ, Morin EL, Hadj-Bouziane F, Tootell RB, Ungerleider LG (2011) Relationship between functional magnetic resonance imaging-identified regions and neuronal category selectivity. J Neurosci 31:12229–12240. doi:10.1523/JNEUROSCI.5865-10.2011 pmid:21865466
  6. Cox RW (1996) AFNI: software for analysis and visualization of functional magnetic resonance neuroimages. Comput Biomed Res 29:162–173. doi:10.1006/cbmr.1996.0014 pmid:8812068
  7. Cox PH, Riesenhuber M (2015) There is a “U” in clutter: evidence for robust sparse codes underlying clutter tolerance in human vision. J Neurosci 35:14148–14159. doi:10.1523/JNEUROSCI.1211-15.2015 pmid:26490856
  8. Crouzet SM, Kirchner H, Thorpe SJ (2010) Fast saccades toward faces: face detection in just 100 ms. J Vis 10:16.1–17. doi:10.1167/10.4.16 pmid:20465335
  9. Desimone R, Gross CG (1979) Visual areas in the temporal cortex of the macaque. Brain Res 178:363–380. doi:10.1016/0006-8993(79)90699-1
  10. Dow BM, Snyder AZ, Vautin RG, Bauer R (1981) Magnification factor and receptive field size in foveal striate cortex of the monkey. Exp Brain Res 44:213–228.
  11. Fan S, Dal Monte O, Chang SWC (2021) Levels of naturalism in social neuroscience research. iScience 24:102702. doi:10.1016/j.isci.2021.102702 pmid:34258547
  12. Fisher C, Freiwald WA (2015) Whole-agent selectivity within the macaque face-processing system. Proc Natl Acad Sci U S A 112:14717–14722. doi:10.1073/pnas.1512378112 pmid:26464511
  13. Freiwald WA (2020) The neural mechanisms of face processing: cells, areas, networks, and models. Curr Opin Neurobiol 60:184–191. doi:10.1016/j.conb.2019.12.007 pmid:31958622
  14. Freiwald WA, Tsao DY (2010) Functional compartmentalization and viewpoint generalization within the macaque face-processing system. Science 330:845–851. doi:10.1126/science.1194908 pmid:21051642
  15. Frisén L, Glansholm A (1975) Optical and neural resolution in peripheral vision. Invest Ophthalmol 14:528–536. pmid:1140911
  16. Groen IIA, Dekker TM, Knapen T, Silson EH (2022) Visuospatial coding as ubiquitous scaffolding for human cognition. Trends Cogn Sci 26:81–96. doi:10.1016/j.tics.2021.10.011 pmid:34799253
  17. Hadj-Bouziane F, Bell AH, Knusten TA, Ungerleider LG, Tootell RB (2008) Perception of emotional expressions is independent of face selectivity in monkey inferior temporal cortex. Proc Natl Acad Sci U S A 105:5591–5596. doi:10.1073/pnas.0800489105 pmid:18375769
  18. Haxby JV, Gobbini MI, Furey ML, Ishai A, Schouten JL, Pietrini P (2001) Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science 293:2425–2430. doi:10.1126/science.1063736 pmid:11577229
  19. Hebart MN, Görgen K, Haynes JD (2014) The decoding toolbox (TDT): a versatile software package for multivariate analyses of functional imaging data. Front Neuroinform 8:88. doi:10.3389/fninf.2014.00088 pmid:25610393
  20. Hesse JK, Tsao DY (2020) The macaque face patch system: a turtle's underbelly for the brain. Nat Rev Neurosci 21:695–716. doi:10.1038/s41583-020-00393-w pmid:33144718
  21. Hung CC, Yen CC, Ciuchta JL, Papoti D, Bock NA, Leopold DA, Silva AC (2015) Functional mapping of face-selective regions in the extrastriate visual cortex of the marmoset. J Neurosci 35:1160–1172. doi:10.1523/JNEUROSCI.2659-14.2015 pmid:25609630
  22. Ishai A, Ungerleider LG, Martin A, Schouten JL, Haxby JV (1999) Distributed representation of objects in the human ventral visual pathway. Proc Natl Acad Sci U S A 96:9379–9384. doi:10.1073/pnas.96.16.9379 pmid:10430951
  23. Issa EB, DiCarlo JJ (2012) Precedence of the eye region in neural processing of faces. J Neurosci 32:16666–16682. doi:10.1523/JNEUROSCI.2391-12.2012 pmid:23175821
  24. Ito M, Tamura H, Fujita I, Tanaka K (1995) Size and position invariance of neuronal responses in monkey inferotemporal cortex. J Neurophysiol 73:218–226. doi:10.1152/jn.1995.73.1.218
  25. Kanwisher N, McDermott J, Chun MM (1997) The fusiform face area: a module in human extrastriate cortex specialized for face perception. J Neurosci 17:4302–4311. doi:10.1523/JNEUROSCI.17-11-04302.1997
  26. Kastner S, De Weerd P, Desimone R, Ungerleider LG (1998) Mechanisms of directed attention in the human extrastriate cortex as revealed by functional MRI. Science 282:108–111. doi:10.1126/science.282.5386.108 pmid:9756472
  27. Keys RT, Taubert J, Wardle SG (2021) A visual search advantage for illusory faces in objects. Atten Percept Psychophys 83:1942–1953. doi:10.3758/s13414-021-02267-4 pmid:33768481
  28. Landman R, Sharma J, Sur M, Desimone R (2014) Effect of distracting faces on visual selective attention in the monkey. Proc Natl Acad Sci U S A 111:18037–18042. doi:10.1073/pnas.1420167111 pmid:25472846
  29. Leopold DA, Park SH (2020) Studying the visual brain in its natural rhythm. Neuroimage 216:116790. doi:10.1016/j.neuroimage.2020.116790 pmid:32278093
  30. Leopold DA, Bondar IV, Giese MA (2006) Norm-based face encoding by single neurons in the monkey inferotemporal cortex. Nature 442:572–575. doi:10.1038/nature04951 pmid:16862123
  31. MacEvoy SP, Epstein RA (2009) Decoding the representation of multiple simultaneous objects in human occipitotemporal cortex. Curr Biol 19:943–947. doi:10.1016/j.cub.2009.04.020 pmid:19446454
  32. MacEvoy SP, Epstein RA (2011) Constructing scenes from objects in human occipitotemporal cortex. Nat Neurosci 14:1323–1329. doi:10.1038/nn.2903 pmid:21892156
  33. Orban GA, Zhu Q, Vanduffel W (2014) The transition in the ventral stream from feature to real-world entity representations. Front Psychol 5:695. doi:10.3389/fpsyg.2014.00695 pmid:25071663
  34. Orhan AE, Ma WJ (2015) Neural population coding of multiple stimuli. J Neurosci 35:3825–3841. doi:10.1523/JNEUROSCI.4097-14.2015 pmid:25740513
  35. Popivanov ID, Jastorff J, Vanduffel W, Vogels R (2012) Stimulus representations in body-selective regions of the macaque cortex assessed with event-related fMRI. Neuroimage 63:723–741. doi:10.1016/j.neuroimage.2012.07.013 pmid:22796995
  36. Premereur E, Taubert J, Janssen P, Vogels R, Vanduffel W (2016) Effective connectivity reveals largely independent parallel networks of face and body patches. Curr Biol 26:3269–3279. doi:10.1016/j.cub.2016.09.059 pmid:27866893
  37. Quek GL, Peelen MV (2020) Contextual and spatial associations between objects interactively modulate visual processing. Cereb Cortex 30:6391–6404. doi:10.1093/cercor/bhaa197 pmid:32754744
  38. Reddy L, Kanwisher N (2007) Category selectivity in the ventral visual pathway confers robustness to clutter and diverted attention. Curr Biol 17:2067–2072. doi:10.1016/j.cub.2007.10.043 pmid:17997310
  39. Reddy L, Kanwisher NG, VanRullen R (2009) Attention and biased competition in multi-voxel object representations. Proc Natl Acad Sci U S A 106:21447–21452. doi:10.1073/pnas.0907330106 pmid:19955434
  40. Rolls ET, Aggelopoulos NC, Zheng F (2003) The receptive fields of inferior temporal cortex neurons in natural scenes. J Neurosci 23:339–348. doi:10.1523/JNEUROSCI.23-01-00339.2003
  41. Rossion B, Hanseeuw B, Dricot L (2012) Defining face perception areas in the human brain: a large-scale factorial fMRI face localizer analysis. Brain Cogn 79:138–157. doi:10.1016/j.bandc.2012.01.001 pmid:22330606
  42. Russ BE, Leopold DA (2015) Functional MRI mapping of dynamic visual features during natural viewing in the macaque. Neuroimage 109:84–94. doi:10.1016/j.neuroimage.2015.01.012 pmid:25579448
  43. Sadagopan S, Zarco W, Freiwald WA (2017) A causal relationship between face-patch activity and face-detection behavior. eLife 6:e18558. doi:10.7554/eLife.18558
  44. Silson EH, Groen IIA, Baker CI (2021) Direct comparison of contralateral bias and face/scene selectivity in human occipitotemporal cortex. Brain Struct Funct 227:1405–1421.
  45. Song Y, Luo YL, Li X, Xu M, Liu J (2013) Representation of contextually related multiple objects in the human ventral visual pathway. J Cogn Neurosci 25:1261–1269. doi:10.1162/jocn_a_00406 pmid:23772557
  46. Taubert J, Van Belle G, Vanduffel W, Rossion B, Vogels R (2015) Neural correlate of the thatcher face illusion in a monkey face-selective patch. J Neurosci 35:9872–9878. doi:10.1523/JNEUROSCI.0446-15.2015 pmid:26156988
  47. Taubert J, Wardle SG, Flessert M, Leopold DA, Ungerleider LG (2017) Face pareidolia in the rhesus monkey. Curr Biol 27:2505–2509.e2. doi:10.1016/j.cub.2017.06.075 pmid:28803877
  48. Taubert J, Flessert M, Wardle SG, Basile BM, Murphy AP, Murray EA, Ungerleider LG (2018a) Amygdala lesions eliminate viewing preferences for faces in rhesus monkeys. Proc Natl Acad Sci U S A 115:8043–8048. doi:10.1073/pnas.1807245115 pmid:30012600
  49. Taubert J, Van Belle G, Vogels R, Rossion B (2018b) The impact of stimulus size and orientation on individual face coding in monkey face-selective cortex. Sci Rep 8:10339. doi:10.1038/s41598-018-28144-z pmid:29985387
  50. Taubert J, Japee S, Murphy AP, Tardiff CT, Koele EA, Kumar S, Leopold DA, Ungerleider LG (2020a) Parallel processing of facial expression and head orientation in the macaque brain. J Neurosci 40:8119–8131. doi:10.1523/JNEUROSCI.0524-20.2020 pmid:32928886
  51. Taubert J, Wardle SG, Ungerleider LG (2020b) What does a “face cell” want? Prog Neurobiol 195:101880. doi:10.1016/j.pneurobio.2020.101880
  52. Taubert J, Wardle SG, Tardiff CT, Koele EA, Kumar S, Messinger A, Ungerleider LG (2022) The cortical and subcortical correlates of face pareidolia in the macaque brain. Soc Cogn Affect Neurosci. Advance online publication. Retrieved April 21, 2022. doi:10.1093/scan/nsac031
  53. Tsao DY, Freiwald WA, Knutsen TA, Mandeville JB, Tootell RB (2003) Faces and objects in macaque cerebral cortex. Nat Neurosci 6:989–995. doi:10.1038/nn1111 pmid:12925854
  54. Tsao DY, Freiwald WA, Tootell RB, Livingstone MS (2006) A cortical region consisting entirely of face-selective cells. Science 311:670–674. doi:10.1126/science.1119983 pmid:16456083
  55. Vanduffel W, Fize D, Mandeville JB, Nelissen K, Van Hecke P, Rosen BR, Tootell RB, Orban GA (2001) Visual motion processing investigated using contrast agent-enhanced fMRI in awake behaving monkeys. Neuron 32:565–577. doi:10.1016/S0896-6273(01)00502-5
  56. Van Essen DC, Newsome WT, Maunsell JH (1984) The visual field representation in striate cortex of the macaque monkey: asymmetries, anisotropies, and individual variability. Vision Res 24:429–448.
  57. Wardle SG, Baker C (2020) Recent advances in understanding object recognition in the human brain: deep neural networks, temporal dynamics, and context. F1000Res 9:590. doi:10.12688/f1000research.22296.1
  58. Wardle SG, Taubert J, Teichmann L, Baker CI (2020) Rapid and dynamic processing of face pareidolia in the human brain. Nat Commun 11:4518. doi:10.1038/s41467-020-18325-8 pmid:32908146
  59. Xiang QS, Ye FQ (2007) Correction for geometric distortion and N/2 ghosting in EPI by phase labeling for additional coordinate encoding (PLACE). Magn Reson Med 57:731–741. doi:10.1002/mrm.21187 pmid:17390358
  60. Zhang H, Japee S, Stacy A, Flessert M, Ungerleider LG (2020) Anterior superior temporal sulcus is specialized for non-rigid facial motion in both monkeys and humans. Neuroimage 218:116878. doi:10.1016/j.neuroimage.2020.116878
  61. Zoccolan D, Cox DD, DiCarlo JJ (2005) Multiple object response normalization in monkey inferotemporal cortex. J Neurosci 25:8150–8164. doi:10.1523/JNEUROSCI.2058-05.2005 pmid:16148223
Keywords

  • clutter
  • face perception
  • face-selective
  • macaque fMRI
  • multiple objects
  • periphery
