Journal of Neuroscience
Research Articles, Systems/Circuits

Color and Spatial Frequency Provide Functional Signatures of Retinotopic Visual Areas

Spencer R. Loggia, Stuart J. Duffield, Kurt Braunlich and Bevil R. Conway
Journal of Neuroscience 8 January 2025, 45 (2) e1673232024; https://doi.org/10.1523/JNEUROSCI.1673-23.2024
Spencer R. Loggia,1,2 Stuart J. Duffield,1 Kurt Braunlich,1,3 and Bevil R. Conway1,3

1National Eye Institute, Bethesda, Maryland 20892
2Department of Neuroscience, Brown University, Providence, Rhode Island
3National Institute of Mental Health, Bethesda, Maryland 20892

Abstract

Primate vision relies on retinotopically organized cortical parcels defined by representations of hemifield (upper vs lower visual field), eccentricity (fovea vs periphery), and area (V1, V2, V3, V4). Here we test for functional signatures of these organizing principles. We used functional magnetic resonance imaging to measure responses to gratings varying in spatial frequency, color, and saturation across retinotopically defined parcels in two macaque monkeys, and we developed a Sparse Supervised Embedding (SSE) analysis to identify stimulus features that best distinguish cortical parcels from each other. Constraining the SSE model to distinguish just eccentricity representations of the voxels revealed the expected variation of spatial frequency and S-cone modulation with eccentricity. Constraining the model according to the dorsal/ventral location and retinotopic area of each voxel provided unexpected functional signatures, which we investigated further with standard univariate analyses. Posterior parcels (V1) were distinguished from anterior parcels (V4) by differential responses to chromatic and luminance contrast, especially of low-spatial-frequency gratings. Meanwhile, ventral parcels were distinguished from dorsal parcels by differential responses to chromatic and luminance contrast, especially of colors that modulate all three cone types. The dorsal/ventral asymmetry not only resembled differences between candidate dorsal and ventral subdivisions of human V4 but also extended to include all retinotopic visual areas, starting in V1 and increasing from V1 to V4. The results provide insight into the functional roles of different retinotopic areas and demonstrate the utility of SSE as a data-driven tool for generating hypotheses about cortical function and behavior.

  • color
  • cortical organization
  • fMRI
  • macaque
  • retinotopy
  • vision

Significance Statement

This study demonstrates a new analysis, Sparse Supervised Embedding (SSE), which promises to be useful for visualizing and understanding complex neuroimaging datasets. The paper uses SSE to explore the functional roles of retinotopic visual areas (V1, V2, V3, V4, V3a, MT). The results show that retinotopic areas parcellated by representations for eccentricity and upper/lower visual hemifield have functional signatures, which are defined by unique combinations of responses to color, spatial frequency, and contrast. The functional signatures provide hypotheses for the different roles that the parcels play in vision and help resolve apparent differences between human and macaque visual cortex organization.

Introduction

The computations performed by each retinotopically organized visual area in the putative visual-processing hierarchy (V1, V2, V3, V4) remain unclear. The retinotopic organization of these areas leaves an imprint on higher-order areas (Hasson et al., 2003; Lafer-Sousa and Conway, 2013; Conway, 2018; Arcaro et al., 2017; Klink et al., 2021) and influences visual behavior. For instance, acuity is determined by a target's location relative to the fovea, and perception varies with polar angle (Carrasco and McElree, 2001; Levine and McAnany, 2005; Quek and Finkbeiner, 2015; Himmelberg et al., 2023). Such perceptual asymmetries are not fully explained by retinal factors, pointing to cortical explanations (Kupers et al., 2022), but the distinct roles of different retinotopic areas remain elusive. One idea is that the sequence of retinotopic areas builds complex receptive fields (Freeman et al., 2013; Vernon et al., 2016; Ponce et al., 2019). According to this idea, the visual areas, either singly or in clusters, represent computational units (Wandell et al., 2005; Winawer and Witthoft, 2015), but there are other possibilities. For example, multiple areas could reflect selective pressure to smoothly represent the visual field at multiple scales (Tarhan et al., 2021). Determining the roles of retinotopic areas would be facilitated by quantitative data on cortical responses in macaque monkey, a standard model of the human visual system.

Here we ask: To what extent can combinations of simple stimulus features distinguish retinotopically defined parcels? Our objectives are twofold. First, to advance a quantitative framework for understanding neural responses across the visual-processing hierarchy and their connection to perception, and second, to facilitate linking neural activity in monkeys to psychophysical and imaging data in humans. Achieving these objectives will allow tests of evolutionary homologies and help reveal the extent to which the visual systems are similar in macaque and human.

We first used functional magnetic resonance imaging (fMRI) to parcellate the macaque visual cortex by eccentricity, upper and lower visual field, and visual area. We then used fMRI to measure responses to gratings varying in color, spatial frequency, and cone contrast, as done in human subjects (Wade et al., 2002; Brewer et al., 2005; Liu and Wandell, 2005; Mullen, 2019). We analyze the data with a new multivariate analysis method we call Sparse Supervised Embedding (SSE) that discovers differences between subdivisions of a dataset in a high-dimensional space. Similar methods such as linear discriminant analysis (LDA) or linear classifiers have been used to separate stimulus-defined classes by voxel responses (Cox and Savoy, 2003; Pardo et al., 2006; Lin et al., 2021). SSE flips this logic, instead uncovering sparse combinations of stimuli that best distinguish cortical parcels. The results provide a functional signature for the cortical parcels, somewhat analogous to voxel decomposition (Norman-Haignere, 2015).

Unlike traditional univariate analyses, which report response magnitude to individual stimuli, SSE uncovers structure determined by the relative responses to all stimuli in a dataset. SSE is well suited to our purpose because rather than simply classifying parcels based on the average response of voxels in each parcel, it finds a space that yields high variance between these average responses and low variance of the voxel responses within each parcel. The axes of this space are those that best separate the cortical parcels and are interpretable as linear combinations of features—in our case, gratings varying in color, contrast, and spatial frequency. The organization of the parcels along the axes, to the extent there is any organization, will be informative about the functional relationships of the parcels. In the present work, SSE discovered an axis that organizes the parcels by eccentricity, and the axis features are those known to vary with eccentricity (Wright and Johnston, 1983; Curcio et al., 1991; Vanston and Crognale, 2018; Broderick et al., 2022), which validates the approach. SSE also produced axes that separate parcels by upper and lower visual field and by location along the putative cortical processing hierarchy, discovering functional signatures of retinotopic visual areas and their upper and lower visual field representations.

Materials and Methods

fMRI acquisition

Imaging acquisition was the same as in Lafer-Sousa et al. (2012) and Lafer-Sousa and Conway (2013). The V1 data analyzed presently contributed to those two reports, but none of the data on responses in V2, V3, V4, MT, or V3a presented here have previously been published; all the data analyzed in the present report are available in open access (https://openneuro.org/datasets/ds005521). Two alert rhesus macaques (7–8 kg, M1 and M2) were scanned at the Martinos Imaging Center at Massachusetts General Hospital in a 3 Tesla Allegra scanner (Siemens) using a custom-made four–channel send–receive surface coil (Athinoula A. Martinos Center for Biomedical Imaging). Images were acquired using standard echoplanar imaging methods with 2 s repetition time (TR), each repetition acquiring a 98 × 63 × 98 voxel matrix with 1 mm isotropic voxels. Animals were trained using juice rewards to sit in a sphinx position in a custom-built plastic chair placed inside the bore of the horizontal scanner while fixating a central target on a display screen. Animals were required to fixate throughout the experiment to receive reward. Head position was maintained with custom plastic head posts that were surgically implanted (see below, Surgical procedures; Lafer-Sousa et al., 2012). Eye movements were tracked with an infrared eye tracker (ISCAN). Animals were rewarded for maintaining fixation within a degree of the central fixation target. Monocrystalline iron oxide nanoparticle (MION; AMAG Pharmaceuticals; 8–10 mg/kg, diluted in approximately equal volume of saline) was injected intravenously in the saphenous vein immediately prior to scanning to improve the magnetic resonance signal (Vanduffel et al., 2001). High-resolution anatomical scans (0.35 mm × 0.35 mm × 0.35 mm in M1 and 0.35 mm × 0.4 mm × 0.35 mm in M2) were obtained while the animals were lightly sedated during a separate scanning session. 
All imaging and surgical procedures follow local and National Institutes of Health guidelines and were approved by the Harvard Medical School Institutional Animal Care and Use Committee.

Experimental design and statistical analysis

Four fMRI experiments were conducted over the course of seven sessions in M1 and six sessions in M2. In Experiment 1, we measured responses to vertical and horizontal flickering checkerboard wedges to define retinotopic areas (Fig. 1A). In Experiment 2, we measured responses to checkerboard patterns restricted to rings of different eccentricity (Fig. 1B). In Experiment 3, we measured responses to color-gray gratings in which the colors were defined by a cone-opponent color space (Fig. 2A,B; MacLeod and Boynton, 1979; Derrington et al., 1984); the stimuli varied in hue and saturation (Fig. 2C). Finally, in Experiment 4, we measured responses to heterochromatic gratings also defined by the cone-opponent color space but varying in spatial frequency (Fig. 2D). Each scan session consisted of 13–24 stimulus runs, details of which are described below.

Figure 1.
Parcellation of the retinotopic visual cortex in macaque monkey. A, Individual-specific functional parcellation. To delineate retinotopic areas in each macaque, using a block design, the animals were shown checkerboard patterns along the vertical and horizontal meridians (icon). The panel shows the contrast maps between these two conditions displayed on the inflated surface of Monkey 1 (M1); responses biased for the vertical meridian (cool colors) and horizontal meridian (warm colors). Each retinotopic area is bounded by peak activation along the horizontal meridian (solid black line) and vertical meridian (dotted black line). The most posterior horizontal meridian delineates dorsal V1 (representing the lower visual field) from ventral V1 (representing the upper visual field). The most posterior vertical meridian representation separates V1 from V2. Progressing from posterior to anterior, V2 is then separated from V3 by the next horizontal meridian representation, and V3 is separated from V4 by the next vertical meridian representation. V4 is bounded anteriorly by a fragmented horizontal meridian representation. V3a is not clearly visible in the lateral projection shown in the figure (it is situated on the dorsal surface), but it is contained by its own meridian representations. The solid white line extending anteriorly from the horizontal meridian representation in V1 shows the separation of dorsal and ventral parcels that tracks through the foveal confluence. B, To define voxels by eccentricity responses, checkerboard rings and circles were presented in a blocked paradigm, and the statistical contrasts between these conditions were used to create parcels encompassing the central 1.5° and nested annuli extending to 3.5°, 7°, and 10° (icon). The eccentricity parcellation is shown for M1. Note the existence of two representations of the fovea; the larger one corresponds to the confluence of the V1–V4 cluster and the other corresponds to the MT cluster.

Figure 2.

Overview of the stimuli used to identify the functional signatures for retinotopic parcels. A, The colors of the stimuli were defined in a cone-opponent color space. An equiluminant plane through the space is shown. Colors included those of the two cardinal cone-opponent axes, the two intermediate axes, and the luminance axis (orthogonal to the equiluminant plane, inset). These axes are labeled LM (0 and 180°), daylight (45 and 225°), S (90 and 270°), antidaylight (135 and 315°), and luminance (LMS). B, CIE-1931 chromaticity diagram showing the various saturation levels of the stimuli colors used in one set of experiments (Experiment 3). The continuous color horseshoe shows where the spectrum plots in the chromaticity space, while the dots show the stimuli and the asterisk indicates the chromaticity of the adapting gray background. See Table 1 for the cone contrasts of the stimuli. C, The set of color-gray gratings used in Experiment 3, varying in hue angle (columns) and saturation (rows). The spatial frequency of the gratings was 0.5 CpD. D, The set of heterochromatic gratings used in Experiment 4, varying in axis through the color space (columns) and spatial frequency (rows, CpD). See Table 2 for the cone contrasts of the stimuli.

The color space is defined by two equiluminant cone-opponent axes, one of which consists of colors that modulate responses of the L and M cones without changing the activity of the S cones (the L-M axis, demarcated in Fig. 2A by color angles 0 and 180), and the other consists of colors that modulate responses of S cones without changing the activity of the L or M cones (demarcated by color angles 90 and 270). The space has an orthogonal luminance axis that is modulated by the change in L, M, and S cones (Fig. 2A, inset). Colors in each experiment selectively modulated the L-M axis, the S axis, the luminance axis, or two intermediate axes of the equiluminant plane that modulate all three cone types. The colors of the two intermediate axes correspond roughly to the daylight locus (orange/blue; 45 and 225°) and the antidaylight locus (green/magenta; 135 and 315°). We refer to these five chromatic dimensions as LM, S, LMS, daylight, and antidaylight, following others (Goddard et al., 2010; Lafer-Sousa et al., 2012). The gamut of the color space was bounded by the monitor gamut (Fig. 2B, triangle).

The stimuli were presented on a screen 48.2 cm away from the animal, with a JVC DLA projector (1,024 × 768 pixel resolution). For Experiments 1, 3, and 4, the projected image subtended 22.5 × 16.9 cm (27 × 20 degrees of visual angle, DvA), and for Experiment 2, the projected image subtended 35 × 26.8 cm (41 × 31 DvA). The projector's luminance output was linearized, and colors were measured and calibrated using spectral readings taken from a spectroradiometer (PR-655, Photo Research). Spectra were multiplied by the Judd-revised CIE 1931 color matching functions to derive CIE xy coordinates (Fig. 2B). All stimuli were presented in a blocked paradigm. Stimulus blocks were interleaved with adapting-gray blocks.
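The calibration step above can be sketched as follows: weight the measured spectrum by the color matching functions and normalize the resulting tristimulus values. This is a minimal sketch; the function name and the toy two-sample grid are illustrative, not the authors' calibration code.

```python
import numpy as np

def spectrum_to_xy(spd, xbar, ybar, zbar):
    """Weight a spectral power distribution by color matching functions
    (sampled on the same wavelength grid) to get tristimulus values,
    then normalize to CIE xy chromaticity coordinates."""
    X = np.sum(spd * xbar)
    Y = np.sum(spd * ybar)
    Z = np.sum(spd * zbar)
    total = X + Y + Z
    return X / total, Y / total

# Toy two-sample spectrum and flat matching functions (hypothetical
# values; the study used PR-655 spectra and the Judd-revised CIE 1931
# functions).
spd = np.array([1.0, 1.0])
x_, y_ = spectrum_to_xy(spd,
                        xbar=np.array([0.5, 0.5]),
                        ybar=np.array([0.25, 0.25]),
                        zbar=np.array([0.25, 0.25]))
```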

The first experiment (Experiment 1, Fig. 1A) allowed us to identify horizontal and vertical meridian representations which define the boundaries between retinotopic visual areas (for review, see Vanduffel et al., 2014). Stimuli consisted of black-and-white checkerboard wedges that flickered between complementary checkerboard patterns every 1 s at 99% luminance contrast. The stimuli were two wedges that radiated from the central fixation spot. In one block, the wedges spanned the vertical meridian (60° wedge); in the other block, the wedges spanned the horizontal meridian (30° wedge). Blocks were 32 s (16 TRs), and each run had 16 blocks. For all runs, the conditions were ordered horizontal, gray, vertical, and gray, repeated four times. A total of 13 runs of this experiment were collected in M1, and 14 runs were collected in M2.

The second experiment was an eccentricity mapping experiment (Experiment 2, Fig. 1B). Stimuli consisted of LMS, LM, or S checkerboards presented in rings centered on the fixation point; the checkerboards flickered between complementary checkerboard patterns every 1 s. Rings extended from the central fixation spot to a width of 20 DvA, with each successively larger ring having an inner radius equal to the outer radius of the previous ring. Rings had outer radii of 1.5°, 3.5°, 7°, and 10°. Blocks showing one of the rings were interleaved with blocks of the adapting-gray background. Blocks were 32 s (16 TRs), and each run had 17 blocks. There were three run orders, with only the colors of the checkerboards varying across the orders. For all three run orders, the gray blocks and rings proceeded as gray, 1.5°; gray, 3.5°; gray, 7°; and gray, 10°, repeated twice, with a final gray block. For the first order, the first four ring blocks were S checkerboards, and the second four ring blocks were LM; the second order had S followed by LMS checkerboards; the third order had LMS followed by LM checkerboards. A total of 24 runs were collected in M1 and 21 runs were collected in M2.

In the third experiment, we presented color-gray “trapezoidal” gratings with a spatial frequency of 0.5 cycles per degree (CpD; Fig. 2C). The structure of the gratings is trapezoidal when plotting the cone contrast of the colors along the line perpendicular to the gratings. The saturated or fully gray portions of the gratings made up 80% of each cycle; the transition between the fully gray and fully saturated portions of the gratings was progressive such that the cone contrast from one side of the grating to the other was linear. This spatial structure helps mitigate chromatic aberration. The gratings were presented in nine hues, defined by the eight poles of the four axes in the equiluminant color plane and the luminance axis. Each hue was presented at four saturation levels (10, 30, 50, or 95% of the maximum saturation of the display for the colored gratings; 1, 2.5, 5.5, or 8.7% luminance contrast for the achromatic gratings; contrasts were computed as Michelson contrast; Table 1). In total, there were 32 colored grating blocks and 4 achromatic grating blocks. The gratings drifted horizontally at a speed of 0.8 cycles per second, and the direction of the drift changed every 2 s. In each run, blocks of gratings were interleaved with blocks of gray. The blocks were 28 s (14 TRs), and each run had 19 blocks (10 blocks of gray and 9 blocks of gratings). The full cycle of 36 gratings was presented over four runs. Each of the four runs presented 8 of the 32 colored gratings, plus one of the 4 achromatic gratings (the eight gratings in each run were unique, i.e., not shown in any of the other three runs). Each block within a run was a unique color, randomly selected from one of the four saturation levels in such a way that all hues were presented in all runs. For each run, the hues were presented in one of the four counterbalanced sequences, and the achromatic grating was shown in the middle of the sequence. 
This block structure ensures that the intervening gray blocks provide a baseline reference throughout the experiment. A total of 49 runs were collected in M1, and 58 runs were collected in M2. Each block type was presented between 26 and 28 times.
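The trapezoidal cone-contrast profile described above can be sketched as a piecewise-linear waveform. The fractions follow the text (flat portions totaling 80% of each cycle, linear transitions between them); the function name and sample points are illustrative.

```python
import numpy as np

def trapezoid_profile(x, flat_frac=0.8):
    """Cone contrast across one cycle of a color-gray trapezoidal
    grating: a fully saturated plateau and a fully gray plateau
    (together flat_frac of the cycle) joined by linear ramps.
    x is position within the cycle, in [0, 1)."""
    ramp = (1.0 - flat_frac) / 2.0            # each transition: 10% of cycle
    plateau = flat_frac / 2.0                 # each plateau: 40% of cycle
    xp = [0.0, plateau, plateau + ramp, 2 * plateau + ramp, 1.0]
    fp = [1.0, 1.0, 0.0, 0.0, 1.0]            # saturated -> gray -> saturated
    return np.interp(np.mod(x, 1.0), xp, fp)

# plateaus at 1 (saturated) and 0 (gray), ramp midpoints at 0.5
samples = trapezoid_profile(np.array([0.0, 0.2, 0.45, 0.7, 0.95]))
```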

Table 1.

Michelson cone contrasts for L, M, and S channels for Experiment 3 grating stimuli

The fourth experiment presented heterochromatic gratings with a spatial frequency of either 0.2, 4, or 8 CpD. The colors of the two phases of a given grating were defined by the poles of the LM axis, the S axis, or the luminance axis (Fig. 2D). The colors of the colored gratings were set at 90% of the saturation of the display defined in the DKL-MB color space; the achromatic gratings were of either 90 or 9% luminance contrast (see Table 2 for cone contrasts). Blocks of gratings were interleaved with blocks of gray. During a given block, the grating was stationary and alternated in counter-phase between complementary patterns every 1 s. The blocks were 32 s (16 TRs), and each run had 25 blocks, which included all 12 grating conditions and the 13 intervening gray blocks. The stimulus blocks were shown in a counterbalanced sequence across runs. A total of 14 runs were collected in M1, and 13 runs were collected in M2. Each block type was presented 27 times.

Table 2.

Michelson cone contrasts for L, M, and S channels for Experiment 4 heterochromatic grating stimuli

fMRI data preprocessing is described below. F tests for linear regressions were computed with MATLAB's fitlm function. Multiway ANOVAs, ANCOVAs, and subsequent Tukey's honest significant difference (HSD) tests were computed with MATLAB's anovan, ancova, and multcompare functions. Multiway ANCOVA was computed using the MANCOVAN toolbox (Gruner, 2010). In cases where bootstrapping was used to generate confidence intervals, data were independently resampled 1,000 times. Variance and expected value were determined by fitting a normal distribution over all resamples. P values and the statistical tests used are presented in the figures, figure legends, and Results section. The SSE analysis method is described in detail below (see SSE analysis).

fMRI preprocessing

The raw data were unpacked from DICOM (Digital Imaging and Communications in Medicine) to NIfTI (Neuroimaging Informatics Technology Initiative) format using dcm2niix (Li et al., 2016). The images underwent thermal denoising using the NORDIC algorithm (Vizioli et al., 2021) to improve the signal-to-noise ratio. Images were reoriented from the sphinx position. Data were motion corrected with the FSL (FMRIB Software Library) motion-correction algorithm (MCFLIRT) with 12 degrees of freedom (Smith et al., 2004). The functional volumes were then coregistered to the anatomical volumes using ITK-SNAP's linear registration tool v3.6.0 (Yushkevich et al., 2006) and the ANTs nonlinear registration algorithm (Avants et al., 2011) to obtain the closest mapping of the functional volumes to the anatomical image. Blocks were one-hot encoded and convolved with the MION hemodynamic response function (HRF) to create design matrices (Vanduffel et al., 2001). No spatial smoothing was applied. Nilearn's general linear model module (https://nilearn.github.io/) was used to calculate β coefficients and create statistical contrast maps, using drift regressors up to the third-order polynomial to account for fMRI signal drift. The β coefficients for each block were divided by the β coefficients for the intervening gray blocks for each run.
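The design-matrix construction described above (one-hot block regressors convolved with an HRF, plus polynomial drift terms) can be sketched as follows. The gamma-shaped kernel is a stand-in for the MION HRF, and the onsets, condition names, and run length are illustrative, not taken from the study's code.

```python
import numpy as np

def block_design_matrix(block_onsets, block_len, n_trs, hrf):
    """One-hot encode stimulus blocks and convolve each condition's
    boxcar with an HRF; append polynomial drift regressors up to
    third order, as in the preprocessing described above."""
    names = sorted(block_onsets)
    X = np.zeros((n_trs, len(names)))
    for j, name in enumerate(names):
        boxcar = np.zeros(n_trs)
        for onset in block_onsets[name]:
            boxcar[onset:onset + block_len] = 1.0   # block on, in TRs
        X[:, j] = np.convolve(boxcar, hrf)[:n_trs]
    t = np.linspace(-1, 1, n_trs)
    drift = np.vstack([t ** p for p in range(4)]).T  # constant..cubic drift
    return np.hstack([X, drift]), names

# Gamma-shaped kernel standing in for the MION HRF (hypothetical shape);
# blocks of 16 TRs echo Experiments 1 and 2.
t_hrf = np.arange(0, 30, 2.0)                        # seconds, TR = 2 s
hrf = t_hrf ** 2 * np.exp(-t_hrf / 4.0)
hrf /= hrf.sum()
X, names = block_design_matrix({"horizontal": [0, 32], "vertical": [16, 48]},
                               block_len=16, n_trs=64, hrf=hrf)
```

The β coefficients then come from regressing each voxel's time series on X, with the stimulus-block βs normalized by the gray-block βs per run.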

Anatomical processing and region of interest definition

To create surfaces of the macaque brain, the high-resolution anatomical images were skull stripped, white matter regions were labeled, and surfaces were generated and inflated with FreeSurfer (Smith et al., 2004). Retinotopic parcels were defined using functional data obtained in each animal (Experiment 1). Significance maps showing the contrast of responses to vertical versus horizontal were shown on the inflated surface of each animal, and the peak bias of the vertical-greater-than-horizontal responses was used to define the boundary between V1 and V2 and the boundary between V3 and V4. The peak bias of the horizontal-greater-than-vertical responses was used to define the upper and lower visual field representations of V1 and the boundary between V2 and V3. MT+ was defined such that it bordered V4d along a shared horizontal meridian and featured a continuous eccentricity map in a three-quarter circle around the foveal representation in the superior temporal sulcus (Fig. 1B; Brewer et al., 2002; Kolster et al., 2014). Surface labels of the retinotopic areas were transformed into the anatomical volume space. The functionally defined retinotopic areas were cross-referenced with the Paxinos illustrated atlas (Paxinos et al., 2000), the D99 atlas (Reveley et al., 2016), and the CHARM atlas (Jung et al., 2021), producing retinotopic regions of interest (ROIs) for each subject.

Retinotopic areas V1, V2, V3, and V4 were subdivided into dorsal and ventral ROIs by extending the horizontal meridian of V1 through the center of the foveal confluence in each hemisphere of each subject's surface (Fig. 1A, white line). The surface labels were then back-projected into the anatomical space and used as a reference to divide the retinotopic ROIs into ventral and dorsal parcels, corresponding to the upper and lower visual field representations of each retinotopic area. V3a and MT+, being located on the dorsal surface of the cortex, were not subdivided, despite these areas having representations of the upper and lower visual field (Gattass et al., 1988; Kolster et al., 2014; Zhu and Vanduffel, 2019).

To determine eccentricity representations, statistical contrasts of the responses elicited by neighboring rings were generated, and voxels were assigned to eccentricities eliciting the maximum z-score, moving outward from foveal regions to the periphery (Fig. 1B). Together, the functional parcellation provides the eccentricity, visual area, and upper-versus-lower visual field representation of each voxel.
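The winner-take-all eccentricity assignment can be sketched in a few lines; the z-scores and function name below are illustrative, with ring radii taken from the text.

```python
import numpy as np

def assign_eccentricity(z_maps, ring_radii):
    """Winner-take-all assignment: each voxel gets the eccentricity
    ring whose contrast elicited its maximum z-score."""
    return np.asarray(ring_radii)[np.argmax(z_maps, axis=1)]

# two toy voxels x four rings (outer radii 1.5, 3.5, 7, and 10 degrees)
z = np.array([[3.1, 0.2, -1.0, 0.5],
              [0.1, 0.4,  2.2, 1.9]])
assigned = assign_eccentricity(z, [1.5, 3.5, 7.0, 10.0])
```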

SSE analysis

We developed a data-driven method that discovers a low-dimensional space E that best captures the relationships between the retinotopic parcels, R. We refer to this method as SSE. The goal of the method is to use the responses to the various visual stimuli to uncover functional relationships between the parcels. So, unlike other approaches that use fMRI responses to determine relationships among stimuli, we use the fMRI responses to determine relationships among the cortical parcels. The results allow us to test the hypothesis that color gratings of varying spatial frequency and contrast provide a functional signature of the parcels. Given a number of voxels v, a number of stimulus conditions m, and a number of component vectors k, we find the (m × k) projection matrix T that takes the input data matrix (in this case a v × m standardized β coefficient matrix) into a k-dimensional Euclidean space E, such that T maximizes the pairwise distance between parcel centers while minimizing variance within those parcels. The final objective is the sum of the interparcel distance term, the negated intraparcel variance term, and an L1 regularization term over T, as follows (Eq. 1):

$$\operatorname*{argmax}_{T}\;\frac{1}{|R|^{2}}\sum_{\forall f_{1}\in R}\,\sum_{\forall f_{2}\in R}M_{f_{1},f_{2}}\left(\overline{\beta_{f_{1}}T}-\overline{\beta_{f_{2}}T}\right)^{2}\;-\;\frac{1}{|R|}\sum_{\forall f\in R}\sum\left|\operatorname{cov}(\beta_{f}T)\right|\;+\;S\sum\left|T\right|.$$

Equation 1: the SSE objective function

In Equation 1, R is the set of ROI labels (and |R| is the number of ROIs), T is the transform we are trying to discover (the set of k length-m component vectors), β is the beta matrix with m features, k is the number of target components or dimensions, and M is a weight matrix that allows control over the importance of separating each pair of classes. We use cov to denote computing the covariance matrix over the embedding dimensions. S (the sparsity term) scales the magnitude of the L1 regularization term; it is used to reduce overfitting, reduce variability between independent runs, and encourage more interpretable component vectors. For most of our analyses, S is set to 0.005; in a control analysis, we set S to zero (Fig. 4); the control shows that the S parameter is useful, but the conclusions do not depend on it. The model is optimized via gradient descent, using the Adam optimization method (Kingma and Ba, 2017), and implemented in PyTorch v1.13.
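A minimal numpy sketch of evaluating Equation 1 for a candidate T follows (the published model optimizes T by gradient ascent with Adam in PyTorch; all sizes and variable names here are illustrative):

```python
import numpy as np

def sse_objective(beta, labels, T, M, S=0.005):
    """Evaluate Equation 1 for a candidate projection T: M-weighted
    pairwise squared distance between parcel centroids in the embedding,
    minus the mean summed |covariance| within parcels, plus the L1 term
    scaled by the sparsity parameter S."""
    rois = sorted(set(labels))
    n = len(rois)
    centers, var_term = [], 0.0
    for r in rois:
        E_r = beta[labels == r] @ T               # embed parcel r's voxels
        centers.append(E_r.mean(axis=0))
        var_term += np.abs(np.cov(E_r.T)).sum()   # |cov| over embedding dims
    C = np.stack(centers)                         # (|R|, k) parcel centers
    d2 = ((C[:, None, :] - C[None, :, :]) ** 2).sum(axis=-1)
    return (M * d2).sum() / n ** 2 - var_term / n + S * np.abs(T).sum()

# illustrative sizes: 100 voxels, 12 conditions, 4 parcels, k = 2
rng = np.random.default_rng(1)
beta = rng.standard_normal((100, 12))
labels = np.repeat(np.arange(4), 25)
T = rng.standard_normal((12, 2))
score = sse_objective(beta, labels, T, np.ones((4, 4)))

# sanity check: with identical voxels everywhere, only the L1 term remains
obj_null = sse_objective(np.zeros((8, 12)), np.repeat(np.arange(2), 4),
                         T, np.ones((2, 2)))
```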

The SSE method is similar in some respects to LDA. LDA analytically finds a projection hyperplane that maximizes Fisher's criterion, i.e., the ratio of interclass variance to intraclass variance (Xanthopoulos et al., 2013). Our objective function is related but differs in three important respects. First, LDA is primarily a classification method and projects c classes of input data into a space with c − 1 orthogonal dimensions. Equivalently, this can be thought of as finding c − 1 hyperplanes that best separate the classes. Since we solve our objective function numerically and seek the relationships between classes rather than their classification, we specify the dimensionality of the embedding space. Second, to encourage sparsity, we add L1 regularization of the discriminant vectors. Third, instead of between-class variance, we maximize the pairwise Euclidean distance between class centroids. This allows control over which class pairs are separated, by defining a weighting parameter over the pairwise distance matrix.

After optimization, SSE gives us two useful pieces of information. First, it provides the relative locations of the parcels in E. Any reliable structure in the arrangement of the parcels is of interest because the objective function itself makes no attempt to enforce any relationship beyond the parcels being statistically different from each other. Second, it provides the weights of the input features, which form the basis for separating the parcels. Together, these two pieces of information characterize the global relationships between all the parcels with respect to the stimulus features. The analysis therefore discovers the extent to which responses to simple color gratings provide functional signatures for visual cortex parcels defined by eccentricity, upper-versus-lower visual field, and retinotopic area.

Estimates of error in the SSE models were established by bootstrapping. The input fMRI dataset was resampled over independent runs 1,000 times, and a separate randomly initialized SSE model was fit on each resampling. For each model, the columns of T span a k-dimensional vector subspace E of the m-dimensional feature space. We are interested in the variability of E. However, the spanning components T may differ between fits even when E is nearly identical, since any E has infinitely many bases. For each run, we therefore find a new T* that is comparable across models. In a three-dimensional feature space, for example, E would be a plane spanned by the two columns of T, and we would be interested only in how that plane is angled in the space. For every independent run, we find a rotation matrix in E that minimizes the variance over all T, yielding T*. Variance over all T* will be nonzero only due to variance in E between models. This allows us to ignore random variance in the rotation of the basis and focus on the variance in T* caused by differences in the lower-dimensional representations of the full feature space. We use bootstrapping to estimate the distributions of the model parameters and the mean of the data projection onto E. For additional confirmation of model results, we performed cross-validation: the data were divided into two sets composed of even and odd runs, and the model was fit (i.e., T was estimated) on one dataset and tested (i.e., data were projected onto E) on the other, independent dataset.
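The basis-alignment step can be sketched as an orthogonal Procrustes problem: rotate each fitted basis within its own span to best match a reference, so that residual variability across bootstrap fits reflects differences in the subspace E itself rather than arbitrary rotations of the basis. This is a minimal sketch; `align_basis` is our name, and the paper's procedure minimizes variance over all fits jointly rather than matching a single reference:

```python
import numpy as np

def align_basis(T, T_ref):
    """Rotate the columns of T (m x k) within their span to best match
    T_ref, using the SVD solution to the orthogonal Procrustes problem:
    R = argmin_R ||T @ R - T_ref||_F over orthogonal R."""
    U, _, Vt = np.linalg.svd(T.T @ T_ref)
    R = U @ Vt                       # optimal k x k rotation/reflection
    return T @ R
```

If two fits span the same subspace but with rotated bases, alignment makes their T* matrices directly comparable.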

β coefficients of the stimuli from Experiments 3 and 4 in both monkeys were concatenated into a voxels × conditions matrix. Each voxel was assigned to a retinotopic area and an eccentricity preference. We ran two SSE analyses that differ in M. First, we maximized the separability of different ROIs across the four eccentricity parcels. To do this, we set M so as not to consider differences across the different retinotopic areas either within or between eccentricity levels; the only relationships considered were between different eccentricity levels of the same retinotopic area. This allowed us to find one component that best discriminates between more central and more peripheral parcels. Second, we set M to ignore differences across eccentricity, considering only the pairwise distances between all retinotopic areas within each eccentricity level. This has the effect of limiting differences between ROIs due to variation in the size of eccentricity representations. The positive and negative features of the recovered components were z-score contrasted and projected onto the inflated surface of M1 to show the topography of the response to the different component vectors. In these contrasts, the positive and negative features were scaled to sum to 1 and −1, respectively. Table 3 lists the SSE conditions and their corresponding stimuli, illustrated in Figure 2.
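The two weighting schemes for M can be sketched as boolean masks over parcel pairs (a hypothetical construction; `pair_mask` and its argument names are ours, not the authors'):

```python
import numpy as np

def pair_mask(roi, ecc, separate="eccentricity"):
    """Build a weight matrix M over parcel pairs.
    'eccentricity': weight only pairs from the same retinotopic area
    at different eccentricity levels (first analysis).
    'area': weight only pairs of different areas at the same
    eccentricity level (second analysis)."""
    roi, ecc = np.asarray(roi), np.asarray(ecc)
    same_roi = roi[:, None] == roi[None, :]
    same_ecc = ecc[:, None] == ecc[None, :]
    if separate == "eccentricity":
        return (same_roi & ~same_ecc).astype(float)
    return (~same_roi & same_ecc).astype(float)
```

With this construction, zeroed entries of M remove the corresponding centroid-pair distances from the objective, so those parcel pairs exert no pressure on the recovered components.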

Table 3. Conditions corresponding to feature names used in display of SSE discriminant/component vectors

Univariate analysis

For the univariate analyses, β coefficients for each condition were calculated for each run in which that stimulus condition was presented. Statistical tests used the β coefficients from each run as individual observations. Percentage signal change for each condition was calculated by dividing the β coefficient of the condition by the β coefficient of the baseline and multiplying by 100.
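The percentage-signal-change computation reduces to a one-liner (a trivial sketch; the argument names are ours):

```python
def percent_signal_change(beta_condition, beta_baseline):
    # %ΔS = (β_condition / β_baseline) × 100, per the text above
    return beta_condition / beta_baseline * 100.0
```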

We computed features that describe the extent to which each voxel was modulated by the LM, S, daylight, and antidaylight conditions using the fMRI responses to Experiment 3. Each condition was estimated by averaging the β coefficients computed from the two color-gray gratings that comprise the feature. For example, the LM feature was calculated using responses elicited by the 0° (pink–gray) grating and the response elicited by the 180° (cyan–gray) grating. Pairs of gratings that made up these features always appeared within the same run at the same saturation, and responses to them were averaged for subsequent statistical analysis.

In all plots of different univariate metrics (Figs. 3C,D, 8, 9), error bars were estimated via bootstrapping over 1,000 iterations. Each iteration's input β matrix was obtained by averaging N samples drawn with replacement from the set of responses to each feature, where N was the total number of stimulus blocks for that feature.
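The bootstrap behind these error bars can be sketched as follows (`bootstrap_means` is our name; 1,000 iterations as in the text):

```python
import numpy as np

def bootstrap_means(samples, iters=1000, seed=0):
    """For each iteration, draw N samples with replacement from the
    responses to a feature (N = number of stimulus blocks) and average;
    percentiles of the resulting means give the error bars."""
    rng = np.random.default_rng(seed)
    samples = np.asarray(samples)
    n = len(samples)
    idx = rng.integers(0, n, size=(iters, n))   # resample indices with replacement
    return samples[idx].mean(axis=1)
```

Error bars are then percentiles of the bootstrap means, e.g., `np.percentile(bootstrap_means(x), [2.5, 97.5])`.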

Contrast response functions (CRFs) were generated by plotting the fMRI signal change as a function of the Michelson cone contrast of the stimuli (Fig. 8A), computed as the vector length of the individual cone contrasts (Table 1), using the approach of Liu and Wandell (2005), and then averaged across the two monochromatic gratings. This calculation is detailed in Equation 2, where L, M, and S are the individual cone contrasts and the subscripts a and b indicate the two monochromatic grating types:

Contrast = [√(La² + Ma² + Sa²) + √(Lb² + Mb² + Sb²)] / 2 (Eq. 2)

So for LM_high_sat gratings, a would be the DKL 0°, 95% saturation grating, b would be the DKL 180°, 95% saturation grating, and the overall contrast would be 12.0. The CRF slopes and errors were estimated by fitting a linear regression model to the signal change of each feature as a function of cone contrast for each ROI, repeated for each bootstrapped sample (Fig. 8B). The statistical tests comparing the slopes were computed using a one-way ANCOVA. The difference in luminance bias for dorsal-versus-ventral parcels was computed by subtracting the bootstrapped luminance response from the bootstrapped color response for both ventral and dorsal regions and then subtracting the dorsal difference from the ventral difference (Fig. 9B).
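Equation 2 translates directly into code (a sketch; the (L, M, S) cone-contrast triplets come from Table 1, and the function name is ours):

```python
import math

def michelson_cone_contrast(cones_a, cones_b):
    """Average vector length of the (L, M, S) cone contrasts of the
    two monochromatic gratings a and b (Eq. 2)."""
    length = lambda c: math.sqrt(sum(x * x for x in c))
    return (length(cones_a) + length(cones_b)) / 2.0
```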

Surgical procedures

Surgical methods are the same as described in Lafer-Sousa et al. (2012).

Results

To what extent can retinotopic parcels be distinguished by fMRI responses to simple grating stimuli that vary in spatial frequency, color, and contrast? To answer this question, we collected fMRI responses to these stimuli in two macaque monkeys, and we fit an SSE model to the responses of cortical parcels defined by retinotopic area, upper-versus-lower visual field, and eccentricity preference of each voxel. The SSE model finds a sparse linear transformation (T) of the β coefficients for each stimulus condition that maximizes the pairwise distance between the means of the parcels while minimizing the variance within each parcel. As described in the Materials and Methods, the SSE model discovers a low-dimensional stimulus feature space that maximally separates the parcels, and the resulting organization of the parcels is informative about their functional relationships.

Functional signatures of eccentricity responses

We first used the SSE model to determine a functional signature of eccentricity, irrespective of the retinotopic area. To do this we disregarded the separation of visual area ROIs—setting the indexes in M that correspond to pairwise distances between ROIs within the same eccentricity level to zero. The result yields a single component along which the eccentricity parcels are systematically arranged (Fig. 3A). The feature contributions to the discriminating component (Fig. 3B, top) show that more foveal voxels were defined by higher relative weighting for responses to higher spatial frequency (LM) gratings and to a lesser extent high saturation antidaylight gratings and middle frequency S gratings. More peripheral voxels were characterized by higher relative weighting for responses to low-spatial-frequency (LMS) gratings and high saturation S gratings. Figure 3B projects the contrast between positive and negative weighted features of this component on the inflated cortical surface of M1 (Fig. 3B, bottom). The result discovers the foveal confluence of the retinotopic cortex (centered on the asterisk, compare with Fig. 1B). Foveal representations were also discovered further along the inferior temporal cortex, confirming prior reports using a univariate contrast (Lafer-Sousa and Conway, 2013).

Figure 3.

Functional signature of cortical representations of eccentricity recovered with SSE using responses to colored gratings varying in hue, saturation, and spatial frequency. A, One SSE model was fit on all fMRI data (Experiments 3 and 4) obtained in two macaque monkeys. Masking parameter M (Eq. 1) was set to constrain the model to separate parcels by the eccentricity preferences of the voxels and not their retinotopic area or hemifield (upper/lower) representation (see Materials and Methods). B, Top, The feature weights for the eccentricity-discriminating component. Only features with weight significantly different from zero are shown. Bottom, Projection of z-score contrast between positively and negatively weighted features on M1's cortical surface. C, The response to heterochromatic gratings varying in color (luminance, LM, and S) and spatial frequency (0.2, 4, and 8 CpD), across different eccentricity-preferring parcels. D, Same data as Figure 3C, replotted to display frequency response functions for luminance, LM, and S axes at each eccentricity level. E, The response to the highest-saturation color–gray gratings (0.5 CpD) within cortical parcels with different eccentricity preference.

The features that comprise the eccentricity component are consistent with the expectation that foveal representations of the visual cortex are tuned for higher spatial frequencies compared with peripheral representations, in both humans (Henriksson et al., 2009; Broderick et al., 2022) and monkeys (Schiller et al., 1976; De Valois et al., 1982). Peripheral representations, meanwhile, are expected to be more responsive to low spatial frequencies and S cone signals given that the relative proportion of S cones (Curcio et al., 1991) and the apparent saturation of S cone-modulating stimuli (Vanston and Crognale, 2018) increase with eccentricity (pre-retinal macular pigment may also play a role).

The results of the SSE model are multivariate. To compare the results with published reports that use univariate methods, we computed a univariate measure of the responses to the various stimulus conditions as a function of eccentricity. Figure 3C shows how responses to the 0.2, 4, and 8 CpD gratings vary as a function of eccentricity, for the three types of gratings defined by the three cardinal directions in cone-opponent color space (luminance, S, and L-M). Responses to low-frequency luminance gratings increased from fovea to periphery [slope, 0.30; p = 0.01; F test (df, 106)]; this result is consistent with an increase in response to low-spatial-frequency luminance gratings from fovea to periphery observed in the human visual cortex (Bayram et al., 2016). Responses to the higher spatial frequencies (4 and 8 CpD) decreased with eccentricity for both LM and LMS gratings (for 4 CpD, Lum slope, −0.22; p = 0.004; S − [L + M] slope, −0.06; p = 0.034; L-M slope, −0.35; p = 0.007; for 8 CpD, Lum slope, −0.13; p = 0.003; S − [L + M] slope, 0.02; p = 0.39; L-M slope, −0.11; p = 0.035; F tests; df, 106). Figure 3D provides another view on these data, plotting spatial frequency response functions for LMS, LM, and S gratings at each eccentricity level. For comparison, results obtained in humans with fMRI show gradual increases in responses for low-frequency (0.27 CpD) chromatic gratings as one goes from the fovea to the periphery and gradual decreases in responses to higher-frequency (4.4 CpD) L-M, but not S, gratings (D’Souza et al., 2016).

These relative patterns in response are leveraged in the multivariate analysis, which uncovers the combination of features that best separates fovea from periphery. Because the multivariate analysis is determined by the relative responses among a set of stimuli, its outcome is not always easy to anticipate from univariate analyses, which simply report the magnitude of responses to individual stimuli independent of the responses to any other stimuli. But as the analyses relating to eccentricity show, the outcome of the multivariate analysis is consistent with the univariate analysis.

To begin relating published perceptual measurements of color, contrast, and spatial frequency to cortical responses (which are typically reported with univariate analyses), we investigated the univariate responses across eccentricity to the highest-saturation, low-spatial-frequency gratings. Figure 3E shows the responses to the highest-saturation color–gray gratings (0.5 CpD) for each eccentricity level. Responses to luminance, daylight, and S gratings were lower for the fovea-preferring voxels, while responses to antidaylight and LM gratings were not different for fovea-preferring and periphery-preferring voxels (t test, response of foveal-vs-peripheral voxels, luminance p = 0.004; S − [L + M], p = 0.0006; L-M, p = 0.087; antidaylight, p = 0.97; daylight, p = 0.0016; Tukey's HSD). For comparison, Vanston and Crognale (2018) found that suprathreshold stimuli in the periphery, compared with the fovea, were perceived with lower contrast for LM stimuli and higher contrast for S stimuli. Despite there being no difference in antidaylight high-saturation response between the most central and most peripheral parcels, this condition appears significant for separating the eccentricity parcels in the SSE analysis. This is likely because the antidaylight condition elicits clearly different responses for the two intermediate eccentricity parcels.

The univariate and SSE analyses uncover patterns of response largely expected from fMRI in humans. Responses in human V1 elicited by LMS, S − [L + M], and L-M gratings with a spatial frequency of 0.55 CpD increase as one goes from the central 2° to 8–10° (D’Souza et al., 2016). Luminance gratings cause the greatest increase in response from foveal-preferring to peripheral-preferring voxels, while S − [L + M] gratings elicit the smallest increase. Mullen et al. (2007) show that S − [L + M] gratings, also with a spatial frequency of 0.5 CpD, elicit the greatest increase in response from 1°- to 6°-preferring voxels of human V1. They also report a decrease in response elicited by L-M gratings across eccentricity.

The univariate results show an asymmetry in the responses to daylight versus antidaylight colors as a function of eccentricity: daylight colors were better than antidaylight colors at separating fovea from periphery (Fig. 3E, bottom panel). As far as we are aware, no data have been published comparing responses to antidaylight and daylight gratings across eccentricity in human subjects. Goddard et al. (2010) did find generally higher fMRI responses to antidaylight gratings than daylight gratings across the visual cortex. The stimuli in that study were somewhat different from those used presently: they were presented on a luminance-contrast pedestal (not equiluminant), had lower mean luminance, increased in spatial frequency from the center to the edge, and were circular.

Control analyses

The SSE analysis includes a sparsity parameter, which is a scalar coefficient on an L1 (or Lasso) regularization term in the objective function. This can improve generalization and interpretability, although the specific setting for the sparsity parameter is not well prescribed. To test the impact of the sparsity setting, we performed the eccentricity analysis by setting the sparsity parameter to zero. The results are comparable with those obtained with the sparsity setting of 0.005 (Fig. 4). The six strongest features are the same (compare with Fig. 3). The inclusion of a sparsity value in the analysis recovers two additional components, both having relatively weak contributions. We include the sparsity setting throughout the analysis even though it does not have a substantial impact on the conclusions because it appears to extract more information and may be useful if the SSE approach is adopted for other more complex neuroscience applications.

Figure 4.

Cortical representations of eccentricity recovered with SSE without L1 regularization, i.e., the sparsity parameter is set to 0. A, One SSE model was fit on all fMRI data (Experiments 3 and 4) obtained in two macaque monkeys. Masking parameter M was set to constrain the model to separate parcels by eccentricity preferences of the voxels and not their retinotopic area or hemifield (upper/lower) representation (see Materials and Methods). The response to the component increases with eccentricity (linear regression slope, 0.33; p = 0.0). B, The feature weights for the eccentricity-discriminating component. Only features with weight significantly different from zero are shown.

We also assessed the reliability of the results by cross-validation. We fit (i.e., found component T) and evaluated (i.e., projected the parcels onto the subspace spanned by T) the model using separate datasets, A and B (Fig. 5). A and B were alternate runs of fMRI data collected in Experiments 3 and 4. The cross-validated results show higher variability than the results obtained with all the data due to the decreased power in the subdivided dataset (evident as relatively larger error bars and fewer significant components), but the main conclusions drawn from the analysis are consistent with those obtained using all the data (Fig. 3).

Figure 5.

Cortical representations of eccentricity recovered with SSE validated across two independent datasets, A and B, with L1 regularization, i.e., sparsity parameter is set to 0.005. A, One SSE model was fit on all fMRI data (Experiments 3 and 4) in Set A and the projection of Set B is shown (top), and one was fit on Set B and the projection of Set A is shown (bottom). B, The feature weights for the eccentricity-discriminating component. Only features with weight significantly different from zero are shown.

Functional signatures of retinotopic visual areas

We next used the SSE approach to test for functional signatures of the retinotopic visual areas and their dorsal-versus-ventral subdivisions. To do this, we fit the SSE model with two components using information matched for each eccentricity preference (Fig. 6), which removes the impact of eccentricity on the voxel responses. The result of this analysis recovers a striking pattern. The first component separated parcels along the putative visual-processing hierarchy, from left-to-right (x-axis) in each panel of Figure 6A: V1, V2, and V3/V4 (upper triangle, circle, inverted triangle, and cross). The second component separated the ventral and dorsal subdivisions of each visual area, top-to-bottom (y-axis): V1d, V2d, V3d, and V4d showed more positive weight compared with V1v, V2v, V3v, and V4v (gray symbols are above black symbols). The second component grouped the two dorsal areas, V3a (diamond) and MT+ (square), with the dorsal subdivisions of the other retinotopic areas.

Figure 6.

Functional signatures of cortical representations of retinotopic areas and their dorsal/ventral subdivisions recovered with SSE using responses to colored gratings varying in hue, saturation, and spatial frequency. A, The mean voxel response for retinotopic areas and their dorsal/ventral subdivisions along the two maximally discriminating components discovered by the SSE model, for voxels defined by eccentricity preference (inset icon; most foveal-preferring in the leftmost panel, most peripheral-preferring in the rightmost panel). Error bars are 95% confidence intervals via 1,000 bootstrapping iterations. B, The difference in the Component 2 response between dorsal and ventral subdivisions of each retinotopic area. Violin plot distributions estimated by 1,000 bootstrapping iterations. The difference increases along the visual hierarchy. C, The feature contributions of Components 1 and 2. Only features significantly different from zero are shown. D, A surface projection of the z-score contrast between the two features in Component 1 with the most positive weights and the most negative weights. E, As in D, a surface projection of the z-score contrast between the two features in Component 2 with the most positive weights and most negative weights. The dotted line shows the division between dorsal and ventral parcels.

The difference between ventral and dorsal parcels along Component 2 was progressively greater along the putative visual-processing hierarchy (Fig. 6B; slope, 0.09; p = 1.1 × 10−98). A dorsal/ventral asymmetry in V1 has not previously been described; we note that it was only evident in the present report using the multivariate analysis, which recovers structure across visual areas and eccentricities. The value for Component 2 was significantly different for dorsal versus ventral V1 only for the three peripheral parcels (Fig. 6A).

Figure 6C shows the feature contributions for the two components. Component 1 corresponds to strong differences between negative-weight features (lm_low_freq, lm_med_sat, lms_mid_freq, and daylight_med_sat) versus positive-weight features (lm_high_freq, daylight_low_sat, lms_low_sat). Neither the set of positive-weight features nor the set of negative-weight features is explained by a single common feature property. For example, the negative-weight features include stimuli with relatively higher saturation (the lm_low_freq stimulus is a heterochromatic grating with reddish and greenish bands that has the highest color contrast of any stimulus we used) as well as stimuli that have no color contrast (e.g., the lms_mid_freq stimulus), while the positive-weight features include stimuli with both low and high spatial frequency. The positive-weight features are low saturation or low color contrast, except the lm_high_freq stimulus, which has high saturation but also high spatial frequency. The negative-weight features are high color contrast or high saturation and relatively lower spatial frequency compared with the positive-weight features. Component 2 predominantly reflects differences in responses to color versus luminance, particularly responses to daylight-axis colors. We underscore that these analyses do not recover the univariate features that most strongly drive voxels but rather uncover the sets of relative differences (reflecting interactions among features) that capture the functional signatures of the voxels. These complex functional signatures can be explored by investigating univariate responses, as we do in subsequent analyses (Figs. 8, 9).

To help visualize the results, we projected on the cortical surface of the right hemisphere of M1 the contrast map generated by contrasting positive-weight features with negative-weight features, for Component 1 (Fig. 6D) and Component 2 (Fig. 6E). The surface projection of Component 1 shows a gradient progressing from posterior (bluer) to anterior (redder), whereas the surface projection of Component 2 is predominantly redder in dorsal parcels (above the white line) and bluer in ventral parcels (below the white line).

We validate the SSE results by cross-validation: fitting and evaluating on separate independent datasets, A and B, composed of alternating runs of fMRI Experiments 3 and 4 (Fig. 7). As in the earlier cross-validation, variability increases due to the decreased power in the subdivided dataset (evident as relatively larger error bars and fewer significant components), but the main conclusions are consistent with those obtained using all the data (compare Fig. 7 with Fig. 6).

Figure 7.

Functional signatures of retinotopic areas and their dorsal/ventral subdivisions recovered with SSE validated across two independent datasets, A and B, with L1 regularization, i.e., sparsity parameter is set to 0.005. A, One SSE model was fit on all fMRI data (Experiments 3 and 4) in Set A and the projection of Set B for each eccentricity level is shown (top), and one was fit on Set B and the projection of Set A for each eccentricity level is shown (bottom). B, The difference in the Component 2 response between dorsal and ventral subdivisions of each retinotopic area. Violin plot distributions estimated by 1,000 bootstrapping iterations. The difference increases along the visual hierarchy (top, slope, 0.069; p = 1 × 10−26; bottom, slope, 0.0823; p = 1 × 10−31). C, The feature weights for the eccentricity-discriminating components fit on Set A (top) and Set B (bottom). Only features with weight significantly different from zero are shown.

To better understand what underlies the functional signatures of visual areas and their dorsal/ventral subdivisions, we related the outcomes of the SSE analyses to the univariate analyses. But once again, we underscore that SSE recovers interactions among features that constitute the functional signatures, not a straightforward combination of univariate features. The output of the SSE analysis might not be easily visualized as combinations of univariate results, even though we believe it provides important clues regarding the computational role(s) of the cortical parcels.

In the SSE analysis that separated parcels by the visual area (Fig. 6C, Component 1), negative-weight features correspond to more posterior areas and positive-weight features correspond to more anterior areas. Component 1 in that analysis was defined by one strong positive-weight feature that has no chromatic contrast, while the two strongest negative-weight features have relatively high chromatic contrast. This pattern of results suggests that differences in chromatic CRFs are partially what distinguish posterior from anterior visual areas. We analyzed the fMRI responses to the different saturation stimuli as a function of visual area and dorsal/ventral subdivision (Figs. 8, 9). Figure 8A shows the CRFs for each visual area to each color axis and luminance gratings. We determined the slope of the CRFs for stimuli defined by each color axis within each visual area (slopes were a line fit to the fMRI response as a function of contrast; see Materials and Methods; Fig. 8B). There was a main effect of parcel on the CRF slope; CRFs for all color axes were progressively shallower from posterior to anterior visual areas.
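The slope estimate described above reduces to an ordinary least-squares line fit per ROI and color axis (a sketch using np.polyfit; variable names are ours):

```python
import numpy as np

def crf_slope(cone_contrast, signal_change):
    """Fit fMRI signal change as a linear function of cone contrast
    and return the slope of the contrast response function."""
    slope, _intercept = np.polyfit(cone_contrast, signal_change, deg=1)
    return slope
```

Repeating this fit over bootstrapped samples yields the slope error estimates shown in Figure 8B.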

Figure 8.

Univariate analyses of the responses of retinotopically defined parcels to colored gratings varying in saturation or spatial frequency. A, CRFs for LM, S, luminance, daylight, and antidaylight gratings in each ROI. Cone contrast is the average Michelson cone contrast of the two monochromatic gratings on each axis in DKL color space. B, CRF slope for the fMRI responses to colors defined by various directions through color space (top labels) decreases from V1 to V4 (error bars represent 95% C.I.); CRFs for V3a and MT are shown on the right. The CRF is progressively shallower for areas from posterior to anterior. C, Spatial frequency response functions for LM, S, and LMS gratings in each ROI.

Figure 9.

A, The percentage fMRI signal change in response to the gratings defined by various directions through color space (key) for the dorsal and ventral subdivisions of V1, V2, V3, and V4. Each bar shows the average response to the three gratings of highest saturation for each color axis (Fig. 3C). The responses to gratings modulated along the luminance axis were significantly greater than the responses to gratings modulated along the daylight axis in dorsal parcels, but not ventral parcels (two-way ANOVA, main effects of color, F = 58; p = 0; region, F = 6.3; p = 0.01; and an interaction of region and color, F = 2.7; p = 0.03; post hoc test comparing response to luminance gratings vs daylight gratings, Tukey's HSD, dorsal luminance–dorsal daylight, p = 2.8 × 10−6; ventral luminance–ventral daylight, p = 0.93). B, Violin plot comparing dorsal-versus-ventral regions for the difference between the responses to gratings modulating along the luminance axis versus the responses to gratings modulating along the other color axes (from left to right, daylight, S, L-M, and antidaylight). Dorsal parcels show a bias for luminance regardless of what color is used as the comparative response (note the p values in the plot), but the bias is greatest when compared with responses along the daylight axis; violin plot distributions estimated by 1,000 bootstrap iterations. C, The percentage signal change for gratings defined by each direction through color space, combined across all saturation levels. Gratings along the daylight axis elicited a greater response than colored gratings along all other axes (excluding luminance) for responses averaged across all retinotopic parcels (left plot, daylight vs S, p = 0.001; daylight vs LM, p = 8.9 × 10−10; daylight vs antidaylight, p = 2.9 × 10−10). Right panels show this analysis for each visual area separately (multiple comparisons render statistical tests nonsignificant). Error bars are 95% confidence intervals computed by bootstrapping across runs.

Component 1 also recovers significant weight for the features corresponding to low-spatial-frequency luminance and LM gratings. We show spatial frequency response functions in Figure 8C. Response to low-spatial-frequency luminance gratings was relatively lower in V1 compared with more anterior areas, and response to lower frequency LM gratings was relatively higher in V1 compared with more anterior areas. Note that the difference between low-spatial-frequency LMS and LM gratings increases along the visual hierarchy, a feature that is uncovered by the SSE analysis in Component 1 (Fig. 6C).

The SSE model also suggests that differential responses to daylight and luminance gratings separate dorsal and ventral parcels. To evaluate this directly, we plot the fMRI responses to each color grating averaged across the four saturation levels for dorsal-versus-ventral parcels (Fig. 9A). There was a difference in fMRI responses of dorsal and ventral parcels [two-way ANOVA, main effects of color (F = 58; p = 0), region (F = 6.3; p = 0.01), and to a lesser extent an interaction of parcel and color (F = 2.7; p = 0.03)]. Post hoc tests reveal that although responses to individual color conditions did not differ between dorsal and ventral parcels (Tukey's HSD, p > 0.05), responses to luminance-contrast gratings were greater than responses to daylight gratings for dorsal but not ventral parcels (Tukey's HSD, dorsal luminance–dorsal daylight, p = 2.8 × 10−6; ventral luminance–ventral daylight, p = 0.93).

To investigate the increase in color response relative to luminance response in the ventral subdivisions compared with the dorsal subdivisions, we computed a univariate feature specified by the SSE results. The feature is the difference in luminance bias between dorsal and ventral parcels, where “luminance bias” was computed as the difference in response to luminance gratings and the response to each of the different colored gratings. The results are shown in Figure 9. The dorsal parcels showed a luminance bias regardless of which color grating was used to assess it, although not all the comparisons were significant (daylight-vs-luminance, p = 0.03; LM-vs-luminance, p = 0.05; S-vs-luminance, p = 0.06; antidaylight-vs-luminance, p = 0.11; one-sided bootstrap t tests; Fig. 9B).

The signatures of retinotopic areas and their dorsal/ventral subdivisions recovered by SSE consistently assigned high weight to the responses to daylight-axis modulating gratings (daylight-axis gratings are weighted highly in both Components 1 and 2; Fig. 6C). So, we investigated whether cortical responses were higher to gratings that modulated along the daylight axis compared with gratings that modulated along other directions in color space. Across all retinotopic parcels, the average fMRI response was highest to luminance-modulating gratings (Fig. 9C, open bars). Among the responses to colored gratings, the highest response was to gratings that modulate along the daylight axis (Fig. 9C, left panel; daylight vs S, p = 0.001; daylight vs LM, p = 9 × 10−10; daylight vs antidaylight, p = 3 × 10−10). This pattern was consistent in all visual areas (Fig. 9C, right panels) but varied somewhat across areas (two-way ANOVA, main effects of both area and color axis; F = 85; p = 0.0 for ROI; F = 61; p = 0.8 for color axis). Figure 9C also shows the strong luminance bias of V3a and MT as reported previously (Conway and Tsao, 2005).

Discussion

This study presents fMRI responses of the macaque visual cortex to a battery of gratings varying in hue, contrast, and spatial frequency. The data are analyzed both with traditional univariate methods, such as contrast-sensitivity curves, and with a multivariate dimensionality-reduction approach that we call SSE. Our goal was to discover functional signatures for retinotopic visual areas and their organization by eccentricity and upper/lower visual hemifield. In the SSE analysis, each stimulus condition constitutes one dimension in a high-dimensional space; we used SSE to recover three axes, or components, through this space, each defined by a multivariate combination of stimulus features. One component captured the eccentricity representation; the other two separated visual areas and their dorsal/ventral subdivisions. The results offer insight into the roles of different visual areas, the origin of behavioral asymmetries between the upper and lower visual fields, and the functional similarity between macaques and humans. More broadly, the results provide a proof of principle of the utility of the SSE approach in neuroscience.
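
The excerpt does not give the SSE objective in code, but its role — finding a few discriminative axes through the stimulus-condition space — can be sketched with a related, classical technique: a hand-rolled linear discriminant analysis on a toy parcel × condition matrix. All names and numbers here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "parcel x condition" response matrix: 3 groups of parcels
# (e.g., three visual areas) measured under 6 stimulus conditions.
n_per, n_cond = 40, 6
centers = rng.normal(0, 1.5, size=(3, n_cond))
X = np.vstack([c + rng.normal(0, 0.5, size=(n_per, n_cond)) for c in centers])
y = np.repeat(np.arange(3), n_per)

# LDA-style component recovery: directions in stimulus space that best
# separate the labeled groups (a stand-in for the SSE components).
mean_all = X.mean(axis=0)
Sw = np.zeros((n_cond, n_cond))  # within-class scatter
Sb = np.zeros((n_cond, n_cond))  # between-class scatter
for g in range(3):
    Xg = X[y == g]
    mg = Xg.mean(axis=0)
    Sw += (Xg - mg).T @ (Xg - mg)
    d = (mg - mean_all)[:, None]
    Sb += len(Xg) * (d @ d.T)

# Solve the generalized eigenproblem Sw^-1 Sb w = lambda w.
evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
order = np.argsort(evals.real)[::-1]
components = evecs.real[:, order[:2]]  # two discriminative "signatures"
proj = X @ components                  # parcels embedded in signature space
```

Each column of `components` weights the stimulus conditions, analogous to the multivariate combinations of stimulus features that define the SSE components.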

A functional signature of cortical responses defined by eccentricity

To recover a component related to eccentricity, we set the SSE model to disregard the distance between visual areas and their ventral and dorsal subdivisions. The result is a single component that robustly separates the representation of the fovea from that of the periphery (Fig. 3). The features of the component are consistent with expectations: foveal representations are tuned to higher spatial frequencies than peripheral representations (Schiller et al., 1976; De Valois et al., 1982; Henriksson et al., 2009; Broderick et al., 2022), and peripheral representations are relatively more responsive to S-cone signals (Curcio et al., 1991; Vanston and Crognale, 2018). The result also implies that, across individuals, the pattern of responses across retinotopic cortex is predictive of stimulus color; in other words, the brain response to a given color is broadly comparable from one person to the next, as predicted from other neurophysiological measurements (Rosenthal et al., 2021).
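
How might a model be "set to disregard" particular separations between parcels? One plausible, hypothetical implementation — invented here for illustration, not taken from the authors' SSE code — is to mask out the corresponding pairwise terms in an embedding loss, so that masked pairs contribute nothing to the fit:

```python
import numpy as np

def masked_pairwise_loss(emb, same_group, mask):
    """Toy embedding objective: pull same-group parcels together and push
    different-group parcels apart, but only for pairs where mask is True.
    Pairs with mask False are "disregarded" and contribute zero loss."""
    diff = emb[:, None, :] - emb[None, :, :]
    d2 = (diff ** 2).sum(-1)                      # squared pairwise distances
    attract = np.where(same_group, d2, 0.0)       # same group: minimize distance
    repel = np.where(~same_group,                 # different group: hinge push
                     np.maximum(0.0, 1.0 - np.sqrt(d2)) ** 2, 0.0)
    return ((attract + repel) * mask).sum()

# Example: with every pair masked out, the loss vanishes, so gradients
# would not drive the embedding to represent that axis of separation.
emb = np.array([[0.0, 0.0], [0.5, 0.0]])
same_group = np.eye(2, dtype=bool)
ignore_all = np.zeros((2, 2), dtype=bool)
```

Under this sketch, masking the pairs that differ only in eccentricity would leave the recovered components free to encode the remaining (area and dorsal/ventral) structure, and vice versa.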

Functional signatures of retinotopic area and upper/lower visual field representations

To recover components unrelated to eccentricity, we set the SSE model to disregard the separation of parcels with different eccentricity preferences. The result was striking: the model discovered two components that discriminated visual areas by their anatomical location and their dorsal/ventral subdivisions (Fig. 6). Components recovered in the SSE analysis correspond to multivariate combinations of the stimuli, which likely reflects the fact that visual neurons are sensitive along multiple stimulus dimensions. For example, parvocellular LGN cells, which provide a major input to the visual cortex, show sharper spatial frequency tuning when tested with luminance contrast than with color (Wiesel and Hubel, 1966). The functional signature of Component 1, which distinguished posterior areas from anterior areas, also reflects the flatter contrast-response functions observed in more anterior areas (Boynton et al., 1996; Olman et al., 2004; Buracas and Boynton, 2007; Tregillus et al., 2021). The functional signature of Component 2, meanwhile, discriminated dorsal parcels from ventral parcels, assigning strong weight to chromatic features, especially daylight colors. This result extends to macaques an observation made in humans: compared with dorsal partitions, ventral partitions across cortical areas generally show relatively greater responses to color (Goddard et al., 2011).

Dorsal V1 comprises more cortical territory than ventral V1 in both monkeys (Van Essen et al., 1984) and humans (Silva et al., 2018). This asymmetry correlates with higher contrast sensitivity in the lower visual field (Himmelberg et al., 2023; recall that dorsal V1 maps the lower visual field). Component 2 here confirms the higher contrast sensitivity of dorsal parcels. But Component 2 cannot be explained solely by the higher contrast sensitivity of dorsal parcels, because its features were also related to color modulation: two of the Component 2 features involve daylight-axis colors. Because the color statistics of objects correspond largely to daylight-axis colors (Rosenthal et al., 2018), we speculate that the present results reflect specializations not only of dorsal parcels for higher contrast sensitivity but also of ventral parcels for better color-based detection of behaviorally relevant parts of scenes (objects).

Asymmetries in color-versus-luminance responses of dorsal and ventral subdivisions of macaque V4 have not been found by others (Wade et al., 2008). The present results suggest that measuring colors defined by intermediate axes, along with controls for eccentricity, may be necessary to make this observation. The present results also suggest that dorsal parcels are separated from ventral parcels in all visual areas, but to a progressively greater extent along the posterior-to-anterior sequence of areas, such that the dorsal/ventral asymmetry in V1 is evident only in the multivariate pattern of results, not in the univariate analyses. This multivariate pattern suggests that the asymmetry between dorsal and ventral V4 may be inherited and amplified from V1, just as the foveal bias of V1 is amplified by V4 (Motter, 2009).

SSE as a probe of monkey–human cortical homologies

To what extent are macaque and human visual areas homologous? Homology is reasonably well supported for V1, V2, V3, and inferior temporal areas (Lafer-Sousa et al., 2016). But views conflict regarding V4 (Wade et al., 2002; Fize et al., 2003; Brewer et al., 2005; Larsson and Heeger, 2006; Hansen et al., 2007; Winawer et al., 2010; Roe et al., 2012; Vanduffel et al., 2014; Winawer and Witthoft, 2015). The argument for species divergence of V4 has taken two routes. The first concerns the retinotopic organization of the candidate dorsal and ventral parcels (Tootell, 2001; Wade et al., 2002; Brewer et al., 2005; Larsson and Heeger, 2006; Hansen et al., 2007; Winawer et al., 2010; Winawer and Witthoft, 2015). The second concerns the extent of color sensitivity: in humans, candidate dorsal V4, to the extent it can be identified, shows lower responsivity to color than ventral V4, whereas dorsal and ventral portions of macaque V4 have been reported to show comparable responses to color (Goddard et al., 2011). Here, by contrast, we find that dorsal and ventral partitions of monkey V4 diverge in color responsivity, with dorsal regions less responsive to color than ventral regions. Although the present results do not resolve whether V4 is identical in macaques and humans, they are consistent with V4 homology across the two species, and they underscore the utility of multivariate tools such as SSE in addressing the question.

The origin of dorsal/ventral functional asymmetries

Could the dorsal/ventral asymmetry be inherited from asymmetric retinal photoreceptor distributions? The inferior retina has slightly higher cone density than the superior retina in both humans (Curcio et al., 1991; Song et al., 2011) and macaques (Perry and Cowey, 1985; Packer et al., 1999), consistent with the higher chromatic responses of the ventral retinotopic parcels. The superior retina also has relatively more rods than the inferior retina, yielding a higher rod-to-cone ratio (Curcio and Allen, 1990); this asymmetry may contribute to the contrast-response differences observed between dorsal and ventral parcels. But retinal factors cannot fully explain the cortical asymmetries, for at least three reasons. First, retinal ganglion cell density is markedly higher in the superior retina (Curcio and Allen, 1990; Watson, 2014), opposite to the asymmetry in cone density. Second, cone distributions and preretinal optical factors are insufficient to explain asymmetries in visual performance (Kupers et al., 2019). Third, differences in cone density cannot account for the functional asymmetries between dorsal and ventral parcels observed for daylight versus antidaylight colors. We hypothesize that the dorsal/ventral asymmetries across retinotopic cortex arise from selective sampling of geniculate input by V1, which is then amplified along the cortical hierarchy.

Just as the retinotopic organization of the cortex determines aspects of visual behavior, we expect that the asymmetry of dorsal-versus-ventral parcels also corresponds to computational objectives of the cortex. Responses to daylight colors appear to distinguish ventral and dorsal parcels, particularly in V3 and V4 (Fig. 9A). One possibility is that this dorsal/ventral asymmetry reflects an adaptation to differences in natural scene statistics between the upper and lower visual fields: it implies that information defined by daylight colors is more useful for visual discrimination in the upper visual field than in the lower visual field. Daylight-axis colors are important for object recognition and scene segmentation (Lafer-Sousa et al., 2012; Pearce et al., 2014; Rosenthal et al., 2018). A behavioral advantage for color-in-service-of-object vision in the upper visual field would parallel the upper-visual-field advantages reported for face perception (Quek and Finkbeiner, 2014, 2015; Tsurumi et al., 2022) and shape perception (Zito et al., 2016).

Broader applications of the SSE method

Neural tuning properties differentiate retinotopic areas, as discovered using parametrically generated texture stimuli (Freeman et al., 2013; Ziemba et al., 2016). Building on the conceptual framework established by Hubel and Wiesel, which considers complex stimulus selectivity as a chain of computations starting with punctate center-surround receptive fields of retinal ganglion cells, our work uses SSE to determine whether visual areas, not just neurons, can be characterized by functional signatures constructed from simpler building blocks. The present results serve as a proof-of-principle that SSE could be fruitfully applied to identify functional signatures using building blocks beyond gratings. For instance, we aim to use this approach to identify abstract rules common to various subregions (color-biased domains, face patches, place-biased domains) within each of the four stages of the inferior temporal cortex.

Footnotes

  • We thank Rosa Lafer-Sousa for help collecting the functional magnetic resonance imaging data and Eli Merriam, Chris Baker, Christian Quaia, and two anonymous reviewers for their constructive feedback on drafts of this work. We thank L. Wald and A. Mareyam for providing the four-channel magnetic resonance coil. This research was supported by the National Eye Institute Intramural Research Program, as well as grants from the National Science Foundation (0918064 to B.R.C.), the US National Institutes of Health (EY023322 to B.R.C.), the Whitehall Foundation (B.R.C.), and Wellesley College (B.R.C.). The imaging was carried out at the Athinoula A. Martinos Center for Biomedical Imaging at the Massachusetts General Hospital, using resources provided by the Center for Functional Neuroimaging Technologies (P41EB015896) and a P41 Biotechnology Resource Grant supported by the National Institute of Biomedical Imaging and Bioengineering. This work also involved the use of instrumentation supported by the US National Institutes of Health Shared Instrumentation Grant Program and/or High-End Instrumentation Grant Program (S10RR021110).

  • The authors declare no competing financial interests.

  • Correspondence should be addressed to Bevil R. Conway at bevil.conway{at}nih.gov or Spencer R. Loggia at spencer.loggia{at}nih.gov.

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International license, which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly attributed.

References

  1. Arcaro MJ, Schade PF, Vincent JL, Ponce CR, Livingstone MS (2017) Seeing faces is necessary for face-domain formation. Nat Neurosci 20:1404–1412. https://doi.org/10.1038/nn.4635
  2. Avants BB, Tustison NJ, Song G, Cook PA, Klein A, Gee JC (2011) A reproducible evaluation of ANTs similarity metric performance in brain image registration. Neuroimage 54:2033–2044. https://doi.org/10.1016/j.neuroimage.2010.09.025
  3. Bayram A, Karahan E, Bilgiç B, Ademoglu A, Demiralp T (2016) Achromatic temporal-frequency responses of human lateral geniculate nucleus and primary visual cortex. Vision Res 127:177–185. https://doi.org/10.1016/j.visres.2016.09.001
  4. Boynton GM, Engel SA, Glover GH, Heeger DJ (1996) Linear systems analysis of functional magnetic resonance imaging in human V1. J Neurosci 16:4207–4221. https://doi.org/10.1523/jneurosci.16-13-04207.1996
  5. Brewer AA, Liu J, Wade AR, Wandell BA (2005) Visual field maps and stimulus selectivity in human ventral occipital cortex. Nat Neurosci 8:1102–1109. https://doi.org/10.1038/nn1507
  6. Brewer AA, Press WA, Logothetis NK, Wandell BA (2002) Visual areas in macaque cortex measured using functional magnetic resonance imaging. J Neurosci 22:10416–10426. https://doi.org/10.1523/jneurosci.22-23-10416.2002
  7. Broderick WF, Simoncelli EP, Winawer J (2022) Mapping spatial frequency preferences across human primary visual cortex. J Vis 22:3. https://doi.org/10.1167/jov.22.4.3
  8. Buracas GT, Boynton GM (2007) The effect of spatial attention on contrast response functions in human visual cortex. J Neurosci 27:93–97. https://doi.org/10.1523/jneurosci.3162-06.2007
  9. Carrasco M, McElree B (2001) Covert attention accelerates the rate of visual information processing. Proc Natl Acad Sci U S A 98:5363–5367. https://doi.org/10.1073/pnas.081074098
  10. Conway BR (2018) The organization and operation of inferior temporal cortex. Annu Rev Vis Sci 4:381. https://doi.org/10.1146/annurev-vision-091517-034202
  11. Conway BR, Tsao DY (2005) Color architecture in alert macaque cortex revealed by fMRI. Cereb Cortex 16:1604–1613. https://doi.org/10.1093/cercor/bhj099
  12. Cox DD, Savoy RL (2003) Functional magnetic resonance imaging (fMRI) "brain reading": detecting and classifying distributed patterns of fMRI activity in human visual cortex. Neuroimage 19:261–270. https://doi.org/10.1016/s1053-8119(03)00049-1
  13. Curcio CA, Allen KA (1990) Topography of ganglion cells in human retina. J Comp Neurol 300:5–25. https://doi.org/10.1002/cne.903000103
  14. Curcio CA, Allen KA, Sloan KR, Lerea CL, Hurley JB, Klock IB, Milam AH (1991) Distribution and morphology of human cone photoreceptors stained with anti-blue opsin. J Comp Neurol 312:610–624. https://doi.org/10.1002/cne.903120411
  15. Derrington AM, Krauskopf J, Lennie P (1984) Chromatic mechanisms in lateral geniculate nucleus of macaque. J Physiol 357:241–265. https://doi.org/10.1113/jphysiol.1984.sp015499
  16. De Valois RL, Albrecht DG, Thorell LG (1982) Spatial frequency selectivity of cells in macaque visual cortex. Vision Res 22:545–559. https://doi.org/10.1016/0042-6989(82)90113-4
  17. D'Souza DV, Auer T, Frahm J, Strasburger H, Lee BB (2016) Dependence of chromatic responses in V1 on visual field eccentricity and spatial frequency: an fMRI study. J Opt Soc Am A 33:A53. https://doi.org/10.1364/josaa.33.000a53
  18. Fize D, Vanduffel W, Nelissen K, Denys K, d'Hotel CC, Faugeras O, Orban GA (2003) The retinotopic organization of primate dorsal V4 and surrounding areas: a functional magnetic resonance imaging study in awake monkeys. J Neurosci 23:7395–7406. https://doi.org/10.1523/JNEUROSCI.23-19-07395.2003
  19. Freeman J, Ziemba CM, Heeger DJ, Simoncelli EP, Movshon JA (2013) A functional and perceptual signature of the second visual area in primates. Nat Neurosci 16:974–981. https://doi.org/10.1038/nn.3402
  20. Gattass R, Sousa A, Gross C (1988) Visuotopic organization and extent of V3 and V4 of the macaque. J Neurosci 8:1831–1845. https://doi.org/10.1523/JNEUROSCI.08-06-01831.1988
  21. Goddard E, Mannion DJ, McDonald JS, Solomon SG, Clifford CWG (2011) Color responsiveness argues against a dorsal component of human V4. J Vis 11:3. https://doi.org/10.1167/11.4.3
  22. Goddard E, Solomon S, Clifford C (2010) Adaptable mechanisms sensitive to surface color in human vision. J Vis 10:17. https://doi.org/10.1167/10.9.17
  23. Gruner W (2010) MANCOVAN. MATLAB Central File Exchange. Available at: https://www.mathworks.com/matlabcentral/fileexchange/27014-mancovan
  24. Hansen KA, Kay KN, Gallant JL (2007) Topographic organization in and near human visual area V4. J Neurosci 27:11896–11911. https://doi.org/10.1523/JNEUROSCI.2991-07.2007
  25. Hasson U, Avidan G, Deouell LY, Bentin S, Malach R (2003) Face-selective activation in a congenital prosopagnosic subject. J Cogn Neurosci 15:419–431. https://doi.org/10.1162/089892903321593135
  26. Henriksson L, Hyvärinen A, Vanni S (2009) Representation of cross-frequency spatial phase relationships in human visual cortex. J Neurosci 29:14342–14351. https://doi.org/10.1523/JNEUROSCI.3136-09.2009
  27. Himmelberg MM, Winawer J, Carrasco M (2023) Polar angle asymmetries in visual perception and neural architecture. Trends Neurosci 46:445–458. https://doi.org/10.1016/j.tins.2023.03.006
  28. Jung B, Taylor PA, Seidlitz J, Sponheim C, Perkins P, Ungerleider LG, Glen D, Messinger A (2021) A comprehensive macaque fMRI pipeline and hierarchical atlas. Neuroimage 235:117997. https://doi.org/10.1016/j.neuroimage.2021.117997
  29. Kingma DP, Ba J (2017) Adam: a method for stochastic optimization. arXiv:1412.6980.
  30. Klink PC, Chen X, Vanduffel W, Roelfsema PR (2021) Population receptive fields in nonhuman primates from whole-brain fMRI and large-scale neurophysiology in visual cortex. Elife 10:e67304. https://doi.org/10.7554/eLife.67304
  31. Kolster H, Janssens T, Orban GA, Vanduffel W (2014) The retinotopic organization of macaque occipitotemporal cortex anterior to V4 and caudoventral to the middle temporal (MT) cluster. J Neurosci 34:10168–10191. https://doi.org/10.1523/JNEUROSCI.3288-13.2014
  32. Kupers ER, Benson NC, Carrasco M, Winawer J (2022) Asymmetries around the visual field: from retina to cortex to behavior. PLoS Comput Biol 18:e1009771. https://doi.org/10.1371/journal.pcbi.1009771
  33. Kupers ER, Carrasco M, Winawer J (2019) Modeling visual performance differences 'around' the visual field: a computational observer approach. PLoS Comput Biol 15:e1007063. https://doi.org/10.1371/journal.pcbi.1007063
  34. Lafer-Sousa R, Conway BR (2013) Parallel, multi-stage processing of colors, faces and shapes in macaque inferior temporal cortex. Nat Neurosci 16:1870–1878. https://doi.org/10.1038/nn.3555
  35. Lafer-Sousa R, Conway BR, Kanwisher NG (2016) Color-biased regions of the ventral visual pathway lie between face- and place-selective regions in humans, as in macaques. J Neurosci 36:1682–1697. https://doi.org/10.1523/JNEUROSCI.3164-15.2016
  36. Lafer-Sousa R, Liu YO, Lafer-Sousa L, Wiest MC, Conway BR (2012) Color tuning in alert macaque V1 assessed with fMRI and single-unit recording shows a bias toward daylight colors. J Opt Soc Am A Opt Image Sci Vis 29:657–670. https://doi.org/10.1364/JOSAA.29.000657
  37. Larsson J, Heeger DJ (2006) Two retinotopic visual areas in human lateral occipital cortex. J Neurosci 26:13128–13142. https://doi.org/10.1523/jneurosci.1657-06.2006
  38. Levine MW, McAnany JJ (2005) The relative capabilities of the upper and lower visual hemifields. Vision Res 45:2820–2830. https://doi.org/10.1016/j.visres.2005.04.001
  39. Li X, Morgan PS, Ashburner J, Smith J, Rorden C (2016) The first step for neuroimaging data analysis: DICOM to NIfTI conversion. J Neurosci Methods 264:47–56. https://doi.org/10.1016/j.jneumeth.2016.03.001
  40. Lin W, Gao Q, Du M, Chen W, Tong T (2021) Multiclass diagnosis of stages of Alzheimer's disease using linear discriminant analysis scoring for multimodal data. Comput Biol Med 134:104478. https://doi.org/10.1016/j.compbiomed.2021.104478
  41. Liu J, Wandell BA (2005) Specializations for chromatic and temporal signals in human visual cortex. J Neurosci 25:3459–3468. https://doi.org/10.1523/JNEUROSCI.4206-04.2005
  42. MacLeod DIA, Boynton RM (1979) Chromaticity diagram showing cone excitation by stimuli of equal luminance. J Opt Soc Am 69:1183. https://doi.org/10.1364/josa.69.001183
  43. Motter BC (2009) Central V4 receptive fields are scaled by the V1 cortical magnification and correspond to a constant-sized sampling of the V1 surface. J Neurosci 29:5749–5757. https://doi.org/10.1523/jneurosci.4496-08.2009
  44. Mullen KT (2019) The response to colour in the human visual cortex: the fMRI approach. Curr Opin Behav Sci 30:141–148. https://doi.org/10.1016/j.cobeha.2019.08.001
  45. Mullen KT, Dumoulin SO, McMahon KL, De Zubicaray GI, Hess RF (2007) Selectivity of human retinotopic visual cortex to S-cone-opponent, L/M-cone-opponent and achromatic stimulation. Eur J Neurosci 25:491–502. https://doi.org/10.1111/j.1460-9568.2007.05302.x
  46. Norman-Haignere S, Kanwisher NG, McDermott JH (2015) Distinct cortical pathways for music and speech revealed by hypothesis-free voxel decomposition. Neuron 88:1281–1296.
  47. Olman CA, Ugurbil K, Schrater P, Kersten D (2004) BOLD fMRI and psychophysical measurements of contrast response to broadband images. Vision Res 44:669–683. https://doi.org/10.1016/j.visres.2003.10.022
  48. Packer L, Rimbach G, Virgili F (1999) Antioxidant activity and biologic properties of a procyanidin-rich extract from pine (Pinus maritima) bark, pycnogenol. Free Radic Biol Med 27:704–724. https://doi.org/10.1016/s0891-5849(99)00090-8
  49. Pardo PJ, Georgopoulos AP, Kenny JT, Stuve TA, Findling RL, Schulz SC (2006) Classification of adolescent psychotic disorders using linear discriminant analysis. Schizophr Res 87:297–306. https://doi.org/10.1016/j.schres.2006.05.007
  50. Paxinos G, Huang X-F, Toga AW (2000) The rhesus monkey brain in stereotaxic coordinates. San Diego: Academic Press.
  51. Pearce B, Crichton S, Mackiewicz M, Finlayson GD, Hurlbert A (2014) Chromatic illumination discrimination ability reveals that human colour constancy is optimised for blue daylight illuminations. PLoS One 9:e87989. https://doi.org/10.1371/journal.pone.0087989
  52. Perry VH, Cowey A (1985) The ganglion cell and cone distributions in the monkey's retina: implications for central magnification factors. Vision Res 25:1795–1810. https://doi.org/10.1016/0042-6989(85)90004-5
  53. Ponce CR, Xiao W, Schade PF, Hartmann TS, Kreiman G, Livingstone MS (2019) Evolving images for visual neurons using a deep generative network reveals coding principles and neuronal preferences. Cell 177:999–1009.e10. https://doi.org/10.1016/j.cell.2019.04.005
  54. Quek GL, Finkbeiner M (2014) Face-sex categorization is better above fixation than below: evidence from the reach-to-touch paradigm. Cogn Affect Behav Neurosci 14:1407–1419. https://doi.org/10.3758/s13415-014-0282-y
  55. Quek GL, Finkbeiner M (2015) The upper-hemifield advantage for masked face processing: not just an attentional bias. Atten Percept Psychophys 78:52–68. https://doi.org/10.3758/s13414-015-0965-7
  56. Reveley C, et al. (2016) Three-dimensional digital template atlas of the macaque brain. Cereb Cortex 27:4463–4477. https://doi.org/10.1093/cercor/bhw248
  57. Roe AW, Chelazzi L, Connor CE, Conway BR, Fujita I, Gallant JL, Lu H, Vanduffel W (2012) Toward a unified theory of visual area V4. Neuron 74:12–29. https://doi.org/10.1016/j.neuron.2012.03.011
  58. Rosenthal I, Ratnasingam S, Haile T, Eastman S, Fuller-Deets J, Conway BR (2018) Color statistics of objects, and color tuning of object cortex in macaque monkey. J Vis 18:1–21. https://doi.org/10.1167/18.11.1
  59. Rosenthal IA, Singh SR, Hermann KL, Pantazis D, Conway BR (2021) Color space geometry uncovered with magnetoencephalography. Curr Biol 31:515–526.
  60. Schiller PH, Finlay BL, Volman SF (1976) Quantitative studies of single-cell properties in monkey striate cortex. I. Spatiotemporal organization of receptive fields. J Neurophysiol 39:1288–1319. https://doi.org/10.1152/jn.1976.39.6.1288
  61. Silva MF, Brascamp JW, Ferreira S, Castelo-Branco M, Dumoulin SO, Harvey BM (2018) Radial asymmetries in population receptive field size and cortical magnification factor in early visual cortex. Neuroimage 167:41–52. https://doi.org/10.1016/j.neuroimage.2017.11.021
  62. Smith SM, et al. (2004) Advances in functional and structural MR image analysis and implementation as FSL. Neuroimage 23:S208–S219. https://doi.org/10.1016/j.neuroimage.2004.07.051
  63. Song H, Chui TYP, Zhong Z, Elsner AE, Burns SA (2011) Variation of cone photoreceptor packing density with retinal eccentricity and age. Invest Ophthalmol Vis Sci 52:7376. https://doi.org/10.1167/iovs.11-7199
  64. Tarhan L, De Freitas J, Konkle T (2021) Behavioral and neural representations en route to intuitive action understanding. Neuropsychologia 163:108048. https://doi.org/10.1016/j.neuropsychologia.2021.108048
  65. Tootell RBH (2001) Where is "dorsal V4" in human visual cortex? Retinotopic, topographic and functional evidence. Cereb Cortex 11:298–311. https://doi.org/10.1093/cercor/11.4.298
  66. Tregillus KEM, Isherwood ZJ, Vanston JE, Engel SA, MacLeod DIA, Kuriki I, Webster MA (2021) Color compensation in anomalous trichromats assessed with fMRI. Curr Biol 31:936–942.e4. https://doi.org/10.1016/j.cub.2020.11.039
  67. Tsurumi S, Kanazawa S, Yamaguchi MK, Kawahara J (2022) Development of upper visual field bias for faces in infants. Dev Sci 26:1–8. https://doi.org/10.1111/desc.13262
  68. Vanduffel W, Fize D, Mandeville JB, Nelissen K, Van Hecke P, Rosen BR, Tootell RBH, Orban GA (2001) Visual motion processing investigated using contrast agent-enhanced fMRI in awake behaving monkeys. Neuron 32:565–577. https://doi.org/10.1016/s0896-6273(01)00502-5
  69. Vanduffel W, Zhu Q, Orban GA (2014) Monkey cortex through fMRI glasses. Neuron 83:533–550. https://doi.org/10.1016/j.neuron.2014.07.015
  70. Van Essen DC, Newsome WT, Maunsell JHR (1984) The visual field representation in striate cortex of the macaque monkey: asymmetries, anisotropies, and individual variability. Vision Res 24:429–448. https://doi.org/10.1016/0042-6989(84)90041-5
  71. Vanston JE, Crognale MA (2018) Effects of eccentricity on color contrast. J Opt Soc Am A 35:B122. https://doi.org/10.1364/josaa.35.00b122
  72. Vernon RJW, Gouws AD, Lawrence SJD, Wade AR, Morland AB (2016) Multivariate patterns in the human object-processing pathway reveal a shift from retinotopic to shape curvature representations in lateral occipital areas, LO-1 and LO-2. J Neurosci 36:5763–5774. https://doi.org/10.1523/jneurosci.3603-15.2016
  73. Vizioli L, Moeller S, Dowdle L, Akçakaya M, De Martino F, Yacoub E, Uğurbil K (2021) Lowering the thermal noise barrier in functional brain mapping with magnetic resonance imaging. Nat Commun 12:1–15. https://doi.org/10.1038/s41467-021-25431-8
  74. Wade A, Augath M, Logothetis N, Wandell B (2008) fMRI measurements of color in macaque and human. J Vis 8:6. https://doi.org/10.1167/8.10.6
  75. Wade AR, Brewer AA, Rieger JW, Wandell BA (2002) Functional measurements of human ventral occipital cortex: retinotopy and colour. Philos Trans R Soc Lond B Biol Sci 357:963–973. https://doi.org/10.1098/rstb.2002.1108
  76. Wandell BA, Brewer AA, Dougherty RF (2005) Visual field map clusters in human cortex. Philos Trans R Soc Lond B Biol Sci 360:693–707. https://doi.org/10.1098/rstb.2005.1628
  77. Watson AB (2014) A formula for human retinal ganglion cell receptive field density as a function of visual field location. J Vis 14:15. https://doi.org/10.1167/14.7.15
  78. Wiesel TN, Hubel DH (1966) Spatial and chromatic interactions in the lateral geniculate body of the rhesus monkey. J Neurophysiol 29:1115–1156.
  79. Winawer J, Horiguchi H, Sayres RA, Amano K, Wandell BA (2010) Mapping hV4 and ventral occipital cortex: the venous eclipse. J Vis 10:1. https://doi.org/10.1167/10.5.1
  80. Winawer J, Witthoft N (2015) Human V4 and ventral occipital retinotopic maps. Vis Neurosci 32:E020. https://doi.org/10.1017/S0952523815000176
  81. Wright MJ, Johnston A (1983) Spatiotemporal contrast sensitivity and visual field locus. Vision Res 23:983–989. https://doi.org/10.1016/0042-6989(83)90008-1
  82. Xanthopoulos P, Pardalos PM, Trafalis TB (2013) Linear discriminant analysis. In: Robust data mining (Xanthopoulos P, Pardalos PM, Trafalis TB, eds), pp 27–33. New York: Springer.
  83. ↵
    1. Yushkevich PA,
    2. Piven J,
    3. Hazlett HC,
    4. Smith RG,
    5. Ho S,
    6. Gee JC,
    7. Gerig G
    (2006) User-guided 3D active contour segmentation of anatomical structures: significantly improved efficiency and reliability. Neuroimage 31:1116–1128. https://doi.org/10.1016/j.neuroimage.2006.01.015
    OpenUrlCrossRefPubMed
  84. ↵
    1. Zhu Q,
    2. Vanduffel W
    (2019) Submillimeter fMRI reveals a layout of dorsal visual cortex in macaques, remarkably similar to new world monkeys. Proc Natl Acad Sci U S A 116:2306–2311. https://doi.org/10.1073/pnas.1805561116 pmid:30674668
    OpenUrlAbstract/FREE Full Text
  85. ↵
    1. Ziemba CM,
    2. Freeman J,
    3. Movshon JA,
    4. Simoncelli EP
    (2016) Selectivity and tolerance for visual texture in macaque V2. Proc Natl Acad Sci U S A 113:E3140–E3149. https://doi.org/10.1073/pnas.1510847113 pmid:27173899
    OpenUrlAbstract/FREE Full Text
  86. ↵
    1. Zito GA,
    2. Cazzoli D,
    3. Müri RM,
    4. Mosimann UP,
    5. Nef T
    (2016) Behavioral differences in the upper and lower visual hemifields in shape and motion perception. Front Behav Neurosci 10:1–8. https://doi.org/10.3389/fnbeh.2016.00128 pmid:27378876
    OpenUrlCrossRefPubMed
Color and Spatial Frequency Provide Functional Signatures of Retinotopic Visual Areas
Spencer R. Loggia, Stuart J. Duffield, Kurt Braunlich, Bevil R. Conway
Journal of Neuroscience 8 January 2025, 45 (2) e1673232024; DOI: 10.1523/JNEUROSCI.1673-23.2024
Keywords

  • color
  • cortical organization
  • fMRI
  • macaque
  • retinotopy
  • vision

Copyright © 2025 by the Society for Neuroscience.
JNeurosci Online ISSN: 1529-2401