Cover Article | Articles, Behavioral/Systems/Cognitive

Decoding and Reconstructing Color from Responses in Human Visual Cortex

Gijs Joost Brouwer and David J. Heeger
Journal of Neuroscience 4 November 2009, 29 (44) 13992-14003; https://doi.org/10.1523/JNEUROSCI.3577-09.2009

Figures & Data

Figures

Figure 1.

    Stimulus and experimental protocol. A, Location of the eight colors and gray point in CIE L*a*b* space. Colors were presented at a lightness of L* = 75 (8.8 cd/m2). B, The same eight colors and gray point in CIE 1931 xyz space. C, Stimuli were concentric sinusoidal gratings (spatial frequency, 0.5 cycles/°), within a circular aperture (10° radius), modulating from a center gray point to one of the eight locations in color space. Stimuli drifted either inward or outward, at a speed of 1°/s. Stimulus duration, 1.5 s. Interstimulus interval, 3–6 s in steps of 1.5 s.
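For readers who want to regenerate stimulus coordinates like those in panel A, the sketch below places eight hues evenly around a circle in CIE L*a*b* at L* = 75 and converts them to XYZ using the standard formulas. The chroma (radius) here is a placeholder assumption; the actual a*/b* coordinates are those shown in the figure.

```python
import numpy as np

L_STAR = 75.0   # lightness used in the experiment (8.8 cd/m^2)
CHROMA = 40.0   # assumed radius in the a*-b* plane (hypothetical value)

# Eight evenly spaced hue angles around the color circle (Fig. 1A).
angles = np.arange(8) * (2 * np.pi / 8)
lab = np.stack([np.full(8, L_STAR),
                CHROMA * np.cos(angles),            # a*
                CHROMA * np.sin(angles)], axis=1)   # b*

def lab_to_xyz(lab, white=(95.047, 100.0, 108.883)):
    """Standard CIE L*a*b* -> XYZ conversion (D65 white point assumed)."""
    L, a, b = lab[:, 0], lab[:, 1], lab[:, 2]
    fy = (L + 16.0) / 116.0
    fx = fy + a / 500.0
    fz = fy - b / 200.0
    def finv(t):
        delta = 6.0 / 29.0
        return np.where(t > delta, t**3, 3 * delta**2 * (t - 4.0 / 29.0))
    xyz = np.stack([finv(fx), finv(fy), finv(fz)], axis=1)
    return xyz * np.asarray(white)

print(lab_to_xyz(lab))   # XYZ coordinates of the eight stimulus colors
```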

Figure 2.

Comparison of methods. A, Comparison between decoding accuracies obtained with the full data (dimensionality = number of voxels) and accuracies obtained after reducing the data by PCA (dimensionality = number of components needed to explain 68% of the variance). Each point represents average decoding accuracy for one observer and ROI. Dark symbols, Maximum likelihood classifier. Light symbols, Forward model. Correlation coefficients (r values), Correlation in decoding accuracy between the full and reduced data for each decoder. B, Decoding accuracies obtained by combining data across sessions. Increasing the number of sessions dramatically increased V1 accuracies but only slightly increased MT+ accuracies. Replacing the data with white noise yielded chance performance. Error bars indicate SDs across runs (with 1 run at a time left out of training and used for testing accuracy). Dark curves and symbols, Maximum likelihood classifier. Light curves and symbols, Forward model classifier. C, Correlation between maximum likelihood classification and forward model classification accuracies. Each point corresponds to the mean decoding accuracy for one observer and ROI. Correlation coefficient (r value), Correlation in decoding accuracy between the maximum likelihood classifier and the forward model.
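A minimal sketch of the two ingredients compared in panel A, assuming trials are rows of a voxel-response matrix: PCA reduction to the components explaining 68% of the variance, and a bare-bones maximum likelihood decoder (here a nearest-class-mean Gaussian with identity covariance, a simplification; the classifier actually used is specified in the paper's Materials and Methods).

```python
import numpy as np

def reduce_pca(X, var_frac=0.68):
    """Project trials (rows of X) onto the top principal components that
    together explain var_frac of the variance, as in Fig. 2A."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (s**2) / np.sum(s**2)
    k = int(np.searchsorted(np.cumsum(explained), var_frac) + 1)
    return Xc @ Vt[:k].T                     # trials x k component scores

def ml_classify(train_X, train_y, test_X):
    """Assign each test trial to the class whose training mean is nearest
    (Gaussian likelihood, identity covariance). Illustrative only."""
    classes = np.unique(train_y)
    means = np.stack([train_X[train_y == c].mean(axis=0) for c in classes])
    d = ((test_X[:, None, :] - means[None]) ** 2).sum(axis=2)
    return classes[np.argmin(d, axis=1)]
```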

Figure 3.

    Forward model. A, Idealized color tuning curve, modeled as a half-wave rectified and squared sinusoid. B, The response of a voxel was fitted with a weighted sum of six idealized color tuning curves, evenly spaced around the color circle, in CIE L*a*b* space. C, Simulated response amplitude matrix, for each voxel and each color. D, Matrix of principal component scores, computed by projecting the vector of response amplitudes (across voxels) onto each of the principal component vectors, ordered by the amount of variance they explain in the original response amplitudes. E, Plotting the first two principal component scores as coordinate pairs reconstructs the original color space. F, Cone-opponency model. LMS cone responses were calculated for the stimulus colors. Four cone-opponency channels (M–L, L–M, −S, and +S) were computed from the cone responses, half-wave rectified. G, The first two principal components of the simulated cone-opponency responses revealed results similar to those observed in V1.
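The forward model of panels A-E can be summarized in a few lines of numpy. This is a sketch under the assumptions stated in the caption (six evenly spaced channels, each a half-wave rectified and squared sinusoid, with voxel weights fit by ordinary least squares); the paper's full fitting procedure is given in its Materials and Methods.

```python
import numpy as np

N_CHANNELS = 6

def channel_responses(hue):
    """Idealized tuning curves (Fig. 3A): half-wave rectified and squared
    sinusoids with peaks evenly spaced around the hue circle."""
    centers = np.arange(N_CHANNELS) * (2 * np.pi / N_CHANNELS)
    resp = np.cos(hue[:, None] - centers[None, :])
    return np.maximum(resp, 0.0) ** 2            # stimuli x channels

def fit_weights(hues, voxel_responses):
    """Fit each voxel as a weighted sum of the six channels (Fig. 3B).
    voxel_responses: stimuli x voxels."""
    C = channel_responses(hues)
    W, *_ = np.linalg.lstsq(C, voxel_responses, rcond=None)
    return W                                      # channels x voxels

def neural_color_space(response_matrix):
    """PCA of the color x voxel response matrix (Fig. 3C-E): the first two
    principal component scores, taken as coordinate pairs, recover the
    circular arrangement of the stimulus colors."""
    Xc = response_matrix - response_matrix.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T                          # colors x 2 scores
```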

Figure 4.

Color decoding. A, Decoding accuracies obtained with the maximum likelihood classifier, for each visual area. Gray bars, Decoding accuracies for individual observers. Light bars, Mean across observers. Dark bars, Accuracies obtained by combining the data across observers before classification. *p < 0.05, visual areas for which accuracies were significantly above chance in all observers (two-tailed permutation test); †p < 0.05, areas for which accuracies were significantly above chance in at least three of five observers. Error bars indicate SDs across runs (with 1 run at a time left out of training and used for testing accuracy). Solid line, Chance accuracy (0.125). Dashed line, 97.5th percentile for chance accuracy, obtained by permutation tests (see Materials and Methods). The 97.5th percentiles were computed separately for each observer and visual area, but the average across observers/ROIs is shown here for simplicity. B, Decoding accuracies obtained using the forward model. Same format as in A.
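The permutation-derived chance threshold (dashed line) can be sketched generically, assuming some function `decode(X, y)` (a hypothetical interface, not from the paper) that returns a decoding accuracy for data X and color labels y.

```python
import numpy as np

def chance_threshold(decode, X, y, n_perm=1000, pct=97.5, seed=0):
    """Permutation test for chance accuracy (Fig. 4, dashed line): shuffle
    the color labels, re-run the decoder, and take the 97.5th percentile
    of the resulting null distribution of accuracies."""
    rng = np.random.default_rng(seed)
    null = [decode(X, rng.permutation(y)) for _ in range(n_perm)]
    return np.percentile(null, pct)
```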

Figure 5.

Reconstruction. A, Color reconstruction on the basis of V4 activity. Each point represents the color reconstructed for one run, using combined data from all sessions and observers, plotted in CIE L*a*b* space. The reconstructed colors from all sessions and observers cluster near the actual stimulus color, indicated in the top right of each panel. B, Reconstruction of novel colors, not included in training the forward model. Reconstructed colors again cluster near the actual stimulus color but with more spread than in A. C, Forward model decoding accuracies for included and novel colors. Error bars indicate SD of the accuracies over runs. *p < 0.05, visual areas for which there was a statistically significant difference when the test color was excluded from training (paired-sample t test). Solid line, Chance accuracy (0.125). Dashed line, 97.5th percentile for chance accuracy, obtained by permutation tests (identical to Fig. 4). Areas V1, V2, and V3 show a significant decrease in decoding accuracy for novel colors, whereas areas V4 and VO1 show highly similar decoding accuracies for both included and novel colors. The accuracies for included colors in this figure are similar but not identical to the combined accuracies shown in Figure 4B. Each run is associated with one example per color. If we remove one color from training at a time and leave one run out at a time, we can predict only one example per run (the example associated with the color excluded from training); the remaining colors in that run are not novel. Thus, for a fair comparison, we determined the accuracy for included colors in the same way. D, Performance of the maximum likelihood classifier on included (black bars) and novel (gray bars) colors, quantified as the average distance between the color predicted by the classifier and the color presented. Small distances indicate that the classifier predicted perceptually similar, neighboring colors. The maximum distance was 4 (classifying a novel color as the opposite color on the hue circle); the minimum distance was 0 for included colors (correct classification) and 1 for novel colors (classifying a novel color as its immediate neighbor). Error bars indicate SDs across runs (with 1 run at a time left out of training and used for testing accuracy). Solid line, Median distance expected by chance. Dashed line, Statistical threshold (p < 0.05, two-tailed permutation test). †p < 0.05, visual areas for which the measured distance for novel colors was significantly smaller than expected by chance.
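The hue-circle distance used in panel D reduces to a simple modular computation. A minimal sketch, assuming the eight colors are indexed 0-7 in order around the hue circle:

```python
import numpy as np

def hue_circle_distance(predicted, actual, n_colors=8):
    """Distance between two colors on the hue circle (Fig. 5D): the number
    of steps between them, going the shorter way around. With eight colors
    the maximum is 4 (the opposite color) and the minimum for a
    misclassified novel color is 1 (its immediate neighbor)."""
    d = np.abs(np.asarray(predicted) - np.asarray(actual)) % n_colors
    return np.minimum(d, n_colors - d)

# e.g. hue_circle_distance(0, 7) == 1; hue_circle_distance(0, 4) == 4
```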

Figure 6.

    Neural color spaces. A, Color spaces derived from the covariation, across voxels, in the responses to different stimulus colors, using data combined over all sessions and observers. In V4, the first two principal components (main source of variation) reveal a nearly circular progression (not self-intersecting) through color space, with similar colors evoking the most similar responses. This is not the case for V1 or MT+ activity, nor for white noise. Note, however, that V1 shows a more robust clustering of PCA scores, relative to the other areas. B, Clustering and progression of the color spaces derived from activity in each visual cortical area. Dashed lines, Chance levels for the progression (black) and clustering (gray) measures, computed by a permutation test (see Materials and Methods). All areas show clustering significantly higher than chance (p < 0.05, two-tailed permutation test). *p < 0.05, visual areas with a progression measure higher than chance (two-tailed permutation test). The relatively high progression score in MT+ is artifactual because the cluster centers were all near the origin of the space.
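The paper's clustering and progression measures are defined in its Materials and Methods, which are not reproduced on this page. As a rough illustrative stand-in only, one can ask whether the per-color cluster centers in PC space advance monotonically in angle around the origin (i.e., trace a non-self-intersecting loop); note the caption's caveat that angle-based measures of this kind are unstable when the centers sit near the origin, as in MT+.

```python
import numpy as np

def cluster_centers(scores, labels, n_colors=8):
    """Mean PC-score coordinates (colors x 2) per stimulus color."""
    return np.stack([scores[labels == c].mean(axis=0)
                     for c in range(n_colors)])

def progression_is_circular(centers):
    """Crude progression check, not the paper's measure: do the cluster
    centers, taken in stimulus hue order, advance monotonically in angle
    around the origin?"""
    ang = np.unwrap(np.arctan2(centers[:, 1], centers[:, 0]))
    steps = np.diff(ang)
    return bool(np.all(steps > 0) or np.all(steps < 0))
```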

Figure 7.

Mean response amplitudes. Mean response amplitudes for each of the eight colors, averaged throughout each of three representative visual area ROIs, from a representative observer. In this observer, the mean responses did not differ significantly between colors. Similar results were found for the other observers and ROIs. Error bars indicate SD of mean responses, averaged across voxels, over runs. See also Table 2.
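The test behind this figure (and Table 2A) is a one-way ANOVA across the eight colors. A minimal scipy sketch, assuming one ROI-mean amplitude per trial and a per-trial color label:

```python
import numpy as np
from scipy import stats

def anova_across_colors(amplitudes, labels):
    """One-way ANOVA on ROI-mean response amplitudes across the eight
    stimulus colors (Fig. 7 / Table 2A)."""
    groups = [amplitudes[labels == c] for c in np.unique(labels)]
    return stats.f_oneway(*groups)   # (F statistic, p value)
```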

Tables

Table 1.

    Decoding accuracies

              V1     V2     V3     V4     VO1    LO1    LO2    V3AB   MT+
A
    O1        0.73   0.46   0.63   0.64   0.25   0.36   0.25   0.18   0.18
    O2        0.72   0.72   0.41   0.25   0.20   0.27   0.23   0.28   0.25
    O3        0.43   0.43   0.33   0.33   0.41   0.31   0.18   0.23   0.19
    O4        0.59   0.30   0.38   0.25   0.27   0.16   0.25   0.20   0.19
    O5        0.40   0.22   0.28   0.21   0.21   0.17   0.11   0.14   0.21
    Mean      0.56   0.42   0.39   0.33   0.27   0.25   0.20   0.20   0.20
    Combined  0.93   0.73   0.73   0.73   0.48   0.32   0.39   0.23   0.32
B
    O1        0.63   0.42   0.64   0.64   0.34   0.34   0.21   0.22   0.16
    O2        0.66   0.59   0.33   0.22   0.20   0.22   0.17   0.27   0.30
    O3        0.40   0.36   0.30   0.30   0.36   0.28   0.16   0.19   0.19
    O4        0.48   0.32   0.25   0.23   0.32   0.20   0.25   0.29   0.27
    O5        0.32   0.28   0.29   0.19   0.32   0.18   0.15   0.15   0.21
    Mean      0.49   0.39   0.35   0.31   0.31   0.24   0.19   0.22   0.22
    Combined  0.80   0.64   0.71   0.64   0.51   0.41   0.43   0.34   0.36
A, Decoding accuracies obtained with the maximum likelihood classifier, for each visual cortical area. Accuracies are listed separately for each individual observer (O1–O5), averaged across observers (Mean), and when data were combined across all observers before classification (Combined). Bold font indicates decoding accuracies that were significantly higher than chance as determined by a two-tailed permutation test (p < 0.05; see Materials and Methods). Accuracies were significantly greater than chance for V1, V2, and V3 in all observers and for areas V4 and VO1 in at least three of five observers. B, Decoding accuracies obtained using the forward model, showing a pattern of (statistically significant) decoding accuracies very similar to that found for the classifier.
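The accuracies above, and the corresponding error bars in Figures 4 and 5, come from leave-one-run-out cross-validation. A generic sketch, assuming a hypothetical `decode(train, test)` function that fits on the training split and returns accuracy on the held-out split:

```python
import numpy as np

def leave_one_run_out(decode, X, y, runs):
    """Leave-one-run-out cross-validation: hold each run out in turn,
    train on the remaining runs, and test on the held-out run. Returns
    the mean accuracy and its SD across runs."""
    accs = []
    for r in np.unique(runs):
        train, test = runs != r, runs == r
        accs.append(decode((X[train], y[train]), (X[test], y[test])))
    return np.mean(accs), np.std(accs)
```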

Table 2.

    Mean response amplitudes

              df      V1      V2      V3      V4      VO1     LO1     LO2     V3AB    MT+
A
    O1        7, 183  F=0.59  F=0.32  F=0.28  F=0.66  F=0.44  F=0.52  F=0.71  F=0.68  F=0.57
                      p=0.76  p=0.95  p=0.96  p=0.70  p=0.88  p=0.82  p=0.66  p=0.69  p=0.78
    O2        7, 431  F=1.38  F=0.42  F=0.18  F=0.05  F=0.44  F=0.06  F=0.33  F=0.23  F=0.26
                      p=0.21  p=0.89  p=0.99  p=0.99  p=0.88  p=0.99  p=0.94  p=0.98  p=0.97
    O3        7, 215  F=1.77  F=0.81  F=0.90  F=0.67  F=0.93  F=0.34  F=0.20  F=0.84  F=1.39
                      p=0.10  p=0.58  p=0.51  p=0.70  p=0.48  p=0.94  p=0.98  p=0.56  p=0.21
    O4        7, 239  F=1.93  F=1.68  F=1.37  F=2.25  F=1.31  F=1.10  F=0.98  F=0.82  F=3.38
                      p=0.07  p=0.11  p=0.22  p=0.03  p=0.25  p=0.37  p=0.45  p=0.57  p=0.00
    O5        7, 191  F=0.53  F=0.17  F=0.21  F=0.25  F=0.67  F=0.31  F=1.85  F=0.30  F=0.64
                      p=0.81  p=0.99  p=0.98  p=0.97  p=0.70  p=0.95  p=0.08  p=0.95  p=0.72
    Combined  7, 143  F=1.45  F=0.66  F=0.27  F=0.12  F=0.18  F=0.89  F=0.68  F=1.13  F=0.25
                      p=0.19  p=0.70  p=0.96  p=0.99  p=0.99  p=0.52  p=0.68  p=0.06  p=0.97
B
    O1                0.21    0.11    0.18    0.13    0.11    0.11    0.14    0.14    0.04
    O2                0.22    0.11    0.20    0.20    0.17    0.14    0.19    0.11    0.14
    O3                0.16    0.18    0.10    0.15    0.15    0.14    0.15    0.10    0.18
    O4                0.11    0.09    0.16    0.20    0.14    0.11    0.07    0.13    0.13
    O5                0.15    0.14    0.13    0.10    0.08    0.07    0.08    0.08    0.18
    Combined          0.16    0.16    0.14    0.18    0.07    0.11    0.07    0.11    0.13
A, F statistics and p values from a one-way ANOVA comparing the mean response amplitudes evoked by the different colors, across ROIs and observers. Significant results (p < 0.05) are shown in bold. The first column gives the degrees of freedom for each observer and for the combined analysis. In only two cases (observer O4: V4 and MT+) were there significant differences between the mean response amplitudes evoked by the different colors. No significant differences were found when the data were combined across observers (bottom row). B, Decoding accuracies when attempting to classify colors on the basis of mean response amplitudes alone. For all areas, decoding accuracies are at or near chance, with accuracies significantly higher than chance in only a handful of observers/ROIs (two-tailed permutation test, p < 0.05, shown in bold).

Additional Files

  • Supplemental Data

    Files in this Data Supplement:

    • supplemental material - Supplemental Figures