Articles, Systems/Circuits

Intelligent Information Loss: The Coding of Facial Identity, Head Pose, and Non-Face Information in the Macaque Face Patch System

Ethan M. Meyers, Mia Borzello, Winrich A. Freiwald and Doris Tsao
Journal of Neuroscience 6 May 2015, 35 (18) 7069-7081; https://doi.org/10.1523/JNEUROSCI.3086-14.2015
Ethan M. Meyers
1School of Cognitive Sciences, Hampshire College, Amherst, Massachusetts 01002,
2The Center for Brains, Minds and Machines, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139,
Mia Borzello
2The Center for Brains, Minds and Machines, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139,
3Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts 02114,
Winrich A. Freiwald
2The Center for Brains, Minds and Machines, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139,
4Laboratory of Neural Systems, The Rockefeller University, New York, New York 10065, and
Doris Tsao
5Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, California 91126

Figures & Data

Figures

Figure 1.

Stimulus sets used in the experiments. A, One example of each image from the eight categories in the FOB stimulus set. Each category contained 16 images, for a total of 128 images in this stimulus set. B, Average HMAX S1 and C2 image similarity (Pearson's correlation) between all images in each of the eight categories in the FOB dataset, averaged over all pairs of the 16 images in each category. HMAX S1 features are responses to Gabor filters and thus mimic properties of V1, while C2 features are modeled on higher-level visual responses (Serre et al., 2007; Mutch et al., 2010). Error bars show the SD in the HMAX feature correlations across pairs of the 16 images. As can be seen, images in the face category were most similar to each other, while images in the scrambled image category were most different from each other. C, An example of the eight head orientations for 1 of the 25 people in the FV stimulus set.
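
A minimal sketch of the similarity analysis in B, assuming the HMAX S1 or C2 features have already been extracted into one (16 images × n_features) array per category; the array names and sizes here are hypothetical stand-ins, not the authors' code:

import numpy as np
from itertools import combinations

def mean_pairwise_correlation(features):
    """Mean and SD of Pearson's r over all pairs of rows (images)."""
    rs = [np.corrcoef(features[i], features[j])[0, 1]
          for i, j in combinations(range(features.shape[0]), 2)]
    return float(np.mean(rs)), float(np.std(rs))

# Example with random stand-ins for the 16 face-category feature vectors;
# real use would pass actual HMAX S1 or C2 feature matrices.
rng = np.random.default_rng(0)
face_c2 = rng.normal(size=(16, 1000))  # hypothetical C2 features
mean_r, sd_r = mean_pairwise_correlation(face_c2)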

Figure 2.

    Individual image decoding accuracies and average firing rates in the three face patches to each image in the face, body, and object stimulus set. A, The sorted decoding accuracies for each image in the FOB set. Red bars indicate decoding accuracies of face images. The three nonface images with the highest decoding accuracies for each patch are shown as insets at the top of each subplot. The decoding accuracies for all face images are in the top 50% for ML/MF and AL, and all 16 face images had the highest decoding accuracies in AM. B, The average population firing rate for each image (in the same order as A). Total population firing rates for all face images (apart from one image in AL) were higher than for nonface images. C, Individual image decoding accuracies plotted as a function of individual image firing rates. There is a positive correlation between decoding accuracies and firing rates for both face and nonface images (significant correlations are denoted in green with an asterisk).

Figure 3.

Decoding accuracies for discriminating between the 16 objects within each category of the face, body, and object stimulus set. The within-category decoding accuracy for faces is similar to the accuracy levels seen in several other object categories in ML/MF and AL (left, center subplots), while the within-category decoding accuracy for faces is much higher than that of other object categories in AM (right subplot). These results show that for most categories, images within the same category elicit different neural responses. The black bars under the plots indicate the time when the stimulus was present, and the orange bars indicate the length of the decoding time window, with the orange circle indicating that the decoding accuracies are aligned to the center of this bin. Chance decoding is 1/16.
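
A minimal sketch of this kind of sliding-window decoding, assuming a pseudo-population array of shape (n_trials, n_neurons, n_timepoints) with one stimulus label per trial; the bin and step sizes, the trivial odd/even trial split, and the correlation-based classifier are illustrative stand-ins for the paper's full cross-validation pipeline:

import numpy as np

def correlation_classify(train_X, train_y, test_X):
    """Assign each test vector to the class mean with the highest Pearson's r
    (cf. the Maximum Correlation Coefficient classifier in Table 1)."""
    classes = np.unique(train_y)
    means = np.array([train_X[train_y == c].mean(axis=0) for c in classes])
    tz = (test_X - test_X.mean(1, keepdims=True)) / test_X.std(1, keepdims=True)
    mz = (means - means.mean(1, keepdims=True)) / means.std(1, keepdims=True)
    return classes[np.argmax(tz @ mz.T, axis=1)]  # z-score dot product is proportional to r

def sliding_window_accuracy(spikes, labels, win=150, step=10):
    """Decoding accuracy in overlapping time bins of `win` time points."""
    accs = []
    for start in range(0, spikes.shape[2] - win + 1, step):
        X = spikes[:, :, start:start + win].mean(axis=2)  # firing rate in this bin
        tr, te = np.arange(0, len(X), 2), np.arange(1, len(X), 2)
        pred = correlation_classify(X[tr], labels[tr], X[te])
        accs.append(np.mean(pred == labels[te]))
    return np.array(accs)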

Figure 4.

Face versus nonface, and category 1 versus category 2 decoding accuracies. A, Decoding accuracies for determining whether an image was a face versus a nonface (face detection) using data from ML/MF (blue), AL (green), and AM (red). The decoding analysis was run separately seven times for face versus each of the nonface categories, and the decoding accuracies were averaged over the seven results. B, A comparison of category 1 versus category 2 decoding for the nonface categories (colors for the different brain regions are the same as in A). The decoding analysis was run for all 21 pairs of nonface category 1 versus nonface category 2, and the plotted results are the average of these 21 decoding analyses. The black bars under the plots indicate the time when the stimulus was present, and the orange bars indicate the length of the decoding time window. C, All 28 results for discriminating between all pairs of categories using 300 ms of data (i.e., all pairs of decoding results that went into the averages in A and B). Chance on these binary decoding tasks is 50%.

Figure 5.

Pose-specific and pose-invariant face identity information. A classifier was trained to discriminate between individuals using data from one head orientation and then tested with data from another head orientation. The left column shows results from ML/MF, the middle column from AL, and the right column from AM. The top row is for training on the left profile, the middle row for training on the straight pose, and the bottom row for training on the right profile. Each bar in the histogram shows decoding accuracy as the result of testing at a specific orientation (L, left profile; 3/4 L, left 3/4 profile; S, straight; 3/4 R, right 3/4 profile; R, right profile; U, head tilted up; D, head tilted down; B, back of the head). Cyan indicates statistically significant results (p < 0.005, permutation test), and dark blue indicates failure to reach significance. We see higher decoding accuracy in AL and AM compared with ML/MF. Additionally, we see an increase in pose-invariant generalization from training on one head orientation and testing on a different head orientation going from ML/MF to AL to AM.
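
A sketch of this train-on-one-orientation, test-on-another analysis, assuming per-orientation response arrays X[pose] of shape (n_trials, n_neurons) with identity labels y[pose], and any classify(train_X, train_y, test_X) function such as the correlation-based one sketched after Figure 3; the dict layout and names are hypothetical:

import numpy as np

POSES = ["L", "3/4 L", "S", "3/4 R", "R", "U", "D", "B"]

def cross_pose_matrix(X, y, classify):
    """acc[i, j] = accuracy when training on pose i and testing on pose j.
    The diagonal gives pose-specific decoding; off-diagonal entries give
    pose-invariant generalization."""
    acc = np.zeros((len(POSES), len(POSES)))
    for i, p_train in enumerate(POSES):
        for j, p_test in enumerate(POSES):
            pred = classify(X[p_train], y[p_train], X[p_test])
            acc[i, j] = np.mean(pred == y[p_test])
    return acc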

Figure 6.

    Average pose-specific and pose-invariant identity decoding results. Results from decoding the identity of a face when training and testing a classifier with images from the same head orientation (pose-specific decoding, blue trace) or with images from different head orientations (pose-invariant decoding, red trace). Pose-specific decoding results are averaged over all seven conditions for training and testing on the same head orientation, while the pose-invariant decoding is averaged over all 36 pose-invariant decoding permutations (back of head conditions were not included in this analysis). Error bars indicate the SDs over all these decoding results. The results are shown for ML/MF (left), AL (middle), and AM (right). AL and AM have higher pose-specific decoding accuracies than ML/MF and there is an increase in pose-invariant information from ML/MF to AL to AM. The black bars under the plots indicate the time when the stimulus was present, and the orange bars under the plot indicate the length of the decoding time window.

Figure 7.

Head pose decoding results. Results from decoding the pose of the head, generalizing over facial identity. A, Decoding results for the three face patches plotted as a function of time. Head pose is better represented in ML/MF and AL than in AM. The black bars under the plots indicate the time when the stimulus was present, and the orange bars indicate the length of the decoding time window. B, Decoding results plotted as a function of the number of selective neurons used (i.e., results are plotted using the 2 most selective neurons, the 3 most selective neurons, etc.). The selectivity of neurons was ranked by ANOVA p values using data from the training set. The results show that AM has lower decoding performance than ML/MF and AL even when only a small number of neurons is used, indicating that even the most pose-selective neurons in AM are less pose selective than those in ML/MF and AL.
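
A sketch of the ANOVA-based neuron ranking described in B, assuming a training matrix of shape (n_trials, n_neurons) with one head-pose label per trial; as the caption states, selection uses the training set only, so the test data never influence which neurons are kept:

import numpy as np
from scipy.stats import f_oneway

def rank_neurons_by_anova(train_X, train_y):
    """Return neuron indices sorted from most to least pose-selective,
    using one-way ANOVA p values across the pose classes."""
    groups = [train_X[train_y == c] for c in np.unique(train_y)]
    pvals = np.array([f_oneway(*(g[:, n] for g in groups)).pvalue
                      for n in range(train_X.shape[1])])
    return np.argsort(pvals)  # smallest p value (most selective) first

# Decode with only the k most selective neurons (hypothetical usage):
# top_k = rank_neurons_by_anova(train_X, train_y)[:k]
# pred = classify(train_X[:, top_k], train_y, test_X[:, top_k])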

Figure 8.

Classifier comparison to assess how much information is in the overall firing rate level and in patterns of activity across neurons. Each subplot compares the results from four different classifiers (Maximum Correlation Coefficient classifier in green, Minimum Angle classifier in cyan, Poisson Naive Bayes classifier in purple, and Total Activity classifier in red). The columns contain data from the three different face patches. A, Results from decoding whether an image was a face or a member of another object category on the FOB dataset (i.e., the same decoding task as in Fig. 4). B, Within-category discrimination for monkey whole bodies on the FOB dataset (i.e., the same decoding task as in Fig. 3). C, Pose-specific individual face discrimination on the FV dataset (i.e., the same as the blue traces in Fig. 6). For face versus nonface decoding, all classifiers performed similarly. For all other analyses, there was much more information available to classifiers that extracted patterns of activity across neurons (PNB, Maximum Correlation Coefficient, Minimum Angle) than to the classifier that used only the total activity (TAct). The black bars under the plots indicate the time when the stimulus was present, and the orange bars indicate the length of the decoding time window.

Tables

    Table 1.

    A table listing the classification decision functions for the four different classifiers used in this paper

A. Poisson Naive Bayes (PNB): $\hat{c} = \arg\max_c \sum_{i=1}^{n} \left( x_i \log w_{c,i} - w_{c,i} \right)$ (the $-\log x_i!$ term is constant across classes and is omitted)
B. Total Activity (TAct): $\hat{c} = \arg\min_c \left| \bar{x} - \bar{w}_c \right|$
C. Maximum Correlation Coefficient (MCC): $\hat{c} = \arg\max_c \frac{\sum_{i=1}^{n} (x_i - \bar{x})(w_{c,i} - \bar{w}_c)}{\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2} \sqrt{\sum_{i=1}^{n} (w_{c,i} - \bar{w}_c)^2}}$
D. Minimum Angle (Min Ang): $\hat{c} = \arg\max_c \frac{\sum_{i=1}^{n} x_i w_{c,i}}{\sqrt{\sum_{i=1}^{n} x_i^2} \sqrt{\sum_{i=1}^{n} w_{c,i}^2}}$
$w_c$ is a vector that is the mean of the training data from class $c$; $\bar{w}_c$ is a scalar that is the mean of $w_c$; $x$ is the test vector to be classified; $\bar{x}$ is a scalar that is the mean of $x$; and $n$ is the number of neurons. Training a classifier thus consists of learning $w_c$ and $\bar{w}_c$, and testing consists of determining which class $x$ belongs to. As can be seen, all of these classifiers are rather similar and differ mainly in how they normalize the data and, consequently, in whether they take the overall level of population activity into account (A and B) or only examine relative differences in firing rates between neurons (C and D). TAct, Total Activity; MCC, Maximum Correlation Coefficient; Min Ang, Minimum Angle.
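
The four decision functions transcribe almost line for line into NumPy. In this sketch of the reconstructed equations above, W stacks the class means $w_c$ as rows, x is a single test vector, and each function returns the index of the chosen class; the small constant in the PNB rule is a practical guard against log(0) and is not part of the equation:

import numpy as np

def pnb(W, x):      # A. Poisson Naive Bayes: maximize the Poisson log likelihood
    return np.argmax(x @ np.log(W + 1e-9).T - W.sum(axis=1))

def tact(W, x):     # B. Total Activity: class whose mean rate is closest to x's mean
    return np.argmin(np.abs(x.mean() - W.mean(axis=1)))

def mcc(W, x):      # C. Maximum Correlation Coefficient: maximize Pearson's r(x, w_c)
    return np.argmax([np.corrcoef(x, w)[0, 1] for w in W])

def min_ang(W, x):  # D. Minimum Angle: maximize the cosine of the angle between x and w_c
    return np.argmax((W @ x) / (np.linalg.norm(W, axis=1) * np.linalg.norm(x)))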

    Table 2.

Correlations between individual image firing rates and decoding accuracies for the three face patches show that higher population firing rates were correlated with higher decoding accuracies

Category                 ML/MF    AL       AM
Human faces              −0.34    0.29     0.50*
Human bodies              0.10    0.62*    0.47
Fruits and vegetables     0.42    0.80*    0.23
Gadgets                   0.79*   0.34     0.78*
Human hands               0.60*   0.66*   −0.32
Scrambled patterns       −0.21   −0.01     0.11
Monkey body parts         0.55*   0.89*    0.34
Monkey whole bodies       0.33    0.47*    0.37
Each correlation was calculated separately for the 16 images in each category. Asterisks indicate statistically significant positive correlations (p < 0.05) using a permutation test. There were no statistically significant negative correlations.
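
A sketch of the permutation test behind these asterisks, assuming one vector of per-image firing rates and one of per-image decoding accuracies for a given category and face patch; shuffling the pairing between the two vectors builds the null distribution, and the one-sided p value reflects the caption's focus on positive correlations:

import numpy as np

def correlation_permutation_test(rates, accs, n_perm=10_000, seed=0):
    """Observed Pearson's r and a one-sided permutation p value."""
    rng = np.random.default_rng(seed)
    observed = np.corrcoef(rates, accs)[0, 1]
    null = np.array([np.corrcoef(rng.permutation(rates), accs)[0, 1]
                     for _ in range(n_perm)])
    return observed, float(np.mean(null >= observed))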

Keywords

  • face processing
  • face recognition
  • macaque
  • neural decoding
  • population coding
