Journal of Neuroscience
Research Articles, Behavioral/Cognitive

Symmetrical Viewpoint Representations in Face-Selective Regions Convey an Advantage in the Perception and Recognition of Faces

Tessa R. Flack, Richard J. Harris, Andrew W. Young and Timothy J. Andrews
Journal of Neuroscience 8 May 2019, 39 (19) 3741-3751; DOI: https://doi.org/10.1523/JNEUROSCI.1977-18.2019
Tessa R. Flack
1Department of Psychology, University of York, York YO10 5DD, United Kingdom, and
2School of Psychology, University of Lincoln, LN6 7TS, United Kingdom
Richard J. Harris
3School of Psychology, University of Leeds, Leeds LS2 9JT, United Kingdom
Andrew W. Young
1Department of Psychology, University of York, York YO10 5DD, United Kingdom
Timothy J. Andrews
1Department of Psychology, University of York, York YO10 5DD, United Kingdom

Abstract

Learning new identities is crucial for effective social interaction. A critical aspect of this process is the integration of different images from the same face into a view-invariant representation that can be used for recognition. The representation of symmetrical viewpoints has been proposed to be a key computational step in achieving view-invariance. The aim of this study was to determine whether the representation of symmetrical viewpoints in face-selective regions is directly linked to the perception and recognition of face identity. In Experiment 1, we measured fMRI responses while male and female human participants viewed images of real faces from different viewpoints (−90, −45, 0, 45, and 90° from full-face view). Within the face regions, patterns of neural response to symmetrical views (−45 and 45° or −90 and 90°) were more similar than responses to nonsymmetrical views in the fusiform face area and superior temporal sulcus, but not in the occipital face area. In Experiment 2, participants made perceptual similarity judgements to pairs of face images. Images with symmetrical viewpoints were reported as being more similar than nonsymmetric views. In Experiment 3, we asked whether symmetrical views also convey an advantage when learning new faces. We found that recognition was best when participants were tested with novel face images that were symmetrical to the learning viewpoint. Critically, the pattern of perceptual similarity and recognition across different viewpoints predicted the pattern of neural response in face-selective regions. Together, our results provide support for the functional value of symmetry as an intermediate step in generating view-invariant representations.

SIGNIFICANCE STATEMENT The recognition of identity from faces is crucial for successful social interactions. A critical step in this process is the integration of different views into a unified, view-invariant representation. The representation of symmetrical views (e.g., left profile and right profile) has been proposed as an important intermediate step in computing view-invariant representations. We found view symmetric representations were specific to some face-selective regions, but not others. We also show that these neural representations influence the perception of faces. Symmetric views were perceived to be more similar and were recognized more accurately than nonsymmetric views. Moreover, the perception and recognition of faces at different viewpoints predicted patterns of response in those face regions with view symmetric representations.

  • face
  • fMRI
  • recognition
  • symmetry
  • viewpoint

Introduction

Faces are seen from many different angles in everyday life and differences in viewpoint play an important role in social perception. For example, different orientations provide useful information about internal mental states, such as the focus of attention, and they directly affect social attributions (Sutherland et al., 2017). However, changes in viewpoint make the process of face recognition more difficult, because so many different views can be generated from the same identity. Despite this challenge, we can recognize familiar faces from different viewpoints with relative ease (Hancock et al., 2000), raising the critical theoretical question of how this viewpoint-invariance for recognizing familiar faces is achieved (Young and Burton, 2017; Young, 2018). Cognitive models of face processing have suggested that the recognition of facial identity is based on a view-invariant representation that receives convergent input from relatively viewpoint-specific representations (Bruce and Young, 1986; but see Tarr and Bulthoff, 1998; Burton et al., 1999). Understanding how the brain generates this viewpoint invariant representation is central to understanding how we recognize faces.

Neurophysiological studies have shown that neurons in the temporal lobe can be selective for different facial viewpoints (Perrett et al., 1991). This led to the idea that recognition is initially based on multiple viewpoint-specific representations that are a precursor to viewpoint-invariant representations of identity (Perrett et al., 1998). However, these studies also reported a subpopulation of neurons that showed bimodal responses in which there was selectivity to two different viewpoints, typically symmetrical viewpoints. More recently, Freiwald and Tsao (2010) and Dubois et al. (2015) used fMRI in combination with single-neuron recording in different face regions of the monkey temporal lobe. They found that in the most posterior face regions (ML/MF), neurons were selective for the viewpoint of the face. However, a more anterior face patch (AL) contained neurons that showed mirror-symmetric tuning for viewpoint. The most anterior region (AM) contained view-invariant neurons.

Neuroimaging studies have also found evidence for the representation of viewpoint symmetry in face-selective regions of the human brain. Early studies found viewpoint-selective responses to unfamiliar faces in face-selective regions (Grill-Spector et al., 1999; Andrews and Ewbank, 2004; Fang et al., 2007; Carlin et al., 2011; Guntupalli et al., 2017; Weibert et al., 2018), with partial view-invariance (20–30°) for familiar faces (Eger et al., 2005; Pourtois et al., 2005; Ewbank and Andrews, 2008). More recently, a number of studies have found selectivity to mirror-symmetric viewpoints in face-selective regions (Axelrod and Yovel, 2012; Kietzmann et al., 2012, 2015, 2017; Guntupalli et al., 2017). These studies found that the pattern of response in face regions was more similar for symmetrical views of the face compared with nonsymmetrical views. However, the existence of mirror-symmetric representations in face regions has been challenged by reports maintaining that patterns of response can be better explained by view-dependent representations (Ramirez et al., 2014, Ramírez, 2018).

The existence and location of mirror-symmetric representations of faces is important, because they are often thought to form a key computational step in the generation of viewpoint-invariant representations (Freiwald and Tsao, 2010; Axelrod and Yovel, 2012; Kietzmann et al., 2012). The aim of this study was to determine where mirror-symmetric representations exist and whether there is a direct link with the perception and recognition of faces. Previous behavioral studies using face matching tasks have found better performance on face image pairs showing symmetrical compared with nonsymmetrical viewpoints (Troje and Bülthoff, 1998; Busey and Zaki, 2004). Here, we used perceptual matching and face-learning paradigms to ask whether mirror-symmetric representations in face-selective regions can be predicted by performance on such behavioral tasks. A key feature of our study is the use of real human faces, as opposed to computer generated faces. Human faces are not perfectly mirror symmetrical, so it is important to use real human faces to determine whether the brain represents symmetry in the real world and whether these representations are important for the perception and recognition of identity.

Materials and Methods

Participants

All participants had normal or corrected-to-normal vision with no history of neurological conditions. Twenty right-handed participants (10 female, mean age 25.3 ± 3.1) took part in Experiment 1. Twenty participants took part in Experiment 2 (12 female, mean age 24.2 ± 3.6). Forty-eight participants took part in Experiment 3 (37 female, mean age 23.2 ± 5.2). Written consent was obtained from all participants and the studies were approved by the York Neuroimaging Centre Research Ethics Committee (Experiment 1), the Department of Psychology Ethics Committee at the University of York (Experiment 2) and Durham University (Experiment 3). All experiments conformed to the principles of the Declaration of Helsinki.

Experiment 1

Experimental design.

Face images were taken from the Radboud Faces Database (Langner et al., 2010). There were five stimulus conditions, presented in a block design: (1) right profile (−90°), (2) right ¾ profile (−45°), (3) front view (0°), (4) left ¾ profile (45°), and (5) left profile (90°). These viewpoints were shown across five different identities (Fig. 1). Images were placed onto a 1/f amplitude mask to ensure that all images stimulated the same amount of the visual field despite changes in viewpoint.
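A 1/f amplitude mask of this kind can be approximated by pairing random Fourier phases with a 1/f amplitude spectrum; the sketch below is an illustrative reconstruction (the exact mask-generation procedure is not specified in the text):

```python
import numpy as np

def one_over_f_background(size=256, seed=0):
    """Grayscale background with a 1/f amplitude spectrum, built from random
    Fourier phases (an assumed construction; the text does not specify one)."""
    rng = np.random.default_rng(seed)
    fy = np.fft.fftfreq(size)[:, None]
    fx = np.fft.fftfreq(size)[None, :]
    f = np.sqrt(fx ** 2 + fy ** 2)
    f[0, 0] = 1.0                      # avoid division by zero at DC
    spectrum = (1.0 / f) * np.exp(1j * rng.uniform(0, 2 * np.pi, (size, size)))
    img = np.real(np.fft.ifft2(spectrum))
    return (img - img.min()) / (img.max() - img.min())   # rescale to [0, 1]
```

Each face image would then be composited onto such a background so that every viewpoint stimulates the same extent of the visual field.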

Figure 1.

Examples of stimuli from Experiment 1. Each column shows the sequence of images in a representative stimulus block for the different conditions. Within each block the viewpoint remained the same, with the identity varying across images.

Images from each viewpoint condition were presented in a blocked design. Each block contained five images (Fig. 1, columns), with each image presented for 1 s followed by a 200 ms gray screen. There was a 9 s inter-block period during which a gray fixation screen was presented. Each viewpoint condition was repeated six times, giving a total of 30 blocks. The order of identities in each block was randomized across blocks. To ensure participants were paying attention during the scan, they were required to monitor the images and indicate using a response box when they saw a red dot superimposed onto one of the images. Accuracy on this task was very high (99.9 ± 0.5%).

Imaging parameters.

Data for Experiment 1 were collected using a GE 3 Tesla HD Excite MRI system with an eight channel phased array head coil tuned to 127.4 MHz. A gradient echo planar imaging (EPI) sequence was used to acquire the data. The acquisition parameters were as follows: 38 contiguous axial slices, repetition time: 3 s, echo time: 32.5 ms, flip angle: 90°, field-of-view: 28.8 × 28.8 cm, matrix: 128 × 128, slice-thickness: 3 mm, voxel size: 2.25 × 2.25 × 3 mm. To improve registration, the EPI image was coregistered with a T1-weighted image taken in the same plane, before being registered to the high-resolution main structural scan (T1-weighted, 1.13 × 1.13 × 1 mm) of each participant. This was then coregistered to the standard MNI 152 brain.

fMRI analysis.

Our main analysis focused on face-selective regions [fusiform face area (FFA), occipital face area (OFA), superior temporal sulcus (STS), inferior frontal gyrus (IFG); amygdala (AMG)]. There were two important principles underlying the way in which we defined the face-selective regions-of-interest (ROIs). The first principle was that ROIs should be based on independent data. Given that we were investigating the reliability of patterns of response across individuals, it was essential that these came from independent participants. The second principle was that ROIs must be of the same size (number of voxels), to allow the MVPA analyses to have comparable potential power to detect underlying patterns of response in each region.

An independent localizer scan was therefore used to define group-level ROIs using different participants (n = 83). Responses to faces that varied in identity and viewpoint were compared with the response to scrambled faces. ROIs comprised the 500 most significant voxels in the OFA, FFA, and STS (Sormaz et al., 2016). Our analysis was supplemented by ROIs based on the probabilistic visual field maps developed by Wang et al. (2015). Our rationale for using these masks was to determine how the representation of face viewpoint changes from early to higher levels of the visual system. The size of each region in our analysis is shown in Table 1-2.

Pattern analyses were performed using the PyMVPA toolbox (http://www.pymvpa.org/; Hanke et al., 2009). Parameter estimates from a univariate analysis of the main experiment were first normalized by subtracting the average response across the five viewpoint conditions (−90, −45, 0, 45, 90°). The reliabilities of the neural patterns of response were then determined using a modified form of the correlation-based MVPA method devised by Haxby et al. (2001), whereby patterns of response from each participant were compared with the patterns resulting from the group analysis with that participant left out. This leave one participant out (LOPO) method allowed us to determine the consistency of the patterns of response across participants by measuring how similar each participant's responses were to those for the rest of the group. This method has been successfully used in several recent studies from our research group (Rice et al., 2014; Watson et al., 2014; Coggan et al., 2016; Weibert et al., 2018). The group pattern was derived by entering all but one of the participants' data into a higher-level group analysis (mixed effects; FLAME, http://www.fmrib.ox.ac.uk/fsl). This group pattern of response for each condition was then correlated with the pattern from the participant who was omitted from the group. For each unique pair of conditions, the LOPO method was repeated 20 times, with a different participant being omitted from the rest of the group each time. A Fisher's Z-transformation was then applied to the correlations before statistical analysis.
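The LOPO procedure can be sketched as follows; note that this sketch uses a simple rest-of-group mean as a stand-in for the FLAME mixed-effects group analysis used in the actual pipeline, and the array layout is illustrative:

```python
import numpy as np

def lopo_correlations(patterns):
    """patterns: (n_subjects, n_conditions, n_voxels) normalized parameter
    estimates (mean across conditions already subtracted).  For each left-out
    subject, correlate their condition patterns with the rest-of-group
    patterns and Fisher z-transform the result."""
    n_subj, n_cond, _ = patterns.shape
    z = np.zeros((n_subj, n_cond, n_cond))
    for s in range(n_subj):
        # Rest-of-group pattern (a simple mean stands in for the FLAME
        # mixed-effects group analysis used in the paper).
        group = patterns[np.arange(n_subj) != s].mean(axis=0)
        for i in range(n_cond):
            for j in range(n_cond):
                r = np.corrcoef(patterns[s, i], group[j])[0, 1]
                z[s, i, j] = np.arctanh(r)   # Fisher's z-transform
    return z
```

The diagonal of each resulting matrix holds the within-condition correlations and the off-diagonal cells the between-condition correlations used in the subsequent tests.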

To assess whether there were distinct patterns of response to individual viewpoint directions, paired t tests were used to test the difference between the average within-condition (e.g., −90 vs −90°, −45 vs −45°) and the corresponding between-condition correlations. If a viewpoint evoked a distinct pattern of response, then the within-condition correlations for the individual participant and rest of the group data should be higher than the between-condition correlations in the given region.

Next, a representational similarity analysis (Kriegeskorte et al., 2008) was performed to determine how information about viewpoint was represented. We compared the fit achieved by three models derived from different theoretical perspectives: (1) Viewpoint, (2) Direction, and (3) Symmetry. In the Viewpoint model, the value of each cell was proportional to the difference in rotation between viewpoints. In the Direction model, cells involving combinations of viewpoints with the same direction (both left-facing or both right-facing) were given a value of 1, whereas all other cells were coded 0. In the Symmetry model, cells representing symmetrical viewpoints were given a value of 1 and nonsymmetrical viewpoints a value of 0. To prevent differences in the overall magnitude of within-condition and between-condition correlations from artificially inflating differences in correlations between matrices, the analysis was performed only on the between-condition comparisons. All models were normalized using a Z-transform (mean = 0, SD = 1) and then used in a linear regression analysis, with the outcomes defined as the correlation matrices obtained from the MVPA concatenated across LOPO iterations. For each model, elements within the matrix were extracted and flattened to a vector, which was then repeated and tiled to match the number of participants. For each participant, the correlation matrix was likewise flattened to a vector; these vectors were concatenated and entered into the model as the outcome variable. This analysis yielded a regression coefficient and an error that reflected variance across participants. All regression analyses included a constant term. From this analysis, it was possible to determine the relative fit of each model in each ROI.
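A minimal sketch of the three model matrices and a per-matrix regression, restricted to the between-condition cells; the exact codings (e.g., expressing the Viewpoint model as predicted similarity rather than dissimilarity) are assumptions:

```python
import numpy as np
from numpy.linalg import lstsq

angles = np.array([-90, -45, 0, 45, 90])
iu = np.triu_indices(len(angles), k=1)   # between-condition cells only

# Assumed model codings; Viewpoint is coded as predicted similarity,
# hence the minus sign on the angular difference.
viewpoint = -np.abs(angles[:, None] - angles[None, :])
direction = ((np.sign(angles)[:, None] == np.sign(angles)[None, :])
             & (angles[:, None] != 0) & (angles[None, :] != 0)).astype(float)
symmetry = (angles[:, None] == -angles[None, :]).astype(float)
np.fill_diagonal(symmetry, 0)            # the 0 vs 0 cell is not "symmetric"

def zscore(v):
    return (v - v.mean()) / v.std()

models = {name: zscore(m[iu]) for name, m in
          [("Viewpoint", viewpoint), ("Direction", direction), ("Symmetry", symmetry)]}

def model_fit(z_matrix, model_vec):
    """Regress the between-condition Fisher-z correlations of one matrix on a
    single z-scored model vector (constant term included); return the slope."""
    y = z_matrix[iu]
    X = np.column_stack([np.ones_like(model_vec), model_vec])
    beta, *_ = lstsq(X, y, rcond=None)
    return beta[1]
```

In the full analysis the outcome vector would be the LOPO correlation matrices concatenated across participants rather than a single matrix.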

Experiment 2

Stimuli and experimental design.

To determine whether symmetrical viewpoints were seen as being more similar than nonsymmetrical viewpoints, we conducted a behavioral study in which participants rated the perceptual similarity of pairs of images that varied in viewpoint. Stimuli consisted of the same greyscale images used in Experiment 1. Images were presented in pairs, with the identity across the two images remaining the same, but the viewpoint changing. Images were presented sequentially, with the first image presented for 1 s, followed by a 200 ms ISI and then the second image. Each viewpoint was paired with every other viewpoint, in both the first and second position. For each identity, there were 2 trials for each of the 10 viewpoint combinations. This was repeated for each of the 5 identities, giving a total of 10 trials for each of the 10 viewpoint combinations. The order of trials was randomized for each participant. Participants responded with a button press indicating how similar they perceived the images to be, on a scale of 1–7 (1 being less similar and 7 being more similar). Participants were given unlimited time to respond. The perceptual similarity of symmetric and nonsymmetric viewpoint pairs was compared using a paired t test. The perceptual similarity between different viewpoints was then used as a model in a regression analysis of the fMRI data from different regions.
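The symmetric-versus-nonsymmetric comparison can be sketched as below, assuming per-participant 5 × 5 rating matrices (function and variable names are illustrative):

```python
import numpy as np
from scipy.stats import ttest_rel

angles = np.array([-90, -45, 0, 45, 90])
iu = np.triu_indices(len(angles), k=1)
sym_pairs = (angles[:, None] == -angles[None, :])[iu]  # (-90, 90) and (-45, 45)

def symmetry_contrast(ratings):
    """ratings: (n_participants, 5, 5) mean similarity rating for each
    viewpoint pair.  Averages the symmetric and nonsymmetric pair cells per
    participant and compares them with a paired t test."""
    pairs = np.stack([r[iu] for r in ratings])
    sym = pairs[:, sym_pairs].mean(axis=1)
    nonsym = pairs[:, ~sym_pairs].mean(axis=1)
    return ttest_rel(sym, nonsym)
```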

Experiment 3

Stimuli and experimental design.

Experiment 3 used a face-identity learning paradigm (see Longmore, Liu, and Young, 2008) to determine whether a face learnt in one viewpoint conveyed an advantage in the recognition of the symmetrical viewpoint. Faces from the Radboud database were again used in this experiment. There were 20 male identities each posing a neutral facial expression at the following viewpoints: −90°, −45°, 0°, 45°, 90°. The 20 identities were split into two sets. Participants were randomly assigned to Set 1 or Set 2. Within each set, each of the 10 identities was assigned to one of the five viewpoints. The assignment of identities to viewpoints was randomized for each participant. This generated 10 face images: 2 images for each viewpoint.

In the learning phase, participants were presented with these 10 face images. The faces were presented sequentially, with each face being presented for 5 s, with a 500 ms interstimulus interval (ISI) between each trial. Underneath each face was a first name. Names were randomly assigned to faces for each participant from a set of 10 names. These names were chosen to be short and common in the United Kingdom, consisting of one or two syllables and three or four letters, e.g., Paul, Tim. Participants were instructed to remember the face and its corresponding name.

In the training phase, the 10 faces were split into two blocks of 5 faces. Participants were presented with the first block of five faces. These faces were presented individually, and for an unlimited time. Participants were asked to pick the name that they believed belonged to the face. The five name options were displayed below the face and participants had to use the mouse to click on the name they thought matched the face. The order of the names was random for each participant. Once a response had been recorded, participants were given feedback to indicate whether they were correct or not. If they were incorrect, they were told what the correct name was for the target face. This feedback was provided to aid and reinforce the learning in this training phase. To move to the next block of five faces, the participants had to correctly name each face twice in a row. For example, if a given face was named correctly once, and then incorrectly the next time it was presented, the correct count for this face was returned to 0 and the participant had to complete two more trials correctly in a row, to continue. Once the participant had correctly named the face twice in a row, it was removed from the block. This process was repeated with the remaining block of five faces. Next, the entire set of 10 faces was presented. In this final block of the training phase, participants had to correctly name all 10 faces twice in a row (in the same way as described above) to complete the phase.

In the final test phase of the experiment, participants were presented with all images from the set. This included the 10 images used for training and the remaining 40 images that were not used during training. Images were presented twice, giving 100 trials. The task was to match the name to the face from the 10 names displayed underneath the face. Feedback was not given in this phase of the experiment. For each identity, the aim was to determine whether the (untrained) face images that were symmetrical to the trained view were identified correctly more often than the (untrained) face images that were not symmetrical to the trained view. If participants had correctly learned an image in the learning and training phases, they should then be able to correctly recognize the same image at this final stage (e.g., when learned at 45° and tested at 45°). For this reason, only identities that were correctly recognized 100% of the time in the test stage when tested in the same viewpoint as they were learned were retained for analysis. The recognition of symmetric and nonsymmetric viewpoints was compared using a paired t test. The recognition rate between different viewpoints was then used as a model in a regression analysis of the fMRI data in different regions.
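The inclusion criterion and the symmetric/nonsymmetric split for a single identity can be sketched as follows (names and data layout are hypothetical):

```python
import numpy as np

def recognition_by_symmetry(acc, learned_view, angles=(-90, -45, 0, 45, 90)):
    """acc: mapping from test viewpoint to proportion correct for one
    identity; learned_view: viewpoint the identity was trained at.  Returns
    (symmetric_acc, nonsymmetric_acc) over the untrained views, or None if
    the identity misses the 100%-correct criterion at the trained view."""
    if acc[learned_view] < 1.0:
        return None                               # trained view must be at ceiling
    sym = [a for a in angles if a == -learned_view and a != learned_view]
    nonsym = [a for a in angles if a != learned_view and a not in sym]
    sym_acc = float(np.mean([acc[a] for a in sym])) if sym else float("nan")
    nonsym_acc = float(np.mean([acc[a] for a in nonsym]))
    return sym_acc, nonsym_acc
```

A face trained at the front view (0°) has no distinct symmetric counterpart, so the symmetric cell is undefined for that condition.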

Results

Experiment 1

Figure 2 shows the results of the MVPA analysis demonstrating the similarity in the patterns of response to different viewpoint directions in the (A) face and (B) visual field regions. To determine whether there were distinct patterns of response to individual viewpoint directions, within-viewpoint (e.g., −90, −90°) correlations were compared with between-viewpoint (e.g., −90, −45°) correlations.

Figure 2.

Correlation matrices showing the similarity in the patterns of response across viewpoints in (A) face-selective and (B) visual field regions. C, Distinct patterns of response were demonstrated by higher within-viewpoint correlations compared with between-viewpoint correlations. ***p < 0.001, **p < 0.01, *p < 0.05.

There was a significant difference between within-viewpoint and between-viewpoint correlations in all face regions except the AMG and IFG (Fig. 2C; Table 1). To determine whether the size of the face regions was important, we repeated the analysis with smaller (200 voxel) masks and found a similar pattern of results (Table 1-1). There was also a significant difference between within-viewpoint and between-viewpoint correlations across many visual field areas. This overall pattern demonstrates that there are distinct representations of particular face viewpoints across visual cortex.

Table 1.

Within-viewpoint and between-viewpoint correlations and associated paired t test across all ROIs

Table 1-1

Within-viewpoint and between-viewpoint correlations and associated paired t tests across all face regions defined with 200 voxel masks.

Table 1-2

Total number of voxels for each region of interest. Voxel size = 2 × 2 × 2 mm.

Next, we asked how similar the pattern of response to viewpoint was across all the ROIs by comparing the neural correlation matrices in Figure 2A,B. Figure 3A shows the similarity in the representation of viewpoint across all regions. To determine the pattern of similarity across regions, a hierarchical clustering analysis was performed using an unweighted average distance method for computing the distance between clusters, with "1 − correlation" values as the distance metric (Fig. 3B). The distinct clusters in the output indicate that the way viewpoint is represented differs between regions.
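This clustering can be reproduced with average (UPGMA) linkage on 1 − correlation distances, for example:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def cluster_regions(region_corr):
    """region_corr: symmetric (n_regions, n_regions) matrix of correlations
    between the viewpoint similarity matrices of each region.  Clusters with
    unweighted-average (UPGMA) linkage on 1 - correlation distances."""
    dist = 1.0 - region_corr
    np.fill_diagonal(dist, 0.0)
    # squareform converts the full distance matrix to condensed form.
    return linkage(squareform(dist, checks=False), method="average")
```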

Figure 3.

A, Representational similarity matrix showing the similarity in the neural representations across regions. B, Hierarchical clustering analysis showing regions with similar patterns of response to face viewpoint.

To determine how viewpoint is represented in different regions, our next analysis investigated how three different models of viewpoint representation were able to predict patterns of response. Figure 4 and Table 2 show the models for each representation and the corresponding regression coefficient for each region. To determine whether the size of the face regions was important, we repeated the analysis with smaller (200 voxel) masks and found a similar pattern of results (Table 2-1). The analysis was also repeated across all regions with multiple regression (Table 2-2) and using a permutation test for statistical significance (Table 2-3).

Figure 4.

Regression analysis of fMRI data showing how different models predict patterns of response to viewpoint in different regions. A, The Viewpoint and (B) Direction models predict the representational similarity in low-level visual areas. C, In contrast, the symmetry model predicted patterns in high-level regions including the FFA and STS. ***p < 0.001, *p < 0.05.

Table 2.

Regression coefficients for the viewpoint representation models across all ROIs

Table 2-1

Regression coefficients for the viewpoint representation models across all face regions defined with 200 voxel masks.

Table 2-2

Regression coefficients for the viewpoint representation models using multiple regression across all ROIs.

Table 2-3

Permutation analysis of the viewpoint representation models' ability to predict neural responses across ROIs. Permutation p-values have been corrected for multiple comparisons using the Bonferroni-Holm correction across ROIs. Critical values represent the 95th percentile of the absolute permuted null distribution.

The Viewpoint and Direction models (Fig. 4A,B) showed a similar pattern with high coefficients in the early visual field regions (V1–V4) and in some of the ventral temporal visual field regions (VO1–VO2). However, the coefficient values were not significant in the lateral occipital visual field regions (LO1, LO2) and the face-selective regions. The only exception was the OFA, which had a significant regression coefficient for both Viewpoint and Direction. The Symmetry model (Fig. 4C) showed an opposite pattern of results. We found significant but negative coefficients in the early visual field regions (V1–V4) and in some of the ventral visual field regions (VO1–VO2). In contrast, there were significant positive coefficients in other ventral visual field regions (PH1–PH2) and in the FFA and STS. The OFA did not show a significant effect for symmetry. The AMG and IFG did not show significant coefficients for any of the three models.

We also analyzed our data to ask whether low-level differences can account for the pattern of results. To investigate the effects of low-level image properties on patterns of neural response in face-selective regions, the image statistics of each image were computed using the GIST descriptor (Oliva and Torralba, 2001). For each image, a vector of 2048 values was obtained by passing the image through a series of 32 Gabor filters (8 orientations at 4 spatial frequencies) and windowing the filtered images over an 8 × 8 grid of 64 spatial locations. Each vector represents the image in terms of the output of each Gabor filter at each position across the image (Rice et al., 2014; Watson et al., 2014, 2016). Image similarities between conditions were measured by correlating the GIST descriptors for all combinations of images. The similarity matrix of the correlation values for the GIST descriptor across all pairwise combinations of conditions was then used as a regressor in a regression analysis with the fMRI data. Table 3 shows that, consistent with previous studies (Rice et al., 2014; Watson et al., 2016; Weibert et al., 2018), low-level stimulus properties can account for some of the variance in the patterns of response in early visual field areas, but also in higher visual areas such as the core face regions (OFA, FFA, STS).
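A simplified, Fourier-domain stand-in for this computation (not the original Oliva and Torralba implementation; the filter parameters here are illustrative) might look like:

```python
import numpy as np

def gist_like(img, n_orient=8, n_scales=4, grid=8):
    """Simplified stand-in for the GIST descriptor: oriented band-pass
    filtering in the Fourier domain (8 orientations x 4 scales = 32 filters),
    with filter energy averaged over an 8 x 8 spatial grid -> 2048 values."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)
    theta = np.arctan2(fy, fx)
    F = np.fft.fft2(img - img.mean())
    feats = []
    for s in range(n_scales):
        f0 = 0.25 / (2 ** s)                      # center frequency per scale
        band = np.exp(-((radius - f0) ** 2) / (2 * (f0 / 2) ** 2))
        for o in range(n_orient):
            ang = np.pi * o / n_orient
            orient = np.cos(theta - ang) ** 2     # broad orientation tuning
            filtered = np.abs(np.fft.ifft2(F * band * orient))
            # Average filter energy within each cell of the spatial grid.
            cells = filtered.reshape(grid, h // grid, grid, w // grid).mean(axis=(1, 3))
            feats.append(cells.ravel())
    return np.concatenate(feats)
```

Descriptors for two images would then be compared with `np.corrcoef`, and the resulting condition-by-condition similarity matrix entered into the regression in place of a theoretical model.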

Table 3.

Regression coefficients for a model of low-level image properties across ROIs

Experiment 2

Experiment 2 aimed to assess the perceptual experience of facial viewpoint symmetry. Participants were presented with pairs of faces showing different viewpoints and were asked to rate how similar they believed the images were on a scale of 1–7 (1, less similar; 7, more similar). A similarity matrix for each of the viewpoint combinations can be seen in Figure 5A. To assess whether participants rated symmetrical viewpoints as more similar than nonsymmetrical viewpoints, data were averaged across symmetrical and nonsymmetrical pairings. Participants rated symmetrical viewpoints as more similar than nonsymmetrical viewpoints (t(19) = 6.37, p < 0.001).

Figure 5.

A, Perceptual similarity ratings between viewpoint directions (left). A regression analysis using the perceptual similarity ratings as a model showed an increase in the coefficients from low-level to high-level visual regions, with the highest values in the FFA and STS. B, Recognition rates for different combinations of viewpoint during the training and test phases of the recognition experiment (left). A regression analysis using the recognition values as a model showed an increase in the coefficients from low-level to high-level regions, with the highest values in the FFA and STS. ***p < 0.001, **p < 0.01, *p < 0.05.

Next, we asked whether the pattern of responses in the perceptual similarity task could predict the patterns of response in different regions of the brain (Fig. 5A, right). Using a regression analysis with perceptual similarity as the model, we found that responses in V1 were not predicted by the pattern of perceptual similarity. However, the regression coefficients increased progressively along the visual hierarchy, with the highest values in the FFA and STS (Table 4). This shows a clear link between the perceptual similarity of different viewpoints and the pattern of response in some face regions.
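In outline, this kind of analysis regresses the unique condition-pair similarities of the neural data onto the behavioral similarity matrix. A minimal sketch with toy data (not the study's actual pipeline):

```python
import numpy as np

def model_fit(neural_sim, model_sim):
    """Regress the unique condition-pair entries of a neural similarity
    matrix onto a behavioural model matrix. After standardising both,
    the slope equals the Pearson correlation between the two matrices."""
    idx = np.tril_indices_from(neural_sim, k=-1)   # unique viewpoint pairs
    y = (neural_sim[idx] - neural_sim[idx].mean()) / neural_sim[idx].std()
    x = (model_sim[idx] - model_sim[idx].mean()) / model_sim[idx].std()
    slope, _ = np.polyfit(x, y, 1)
    return slope

# Toy check: a neural matrix that mirrors the behavioural model
# perfectly should yield a coefficient of 1.
rng = np.random.default_rng(1)
model = rng.random((5, 5))
model = (model + model.T) / 2        # symmetric similarity matrix
print(round(model_fit(model, model), 3))   # 1.0
```

In the study itself, such coefficients would be computed per participant and ROI and then tested against zero across participants.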

Experiment 3

The aim of Experiment 3 was to determine whether learning a face at a particular viewpoint confers an advantage in recognizing the symmetrical viewpoint. Figure 5B shows the percentage recognition rates for different combinations of viewpoint across the training and test phases of the experiment. To compare performance across conditions, data were averaged across symmetrical and nonsymmetrical viewpoint combinations. Participants performed significantly better when tested with a viewpoint that was symmetrical to the one they had learned (t(47) = 2.63, p = 0.012).

Next, we asked whether accuracy in the recognition task could predict the patterns of response in different regions of the brain (Fig. 5B, right). Using a regression analysis with recognition accuracy as the model, we found no significant coefficients in early visual areas; only LO2, the FFA, and the STS showed significant positive regression coefficients (Table 4). This demonstrates a link between behavioral performance on a face-learning task and patterns of response in face-selective regions, such as the FFA.

To determine whether the size of the face regions mattered, we repeated the analyses of Experiments 2 and 3 with smaller (200 voxel) masks and found a similar pattern of results (Table 4 and Table 4-1). The analysis was also repeated across all regions using multiple regression (Table 4-2). The multiple regression found significant effects of Similarity across many visual areas, with the largest coefficients in the FFA and STS. However, we did not find any additional benefit of the Recognition model. This is likely explained by the similarity between the two models, which correlate at r = 0.53. Finally, we reanalyzed the data using a permutation test for statistical significance (Table 4-3).
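A permutation test of this kind can be sketched as follows: the condition labels of the model matrix are shuffled (rows and columns together) to build a null distribution for the observed model-neural correlation. The number of permutations and the two-tailed counting rule below are illustrative assumptions, not the study's exact procedure:

```python
import numpy as np

def permutation_p(neural_sim, model_sim, n_perm=1000, seed=0):
    """Two-tailed permutation p-value for the correlation between a neural
    and a model similarity matrix: shuffle the condition labels of the
    model (rows and columns together) to form a null distribution."""
    rng = np.random.default_rng(seed)
    idx = np.tril_indices_from(neural_sim, k=-1)   # unique condition pairs
    corr = lambda m: np.corrcoef(neural_sim[idx], m[idx])[0, 1]
    observed = corr(model_sim)
    n = len(model_sim)
    null = np.array([corr(model_sim[np.ix_(p, p)])
                     for p in (rng.permutation(n) for _ in range(n_perm))])
    # include the observed statistic in the null to avoid p = 0
    return (np.sum(np.abs(null) >= abs(observed)) + 1) / (n_perm + 1)

# A model identical to the neural matrix should survive permutation.
rng = np.random.default_rng(2)
sim = rng.random((9, 9))
sim = (sim + sim.T) / 2
print(permutation_p(sim, sim) < 0.05)   # True
```

Across ROIs, the resulting p-values would then be corrected for multiple comparisons (e.g., Bonferroni-Holm, as in Table 4-3).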

Table 4. Regression coefficients demonstrating the ability of the behavioral data from Experiment 2 (Similarity ratings) and Experiment 3 (Recognition) in predicting the neural responses across all ROIs

Table 4-1. Regression coefficients demonstrating the ability of the behavioral data from Experiment 2 (Similarity ratings) and Experiment 3 (Recognition) in predicting the neural responses across all face regions defined with 200 voxel masks.

Table 4-2. Regression coefficients demonstrating the ability of the behavioral data from Experiment 2 (Similarity ratings) and Experiment 3 (Recognition) in predicting the neural responses using multiple regression across all ROIs.

Table 4-3. Permutation analysis for simple linear regression demonstrating the ability of the behavioral data from Experiments 2 and 3 in predicting the neural responses across all ROIs. Permutation p-values were corrected for multiple comparisons using the Bonferroni-Holm correction across ROIs. Critical values represent the 95th percentile of the absolute permuted null distribution.

Discussion

The aim of this study was to investigate responses to symmetric views of real human faces in face-selective regions of the human brain. A viewpoint-symmetric representation was found in the FFA and STS, but not in the OFA (Experiment 1). To determine whether a symmetrical representation of viewpoint can convey an advantage in the perception and recognition of faces, participants performed a perceptual similarity task and a face recognition task. We found that symmetric viewpoints were perceived to be more similar than nonsymmetric viewpoints (Experiment 2) and that identities learnt at a particular viewpoint were more accurately recognized at the symmetrical viewpoint compared with nonsymmetrical viewpoints (Experiment 3). Critically, these behavioral judgements of symmetry and recognition across different views were able to predict patterns of response in face-selective brain regions.

Previous neurophysiological studies have found a large number of neurons with maximal responses to specific viewpoints (Perrett et al., 1991, 1998; Freiwald and Tsao, 2010; Dubois et al., 2015). We also found distinct patterns of response to viewpoint throughout visual cortex. Our findings are therefore consistent with previous neuroimaging studies that have also found distinct patterns of response to specific viewpoints (Carlin et al., 2011; Axelrod and Yovel, 2012; Kietzmann et al., 2012; Ramirez et al., 2014; Dubois et al., 2015; Guntupalli et al., 2017). These neural responses to viewpoint are also consistent with behavioral studies that have shown the importance of viewpoint-selective representations in the perception and recognition of unfamiliar faces (Bruce, 1982; Hill and Bruce, 1996; Fang and He, 2005; Longmore et al., 2008).

This widespread evidence of viewpoint selectivity to face images across the visual cortex provides a challenge to understanding how invariant representations of facial identity are achieved (Perrett et al., 1998). One possibility is that different viewpoint-selective units converge to generate invariant units of facial identity, similar to the face recognition units suggested by cognitive models of face recognition (Bruce and Young, 1986; Burton et al., 1999). However, the discovery of neurons that are tuned to symmetric viewpoints of the face has led to the idea that these may provide an important intermediate computational step before full invariance is achieved (Freiwald and Tsao, 2010), or at least facilitate the process. Evidence that these viewpoint-symmetric representations are found in the human brain comes from neuroimaging studies, which have found that the patterns of response in face regions to symmetrical viewpoints are more similar than to nonsymmetrical viewpoints (Axelrod and Yovel, 2012; Kietzmann et al., 2012, 2017; Guntupalli et al., 2017). However, there has been some inconsistency in the literature about which regions show a symmetrical representation of faces. Kietzmann et al. (2012, 2017) found viewpoint symmetry represented in the OFA and FFA. However, other studies report symmetrical representations in the FFA and STS, but not in the OFA, leading them to conclude that there is a hierarchical processing of face viewpoint in face regions (Axelrod and Yovel, 2012; Guntupalli et al., 2017). Finally, Ramírez et al. (2014, 2018) have argued that symmetry responses in the FFA could be better explained by a view-dependent mechanism. In contrast to the current and previous studies, however, they presented faces in the periphery to test the invariance of viewpoint symmetry. So it is possible that a lack of position invariance could explain the difference in results across studies (Kietzmann et al., 2017).
Another possible reason for the difference in results could be the methodological choices in MVPA. However, view symmetry in faces has been shown across different studies that have used a range of MVPA approaches (Axelrod and Yovel, 2012; Kietzmann et al., 2012, 2017; Guntupalli et al., 2017).

To determine where viewpoint symmetry is represented, we compared how three different models (Viewpoint, Direction, and Symmetry) predicted patterns of neural response in different regions of visual cortex. The predictions of the Viewpoint model were based exclusively on the angular separation between the different viewpoints, the Direction model coded whether combinations of viewpoints were both left-facing or both right-facing, and the Symmetry model explicitly allowed different symmetric orientations (e.g., −45° and +45°, or −90° and +90°) to be represented as similar to each other regardless of the angular separations themselves (which are 180° for the −90° and +90° images but only 90° for the −45° and +45° images).
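For concreteness, predicted-similarity matrices of this kind can be constructed directly. The sketch below assumes five viewpoints in 45° steps (−90°, −45°, 0°, +45°, +90°), which matches the symmetric pairs named above but is otherwise an assumption about the stimulus set:

```python
import numpy as np

# Assumed viewpoint conditions in degrees (0 = front-facing)
views = np.array([-90, -45, 0, 45, 90])

# Viewpoint model: predicted similarity falls with angular separation
viewpoint = -np.abs(views[:, None] - views[None, :]).astype(float)

# Direction model: similar only when both views face the same way
sign = np.sign(views)
direction = ((sign[:, None] == sign[None, :]) &
             (sign[:, None] != 0)).astype(float)

# Symmetry model: mirror pairs (-45/+45, -90/+90) predicted similar,
# regardless of their angular separation
symmetry = (views[:, None] == -views[None, :]).astype(float)
np.fill_diagonal(symmetry, 1.0)      # each view is similar to itself

print(symmetry.astype(int))
```

Each model matrix would then serve as a regressor against the neural similarity matrix of each ROI, as in the analyses reported above.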

We found that the Viewpoint and Direction models best predicted responses in early visual cortex (V1–V4) and the OFA, but showed a gradual decline in high-level regions and were not able to explain the patterns in the FFA and STS. In contrast, the patterns of response in the FFA and STS (but not the OFA) were best predicted by the Symmetry model. These findings are consistent with a hierarchical organization of viewpoint responsiveness across visual regions in which more posterior regions have view-dependent representations, but more anterior regions (including classic face-selective regions) are sensitive to viewpoint symmetry (Freiwald and Tsao, 2010; Axelrod and Yovel, 2012; Kietzmann et al., 2012, 2017; Guntupalli et al., 2017).

Although our results provide further support for the representation of viewpoint symmetry in face-selective neural regions, such as the FFA, it has not been clear whether these representations are important for the perception and recognition of faces. That is, the link between neural and behavioral measures has not been investigated directly. To address this issue, we compared symmetrical and nonsymmetrical views in a perceptual similarity task and a recognition task. We found that symmetrical views of the face are perceived to be more similar than nonsymmetrical views. Similarly, novel face images that were symmetric to a learned face view were recognized better than nonsymmetric face views. These results are consistent with previous studies that have shown a behavioral advantage for symmetric compared with nonsymmetric viewpoints (Troje and Bülthoff, 1998; Busey and Zaki, 2004). To investigate the link between these behavioral effects and neural responses, we then used the data from our behavioral tasks to predict patterns of response across visual cortex. Both the perceptual similarity and recognition data showed a progressive increase in their ability to predict neural responses from low-level to high-level regions. Patterns of response in face regions, such as the FFA and STS, were predicted best by performance on both tasks. This provides the first evidence for a close link between symmetrical representations in the brain and a behavioral advantage in the recognition of faces.

A crucial difference between this study and many previous studies of symmetry is the use of real faces. Many previous studies have used computer-generated faces that are often themselves mirror-symmetric. However, human faces are not completely symmetrical (Fig. 1). So, if the brain uses symmetry to help achieve viewpoint invariance, it must be able to tolerate the deviations from symmetry found in real human faces. We have shown that neural and behavioral responses are able to compensate for this lack of full mirror symmetry.

Although this study has focused on face processing, a preference for bilateral symmetry is a more general property of the visual system (Bertamini et al., 2018; Keefe et al., 2018). The bias in neural processing is also evident in perceptual judgements showing that we are adept at discriminating small deviations in bilateral symmetry (Corballis and Beale, 1976; Rhodes et al., 2005) and find bilaterally symmetrical images more esthetically pleasing than nonsymmetric images (Jacobsen et al., 2006; Makin et al., 2012). Our study differs from these studies as bilateral symmetry is not evident in any one image itself. Rather, we have shown that symmetry enhances the integration over time of two images. Nevertheless, we did find that sensitivity to viewpoint symmetry was evident in regions such as PH1 and PH2. Future studies will be necessary to determine the extent to which similar neural and perceptual mechanisms are involved in both processes.

An important feature of our findings is that the spatial patterns of response to viewpoint generalized across participants. This observation complements other neuroimaging studies using univariate methods that have already shown that the locations of face-selective regions in the ventral visual pathway are broadly consistent across individuals (Kanwisher et al., 1997; Davies-Thompson and Andrews, 2012). This implies that common principles may well underpin the organization of these regions. In our analysis, we used multivariate methods to compare the spatial pattern of response in each individual with the spatial pattern from the rest of the group of participants (Poldrack et al., 2009; Rice et al., 2014; Watson et al., 2014; Coggan et al., 2016; Weibert et al., 2018). The success of this approach implies that much of the topographic pattern of response to facial viewpoint is consistent across individuals. Of course, it is possible that a finer-grained within-participant analysis could reveal more information. However, it is unclear how this could lead to a completely different pattern of response. Indeed, our results are consistent with many previous studies using within-participant analyses (Axelrod and Yovel, 2012; Kietzmann et al., 2012; Guntupalli et al., 2017). These observations are significant in that they suggest that our findings reflect the operation of large-scale organizing principles that are consistent across different individuals.

In conclusion, our results demonstrate that the spatial patterns of responses to facial viewpoint in the FFA and STS are sensitive to symmetry. A model that explicitly represented image symmetry was better able to predict patterns of response in these face regions than models based exclusively on image viewpoint or direction. We also found that symmetrical viewpoints are perceived to be more similar and are more easily recognized than nonsymmetrical viewpoints in purely behavioral tasks. Finally, we were able to establish a direct link between the neuroimaging and behavioral findings by showing that these behavioral data could predict patterns of response in face-selective regions, such as the FFA and STS. Together, these results support the idea that symmetrical representations are an important computational step in the generation of view-invariant representations of faces that are essential to familiar face recognition.

Footnotes

  • T.R.F. was supported by a studentship from the University of York. We thank the staff at the York Neuroimaging Centre for their help throughout this project, and David Watson and David Coggan for help with the analysis.

  • The authors declare no competing financial interests.

  • Correspondence should be addressed to Timothy J. Andrews at timothy.andrews@york.ac.uk

References

  1. Andrews TJ, Ewbank MP (2004) Distinct representations for facial identity and changeable aspects of faces in the human temporal lobe. Neuroimage 23:905–913. doi:10.1016/j.neuroimage.2004.07.060
  2. Axelrod V, Yovel G (2012) Hierarchical processing of face viewpoint in human visual cortex. J Neurosci 32:2442–2452. doi:10.1523/JNEUROSCI.4770-11.2012
  3. Bertamini M, Silvanto J, Norcia AM, Makin ADJ, Wagemans J (2018) The neural basis of visual symmetry and its role in middle and high-level visual processing. Ann N Y Acad Sci 1426:111–126. doi:10.1111/nyas.13667
  4. Bruce V (1982) Changing faces: visual and non-visual coding processes in face recognition. Br J Psychol 73:105–116. doi:10.1111/j.2044-8295.1982.tb01795.x
  5. Bruce V, Young AW (1986) Understanding face recognition. Br J Psychol 77:305–327. doi:10.1111/j.2044-8295.1986.tb02199.x
  6. Burton AM, Bruce V, Hancock PJ (1999) From pixels to people: a model of familiar face recognition. Cogn Sci 23:1–31. doi:10.1207/s15516709cog2301_1
  7. Busey TA, Zaki SR (2004) The contribution of symmetry and motion to the recognition of faces at novel orientations. Mem Cognit 32:916–931. doi:10.3758/BF03196870
  8. Carlin JD, Calder AJ, Kriegeskorte N, Nili H, Rowe JB (2011) A head view-invariant representation of gaze direction in anterior superior temporal sulcus. Curr Biol 21:1817–1821. doi:10.1016/j.cub.2011.09.025
  9. Coggan DD, Liu W, Baker DH, Andrews TJ (2016) Category-selective patterns of neural response in the ventral visual pathway in the absence of categorical information. Neuroimage 135:107–114. doi:10.1016/j.neuroimage.2016.04.060
  10. Corballis MC, Beale IL (1976) The psychology of left and right. Oxford: Lawrence Erlbaum.
  11. Davies-Thompson J, Andrews TJ (2012) Intra- and interhemispheric connectivity between face-selective regions in the human brain. J Neurophysiol 108:3087–3095. doi:10.1152/jn.01171.2011
  12. Dubois J, de Berker AO, Tsao DY (2015) Single-unit recordings in the macaque face patch system reveal limitations of fMRI MVPA. J Neurosci 35:2791–2802. doi:10.1523/JNEUROSCI.4037-14.2015
  13. Eger E, Schweinberger SR, Dolan RJ, Henson RN (2005) Familiarity enhances invariance of face representations in human ventral visual cortex: fMRI evidence. Neuroimage 26:1128–1139. doi:10.1016/j.neuroimage.2005.03.010
  14. Ewbank MP, Andrews TJ (2008) Differential sensitivity for viewpoint between familiar and unfamiliar faces in human visual cortex. Neuroimage 40:1857–1870. doi:10.1016/j.neuroimage.2008.01.049
  15. Fang F, He S (2005) Viewer-centered object representation in the human visual system revealed by viewpoint aftereffects. Neuron 45:793–800. doi:10.1016/j.neuron.2005.01.037
  16. Fang F, Murray SO, He S (2007) Duration-dependent fMRI adaptation and distributed viewer-centered face representation in human visual cortex. Cereb Cortex 17:1402–1411. doi:10.1093/cercor/bhl053
  17. Freiwald WA, Tsao DY (2010) Functional compartmentalization and viewpoint generalization within the macaque face-processing system. Science 330:845–851. doi:10.1126/science.1194908
  18. Grill-Spector K, Kushnir T, Edelman S, Avidan G, Itzchak Y, Malach R (1999) Differential processing of objects under various viewing conditions in the human lateral occipital complex. Neuron 24:187–203. doi:10.1016/S0896-6273(00)80832-6
  19. Guntupalli JS, Wheeler KG, Gobbini MI (2017) Disentangling the representation of identity from head view along the human face processing pathway. Cereb Cortex 27:46–53. doi:10.1093/cercor/bhw344
  20. Hancock PJ, Bruce V, Burton AM (2000) Recognition of unfamiliar faces. Trends Cogn Sci 4:330–337. doi:10.1016/S1364-6613(00)01519-9
  21. Hanke M, Halchenko YO, Sederberg PB, Hanson SJ, Haxby JV, Pollmann S (2009) PyMVPA: a Python toolbox for multivariate pattern analysis of fMRI data. Neuroinformatics 7:37–53. doi:10.1007/s12021-008-9041-y
  22. Haxby JV, Gobbini MI, Furey ML, Ishai A, Schouten JL, Pietrini P (2001) Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science 293:2425–2430. doi:10.1126/science.1063736
  23. Hill H, Bruce V (1996) Effects of lighting on the perception of facial surfaces. J Exp Psychol Hum Percept Perform 22:986–1004. doi:10.1037/0096-1523.22.4.986
  24. Jacobsen T, Schubotz RI, Höfel L, Cramon DY (2006) Brain correlates of aesthetic judgment of beauty. Neuroimage 29:276–285. doi:10.1016/j.neuroimage.2005.07.010
  25. Kanwisher N, McDermott J, Chun MM (1997) The fusiform face area: a module in human extrastriate cortex specialized for face perception. J Neurosci 17:4302–4311. doi:10.1523/JNEUROSCI.17-11-04302.1997
  26. Keefe BD, Gouws AD, Sheldon AA, Vernon RJW, Lawrence SJD, McKeefry DJ, Wade AR, Morland AB (2018) Emergence of symmetry selectivity in the visual areas of the human brain: fMRI responses to symmetry presented in both frontoparallel and slanted planes. Hum Brain Mapp 39:3813–3826. doi:10.1002/hbm.24211
  27. Kietzmann TC, Swisher JD, König P, Tong F (2012) Prevalence of selectivity for mirror-symmetric views of faces in the ventral and dorsal visual pathways. J Neurosci 32:11763–11772. doi:10.1523/JNEUROSCI.0126-12.2012
  28. Kietzmann TC, Gert A, König P (2015) Representational dynamics of facial viewpoint encoding: head orientation, viewpoint symmetry, and front-on views. J Vis 15(12):750. doi:10.1167/15.12.750
  29. Kietzmann TC, Gert AL, Tong F, König P (2017) Representational dynamics of facial viewpoint encoding. J Cogn Neurosci 29:637–651. doi:10.1162/jocn_a_01070
  30. Kriegeskorte N, Mur M, Bandettini PA (2008) Representational similarity analysis: connecting the branches of systems neuroscience. Front Syst Neurosci 2:4. doi:10.3389/neuro.01.016.2008
  31. Langner O, Dotsch R, Bijlstra G, Wigboldus DHJ, Hawk ST, van Knippenberg A (2010) Presentation and validation of the Radboud Faces Database. Cogn Emot 24:1377–1388. doi:10.1080/02699930903485076
  32. Longmore CA, Liu CH, Young AW (2008) Learning faces from photographs. J Exp Psychol Hum Percept Perform 34:77–100. doi:10.1037/0096-1523.34.1.77
  33. Makin AD, Pecchinenda A, Bertamini M (2012) Implicit affective evaluation of visual symmetry. Emotion 12:1021–1030. doi:10.1037/a0026924
  34. Oliva A, Torralba A (2001) Modeling the shape of the scene: a holistic representation of the spatial envelope. Int J Comput Vis 42:145–175.
  35. Perrett DI, Oram MW, Harries MH, Bevan R, Hietanen JK, Benson PJ, Thomas S (1991) Viewer-centred and object-centred coding of heads in the macaque temporal cortex. Exp Brain Res 86:159–173.
  36. Perrett DI, Oram MW, Ashbridge E (1998) Evidence accumulation in cell populations responsive to faces: an account of generalisation of recognition without mental transformations. Cognition 67:111–145. doi:10.1016/S0010-0277(98)00015-8
  37. Poldrack RA, Halchenko YO, Hanson SJ (2009) Decoding the large-scale structure of brain function by classifying mental states across individuals. Psychol Sci 20:1364–1372. doi:10.1111/j.1467-9280.2009.02460.x
  38. Pourtois G, Schwartz S, Seghier ML, Lazeyras F, Vuilleumier P (2005) View-independent coding of face identity in frontal and temporal cortices is modulated by familiarity: an event-related fMRI study. Neuroimage 24:1214–1224. doi:10.1016/j.neuroimage.2004.10.038
  39. Ramírez FM (2018) Orientation encoding and viewpoint invariance in face recognition: inferring neural properties from large-scale signals. Neuroscientist 24:582–608. doi:10.1177/1073858418769554
  40. Ramírez FM, Cichy RM, Allefeld C, Haynes JD (2014) The neural code for face orientation in the human fusiform face area. J Neurosci 34:12155–12167. doi:10.1523/JNEUROSCI.3156-13.2014
  41. Rhodes G, Peters M, Lee K, Morrone MC, Burr D (2005) Higher-level mechanisms detect facial symmetry. Proc Biol Sci 272:1379–1384. doi:10.1098/rspb.2005.3093
  42. Rice GE, Watson DM, Hartley T, Andrews TJ (2014) Low-level image properties of visual objects predict patterns of neural response across category-selective regions of the ventral visual pathway. J Neurosci 34:8837–8844. doi:10.1523/JNEUROSCI.5265-13.2014
  43. Sormaz M, Watson DM, Smith WAP, Young AW, Andrews TJ (2016) Modelling the perceptual similarity of facial expressions from image statistics and neural responses. Neuroimage 129:64–71. doi:10.1016/j.neuroimage.2016.01.041
  44. Sutherland CA, Young AW, Rhodes G (2017) Facial first impressions from another angle: how social judgments are influenced by changeable and invariant facial properties. Br J Psychol 108:397–415. doi:10.1111/bjop.12206
  45. Tarr MJ, Bülthoff HH (1998) Image-based object recognition in man, monkey and machine. Cognition 67:1–20. doi:10.1016/s0010-0277(98)00026-2
  46. Troje NF, Bülthoff HH (1998) How is bilateral symmetry of human faces used for recognition of novel views? Vision Res 38:79–89. doi:10.1016/S0042-6989(97)00165-X
  47. Wang L, Mruczek RE, Arcaro MJ, Kastner S (2015) Probabilistic maps of visual topography in human cortex. Cereb Cortex 25:3911–3931. doi:10.1093/cercor/bhu277
  48. Watson DM, Hartley T, Andrews TJ (2014) Patterns of response to visual scenes are linked to the low-level properties of the image. Neuroimage 99:402–410. doi:10.1016/j.neuroimage.2014.05.045
  49. Watson DM, Young AW, Andrews TJ (2016) Spatial properties of objects predict patterns of neural response in the ventral visual pathway. Neuroimage 126:173–183. doi:10.1016/j.neuroimage.2015.11.043
  50. Weibert K, Flack TR, Young AW, Andrews TJ (2018) Patterns of neural response in face regions are predicted by low-level image properties. Cortex 103:199–210. doi:10.1016/j.cortex.2018.03.009
  51. Young AW (2018) Faces, people and the brain: the 45th Sir Frederic Bartlett Lecture. Q J Exp Psychol 71:569–594. doi:10.1177/1747021817740275
  52. Young AW, Burton AM (2017) Recognizing faces. Curr Dir Psychol Sci 26:212–217. doi:10.1177/0963721416688114
Keywords

  • face
  • fMRI
  • recognition
  • symmetry
  • viewpoint
