The representation of information about faces in the temporal and frontal lobes
Introduction
Evidence is described on how information about visual stimuli is represented in the temporal cortical visual areas, and on how these representations are formed. The neurophysiological recordings are made mainly in non-human primates (macaques), firstly because the temporal lobe, in which this processing occurs, is much more developed than in non-primates, and secondly because the findings are relevant to understanding the effects of brain damage in patients, as will be shown. In this paper, attention is paid to the neural systems involved in processing information about faces: with the large number of neurons devoted to this class of stimuli, the system has proved amenable to experimental analysis; face recognition and expression identification are important in primate, including human, social and emotional behaviour; and understanding this neural system helps in understanding the effects of damage to it in humans. It will also be shown that the temporal cortical visual areas have neuronal populations that provide invariant representations of objects. Although there is some segregation of face identity and object identity representations in different cytoarchitectonic regions, the proportion of face-selective neurons in any one region reaches only 20%, so that no region is devoted exclusively to faces (see Section 2).
In Section 2, I show that there are two main populations of face-selective neurons in the temporal cortical visual areas. The first population is tuned to the identity of faces, and has representations that are invariant with respect to, for example, retinal position, size and even view. These invariant representations are ideally suited to provide the inputs to brain regions such as the orbitofrontal cortex and amygdala that learn the reinforcement associations of an individual's face, for then the learning, and the appropriate social and emotional responses, generalise to other views of the same face. Moreover, these inferior temporal cortex neurons have sparse distributed representations of faces, which are shown to be well suited as inputs to the stimulus–reinforcer association learning mechanisms in the orbitofrontal cortex and amygdala that allow different emotional and social responses to be made to the faces of different individuals, depending on the reinforcers received. The properties of these neurons tuned to face identity, and how the brain may learn these invariant representations of faces, are described in Sections 3–12: the selectivity of one population of neurons for faces (Section 3); the selectivity of these neurons for individual face features or for combinations of face features (Section 4); distributed encoding of face and object identity (Section 5); the advantages of the distributed representation of objects and faces for brain processing (Section 6), namely exponentially high coding capacity, ease with which the code can be read by receiving neurons, higher resistance to noise, generalisation, completion, graceful degradation or fault tolerance, and speed of readout of the information (Sections 6.1–6.7); invariance in the neuronal representation of stimuli (Section 7); a view-independent representation of faces and objects (Section 8); learning of new representations in the temporal cortical visual areas (Section 9); the speed of processing in the temporal cortical visual areas (Section 10); possible computational mechanisms in the visual cortex for face and object recognition (Section 11); and a computational model of invariant visual object and face recognition (Section 12).
Section 13 describes a second main population of neurons, in the cortex in the superior temporal sulcus, which encode other aspects of faces such as face expression, eye gaze, face view and whether the head is moving. This second population of neurons thus provides important additional inputs to parts of the brain such as the orbitofrontal cortex and amygdala that are involved in social communication and emotional behaviour. Neurons in this second population may in some cases encode reinforcement value (e.g. face expression neurons), or provide social information that is highly relevant to whether reinforcers will be received, such as neurons that signal eye gaze, or whether the head is turning towards or away from the receiver. In Sections 14 and 15, I describe the responses of face-selective neurons in two areas to which the temporal cortical areas have direct projections, the amygdala and the orbitofrontal cortex. I also review evidence (in Section 15) that damage to the human orbitofrontal cortex can impair face (and voice) expression identification. The orbitofrontal cortex is also shown to be involved in the rapid reversal of behaviour to stimuli (which could be the face of an individual) when the reinforcement contingencies change, and therefore to have an important role in social and emotional behaviour. Moreover, the human orbitofrontal cortex is shown to be activated in a simple model of human social interaction when a face expression change indicates that the face of a particular individual is no longer reinforcing.
Neuronal responses found in different temporal lobe cortex visual areas
Visual pathways project by a number of cortico-cortical stages from the primary visual cortex until they reach the temporal lobe visual cortical areas (Baizer, Ungerleider, & Desimone, 1991; Maunsell & Newsome, 1987; Seltzer & Pandya, 1978) in which some neurons respond selectively to faces (Bruce, Desimone, & Gross, 1981; Desimone, 1991; Desimone, Albright, Gross, & Bruce, 1984; Desimone & Gross, 1979; Gross, Desimone, Albright, & Schwartz, 1985; Perrett, Rolls, & Caan, 1982; Rolls, 1981,
The selectivity of one population of neurons for faces
The neurons described in our studies as having responses selective for faces are selective in that they respond 2–20 times more (and statistically significantly more) to faces than to a wide range of gratings, simple geometrical stimuli, or complex 3D objects (Baylis, Rolls, & Leonard, 1985; Baylis et al., 1987; Rolls, 1984, Rolls, 1992a, Rolls, 1997, Rolls, 2000a; Rolls & Deco, 2002). The recordings are made while the monkeys perform a visual fixation task in which after the fixation spot has
The selectivity of these neurons for individual face features or for combinations of face features
Masking out parts of the face, or presenting parts (e.g. eyes, mouth or hair) in isolation, reveals that different cells respond to different features or subsets of features. For some cells, responses to the normal organisation of cut-out or line-drawn facial features are significantly larger than to images in which the same facial features are jumbled (Perrett et al., 1982; Rolls, Tovee, Purcell, Stewart, & Azzopardi, 1994). These findings are consistent with the hypotheses developed below that by
Distributed encoding of face and object identity
An important question for understanding brain function is whether a particular object (or face) is represented in the brain by the firing of one or a few gnostic (or “grandmother”) cells (Barlow, 1972), or whether instead the firing of a group or ensemble of cells each with different profiles of responsiveness to the stimuli provides the representation. It has been shown that the representation of which particular object (face) is present is rather distributed. Baylis et al. (1985) showed this
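Whether a representation is local ("grandmother cell") or distributed can be quantified. As a minimal illustrative sketch (the firing rates are toy values, not recorded data), the population sparseness measure of Rolls and Treves, a = (Σ rᵢ/N)² / (Σ rᵢ²/N), approaches 1/N for a local code and 1 for a fully distributed one:

```python
# Population sparseness a = (sum(r)/N)**2 / (sum(r**2)/N) (Rolls & Treves).
# The firing rates below are illustrative toy values, not recorded data.
def sparseness(rates):
    n = len(rates)
    mean_rate = sum(rates) / n
    mean_square = sum(r * r for r in rates) / n
    return mean_rate ** 2 / mean_square

local = [10.0] + [0.0] * 9       # one "grandmother" cell fires out of 10
distributed = [10.0] * 10        # all 10 neurons fire equally

print(sparseness(local))         # 0.1, i.e. 1/N for a local code
print(sparseness(distributed))   # 1.0 for a fully distributed code
```

Intermediate values of a correspond to the sparse distributed codes actually observed.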
Advantages of the distributed representation of objects and faces for brain processing
The advantages of the distributed encoding found are now considered, and apply to both fully distributed and to sparse distributed (but not to local) encoding schemes, as explained elsewhere (Rolls, 2005; Rolls & Deco, 2002; Rolls & Treves, 1998).
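The first of these advantages, the exponentially high coding capacity of Section 6.1, can be illustrated with simple combinatorics. In this toy calculation (binary neurons and the exact counting formula are simplifying assumptions, not the paper's analysis), a local code with N neurons distinguishes only N stimuli, whereas a sparse distributed code with k = aN neurons active per stimulus can in principle distinguish C(N, k) activity patterns:

```python
from math import comb

N = 100                       # number of binary neurons (toy assumption)
print("local code:", N)       # a local code distinguishes only N stimuli
for a in (0.01, 0.05, 0.10):  # sparseness: fraction of neurons active
    k = int(a * N)
    # a sparse distributed code can select any k-of-N active set
    print(f"distributed, a={a}:", comb(N, k))
```

For fixed sparseness a, C(N, aN) grows roughly exponentially in N, which is the sense in which the capacity of the distributed code is exponential.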
Invariance in the neuronal representation of stimuli
One of the major problems that must be solved by a visual system is the building of a representation of visual information which allows recognition to occur relatively independently of size, contrast, spatial frequency, position on the retina, angle of view, etc. This is required so that if the receiving associative networks (in e.g. the amygdala, orbitofrontal cortex and hippocampus) learn about one view, position, etc., of the object, the animal generalises correctly to other positions,
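One standard mechanism for obtaining such invariance in hierarchical models (of the kind discussed in Sections 11 and 12) is to pool, for example with a max operation, over replicated position-specific feature detectors. A minimal hypothetical sketch with a one-dimensional "image" and a single feature template:

```python
# Translation invariance by pooling over position-specific detectors.
# The 1-D "image" and feature template are hypothetical toy inputs.
def detector_responses(image, template):
    # the same feature detector replicated at every retinal position
    w = len(template)
    return [sum(a * b for a, b in zip(image[i:i + w], template))
            for i in range(len(image) - w + 1)]

def pooled_response(image, template):
    # max-pooling across positions discards where the feature appeared
    return max(detector_responses(image, template))

template = [1, 2, 1]
feature_on_left = [1, 2, 1, 0, 0, 0, 0]
feature_on_right = [0, 0, 0, 0, 1, 2, 1]

print(pooled_response(feature_on_left, template))   # 6
print(pooled_response(feature_on_right, template))  # 6: same despite the shift
```

The pooled unit responds identically wherever the feature falls, so a downstream associative network that learns about one retinal position generalises to others.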
A view-independent representation of faces and objects
It has also been shown that some temporal cortical neurons reliably responded differently to the faces of two different individuals independently of viewing angle (Hasselmo, Rolls, Baylis, & Nalwa, 1989), although in most cases (16/18 neurons) the response was not perfectly view-independent. Mixed together in the same cortical regions there are neurons with view-dependent responses (Hasselmo, Rolls, Baylis, & Nalwa, 1989). Such neurons might respond, for example, to a view of a profile of a
Learning of new representations in the temporal cortical visual areas
To investigate the hypothesis that visual experience might guide the formation of the responsiveness of neurons so that they provide an economical and ensemble-encoded representation of items actually present in the environment, the responses of inferior temporal cortex face-selective neurons have been analysed while a set of new faces was shown. It was found that some of the neurons studied in this way altered the relative degree to which they responded to the different members of the set of
The speed of processing in the temporal cortical visual areas
Given that there is a whole sequence of visual cortical processing stages including V1, V2, V4, and the posterior inferior temporal cortex to reach the anterior temporal cortical areas, and that the response latencies of neurons in V1 are about 40–50 ms, and in the anterior inferior temporal cortical areas approximately 80–100 ms, each stage may need to perform processing for only 15–30 ms before it has performed sufficient processing to start influencing the next stage. Consistent with this,
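The implication for neural coding can be checked with back-of-envelope arithmetic (the firing rate used is an illustrative assumption, not a figure from the paper): within a 15–30 ms per-stage window, each neuron can contribute only a few spikes, so much of the information must be carried by the earliest spikes of the response.

```python
# Spikes available per processing stage, for an assumed firing rate.
def spikes_per_stage(rate_hz, window_ms):
    return rate_hz * window_ms / 1000.0

for window_ms in (15, 30):
    # at an illustrative 100 spikes/s, only 1.5-3 spikes fit in the window
    print(window_ms, "ms:", spikes_per_stage(100.0, window_ms), "spikes")
```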
Possible computational mechanisms in the visual cortex for face and object recognition
The neurophysiological findings described above, and wider considerations on the possible computational properties of the cerebral cortex (Rolls, 1989a, Rolls, 1989b, Rolls, 1992a; Rolls & Treves, 1998), lead to the following outline working hypotheses on object (including face) recognition by visual cortical mechanisms (Rolls & Deco, 2002).
Cortical visual processing for object recognition is considered to be organised as a set of hierarchically connected cortical regions consisting at least of
A computational model of invariant visual object and face recognition
To test and clarify the hypotheses just described about how the visual system may operate to learn invariant object recognition, we have performed simulations which implement many of the ideas just described, and which are consistent with and based on much of the neurophysiology summarised above. The network simulated (VisNet) can perform object, including face, recognition in a biologically plausible way, and after training shows, for example, translation and view invariance (Rolls & Deco, 2002
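A central ingredient of VisNet-style models is the "trace" learning rule (Földiák, 1991; Wallis & Rolls, 1997), in which the Hebbian postsynaptic term is a temporal average of recent activity, so that inputs from successive transforms of the same object become associated onto the same output neurons. A minimal sketch (the parameter values and toy "views" are illustrative assumptions):

```python
# Trace rule: y_bar(t) = (1 - eta) * y(t) + eta * y_bar(t - 1)
#             dw_j     = alpha * y_bar(t) * x_j(t)
def update_trace(prev_trace, y, eta=0.8):
    return (1.0 - eta) * y + eta * prev_trace

def trace_rule_step(weights, x, trace, alpha=0.1):
    # Hebbian update uses the trace, not the instantaneous activity
    return [w + alpha * trace * xj for w, xj in zip(weights, x)]

# two successive "views" of the same object drive the same output neuron
weights = [0.0, 0.0, 0.0]
trace = 0.0
for view in ([1, 0, 0], [0, 1, 0]):
    trace = update_trace(trace, 1.0)  # output neuron active for both views
    weights = trace_rule_step(weights, view, trace)

print(weights)  # both views' inputs have been strengthened onto this neuron
```

Because the trace is non-zero when the second view arrives, the weights from both views are strengthened onto the same output neuron, which is how temporal continuity is converted into transform invariance in these models.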
Different neural systems are specialised for face expression decoding and for face recognition
It has been shown that some neurons respond to face identity, and others to face expression (Hasselmo, Rolls, & Baylis, 1989). The neurons responsive to expression were found primarily in the cortex in the superior temporal sulcus, while the neurons responsive to identity (described in the preceding sections) were found in the inferior temporal gyrus including areas TEa and TEm. Information about facial expression is of potential use in social interactions (Rolls, 1984, Rolls, 1986a, Rolls,
A representation of faces in the amygdala
Outputs from the temporal cortical visual areas reach the amygdala and the orbitofrontal cortex, and evidence is accumulating that these brain areas are involved in social and emotional responses to faces (Rolls, 1990, Rolls, 1999b, Rolls, 2000b, Rolls, 2005; Rolls & Deco, 2002). For example, lesions of the amygdala in monkeys disrupt social and emotional responses to faces, and we have identified a population of neurons with face-selective responses in the primate amygdala (Leonard et al., 1985
A representation of faces in the orbitofrontal cortex
Rolls et al. (2006) have found a number of face-responsive neurons in the orbitofrontal cortex, and they are also present in adjacent prefrontal cortical areas (Wilson, O'Scalaidhe, & Goldman-Rakic, 1993). The orbitofrontal cortex face-responsive neurons, first observed by Thorpe, Rolls, and Maddison (1983), then by Rolls et al. (2006), tend to respond with longer latencies than temporal lobe neurons (140–200 ms typically, compared with 80–100 ms); they also convey information about which face is
Conclusions
Neurophysiological investigations of the inferior temporal cortex are revealing at least part of the way in which neuronal firing encodes information about faces and objects, and are showing that one representation implements several types of invariance. The representation found has clear utility for the receiving networks. These neurophysiological findings are stimulating the development of computational neuronal network models which suggest that part of the process involves the operation of a
Acknowledgements
The author has worked on some of the investigations described here with P. Azzopardi, G.C. Baylis, P. Földiák, M. Hasselmo, C.M. Leonard, G. Littlewort, T.J. Milward, D.I. Perrett, M.J. Tovee, A. Treves and G. Wallis, and their collaboration is sincerely acknowledged. Different parts of the research described were supported by the Medical Research Council, PG8513790; by a Human Frontier Science Program grant; by an EC Human Capital and Mobility grant; by the MRC Oxford Interdisciplinary
References (132)
- Animate vision uses object-centred reference frames.
- et al. Selectivity between faces in the responses of a population of neurons in the cortex in the superior temporal sulcus of the monkey. Brain Research (1985).
- et al. Response of neurons in the macaque amygdala to complex social stimuli. Behavioural Brain Research (1990).
- et al. Visual areas in the temporal cortex of the macaque. Brain Research (1979).
- et al. Temporal coding in the visual system: New vistas on integration in the nervous system. Trends in Neurosciences (1992).
- Analysis of the process of visual pattern recognition by the neocognitron. Neural Networks (1989).
- et al. The role of expression and identity in the face-selective responses of neurons in the temporal visual cortex of the monkey. Behavioural Brain Research (1989).
- et al. Human neural systems for face recognition and social communication. Biological Psychiatry (2002).
- et al. Face and voice expression identification in patients with emotional and behavioural changes following ventral frontal lobe damage. Neuropsychologia (1996).
- et al. Neural correlates of rapid reversal learning in a simple model of human social interaction. NeuroImage (2003).
- The functional neuroanatomy of the human orbitofrontal cortex: Evidence from neuroimaging and neuropsychology. Progress in Neurobiology.
- Neurons in the amygdala of the monkey with responses selective for faces. Behavioural Brain Research.
- View-dependent object recognition by monkeys. Current Biology.
- Neuronal responses to photographs in the superior temporal sulcus of the rhesus monkey. Behavioural Brain Research.
- Visual neurons responsive to faces. Trends in Neurosciences.
- Visual analysis of body movements by neurons in the temporal cortex of the macaque monkey: A preliminary report. Behavioural Brain Research.
- Human emotion and memory: Interactions of the amygdala and hippocampal complex. Current Opinion in Neurobiology.
- Neural systems involved in emotion in primates.
- Functions of neuronal networks in the hippocampus and neocortex in memory.
- Neural organisation of higher visual functions. Current Opinion in Neurobiology.
- Functions of the primate temporal lobe cortical visual areas in invariant visual object and face recognition. Neuron.
- The responses of neurons in the cortex in the superior temporal sulcus of the monkey to band-pass spatial frequency filtered faces. Vision Research.
- Role of low and high spatial frequencies in the face-selective responses of neurons in the cortex in the superior temporal sulcus in the monkey. Vision Research.
- Representational capacity of face coding in monkeys. Cerebral Cortex.
- Impaired recognition of social emotions following amygdala damage. Journal of Cognitive Neuroscience.
- Object perception in natural scenes: Encoding by inferior temporal cortex simultaneously recorded neurons. Journal of Neurophysiology.
- Responses of neurons in primary and inferior temporal visual cortices to natural scenes. Proceedings of the Royal Society of London B.
- Organization of visual inputs to the inferior temporal and posterior parietal cortex in macaques. Journal of Neuroscience.
- Subsymbolic modelling of hand-eye coordination.
- Single units and sensation: A neuron doctrine for perceptual psychology? Perception.
- Responses of neurons in the inferior temporal cortex in short term and serial recognition memory tasks. Experimental Brain Research.
- Functional subdivisions of the temporal lobe neocortex. Journal of Neuroscience.
- View-invariant representations of familiar objects by neurons in the inferior temporal visual cortex. Cerebral Cortex.
- Visual topography of area TEO in the macaque. Journal of Comparative Neurology.
- Visual properties of neurons in a polysensory area in superior temporal sulcus of the macaque. Journal of Neurophysiology.
- Facial emotion recognition after bilateral amygdala damage: Differentially severe impairment of fear. Cognitive Neuropsychology.
- Face-selective cells in the temporal cortex of monkeys. Journal of Cognitive Neuroscience.
- Stimulus-selective properties of inferior temporal neurons in the macaque. Journal of Neuroscience.
- How the brain learns to see objects and faces in an impoverished context. Nature.
- Invariant recognition of feature combinations in the visual system. Biological Cybernetics.
- Learning invariance from transformation sequences. Neural Computation.
- The use of decoding to analyze the contribution to the information of the correlations between the firing of simultaneously recorded neurons. Experimental Brain Research.
- Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics.
- Neural networks for visual pattern recognition. IEEE Transactions.
- Neural activity in areas V1, V2 and V4 during free viewing of natural scenes compared to controlled viewing. Neuroreport.
- How independent are the messages carried by adjacent inferior temporal cortical neurons? Journal of Neuroscience.
- Spatial view cells in the primate hippocampus: Allocentric view not head direction or eye position or place. Cerebral Cortex.
- The human visual cortex. Annual Review of Neuroscience.
- Inferior temporal cortex and pattern recognition. Experimental Brain Research.