Abstract
Decision-making is influenced by both expected rewards and social factors, such as who offers the outcomes. Therefore, although a reward might originally be independent of social factors, the two elements are closely related. However, whether and how they are processed separately or conjointly remains unclear. Here, we show that neurons in distinct subnuclei of the amygdala encode expected reward and face reality, a vital aspect of face perception. Although these encoding processes are distinct, they rely on partially shared neuronal circuits with characteristic temporal dynamics. Two male macaque monkeys made saccades under different social and reward contexts by viewing facial images with independent attributes: reality (a real monkey or a cartoon face) and associated reward (large or small). The stimulus image was presented twice per trial: during initial stimulus encoding (S1) and before saccades were made (S2). A longer gaze duration for the eye region of the monkeys compared with cartoons indicated more robust social engagement with realistic faces. During S1, a similar number of lateral nucleus neurons encoded either reality only, with a monkey-image preference; reward only, with a large-reward preference; or both. Conversely, neurons in the basal and central nuclei primarily encoded reward, preferring large- versus small-reward–associated face images. Reward-dependent modulation continued after S1 but was more conspicuous during S1 in the basal nucleus and during both S1 and S2 in the central nucleus. This anatomically and temporally specific encoding in the amygdala may underlie the computation and integration of social and reward information.
Significance Statement
Reward and social information are closely related but originally independent, as both influence our decision-making. The amygdala has been associated with both reward and social information coding. However, whether and how they are processed separately or conjointly by individual neurons in the amygdala remains unclear. We found that neurons in the lateral and basal nuclei encoded face reality, an important aspect of social information, and reward, respectively, during sensory processing. Neurons in the central nucleus encoded reward information during the execution phase. These findings provide new insight into mechanisms underlying separate or integrated social and reward information processing within the amygdala.
Introduction
Decision-making is influenced by both expected rewards and social aspects, such as the identity of the individual offering the outcomes. For example, at a restaurant, we may perceive the same food differently depending on whether it is served by a robot or a human waiter. Therefore, although reward and social information might originally be independent, they are closely related.
Among the many brain areas implicated in reward coding, the amygdala plays a key role in value evaluation of sensory events (Paton et al., 2006; Belova et al., 2007; Salzman et al., 2007; Belova et al., 2008; Janak and Tye, 2015; Namburi et al., 2015; Grabenhorst et al., 2016; O'Neill et al., 2018).
The amygdala has also been associated with social cognition (Adolphs, 2010). The human amygdala (Adolphs et al., 1994) and single neurons in the primate amygdala (Kuraoka and Nakamura, 2006; Gothard et al., 2007) encode information about facial expressions and eye contact (Mosher et al., 2014). Amygdala neurons also signal the value of rewards delivered to oneself and to others (Chang et al., 2015), along with social decisions (Grabenhorst and Schultz, 2021). We therefore asked whether and how the amygdala encodes value and social cognition.
Among many aspects of social cognition, we focused on detecting the animacy of faces and gaze direction. New et al. (2007) addressed the importance of animacy from an evolutionary psychological perspective. Previous work defined animacy using five dimensions, “being alive,” “looking like an animal,” “having agency,” “having mobility,” and “being unpredictable” (Jozwik et al., 2022), or four judgment standards, “animacy,” “living,” “moving,” and “human-similarity” (Contini et al., 2021). Among these, “looking like an animal” and “human-similarity” both relate to “being similar to real conspecifics.” This reality, especially “face reality,” has been linked to human brain activity involved in animacy discrimination (Gobbini et al., 2011; Balas and Koldewyn, 2013). Therefore, detecting “face reality” is an essential social skill.
Another aspect of social skills is detecting the direction of attention in others, known as joint attention (Emery, 2000). For example, an individual may detect a life-threatening entity by observing another's direction of gaze. It has been reported that primates learn about objects that attract others' attention by observing their gaze direction (Deaner and Platt, 2003).
However, whether and how individual neurons in the amygdala encode reward, face reality, and joint attention information remains unclear.
The lateral, basal, accessory basal, medial, and central subnuclei of the amygdala receive projections from distinct brain areas (Freese and Amaral, 2009) involved in the reward system (Schultz, 2016) and social networks (Kennedy and Adolphs, 2012). Previous studies have examined the distribution of neurons that process reward or social information in the subnuclei (Munuera et al., 2018; Putnam and Gothard, 2019; Pryluk et al., 2020). However, the precise distribution of neurons that encode complex social and/or reward information remains controversial.
The amygdala has also been implicated in various aspects of task sequences, such as stimulus–reward associations (Spiegler and Mishkin, 1981; Rudebeck et al., 2017) and decision-making (Grabenhorst et al., 2012). Therefore, the processing of social and reward information may vary across different task stages, such as sensory encoding and motor execution.
In this report, we examined (1) whether distinct amygdala regions compute face reality (i.e., complex social information), reward, or gaze-target congruency separately or conjointly and (2) whether face reality and reward coding vary across different stages of a concurrent cognitive process (sensory encoding or motor execution). To this end, we developed a behavioral paradigm in which animals made saccades under varying reality conditions (viewing a real monkey face or a cartoon face), reward contexts (predicting large or small rewards), and congruency conditions (gaze predicting the target position or not). We measured reality and reward coding during initial stimulus encoding and the later execution stages. We found anatomically and temporally specific encoding of face reality and reward in distinct subnuclei of the primate amygdala.
Materials and Methods
General
We used two hemispheres from two male Japanese monkeys (Macaca fuscata) with laboratory designations: Animal P (9 years old, 9 kg) and Animal C (7 years old, 11 kg). All experimental procedures were performed in accordance with the Guidelines for the Care and Use of Nonhuman Primates in Neuroscience Research by the Japan Neuroscience Society and were approved by the Institutional Animal Care and Use Committee at Kansai Medical University.
Each animal underwent implantation of a head post to maintain a stable head position. Eye position was monitored using an infrared video-tracking system with a time resolution of 500 Hz and a spatial resolution of 0.25–0.5° (iView X 2.8; SensoMotoric Instruments). All experiments were performed in a dark, soundproof room in which the macaques sat in a primate chair facing a 24 in monitor (ProLite B2403WS; Iiyama) positioned 38 cm from their eyes. All aspects of the behavioral experiment, including stimulus presentation, monitoring of eye movements and neuronal activity, and outcome delivery, were controlled by a real-time experimentation data acquisition system (Tempo; Reflective Computing).
Behavioral tasks
Face stimuli are well known animate agents (Shultz et al., 2015). Comparing real and artificial faces is an effective way to examine face animacy perception (Gobbini et al., 2011; Balas and Koldewyn, 2013; Wang et al., 2020). Accordingly, we used a set of eight full-color, static face stimuli, composed of four different real monkey faces and four cartoon faces shown in different colors (Fig. 1A). The monkey face stimuli were cropped from photographs of real monkeys. All face stimuli had a uniform mean luminance and size (number of pixels). Two monkey faces (M1L, M2L; Fig. 1A) and two cartoon faces (C1L, C2L) were consistently associated with a large reward, whereas the other two monkey faces (M3S, M4S) and two cartoon faces (C3S, C4S) were consistently associated with a small reward. Both animals learned to associate each face with the expected reward through extensive training.
Visually exploring complex images, such as faces, involves a sequence of saccades and fixations that allows us to shift our attention to specific informative features (Leonard et al., 2012; Guo et al., 2019; Liu et al., 2023). To gain insight into visual perception, we reasoned that characterizing eye-movement patterns and saccade reaction times (SRTs) would be informative, because visuospatial analyses are achieved in conjunction with eye movements (Leek et al., 2012) and the direction of gaze is tightly coupled with the orientation of attention (Hoffman and Subramaniam, 1995). Furthermore, we asked about the role of the amygdala in multistep information processing rather than single-step processing. We therefore presented the stimuli twice while each animal performed a visually guided saccade to obtain a liquid reward (Fig. 1B). After fixating on the first central fixation point (Fix1) for 500 ms, the first face stimulus (S1) was presented for 500 ms. The animals were allowed to scan the image freely. After a delay with a blank screen (1,000 ms for Animal P; 500 ms for Animal C), the second central fixation point (Fix2) appeared. After fixating on Fix2 for 1,100 ms (Animal P) or 500 ms (Animal C), the second face stimulus (S2) was briefly presented for 200 ms, followed by the presentation of a target dot on the left or right, 8° from the center. The animals were then required to make a saccade toward the target and hold fixation on it for 200 ms (Animal P) or 100 ms (Animal C) to obtain a liquid reward, which was controlled using a solenoid valve. The valve was open for 100 ms to deliver a small reward and 200 ms to deliver a large reward.
Since monkeys, like humans, tend to follow others' gazes by reading social meaning from face stimuli (Emery, 2000; Deaner and Platt, 2003), we also examined gaze-following effects in this study. The first face stimulus (S1) always looked forward. The S2 was identical to S1, except that the gaze direction of the face was either toward the left or right side of the screen. As shown in Figure S1, half of the faces (Fig. 1A, M1L, M3S, C1L, C3S) always looked toward the future target (“congruent faces”), while the other half (Fig. 1A, M2L, M4S, C2L, C4S) always looked away from the future target (“incongruent faces”). Therefore, visual stimuli S1 and S2 carried information about (1) the degree of reality (real monkey or cartoon face), (2) the expected reward size (large or small), and (3) the congruency of the gaze direction and the saccade target.
To evaluate each animal’s preference for the stimuli, we also conducted a stimulus-assessment task (Fig. S2). In this task, the subjects were required to choose one of two face stimuli that were simultaneously presented as S1. Specifically, they had to choose between a monkey and a cartoon face or between a large- and small-reward associated face. The chosen stimulus was presented in the following S2 period. The stimulus preference was evaluated according to the proportion of each stimulus choice.
Recording neuronal activity and the localization of the amygdala nuclei
We recorded extracellular activity from single neurons in the amygdala subnuclei using tungsten electrodes (diameter, 0.25 mm; impedance, 0.5–2.0 MΩ at 1 kHz; Frederick Haer). For precise localization of the subnuclei of the amygdala located deep in the temporal lobe, the following procedures were conducted. First, we implanted a circular plastic recording chamber (Crist Instruments) at 20 mm AP and 11 mm lateral with a 10° lateral tilt. The grid system, with 0.7 mm diameter holes located 1 mm apart (Crist et al., 1988), was attached to the chamber to determine the location of penetration. By using two grids (221 and 200 holes) whose holes were offset from each other by 0.5 mm in the anteroposterior and mediolateral directions, we recorded neuronal activity at 0.5 mm intervals. As shown in Figure 10A and Figure S8, the magnetic resonance images (MRI) of the brain (for Animal P, 1.5 T, SIGNA, General Electric; for Animal C, 0.3 T, AIRIS, Hitachi) obtained with the recording chamber and grid system filled with gadolinium allowed us to verify the location and angle of the chamber and grid relative to the amygdala.
For every recording session, a stainless-steel guide tube (0.7 mm in diameter) was first inserted through a hole in the grid until the tube tip reached ∼5 mm above the upper edge of the amygdala. The travel depth of the guide tube was determined based on the MRI data and was deep enough for precise guidance of the electrode and to avoid direct damage to the amygdala. Through this guide tube, the recording electrode was inserted and advanced using a hydraulic microdrive (MDO-974A, Narishige), while neuronal activity was monitored. The precise location, angle, and depth of the recording electrode were clarified primarily by two factors. First, for each penetration, we measured the depth of the border of anatomical structures: the lower edge of the putamen, white matter, and upper edge of the amygdala, which were identified by their characteristic activity pattern (i.e., neurons with a low firing rate in the putamen and relatively quiet activity in the white matter). Second, we overlaid these penetration records on the MR images (Fig. 10A; Fig. S8). We then defined the recording location, i.e., the subnuclei of the amygdala, by referring to published anatomical reports (Martin et al., 2000).
The neuronal activity signal was amplified with a bandpass filter (300 Hz–8 kHz; MCP-Plus 8; Alpha Omega) and collected at 1 kHz. The single-neuron activity was isolated and converted into pulses via a template-matching protocol (20 kHz for waveform matching and spike sampling; Power1401-3A; Cambridge Electronic Design).
Data analysis
Behavioral data
To evaluate the degree of interest or attention for each face stimulus, we measured eye-scan patterns for the first face stimulus (S1), which the animals scanned freely. Gaze was quantified separately for three areas: “eyes,” “face without eyes,” and “outside of the face” (Fig. 2B). Temporal changes in relative gaze frequency among the three areas were computed in shifting 50 ms bins. To elucidate the effects of reality, reward, and congruency on gaze behaviors, we conducted a multiple regression analysis to assess the relationships between looking time and each of these effects. Each measure of looking time was tested with the following multiple regression model: Looking time = β0 + β1 × Reality + β2 × Reward + β3 × Congruency + ε, where Reality, Reward, and Congruency are trial-wise indicator variables (monkey = 1, cartoon = 0; large reward = 1, small reward = 0; congruent = 1, incongruent = 0) and ε is the residual error.
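The looking-time regression described above can be fitted by ordinary least squares. The following is a minimal sketch on synthetic data; the variable names, indicator coding, and effect sizes are our assumptions for illustration, not the authors' code.

```python
import numpy as np

def fit_gaze_regression(looking_time, reality, reward, congruency):
    """Ordinary least squares fit of
    looking_time ~ b0 + b1*reality + b2*reward + b3*congruency,
    where each predictor is a per-trial 0/1 indicator."""
    X = np.column_stack([np.ones(len(looking_time)), reality, reward, congruency])
    betas, *_ = np.linalg.lstsq(X, np.asarray(looking_time, dtype=float), rcond=None)
    return betas  # [b0, b_reality, b_reward, b_congruency]

# Synthetic data: monkey faces add 18 ms and large rewards add 4 ms of eye-gaze,
# loosely mirroring the relative effect sizes reported in the Results.
rng = np.random.default_rng(0)
reality = rng.integers(0, 2, 200)      # 1 = monkey, 0 = cartoon
reward = rng.integers(0, 2, 200)       # 1 = large, 0 = small
congruency = rng.integers(0, 2, 200)   # 1 = congruent, 0 = incongruent
looking = 30 + 18 * reality + 4 * reward + rng.normal(0, 2, 200)
b = fit_gaze_regression(looking, reality, reward, congruency)
```

The fitted coefficients recover the simulated reality and reward effects, while the congruency coefficient stays near zero.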
We also analyzed SRTs, defined as the interval between the S2 offset and the time at which the velocity of the eye movements exceeded 100°/s. To compare data across the different recording sessions, we converted the raw SRTs to z-scores according to the mean and standard deviation of the SRTs for each direction.
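The per-direction z-score conversion of SRTs can be sketched as follows (a minimal illustration with hypothetical SRT values; function and variable names are ours).

```python
import numpy as np

def zscore_srt(srt_ms, direction):
    """Z-score saccade reaction times separately for each saccade direction,
    using the mean and standard deviation of that direction's SRTs."""
    srt = np.asarray(srt_ms, dtype=float)
    direction = np.asarray(direction)
    z = np.empty_like(srt)
    for d in np.unique(direction):
        sel = direction == d
        z[sel] = (srt[sel] - srt[sel].mean()) / srt[sel].std()
    return z

# Toy example: leftward saccades are systematically faster than rightward ones,
# but after per-direction normalization the two groups become comparable.
z = zscore_srt([100, 120, 140, 200, 220, 240], ["L", "L", "L", "R", "R", "R"])
```

Normalizing within each direction removes any left/right asymmetry before pooling SRTs across sessions.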
Neuronal data
As shown in Figures 4 and 9, a group of amygdala neurons started to exhibit modulated activity even during the Fix1 period. We therefore defined the baseline for each neuron as the mean firing rate during the 500 ms period immediately “before” the Fix1 onset (base). Excitatory or inhibitory responses in a given time window were defined as those significantly larger or smaller than the signal in the baseline period (Wilcoxon signed-rank test, p < 0.05).
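The classification of excitatory versus inhibitory responses described above can be sketched as follows; a minimal illustration with made-up per-trial firing rates, using SciPy's paired Wilcoxon signed-rank test as a stand-in for whatever implementation the authors used.

```python
import numpy as np
from scipy.stats import wilcoxon

def classify_response(baseline_rates, window_rates, alpha=0.05):
    """Compare per-trial firing rates in an analysis window against the
    pre-Fix1 baseline with a paired Wilcoxon signed-rank test, and label
    the response excitatory, inhibitory, or not significant ("ns")."""
    _, p = wilcoxon(window_rates, baseline_rates)
    if p >= alpha:
        return "ns"
    if np.mean(window_rates) > np.mean(baseline_rates):
        return "excitatory"
    return "inhibitory"

baseline = [5, 6, 5, 7, 6, 5, 6, 7, 5, 6]          # spikes/s before Fix1 onset
s1_window = [10, 11, 9, 12, 10, 11, 10, 12, 9, 11]  # spikes/s during S1
label = classify_response(baseline, s1_window)
```

A consistent rate increase across trials is labeled excitatory; swapping the two inputs yields an inhibitory label.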
We constructed peristimulus time histograms with 1 ms nonoverlapping bins and convolved the data with a Gaussian kernel with a standard deviation of 30 ms. For population activity, each neuron’s activity was depicted as a z-score computed using the mean and standard deviation of activity during the baseline period.
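The PSTH construction can be sketched as follows (a minimal illustration; the function name and arguments are ours, and edge handling is simplified relative to whatever the authors used).

```python
import numpy as np

def smoothed_psth(spike_times_ms, n_trials, t_max_ms, sigma_ms=30):
    """Build a peristimulus time histogram with 1 ms nonoverlapping bins and
    convolve it with a Gaussian kernel (SD = sigma_ms); returns spikes/s."""
    counts, _ = np.histogram(spike_times_ms, bins=np.arange(t_max_ms + 1))
    rate = counts / n_trials * 1000.0          # 1 ms bins -> spikes/s
    t = np.arange(-3 * sigma_ms, 3 * sigma_ms + 1)
    kernel = np.exp(-t**2 / (2.0 * sigma_ms**2))
    kernel /= kernel.sum()                     # unit-area kernel preserves rate
    return np.convolve(rate, kernel, mode="same")

# Sanity check: a perfectly regular 1 kHz spike train smooths to a flat
# 1,000 spikes/s away from the edges of the window.
psth = smoothed_psth(np.arange(1000) + 0.5, n_trials=1, t_max_ms=1000)
```

The z-scored population traces would then be obtained by normalizing each neuron's smoothed rate with its baseline mean and standard deviation.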
To elucidate the effects of reality, reward, and congruency information on the responses of each amygdala neuron, we computed the strength of the neuronal response to S1 (200–700 ms after the onset of S1) and to S2 (100–400 ms after the onset of S2; Fig. 4, yellow areas). Since the response latency of amygdala neurons to static visual stimuli is known to vary between 130 and 180 ms (Nakamura et al., 1992), we included the period up to 200 ms after S1 and S2 offset (i.e., 700 ms after the S1 onset and 400 ms after the S2 onset) in the analysis period. For each neuron, we conducted a multiple regression analysis to assess relationships between the neuronal activity and each effect. The activity of each neuron was tested with the following multiple regression model: Firing rate = β0 + β1 × Reality + β2 × Reward + β3 × Congruency + ε, where Reality, Reward, and Congruency are trial-wise indicator variables (monkey = 1, cartoon = 0; large reward = 1, small reward = 0; congruent = 1, incongruent = 0) and ε is the residual error.
To estimate the degree of discrimination for reality and reward information based on population activity during the task, we performed receiver operating characteristic (ROC) analysis and computed the area under the curve (AUC) for trials involving either reality information (monkey vs cartoon face) or reward information (large vs small rewards; Fig. 6A). For reality information, an AUC value above 0.5 indicated that a neuron responded more strongly to monkey faces than to cartoon faces. For reward information, an AUC value above 0.5 indicated that a neuron responded more strongly in the large-reward than in the small-reward condition. To analyze temporal changes in the effects of reality and reward information, we computed AUC values using a sliding window (200 ms windows with 1 ms steps).
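The AUC computation can be sketched with the Mann-Whitney formulation (a minimal illustration on hypothetical single-trial rates; names and data are ours). In the analysis described above, this would be applied to rates computed in each 200 ms sliding window.

```python
import numpy as np

def auc(rates_a, rates_b):
    """Area under the ROC curve for discriminating condition A from B based
    on single-trial firing rates: the probability that a random A-trial rate
    exceeds a random B-trial rate, counting ties as 0.5 (Mann-Whitney U)."""
    a = np.asarray(rates_a, dtype=float)
    b = np.asarray(rates_b, dtype=float)
    greater = (a[:, None] > b[None, :]).sum()
    ties = (a[:, None] == b[None, :]).sum()
    return (greater + 0.5 * ties) / (a.size * b.size)

# Indistinguishable rate distributions give AUC = 0.5;
# fully separated distributions give AUC = 1.0.
a0 = auc([1, 2, 3], [1, 2, 3])
a1 = auc([4, 5, 6], [1, 2, 3])
```

With monkey-face (or large-reward) trials as condition A, an AUC above 0.5 corresponds to stronger responses under that condition.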
Results
Behavioral data
We recorded single neuronal activity from the amygdala in two animals as they performed a visually guided saccade task to obtain a liquid reward. In the task, different face stimuli were presented twice (Fig. 1). The visual stimuli had different degrees of reality (real monkey or cartoon faces) and different expected reward values (large or small; Fig. 1A). The stimulus presentation took place immediately after the first fixation period (S1) and just before saccade execution (S2; Fig. 1B).
The visual stimuli influenced the eye-scan patterns during the presentation of S1, as shown in Figure 2A. Both animals gazed at all the facial visual stimuli, especially in areas around the eyes, during the first half of the S1 period (Fig. 2A, S1 early). During the later period of S1 (Fig. 2A, S1 late), the gaze was held around the eye area of the monkey face stimuli or the large-reward–associated stimuli. In contrast, gaze at the cartoon or small-reward–associated stimuli became scattered.
Experimental paradigm. A, Visual stimuli with two attributes: degree of reality, i.e., monkey (Mon) or cartoon (Car), and reward size, i.e., large or small. B, Visually guided saccade task under different reality and reward contexts. After fixation on Fixation Point 1 (Fix1) for 500 ms, one of eight visual stimuli (A) was briefly presented (S1), followed by a delay (1,000 or 500 ms) and the presentation of Fixation Point 2 (Fix2). After fixation on Fix2, we presented a visual stimulus (S2), which was the same as S1 except for the gaze direction (Fig. S1), followed by a target. The animal was required to make a visually guided saccade toward the target to receive a large or small reward, as indicated by S1 and S2 (M1L, M2L, C1L, and C2L for a large reward; M3S, M4S, C3S, and C4S for a small reward).
A, Persistent focus of attention toward the eye region of the monkey face stimuli. Heat maps of each animal's gaze during the early and late halves of the 500 ms S1 presentation period are shown for the four types of visual stimuli: Monkey_Large, Monkey_Small, Cartoon_Large, and Cartoon_Small. B, Left, Temporal change in gaze position during the S1 presentation period. We analyzed gaze position for three areas, “Eyes,” “Face without eyes,” and “Outside of face.” The ratio of time spent in each area was calculated every 50 ms. Top right, Time spent looking around the eyes during the last 50 ms of S1. Bottom right, Regression coefficient βs of reality on the x-axis, and regression coefficient βs of reward on the y-axis, for the duration of gaze toward the eye region of the S1 stimulus (see Materials and Methods). C, SRT distributions. Top, Saccades after the presentation of monkey and cartoon faces. Bottom, Saccades after the presentation of faces associated with a large and small reward. Reaction times are z-normalized according to the mean and standard deviation of the reaction times for leftward and rightward saccades (see Materials and Methods). D, Preferences for monkey and large-reward–associated faces. Preferences were determined via a two-alternative forced-choice procedure as part of the stimulus-assessment task (Fig. S2). The chosen face in the assessment task was used as the stimulus in the following visually guided saccade procedure. Values on the x-axis denote the preference of each stimulus, such that “1:0” denotes a complete preference for the stimulus and “0.5:0.5” denotes equal preference for the stimuli. Box plots indicate median and 25th–75th percentiles.
Population data regarding chronological changes in gaze behavior (20 sessions for both animals) support this trend (Fig. 2B, left). We compared the gaze duration for the area around the eyes during the last 50 ms period of S1, between monkey and cartoon faces, between large- and small-reward–related stimuli, and between gazes congruent and incongruent with future target positions, using a repeated-measures three-way analysis of variance (ANOVA). The gaze duration for the area around the eyes of the monkey faces was significantly longer than that for the cartoon faces in both animals (Fig. 2B, top right panel; for Animal P, F(1,19) = 146; p = 0.233 × 10−9;
We also compared the strength of the effects of reality, reward, and congruency on gaze behavior around the eye region of the face stimuli. For each session, we computed regression coefficients of reality, i.e., the relative effect of monkey vs cartoon faces; reward, i.e., the relative effect of large versus small rewards; and congruency, i.e., the relative effect of congruent versus incongruent gazes toward future target positions. The mean coefficients of reality [mean ± standard error of the mean (SEM); for Animal P, 18.18 ± 1.51; for Animal C, 11.34 ± 1.52] were significantly greater than zero (Wilcoxon signed-rank test; for Animal P, z = 3.92; p = 0.89 × 10−4; for Animal C, z = 3.92; p = 0.89 × 10−4). The mean coefficients of reward (for Animal P, 4.33 ± 1.18; for Animal C, 6.24 ± 1.29) were significantly greater than zero (for Animal P, z = 2.99; p = 0.28 × 10−2; for Animal C, z = 3.66; p = 0.25 × 10−3). The mean coefficient of congruency for Animal C (5.66 ± 1.29) was also significantly greater than zero (z = 3.29; p = 0.10 × 10−2). However, the mean coefficient of congruency for Animal P (−2.95 ± 0.99) was significantly lower than 0 (z = −2.69; p = 0.72 × 10−2). Three pairs of the regression coefficients (reality vs reward, congruency vs reality, and reward vs congruency) were plotted in separate scatter diagrams for clarification (Fig. 2B, bottom right panel; Fig. S3, right panel). Comparison of these coefficients revealed that the reality effect was significantly stronger than the reward effect (Fig. 2B, bottom right, Wilcoxon signed-rank test; for Animal P, z = 3.66; p = 0.25 × 10−3; for Animal C, z = 2.76; p = 0.57 × 10−2) and the congruency effect (Fig. S3B, right; for Animal P, z = 3.92; p = 0.87 × 10−4; for Animal C, z = 3.51; p = 0.45 × 10−3) in both monkeys. In contrast, the comparison between the reward and congruency effects differed between the animals. 
Although the reward effect was significantly stronger than the congruency effect in Animal P (z = 3.21; p = 0.13 × 10−2), the strengths of both effects did not differ significantly in Animal C (z = 0.261; p = 0.79).
SRTs to the target were also significantly influenced by the reality and the reward value of the stimuli, but not by congruency. SRTs were significantly shorter after presentation of the real monkey faces versus cartoon faces (two-sample t test; for Animal P, t(3462) = 3.54; p = 0.404 × 10−3; Cohen’s d = 0.120; for Animal C, t(5296) = 7.78; p = 0.884 × 10−14; d = 0.214; Fig. 2C, top row). Moreover, SRTs were shorter after the presentation of stimuli associated with large versus small rewards (for Animal P, t(3462) = 5.82; p = 0.66 × 10−8; d = 0.198; for Animal C, t(5296) = 3.23; p = 0.122 × 10−2; d = 0.893 × 10−1; Fig. 2C, bottom row). In contrast, SRTs did not differ after the presentation of congruent versus incongruent faces (for Animal P, t(3462) = −0.694; p = 0.487; d = 0.236 × 10−1; for Animal C, t(5296) = −1.58; p = 0.115; d = 0.433 × 10−1; Fig. S4A, bottom row). These results suggest that both animals paid more persistent and stronger attention to the monkey faces and large-reward–associated faces than to the cartoon faces and small-reward–associated faces, whereas the congruency of the gaze direction of the face stimuli did not influence these behavioral measures.
The behavioral results showed both longer gaze durations around the eyes and shorter SRTs in response to real monkey faces compared with cartoon faces and to large rewards compared with small rewards. To investigate whether the gaze-scanning behaviors were related to the saccade reaction, we calculated Spearman’s rank correlation coefficients between the mean gaze duration and the mean SRTs for each face stimulus. However, no significant correlation was observed between these two measures (for Animal P, ρ = 0.024; p = 0.98; for Animal C, ρ = −0.31; p = 0.46).
This biased attention or preference was also supported by task performance in the stimulus-assessment forced-choice task (Fig. S2). Both animals chose the monkey faces significantly more frequently than the cartoon faces (χ2 test; for Animal P, χ2(1, N = 872) = 65.0; p = 0.765 × 10−15; Cohen's ω = 0.273; for Animal C, χ2(1, N = 1,277) = 171; p = 0.499 × 10−38; ω = 0.366; Fig. 2D). They also chose the large-reward–associated faces more often than the small-reward–associated faces (χ2 test; for Animal P, χ2(1, N = 849) = 689; p = 0.631 × 10−151; ω = 0.901; for Animal C, χ2(1, N = 1,180) = 430; p = 0.197 × 10−94; ω = 0.603; Fig. 2D). In contrast, the choice between congruent and incongruent gazes showed at most a weak bias: a small but statistically significant difference in Animal P and no difference in Animal C (for Animal P, χ2(1, N = 835) = 19.9; p = 0.804 × 10−5; ω = 0.155; for Animal C, χ2(1, N = 1,192) = 2.10; p = 0.148; ω = 0.419 × 10−1; Fig. S4B).
These results indicate that the animals more strongly attended to or showed a preference for the real monkey and large-reward–associated faces compared with the cartoon and small-reward–associated faces. Furthermore, the effect of the face type was stronger than the effect of expected reward amount. However, the effect of congruency between gaze direction and target position was weak or inconsistent depending on the animal.
Neuronal activity
General
In our neuronal survey, the electrode was laterally directed at a 10° angle from the dorsal-to-ventral axis of the amygdala (see Materials and Methods; Fig. 10A). Following unbiased collection of single neuronal activity data, we determined each task-related neuron's location within the amygdala subnuclei by overlaying penetration record maps on the MR images. Task-related neurons were defined as those exhibiting a significant response relative to the baseline period (see Materials and Methods). We recorded the activity of 90, 83, and 76 single neurons from the lateral, basal, and central amygdala nuclei, respectively, in two animals. Of these neurons, 40 excitatory and 23 inhibitory neurons in the lateral nucleus, 52 excitatory and 23 inhibitory in the basal nucleus, and 35 excitatory and 21 inhibitory in the central nucleus showed significant response modulation (see Materials and Methods) during S1. Fifty excitatory and 22 inhibitory neurons in the lateral nucleus, 47 excitatory and 25 inhibitory in the basal nucleus, and 46 excitatory and 14 inhibitory in the central nucleus showed significant response modulation during S2. Evaluating changes in the proportion of responsive neurons over time (Fig. 3) indicated that neurons showing excitatory responses to S1 or S2 outnumbered those showing inhibitory responses in all subnuclei (S1 in the lateral nucleus, χ2(1, N = 63) = 4.59; p = 0.32 × 10−1; S1 in the basal nucleus, χ2(1, N = 75) = 11.2; p = 0.81 × 10−3; S2 in the lateral nucleus, χ2(1, N = 72) = 10.9; p = 0.97 × 10−3; S2 in the basal nucleus, χ2(1, N = 72) = 6.72; p = 0.95 × 10−2; S2 in the central nucleus, χ2(1, N = 60) = 17.1; p = 0.36 × 10−4), except for the response to S1 in the central nucleus (χ2(1, N = 56) = 3.50; p = 0.61 × 10−1). Among them, we analyzed 59 lateral, 60 basal, and 54 central neurons that showed significant excitatory responses during S1 and/or S2 periods.
We also found 36 lateral, 30 basal, and 24 central neurons with inhibitory responses to S1 and/or S2 periods. However, the effect of face reality or reward size on activity was not evident, except for reward modulation in the central nucleus (two-sample t test; t(95) = −2.80; p = 0.62 × 10−2; Fig. S5).
The proportion of neurons whose responses during each task period were significantly excitatory or inhibitory relative to the baseline activity measured during the 500 ms before Fix1 onset.
Neurons in distinct amygdala subnuclei encode face reality and/or reward value according to task context
In the task, we presented different types of face stimuli at different stages of the cognitive process (Fig. 1A,B). First, we investigated whether information about face reality, reward size, and congruency was encoded separately by neurons in distinct amygdala subnuclei. Figure 4 shows representative examples of the activity of a single neuron in each subnucleus. Comparison of the responses to the monkey and cartoon faces (Fig. 4, left column) revealed stronger responses to the monkey faces following S1 and S2 in the lateral nucleus neuron (Fig. 4A, left panel). We found no significant differences in the responses to S1 and S2 in the basal nucleus neuron (Fig. 4B, left panel). The central nucleus neuron exhibited stronger responses to the cartoon faces compared with the monkey faces following S2 (Fig. 4C, left panel). In contrast, a comparison between the responses to the large- and small-reward stimuli (Fig. 4, middle column) revealed stronger responses for large-reward- compared with small-reward–associated faces during the S1 and S2 periods in the basal and central nuclei neurons (Fig. 4B,C, middle column), although a significant difference was observed in the lateral nucleus neuron only briefly (Fig. 4A, middle column). The effect of congruency was likewise evident in the lateral nucleus neuron only briefly (Fig. 4, right column). These results indicate that distinct amygdala subnuclei differentially encode face reality- and reward-related, but not congruency-related, information during specific task epochs. Figure S6 shows additional example neurons.
Examples of excitatory single-neuron responses to the S1 and S2 showing reality, reward, or congruency information in different amygdala nuclei. We compared the neuronal activity elicited by the monkey versus cartoon face stimuli (reality effect, left column), the face stimuli associated with the large versus small reward (reward effect, middle column), or the congruent versus incongruent gazes with the future target positions (congruency effect, right column). A, Lateral nucleus. B, Basal nucleus. C, Central nucleus. The yellow area in each diagram indicates the periods considered in the analyses. The black dots inside the raster plots denote the time points at which the neuronal activity is significantly different between the stimuli.
To further investigate how individual neurons process face reality, reward, and congruency information, we conducted a multiple regression analysis to assess the relationship between neuronal activity and each effect (see Materials and Methods). Scatterplots in Figure 5 and Figure S7 show the relationship between regression coefficients for reality, reward, and congruency effects.
Regression coefficients (β) of the “reality” and “reward” effects for neuronal responses to S1 (left column) and S2 (right column) of neurons that showed significant excitatory responses during the S1 and/or S2 periods (see Materials and Methods; 59 lateral, 60 basal, and 54 central neurons). Neurons with a significant effect of reality only or reward size only (p < 0.05, two-way ANOVA) are shown as red squares and blue circles, respectively. Neurons with a significant effect of both reality and reward size are shown as green triangles. Red and blue lines on the histograms denote the mean reality β and the mean reward β, respectively. A, Lateral nucleus. B, Basal nucleus. C, Central nucleus. S1, stimulus 1; S2, stimulus 2. Asterisks denote that the mean coefficients are significantly higher than 0 (*p < 0.05; **p < 0.01; Wilcoxon signed-rank test).
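The regression step can be illustrated with a minimal sketch (an assumption on our part: per-trial firing rates regressed on ±1-coded factors; the function name and coding scheme are hypothetical and not necessarily the exact implementation used in the study):

```python
import numpy as np

def fit_effect_coefficients(rates, reality, reward, congruency):
    """Least-squares estimates of the per-factor regression coefficients.

    rates : (n_trials,) firing rate of one neuron in the analysis window
    reality, reward, congruency : (n_trials,) factor codes, assumed +1/-1
        (+1 = monkey face / large reward / congruent gaze)
    Returns (beta_reality, beta_reward, beta_congruency).
    """
    X = np.column_stack([np.ones(len(rates)),  # intercept column
                         reality, reward, congruency])
    betas, *_ = np.linalg.lstsq(X, np.asarray(rates, dtype=float), rcond=None)
    return betas[1], betas[2], betas[3]
```

With a balanced factorial design the three regressors are orthogonal, so each β equals half the mean response difference between the two levels of that factor, independently of the other factors.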
For the response to S1 in the lateral nucleus (Fig. 5A, left panel), many data points were in the right half of the graph comparing the regression coefficients for reality and reward (also see the histogram at the bottom), indicating stronger responses to monkey than to cartoon face stimuli. The mean coefficient of reality was 0.775 ± 0.299 (mean ± SEM), which was significantly higher than zero (Wilcoxon signed-rank test, z = 2.62; p = 0.88 × 10−2). In contrast, the mean coefficients of reward and congruency were not significantly different from zero (for reward, 0.625 ± 0.367; z = 1.04; p = 0.30; for congruency, 0.239 ± 0.237; z = −0.12; p = 0.90). These results indicate overall positive reality coding, but neither positive nor negative reward or congruency coding, by the lateral nucleus neurons during S1. For S2, the mean coefficient of reward was significantly higher than zero (0.175 ± 0.215; z = 2.19; p = 0.28 × 10−1). In contrast, the mean coefficients of reality and congruency were not significantly different from zero (for reality, −0.0145 ± 0.235; z = 0.340; p = 0.73; for congruency, −0.0899 ± 0.113; z = −0.921; p = 0.36; Fig. S7). These results indicate positive reward coding by many lateral nucleus neurons during S2. We also found that, during the S1 and S2 periods, 11 and 9 neurons, respectively, showed significant effects of both reality and reward (p < 0.05, two-way ANOVA), as shown by green triangles. Therefore, a group of lateral nucleus neurons encoded multiple types of information.
For the response to S1 in the basal nucleus (Fig. 5B, left), most data points were in the upper half of the graph comparing coefficients of reality and reward, while the distribution along the x-axis was narrow, indicating a stronger reward signal. A similar trend was observed for the response to S2 (Fig. 5B, right), although the strength of the reward signal was lower compared with that for S1. The mean reward coefficients for responses to S1 and S2 were 1.37 ± 0.245 and 0.267 ± 0.145 (mean ± SEM), respectively. Both values were significantly greater than zero (z = 4.70; p = 0.26 × 10−5 and z = 2.12; p = 0.34 × 10−1, respectively). Conversely, the mean coefficients for reality and congruency during S1 and S2 were not significantly different from zero (Fig. S7). These findings indicate positive reward coding by many basal nucleus neurons during S1 and S2.
Neurons in the central nucleus showed dominant reward coding during the S1 and S2 periods (Fig. 5C), similar to those in the basal nucleus. The mean reward coefficients for responses to S1 and S2 in the central nucleus were 0.425 ± 0.306 and 0.731 ± 0.179 (mean ± SEM), respectively. Both values were significantly greater than zero (z = 2.37; p = 0.18 × 10−1 and z = 3.62; p = 0.29 × 10−3, respectively). In contrast, the mean coefficients of reality and congruency during S1 and S2 were not significantly different from zero (Fig. S7). These findings suggest that many central nucleus neurons encode positive reward signals during both S1 and S2.
These results indicate that the effects of reality information dominated those of reward and congruency information in the lateral nucleus during the S1 period, whereas the effects of reward information dominated those of reality and congruency information in the lateral nucleus during the S2 period and in the basal and central nuclei during both the S1 and S2 periods. We performed further analyses focusing primarily on the face reality and reward effects. The effects of congruency are summarized in Figure S7.
Next, we classified the neurons that were responsive to S1 or S2 in each subnucleus based on a two-way ANOVA with reality and reward as variables. Figure 6A shows the number of neurons that were significantly affected by the reality (red circles) and/or reward factors (blue circles) and those affected by neither reality nor reward (outside of circles). In terms of the response to S1 and S2, we found more neurons that were affected “solely” by the reality or reward factor than neurons affected by “both” factors in almost all cases across all subnuclei (χ2 test; for the response to S1 in the lateral nucleus, χ2(1, N = 38) = 6.74; p = 0.94 × 10−2; for the response to S1 in the basal nucleus, χ2(1, N = 37) = 11.92; p = 0.55 × 10−3; for the response to S2 in the basal nucleus, χ2(1, N = 26) = 15.28; p = 0.87 × 10−4; for the response to S1 in the central nucleus, χ2(1, N = 24) = 10.67; p = 0.11 × 10−2; for the response to S2 in the central nucleus, χ2(1, N = 28) = 11.57; p = 0.67 × 10−3), except for the response to S2 in the lateral nucleus (χ2(1, N = 40) = 3.60; p = 0.58 × 10−1).
A, Segregated distribution of neurons that significantly discriminated the degree of reality and/or reward size (p < 0.05, two-way ANOVA) in the amygdala subnuclei. The number of neurons affected by the reality factor, reward factor, both, or neither is visualized as Venn diagram. B, The number of neurons showing the reality effect (M > C indicates a significantly stronger response to monkey versus cartoon faces) and reward effect (L > S indicates a significantly stronger response to faces associated with a large vs small reward). The ratios of the numbers of neurons, relative to the neurons modulated by either the reality or reward factors, are color-scaled.
In Figure 6B, we further classified neurons based on their preference for the face type or reward size. In the lateral nucleus during S1, we found more neurons with stronger responses to monkey faces compared with cartoon faces than neurons with stronger responses to cartoon faces compared with monkey faces (χ2 test, χ2(1, N = 25) = 9; p = 0.27 × 10−2). This trend, however, changed during S2: the numbers of neurons that preferred monkey and cartoon faces were nearly equal.
In the basal and central nuclei, during both the S1 and S2 periods, we found more neurons with stronger responses to large-reward- than to small-reward–associated faces (for the response to S1 in the basal nucleus, χ2(1, N = 32) = 18.00; p = 0.22 × 10−4; for S2 in the basal nucleus, χ2(1, N = 20) = 5.00; p = 0.25 × 10−1; for S1 in the central nucleus, χ2(1, N = 20) = 9.80; p = 0.17 × 10−2; for S2 in the central nucleus, χ2(1, N = 16) = 12.25; p = 0.47 × 10−3).
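The count comparisons above reduce to a one-degree-of-freedom goodness-of-fit test on the numbers of neurons in the two categories. A minimal sketch, assuming a chi-square test against an even split (the exact test variant used in the study is our assumption):

```python
from scipy.stats import chisquare

def category_split_test(n_first, n_second):
    """Chi-square goodness-of-fit test (df = 1) of an observed split
    between two neuron categories (e.g., "sole"-factor vs "both"-factor
    neurons, or monkey-preferring vs cartoon-preferring neurons)
    against an even split."""
    stat, p = chisquare([n_first, n_second])
    return stat, p
```

For example, a 27-versus-11 split (N = 38) yields χ2 ≈ 6.74, p ≈ 0.94 × 10−2, which would match the value reported for the lateral nucleus S1 response if the counts split that way.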
Face reality and reward value coding were consistent but varied in strength across different task epochs
The variations in the impact of the reality and reward information during S1 and S2 (Figs. 5, 6) suggest the effect of their temporal positions within each trial. For further quantitative analysis, we computed AUC values for each neuron by comparing activity within sliding 200 ms windows for reality or reward (see Materials and Methods) and then averaged the AUCs for each amygdala nucleus (Fig. 7A).
A, Time course of reality- and reward-based signals in neuron populations showing excitatory responses to the S1 and S2 in the lateral, basal, and central nuclei. The AUC values of the ROC analysis enabled us to compare responses to monkey versus cartoon faces (monkey vs cartoon, magenta) or large versus small rewards (large vs small, cyan). A value over 0.5 denotes the degree of discrimination between monkey and cartoon faces or between large and small rewards. The magenta and cyan dots in the top part of the graphs denote the time points at which the AUC values for reality and reward, respectively, are significantly different from 0.5. B, Consistent reality (left column, M > C indicates a significantly stronger response to monkey vs cartoon faces) and reward (right column, L > S indicates a significantly stronger response to faces associated with a large vs small reward) information across the S1 and S2 periods. We plotted the coefficients of reality and reward of each neuron during the S2 period against those coefficients during the S1 period. Neurons with a significant reality or reward effect in S1 only, S2 only, or both (p < 0.05, two-way ANOVA) are shown by squares, circles, and triangles, respectively. The inserted histograms show the distributions of the coefficient markers across the diagonal dashed lines.
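The sliding-window AUC computation can be sketched as follows (a minimal illustration; we assume spike counts have already been binned into overlapping 200 ms windows, and the function and variable names are hypothetical):

```python
import numpy as np

def sliding_window_auc(counts_a, counts_b):
    """ROC-based discrimination (AUC) between two conditions in each
    sliding time window.

    counts_a : (n_trials_a, n_windows) spike counts, condition A
               (e.g., monkey face, or large reward)
    counts_b : (n_trials_b, n_windows) spike counts, condition B
    Returns an (n_windows,) array of AUC values: 0.5 = no
    discrimination, > 0.5 = stronger responses in condition A.
    """
    counts_a = np.asarray(counts_a, dtype=float)
    counts_b = np.asarray(counts_b, dtype=float)
    aucs = np.empty(counts_a.shape[1])
    for w in range(counts_a.shape[1]):
        a = counts_a[:, w][:, None]
        b = counts_b[:, w][None, :]
        # Two-sample AUC equals P(a > b) + 0.5 * P(a = b) over all trial pairs
        aucs[w] = (a > b).mean() + 0.5 * (a == b).mean()
    return aucs
```

This pairwise formulation is equivalent to the area under the ROC curve for the two trial-count distributions, so no explicit threshold sweep is needed.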
In the lateral nucleus, the reality signal (magenta curves) increased and remained high during S1 presentation, confirming that neuronal responses discriminated monkey faces from cartoon faces. The reality signal then decreased during and after the delay period, including the S2 period. Although the strength of the reality signal during S1 and S2 differed when the neurons were analyzed as a population (Fig. 7A, top panel), the effect of the reality information remained consistent across most neurons, as indicated by a significant positive correlation between the reality coefficients for S1 (x-axis) and S2 (y-axis; Fig. 7B, top-left panel; Spearman’s rank correlation coefficient, ρ = 0.56; p = 0.56 × 10−5). The distribution of data points across the diagonal was biased toward the area below the diagonal line (Wilcoxon signed-rank test, z = 3.00; p = 0.27 × 10−2), indicating a stronger reality signal for S1 than for S2. However, some neurons exhibited changes in reality signal strength between S1 and S2. For example, in Figure 7B (top-left panel), some neurons showed a weak signal during S1 but a strong signal during S2 (circles, n = 13) or vice versa (squares, n = 4). This variability accounts for the balanced reality preference observed during S2 (Fig. 6B, Lateral, S2).
In the lateral nucleus, the reward signal (Fig. 7A, top panel, cyan curves) increased and remained at a moderate level throughout S1. It then reduced before rising again during the S2 period. At the single-neuron level, reward encoding remained stable, as indicated by a significant positive correlation between reward coefficients for S1 (x-axis) and S2 (y-axis; Fig. 7B, top-right panel; ρ = 0.55; p = 0.90 × 10−5).
In the basal nucleus, the reward signal was significantly elevated during S1 and remained at a moderate level during the subsequent task periods (Fig. 7A, middle panel, cyan curves). Note that some neurons showed changes in the reward signal between S1 and S2, and the correlation between the coefficients of reward for S1 and S2 was not significant (ρ = 0.20; p = 0.13; Fig. 7B, middle-right panel). For example, in Figure 7B, middle-right panel, the signal was large during S1 but small during S2 (squares, n = 18) or vice versa (circles, n = 6). Most data points corresponding to the coefficients of reward were under the diagonal line, indicating that reward signals were stronger during the S1 period than during the S2 period in many basal neurons (z = 3.66; p = 0.25 × 10−3). In the basal nucleus, the reality signal did not show significant modulations throughout the trial (Fig. 7A, middle panel).
In the central nucleus, reward information remained high during S1, the delay, and S2, and persisted even after S2 disappeared (Fig. 7A, bottom panel, cyan curves). The coefficients of reward for S1 and S2 showed a positive correlation (Fig. 7B, bottom right), and no bias was observed toward either S1 or S2. The reality signal did not show signs of significant modulation throughout the trial (Fig. 7A, bottom panel).
Although few neurons in the basal and central nuclei exhibited a significant reality effect, we found a positive correlation between the reality signals during S1 and S2 (basal, ρ = 0.27; p = 0.40 × 10−1; central, ρ = 0.46; p = 0.56 × 10−3; Fig. 7B, middle- and bottom-left panels), indicating consistent population-level coding across task epochs.
The results shown in Figures 5A and 7B suggest that reality information was primarily processed in the lateral nucleus during the early (sensory encoding) phase of the task. Conversely, we found a difference in the temporal modulation pattern of reward signals: in the basal nucleus, reward information was prominent during the early phase, whereas in the central nucleus, no difference between task periods was observed.
In the present task, S2 was presented just before saccade execution. To investigate whether the activity during S2 contributed to saccade execution, we calculated Spearman's rank correlation coefficients between the mean activity during S2 and the mean SRTs, separately for the subnuclei and for neurons showing reality (Fig. 6A, neurons colored red) and reward (Fig. 6A, neurons colored blue) effects. The median correlation coefficients (ρ) of the neuronal populations in each subnucleus were −0.226 (reality effect) and −0.214 (reward effect) for the lateral nucleus, −0.238 (reality) and −0.060 (reward) for the basal nucleus, and 0.095 (reality) and −0.500 (reward) for the central nucleus. A significant negative bias in the correlation coefficients was observed only in the neurons showing a reward effect in the central nucleus (Wilcoxon signed-rank test; z = −2.95; p = 0.32 × 10−2; Fig. 8). These results suggest that, in the central nucleus, but not in the lateral or basal nuclei, stronger activity during S2 was associated with quicker saccades.
Distribution of Spearman's rank correlation coefficients between the mean neuronal responses to each stimulus during S2 and the mean SRTs after each stimulus. The top row is for neurons showing the reality effect, and the bottom row is for neurons showing the reward effect. Black lines on the histograms denote a correlation coefficient of 0.
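This two-stage analysis (per-neuron Spearman correlation, then a population-level signed-rank test on the resulting coefficients) might be sketched as follows (the data layout and names are assumed for illustration):

```python
import numpy as np
from scipy.stats import spearmanr, wilcoxon

def srt_correlation_bias(rates_per_neuron, srts_per_neuron):
    """Per-neuron Spearman correlation between mean S2 activity and mean
    SRT, followed by a signed-rank test for a population-level bias of
    the correlation coefficients away from zero.

    rates_per_neuron, srts_per_neuron : lists with one array per neuron,
        holding the mean firing rate and mean SRT for each stimulus.
    Returns (rhos, p): the per-neuron coefficients and the bias p-value.
    """
    rhos = np.array([spearmanr(r, s)[0]
                     for r, s in zip(rates_per_neuron, srts_per_neuron)])
    _, p = wilcoxon(rhos)  # H0: rhos distributed symmetrically around zero
    return rhos, p
```

A population of neurons in which higher S2 activity consistently goes with shorter SRTs yields uniformly negative ρ values and a small bias p-value, the pattern reported here for reward-sensitive central nucleus neurons.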
Amygdala neurons showing both face reality and reward effects exhibited specific activity patterns during the fixation periods
At this point, our analyses indicated that reality and reward information were represented to different degrees, with dynamic changes in distinct amygdala nuclei. We further found that neurons with these different signal coding types, i.e., reality only, reward only, and both reality and reward, had specific temporal activity patterns.
A noticeable difference in neuronal activity among neurons with distinct signal coding types (Fig. 9A) appeared during the Fix1 period. Neurons that exhibited the reality effect only (Fig. 9A, left two columns) showed slightly increased activity in the lateral and central nuclei, or decreased activity in the basal nucleus, during Fix1 compared with the baseline. Neurons sensitive to the reward effect only (Fig. 9A, middle two columns) exhibited a slight increase in activity during Fix1 across all nuclei. In contrast, neurons sensitive to both reality and reward information (Fig. 9A, right two columns) exhibited a large increase in activity throughout Fix1, peaking during the S1 period. Notably, this pattern was observed across the amygdala subnuclei. The mean neuronal activity during the latter half of Fix1 was highest for neurons with both reality and reward effects (one-way ANOVA; for the lateral nucleus, F(2,59) = 3.39; p = 0.40 × 10−1; for the basal nucleus, F(2,49) = 5.66; p = 0.62 × 10−2; for the central nucleus, F(2,40) = 3.31; p = 0.47 × 10−1; Fig. 9B, left panels of each nucleus). Given that the Fix1 period was “before” the presentation of the visual stimuli, this activity appears to have been associated with a general preparation process. The number of individual neurons exhibiting increased activity during Fix1 also reflected the predominance of neurons responsive to both reality and reward information. A z-score greater than two denotes a significant (p < 0.05) increase in neuronal activity from the baseline. The proportion of neurons showing a z-score greater than two was 12.0% (3/25), 0% (0/18), and 26.3% (5/19) in the lateral nucleus; 0% (0/10), 0% (0/32), and 20.0% (2/10) in the basal nucleus; and 0% (0/14), 9.5% (2/21), and 37.5% (3/8) in the central nucleus for the reality effect only, reward effect only, and both reality and reward effects, respectively.
A, The population neurons that discriminated the degree of reality only, reward size only, and both (p < 0.05, two-way ANOVA) showed characteristic temporal dynamics. The activity of each neuron is presented as a row of pixels above the histograms. The yellow area in each diagram indicates the periods used in the analyses in B. B, Mean normalized neuronal activity in the latter half of the Fix1 and Fix2 periods. Asterisks denote significant differences (*p < 0.05; **p < 0.01; one-way ANOVA with post hoc Tukey’s test). Error bars, 1 standard error. Real, reality; Rwd, reward.
The animals also fixated on the central fixation point during the Fix2 period. During the latter half of Fix2, activity was high in the reward-only–type neurons in the lateral nucleus [neurons showing z-scores over 2, denoting significantly increased activity from the baseline; 16.0% (4/25)], and the reality-only–type neurons [28.6% (4/14)] and reward-only–type neurons [42.9% (9/21)] in the central nucleus also showed a significant increase in Fix2 activity. Conversely, Fix2 activity was weak in the reality-only–type neurons [16.0% (4/25)] and the both reality- and reward-type neurons [15.8% (3/19)] in the lateral nucleus; in all neuron types in the basal nucleus [0% (0/10) for the reality effect only, 15.6% (5/32) for the reward effect only, and 0% (0/10) for both the reality and reward effects]; and in the both reality- and reward-type neurons [12.5% (1/8)] in the central nucleus (Fig. 9B, right panels of each nucleus).
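The z-score criterion used here can be illustrated with a minimal sketch. We assume the z-score expresses the fixation-period mean in units of the baseline standard deviation, which is one common implementation; the exact procedure used in the study may differ:

```python
import numpy as np

def fixation_zscore(baseline_rates, fix_rates):
    """Z-score of mean fixation-period activity relative to the
    baseline firing-rate distribution; z > 2 is treated as a
    significant (p < 0.05) increase over baseline.

    baseline_rates, fix_rates : (n_trials,) firing rates per trial.
    """
    baseline_rates = np.asarray(baseline_rates, dtype=float)
    mu = baseline_rates.mean()
    sd = baseline_rates.std(ddof=1)
    return (np.mean(fix_rates) - mu) / sd
```

Under this definition, a neuron whose Fix1 or Fix2 activity matches its baseline has z ≈ 0, and the z > 2 cutoff approximates a one-tailed p < 0.05 criterion under a normal assumption.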
Localization of neurons with distinct effects
Figure 10A shows the estimated locations of the analyzed neurons (see Materials and Methods), plotted on a series of coronal sections of MRI images. In both animals, neurons with a significant reality effect were widely observed from the dorsal to the ventral part of the lateral nucleus. Many neurons with significant reward effects were observed in the central and basal nuclei. We also found that neurons with both reality and reward effects tended to be distributed in the ventral part of the lateral or basal nuclei. The mediolateral and dorsoventral distribution of the neurons with reality, reward, and both effects during S1 confirmed these trends (Fig. 10B). While neurons with only the reward effect (blue circles) were scattered overall, neurons with only the reality effect were located significantly lateral to those with only the reward effect (Wilcoxon rank-sum test; for responses to S1, z = −2.27; p = 0.23 × 10−1; for responses to S2, z = −2.22; p = 0.26 × 10−1). Neurons with both the reality and reward effects were located significantly ventral to those with the reality effect only (Wilcoxon rank-sum test; z = −2.14; p = 0.32 × 10−1).
A, Estimated locations of the recorded neurons in two monkeys, plotted onto serial coronal sections of MRI images obtained with the recording chamber and the grid system (in this example, 2 mm apart) filled with gadolinium. The data are displayed in anteroposterior order, from 1 mm posterior to 2 mm anterior to the anterior commissure (AC) for Animal P and from 1 mm posterior to 1.5 mm anterior to the AC for Animal C. Excitatory-responsive neurons with a significant effect of reality only, reward only, or both (p < 0.05, two-way ANOVA), and those with nonsignificant effects of both reality and reward, are shown by red squares, blue circles, green triangles, and black crosses, respectively. Inhibitory-responsive neurons are shown by small dots. La, lateral nucleus; Ba, basal nucleus; Ce, central nucleus. B, Mediolateral and dorsoventral distribution of neurons with different effects. The data from all the coronal sections of both monkeys are included. The x-axis indicates the distance from the edge of the lateral nucleus at the AC −1 section. The y-axis indicates the depth from the top of the amygdala at the AC −1 section.
Discussion
Primates perceive information about face reality from conspecifics
The eye gaze patterns and SRTs revealed that the animals perceived the real monkey and cartoon face stimuli differently. In this study, all visual stimuli had a uniform mean luminance and size. Behavioral differences could also be related to the saliency of visual stimuli. The sustained attention (Fig. 2A,B), shorter SRTs (Fig. 2C), and preference (Fig. 2D) for monkey faces indicate that the real monkey face stimuli were more salient than cartoon face stimuli. The difference in behavioral results, however, cannot be explained solely by the saliency of the stimuli. In the present task, the type of reward (large or small) was expected to be more relevant to the animal subjects than the type of stimuli (monkey or cartoon). However, the type of facial stimuli had a greater effect than the reward type on the eye gaze pattern (Fig. 2B, bottom-right panel). Altogether, the simple visual properties or degree of saliency of the stimuli cannot explain the different behavioral effects of the stimulus type; instead, properties such as the complexity of social information or social relevance, including face reality, appear to influence specific behavioral reactions.
Animate stimuli attract more attention than inanimate ones (Lindemann et al., 2011). This finding is consistent with our behavioral results, which show that monkey faces are more attention-grabbing compared with cartoon faces. Several areas of the human brain have been shown to discriminate between real and artificial faces (Gobbini et al., 2011; Balas and Koldewyn, 2013). Therefore, the ability to detect “face reality” constitutes an essential social skill. Furthermore, monkeys' perception of facial reality has been reported to be similar to that of humans (Steckenfinger and Ghazanfar, 2009). Therefore, our animal subjects may have differentiated the level of reality between the monkey and cartoon faces.
In the present study, the congruency between gaze direction and target location did not produce consistent behavioral effects, in contrast to the findings by Deaner and Platt (Deaner and Platt, 2003). The difference may be due to differences in task design. In Deaner and Platt's task, the orientation of the monkey's eyes or head directly facilitated the detection of peripheral targets. In our study, the stimuli provided a mixed prediction of direction: half of the S1 stimuli were congruent with the target direction, while the other half were incongruent. Such mixed cues may have weakened the behavioral congruency effect between gaze direction and saccade target. Additionally, the requirement to maintain central fixation during the presentation of face stimuli (S2) may have further attenuated gaze-following effects.
Face reality information is encoded differently in distinct amygdala subnuclei
We found that many neurons in the lateral nucleus of the amygdala discriminated face stimuli in terms of reality. Neurons in the primate lateral and basal nuclei process facial expressions (Gothard et al., 2007; Kuraoka and Nakamura, 2007; Inagaki et al., 2023), gaze direction (Tazumi et al., 2010; Mosher et al., 2014), and the social hierarchy of faces (Munuera et al., 2018). The lateral nucleus receives projections from areas TE and TEO in the inferotemporal cortex (Ghashghaei and Barbas, 2002; Stefanacci and Amaral, 2002), the dorsal bank of the superior temporal sulcus (STS; Stefanacci and Amaral, 2000), and the pulvinar (Jones and Burton, 1976; Day-Brown et al., 2010). Among these, neurons in the primate TE and the STS distinguish animate from inanimate images (Kiani et al., 2007; Kriegeskorte et al., 2008; Ninomiya et al., 2021). In contrast, pulvinar neurons show similar responses to real and cartoon faces (Nguyen et al., 2013). Therefore, face reality information in the lateral nucleus may originate from higher-order visual areas, including the TE and STS.
Although most reality-sensitive neurons in the lateral nucleus responded more strongly to the monkey face stimuli during S1, a similar number of neurons responded more strongly to either the monkey or cartoon faces during S2 (Figs. 5A, 6B, top-right panel; Fig. 7B, top-left panel). This change in response preference indicates that the task context might influence the reality signal. Indeed, the encoding of reality information precedes other information processes. However, changes in preference were not common for the reward signals (Fig. 7B, top-right panel), which are consistently relevant throughout a trial.
Reward information is encoded differently in distinct amygdala subnuclei
While some studies reported a homogeneous distribution of reward-coding neurons across the amygdala subnuclei (Sugase-Miyamoto and Richmond, 2005; Paton et al., 2006; Belova et al., 2007; Bermudez and Schultz, 2010; Bermudez et al., 2012; Iwaoki and Nakamura, 2022), others reported that the basolateral nucleus is involved in stimulus–reward associations (Fuchs et al., 2006; Feltenstein and See, 2007). Here, we observed the reward effect in all amygdala nuclei, with varying degrees and temporal dynamics.
The reward signal in the lateral and basal nuclei that corresponded to the preference for large- versus small-reward–related face stimuli was more prominent during S1 than during S2 (Figs. 5B, 7A, top and middle panels). This dominant reward signal during S1 indicated that sensory analyses took place in relation to the expected rewards. However, the lateral and basal nuclei had different response patterns: neurons in the lateral nucleus showed a phasic reward signal, whereas the signal in the basal nucleus was continuous throughout S1 and the subsequent delay and fixation periods. This indicates partially distinct reward circuits. Indeed, while the lateral nucleus receives input from the temporal cortex, especially TE, TEO, and STS (Stefanacci and Amaral, 2000), the basal nucleus has reciprocal connections with the anterior cingulate and orbitofrontal cortices (Ghashghaei and Barbas, 2002; Stefanacci and Amaral, 2002) and receives projections from the lateral nucleus (Pitkänen and Amaral, 1998).
The reward signal in the central nucleus was characterized by a continuous large-reward preference, which remained equally strong throughout the trial until saccade onset (Figs. 5C, 7A, bottom panels). The central nucleus receives strong intra-amygdala projections (Price and Amaral, 1981; Pitkänen and Amaral, 1998) and projects to subcortical areas involved in autonomic responses (Price and Amaral, 1981; Jongen-Rêlo and Amaral, 1998; Freese and Amaral, 2009). This supports the involvement of the central nucleus in autonomic responses. The central nucleus also projects to the substantia nigra (Shinonaga et al., 1992), which is involved in reward and saccadic eye movements (Hikosaka et al., 2018). In this study, SRTs were negatively correlated with neuronal activity in central nucleus neurons. Importantly, a significant correlation was observed specifically for central nucleus neurons that showed a reward effect. The effect also occurred during S2, immediately preceding purposeful saccade generation (Fig. 8). Previous studies have shown consistent results (Maeda et al., 2020). These results suggest that elevated neuronal activity in reward-sensitive central nucleus neurons may facilitate saccadic eye movements to obtain rewards. Taken together, these data suggest that the central nucleus integrates information from other brain regions to enable action execution.
Integrated and separate encoding of face reality and reward information
We also found a group of neurons that encoded “both” face reality and reward information. The amygdala basolateral nuclei reportedly exhibit shared parallel coding of social and valence (reward or punishment) information at the neuronal population level, such as social hierarchy and reward size (Munuera et al., 2018), or gaze direction and valence (Pryluk et al., 2020). In contrast, individual neurons mostly processed either social or valence information (Putnam and Gothard, 2019). Notably, we further identified unique features of multicoding neurons across different amygdala nuclei.
First, neurons exhibiting both reality and reward coding showed characteristic buildup activity during Fix1. Amygdala activity during the prestimulus fixation period reportedly encodes a positive state value (Belova et al., 2008). Similar prestimulus activity was reported in the basolateral amygdala (Sugase-Miyamoto and Richmond, 2005), which is potentially related to increased arousal or attention in anticipation of rewards. The buildup activity was followed by stronger activity in response to the face stimuli (Fig. 9A). Therefore, neurons with both effects may process arousal or attention related to future rewards along with sensory information.
Second, neurons that encode both reality and reward were primarily found in the ventrolateral and basal amygdala (Fig. 10A,B). This supports the integration of information in the lateral and basal nuclei, given that efferent fibers mainly originate in the dorsal region and terminate in the ventral division (Price et al., 1987; Pitkänen and Amaral, 1998). Therefore, reality and reward information likely travel from the dorsal to the ventral amygdala, where both types are integrated. Shared encoding of reality and reward information may partly be explained by common features of the stimuli, such as similar salience. Alternatively, the shared coding of both types of information may result from their integration within the amygdala.
Conclusion
Our data indicate that face reality is predominantly processed in the lateral nucleus, reward information is predominantly processed in the basal nucleus during sensory analysis, and continuous reward coding occurs in the central nucleus during action execution. Some neurons encode both reality and reward, possibly through intranuclear integration. Assessing the information flow between the subnuclei and nonamygdala brain areas could clarify how social and valence information are processed separately or in an integrated manner.
Footnotes
This work was supported by Japan Society for the Promotion of Science (JSPS) KAKENHI Grant Numbers JP19K03388, JP21H00312, JP22K03203, JP23H03842, and JP25K06904 (to K.K.); JSPS KAKENHI Grant Numbers JP19H03540, JP21H00216, and JP22K19485; Japan Agency for Medical Research and Development–Core Research for Evolutional Science and Technology 21gm1510003 (to K.N.); and JSPS KAKENHI Grant Number JP22H04926. We are grateful to K. Adachi, M. Arisato, H. Shimazaki, T. Hayashi, H. Onoe, and T. Isa for obtaining magnetic resonance images. The subject monkeys were provided by NBRP “Japanese Monkeys” through the National BioResource Project of the MEXT, Japan. We thank Edanz (https://jp.edanz.com/ac) and Editage (www.editage.jp) for editing a draft of this manuscript.
The authors declare no competing financial interests.
K.K.'s present address: Department of Physiology, Kindai University Faculty of Medicine, 1-14-1 Miharadai, Minami-ku, Sakai-shi, Osaka 590-0197, Japan.
This paper contains supplemental material available at: https://doi.org/10.1523/JNEUROSCI.0093-24.2025
Correspondence should be addressed to Koji Kuraoka at kuraokak{at}med.kindai.ac.jp.