Articles, Behavioral/Cognitive

Top-Down Control of Visual Responses to Fear by the Amygdala

Nicholas Furl,1 Richard N. Henson,1 Karl J. Friston,2 and Andrew J. Calder1

Journal of Neuroscience 30 October 2013, 33 (44) 17435-17443; DOI: https://doi.org/10.1523/JNEUROSCI.2992-13.2013

1Medical Research Council Cognition and Brain Sciences Unit, Cambridge CB2 7EF, United Kingdom; 2Wellcome Centre for Imaging Neuroscience, University College London, London WC1N 3BG, United Kingdom

Abstract

The visual cortex is sensitive to emotional stimuli. This sensitivity is typically assumed to arise when amygdala modulates visual cortex via backwards connections. Using human fMRI, we compared dynamic causal connectivity models of sensitivity to fearful faces. This model comparison tested whether amygdala modulates distinct cortical areas, depending on dynamic or static face presentation. The ventral temporal fusiform face area showed sensitivity to fearful expressions in static faces. However, for dynamic faces, we found fear sensitivity in dorsal motion-sensitive areas within hMT+/V5 and superior temporal sulcus. The model with the greatest evidence included connections modulated by dynamic and static fear from amygdala to dorsal and ventral temporal areas, respectively. According to this functional architecture, amygdala could enhance encoding of fearful expression movements from video and the form of fearful expressions from static images. The amygdala may therefore optimize visual encoding of socially charged and salient information.

Introduction

Emotional images enhance responses in visual areas, an effect typically observed in the fusiform gyrus for static fearful faces and ascribed to backwards connections from amygdala (Morris et al., 1998; Vuilleumier and Pourtois, 2007). Although support for amygdala influence comes from structural connectivity (Amaral and Price, 1984; Catani et al., 2003), functional connectivity (Morris et al., 1998; Foley et al., 2012), and path analysis (Lim et al., 2009), directed connectivity measures and formal model comparison are still needed to show that backwards connections from amygdala are more likely than other architectures to generate cortical emotion sensitivity.

Moreover, it is surprising that the putative amygdala feedback would enhance fusiform cortex responses. According to the prevailing view, a face-selective area in fusiform cortex, the fusiform face area (FFA), is associated with processing facial identity, whereas dorsal temporal regions, particularly in the superior temporal sulcus (STS), are associated with processing facial expression (Haxby et al., 2000). An alternative position is that fusiform and STS areas both contribute to facial expression processing but contribute to encoding structural forms and dynamic features, respectively (Calder and Young, 2005; Calder, 2011). In this case, static fearful expressions may enhance FFA encoding of structural cues associated with emotional expression. We therefore characterized the conditions under which amygdala mediates fear sensitivity in fusiform cortex, compared with dorsal temporal areas (Sabatinelli et al., 2011).

We asked whether dynamic and static fearful expressions enhance responses in dorsal temporal and ventral fusiform areas, respectively. One dorsal temporal area, hMT+/V5, is sensitive to low-level and facial motion and may be homologous to the middle temporal (MT), medial superior temporal (MST), and fundus of the superior temporal (FST) areas in the macaque (Kolster et al., 2010). Another dorsal area, the posterior STS, responds generally to biological motion (Giese and Poggio, 2003). Compared with dorsal areas, the fusiform gyrus shows less sensitivity to facial motion (Schultz and Pilz, 2009; Trautmann et al., 2009; Pitcher et al., 2011; Foley et al., 2012; Schultz et al., 2012). Despite its association with facial identity processing, many studies have shown that FFA contributes to processing facial expressions (Ganel et al., 2005; Fox et al., 2009b; Cohen Kadosh et al., 2010; Harris et al., 2012) and may have a general role in processing facial form (O'Toole et al., 2002; Calder, 2011). Sensitivity to static fearful expressions in the FFA may reflect this role in processing static form. If so, then dynamic fearful expressions may evoke fear sensitivity in dorsal temporal areas instead, reflecting the role of these areas in processing motion.

Our fMRI results confirmed our hypothesis that dorsal motion-sensitive areas showed fear sensitivity for dynamic facial expressions, whereas the FFA showed fear sensitivity for static expressions. To identify the connectivity mechanisms that mediate fear sensitivity, we used dynamic causal modeling (DCM) to compare 508 plausible connectivity architectures. Our Bayesian model comparison identified the most likely model, in which dynamic and static fear modulated connections from amygdala to dorsal and ventral areas, respectively. The amygdala may therefore control how behaviorally relevant information is visually coded in a context-sensitive fashion.

Materials and Methods

Participants.

fMRI data were collected from 18 healthy, right-handed participants (>18 years, 13 female) with normal or corrected-to-normal vision. Experimental procedures were approved by the Cambridge Psychology Research Ethics Committee.

Imaging acquisition.

fMRI data were collected using a 3T Siemens Tim Trio MRI scanner and a 32-channel coil. We collected whole-brain T2*-weighted echo-planar imaging volumes with 32 oblique axial slices that were 3.5 mm thick, in-plane 64 × 64 matrix with resolution of 3 × 3 mm, TR 2 s, TE 30 ms, flip angle 78°. T1-weighted MP-RAGE structural images were acquired with 1 mm3 voxels. The first five “dummy” volumes were discarded to allow for magnetic equilibration.

Experimental design.

The experiment used a block design, with 18 "main experiment" runs and two "localizer" runs. We chose a block design because it is the statistically most efficient design for convolution models, such as DCM (Mechelli et al., 2003). All blocks were 11 s, comprised eight 1375 ms presentations of greyscale stimuli, and were followed by a 1 s interblock fixation interval. Participants fixated on a gray dot in the center of the display, overlaying the image, and pressed a key when the dot turned red, which occurred on a random one-third of stimulus presentations. In each localizer run, participants viewed six types of blocks, each presented six times. Face blocks contained dynamic facial expressions taken from the Amsterdam Dynamic Facial Expression Set (van der Schalk et al., 2011) or the final static frames from the dynamic face videos, capturing the expression apexes. Eight different identities (four male and four female) changed from neutral to disgust, fearful, happy, or sad expressions. The eight identities and four expressions appeared in a pseudo-random order, with each of the four expressions appearing twice. Object blocks included eight dynamic objects used in a previous study (Fox et al., 2009a) or the final static frames from the dynamic object videos, shown in a pseudo-random order. The low-level motion blocks consisted of dynamic random-dot pattern videos with motion-defined oriented gratings. The stimuli depicted 50% randomly luminous pixels, which could move at one frame per second horizontally, vertically, or diagonally left or right. Oriented gratings were defined by moving the dots within four strips of pixels in the opposite direction to the rest of the display, but at the same rate (Van Oostende et al., 1997). Each motion direction was shown twice per block in a pseudo-random order. There were also corresponding low-level static blocks composed of the final static frames from the low-level motion videos.
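
To make the motion-defined grating construction concrete, here is a minimal numpy sketch of this kind of stimulus: a field of 50% randomly luminous pixels in which strips are visible only because they drift opposite to the background at the same rate. The frame size, strip geometry, and per-frame shift are illustrative assumptions, not the authors' values.

```python
import numpy as np

def kinetic_grating(size=128, n_frames=8, n_strips=4, shift=1, seed=0):
    """Frames of a motion-defined ("kinetic") grating: the oriented strips
    are invisible in any single frame and defined only by opposed motion."""
    rng = np.random.default_rng(seed)
    frame = rng.random((size, size)) < 0.5            # 50% luminous pixels
    strip_height = size // (2 * n_strips)
    strip_rows = np.zeros(size, dtype=bool)
    for i in range(n_strips):                         # evenly spaced strips
        start = (2 * i + 1) * strip_height
        strip_rows[start:start + strip_height] = True
    frames = []
    for _ in range(n_frames):
        frames.append(frame.copy())
        background = np.roll(frame, shift, axis=1)    # background drifts right
        strips = np.roll(frame, -shift, axis=1)       # strips drift left, same rate
        frame = np.where(strip_rows[:, None], strips, background)
    return np.stack(frames)                           # (n_frames, size, size)

frames = kinetic_grating()   # horizontal grating; rotate the mask for other orientations
```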

The remaining runs comprised the main experiment and allowed us to measure expression-specific responses in the ROIs defined by the localizer data. Each of these main experiment runs had 12 blocks. Each block contained a distinct type of stimulus, and blocks were presented in a pseudorandom order. Six of the blocks contained faces, using the same four female and four male identities as in the localizer runs. In each block, all faces were either dynamic or static and showed just one of three expressions: disgust, happy, or fearful. The remaining six blocks were Fourier phase-scrambled versions of each of the six face blocks (dynamic videos were phase-scrambled in three dimensions). There was some overlap between the face stimuli used in the localizer and main experiment runs; thus, the face-selective voxels we observed in the localizer may show a preference for the faces we used in the main experiment.
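
Fourier phase-scrambling preserves a stimulus's amplitude spectrum while destroying the phase structure that carries its recognizable content; for the dynamic stimuli, the same operation applies to the 3D (frames × height × width) spectrum. A minimal sketch, assuming a numpy array as input (taking the real part after inverting with random phases is the usual shortcut, since fully random phases break conjugate symmetry):

```python
import numpy as np

def phase_scramble(stim, seed=0):
    """Keep the Fourier amplitude spectrum of an image (2D) or video (3D),
    replace the phases with uniform random values, and invert."""
    rng = np.random.default_rng(seed)
    amplitude = np.abs(np.fft.fftn(stim))
    random_phase = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, stim.shape))
    return np.real(np.fft.ifftn(amplitude * random_phase))

video = np.random.rand(32, 64, 64)    # stand-in for a face video (frames x h x w)
scrambled = phase_scramble(video)     # 3D scrambling, as for the dynamic faces
```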

After scanning, participants made speeded categorizations of the emotion expressed in the dynamic and static faces as disgust, happy, or fearful and rated the emotional intensity of each face on a 1–9 scale. They also rated on a 1–9 scale the intensity of the motion they perceived in each of the dynamic stimuli. Stimuli were presented for the same duration as in the fMRI experiment, and the next stimulus appeared once the participant completed a rating.

Preprocessing and analysis.

We performed preprocessing and analysis using SPM8 and DCM10 (Wellcome Trust Centre for Neuroimaging, London; http://www.fil.ion.ucl.ac.uk/spm/) and MATLAB (MathWorks). Data were motion- and slice-time corrected, spatially normalized to an EPI template in MNI space, smoothed with an 8 mm full-width at half-maximum Gaussian kernel, and analyzed using the general linear model. At the first (within-subject) level, regressors were constructed by convolving the onset times and durations of the different experimental blocks with a canonical hemodynamic response function. Contrasts of interest were computed for each participant and tested at the random effects (between-subject) level, using one-sample t tests.
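
The sketch below illustrates this regressor construction: a boxcar marking the 11 s blocks is convolved with a double-gamma response function and sampled every TR (2 s). The gamma parameters only approximate the shape of SPM's canonical HRF, and the onsets and scan count are made-up values for illustration.

```python
import numpy as np
from scipy.stats import gamma

TR = 2.0

def canonical_hrf(tr=TR, duration=32.0):
    """Double-gamma HRF (peak near 6 s, undershoot near 16 s);
    parameters approximate SPM's canonical shape, not reproduce it."""
    t = np.arange(0.0, duration, tr)
    h = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
    return h / h.sum()

def block_regressor(onsets_s, block_dur_s, n_scans, tr=TR):
    """Boxcar for one condition's blocks, convolved with the HRF."""
    boxcar = np.zeros(n_scans)
    for onset in onsets_s:
        boxcar[int(onset / tr):int((onset + block_dur_s) / tr)] = 1.0
    return np.convolve(boxcar, canonical_hrf(tr))[:n_scans]

# one condition's regressor plus a constant column, fit by least squares
reg = block_regressor(onsets_s=[12, 60, 108], block_dur_s=11.0, n_scans=150)
X = np.column_stack([reg, np.ones(150)])
y = np.random.randn(150)                 # stand-in voxel time series
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```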

At the first level, we identified face-selective ROIs from the localizer runs in the right occipital and fusiform face areas (OFA, FFA) and in the right posterior STS, by contrasting the average response to dynamic and static faces versus the average response to dynamic and static objects and random-dot patterns. We also identified an ROI showing motion sensitivity to faces in the vicinity of area hMT+/V5 (here labeled V5f) by contrasting dynamic versus static faces for the localizer runs. Although the posterior STS was also apparent in this contrast, we defined our STS ROI for further analysis using the peak face selectivity in posterior STS (see Fig. 1). Thirteen of the 18 participants showed all four ROIs in the right hemisphere, and further analyses focused on right hemisphere ROIs. This maximized the available data, as right hemisphere ROIs were the most commonly identified across participants, a finding consistent with the well-known right hemisphere dominance in face perception (Kanwisher et al., 1997). These ROIs were defined as 9 mm spheres surrounding the peak coordinates. The amygdala was anatomically defined, using an 8 mm sphere centered on MNI 23, −1, −22. We summarized these findings by analyzing localizer runs at the group level. Statistical parametric maps from this group analysis (see Fig. 1) were thresholded at p < 0.001 (for display purposes) and reported if significant at p < 0.05 familywise error corrected at the cluster level (Brett et al., 2003). We further illustrate the localizer findings by showing results computed from individually defined ROIs (see Fig. 2). We ascertained sensitivity to fearful facial expressions by performing general linear model analyses, including ANOVAs, on main experiment run data extracted from the individually defined ROIs.
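
Defining such a spherical ROI reduces to selecting all voxels whose millimeter coordinates fall within the radius of the peak. A minimal sketch, assuming a NIfTI-style affine that maps voxel indices to MNI mm coordinates (the example affine and grid shape are illustrative):

```python
import numpy as np

def sphere_roi(peak_mni, radius_mm, affine, shape):
    """Boolean mask of voxels within radius_mm of a peak MNI coordinate,
    e.g. the 9 mm ROI spheres (8 mm for the amygdala at MNI 23, -1, -22)."""
    ijk = np.indices(shape).reshape(3, -1).astype(float)
    xyz = (affine @ np.vstack([ijk, np.ones(ijk.shape[1])]))[:3]   # voxel -> mm
    dist = np.linalg.norm(xyz - np.asarray(peak_mni, float)[:, None], axis=0)
    return (dist <= radius_mm).reshape(shape)

# illustrative 3 mm isotropic grid; a real affine comes from the image header
affine = np.array([[3., 0., 0., -78.],
                   [0., 3., 0., -112.],
                   [0., 0., 3., -50.],
                   [0., 0., 0., 1.]])
amygdala_mask = sphere_roi((23, -1, -22), 8, affine, shape=(53, 63, 46))
print(amygdala_mask.sum(), "voxels in the sphere")
```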

We used DCM (Friston et al., 2003) to characterize the influence of the amygdala on visual cortical responses. We tested a large model space, in which all models included dynamic and static face inputs to OFA and a dynamic face input to V5f. We also explored two further model spaces: one using an additional static face input to V5f and the other using all 18 participants with ROI locations based on group localizer results. Comparisons using both of these model spaces verified our main findings with respect to the amygdala (see Results). We systematically varied which connections were bilinearly modulated by dynamic or static fear. To test which connections give rise to fear-sensitive responses to dynamic faces in V5f and STS, we tested 15 model variants with dynamic fear modulation (Table 1, left column). These included seven models with modulation of every combination of connections from OFA, V5f, and amygdala; a model in which only connections between V5f and STS were modulated; and seven more models with modulation of every combination of connections from OFA, V5f, and amygdala together with further modulation of connections between V5f and STS. To test how static fear modulation could give rise to fear sensitivity to static faces in FFA, we tested 15 variants with static fear modulation on every combination of connections from OFA, V5f, STS, and amygdala to FFA (Table 1, right column). The dynamic fear and static fear variants were crossed to create 225 combinations. To accommodate a putative subcortical pathway to the amygdala processing facial expressions of emotion (Morris et al., 1999), we tested two model variants in which there was either no exogenous input to the amygdala or faces served as inputs to it. We also implemented two model variants that had either "full connectivity," with all possible endogenous connections (450 models), or sparse connectivity (58 models). These sparse models were based on a previous study (Furl et al., 2013) showing no endogenous connectivity between FFA and STS and only feedforward connections. For purposes of this model space, a feedforward connection was defined as one that propagates signals from an exogenous input, and so sparse models with an exogenous input to amygdala were equipped with additional endogenous connections from amygdala to cortex, which could also be bilinearly modulated by dynamic or static fear (Table 1). We combined all the aforementioned model variants in one model space, yielding 508 total models.
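
Each model in this space is an instance of the bilinear DCM neuronal equation dz/dt = (A + Σ_j u_j B^(j)) z + Cu (Friston et al., 2003). The sketch below integrates that equation for the five regions studied here, with the winning model's modulation pattern wired in; all coupling values, input timings, and the simple Euler integration are illustrative assumptions (real DCM also fits a hemodynamic model on top of these neuronal states).

```python
import numpy as np

def neuronal_states(A, B, C, U, dt=0.1):
    """Euler integration of the bilinear DCM neuronal model
    dz/dt = (A + sum_j u_j * B[j]) z + C u (hemodynamics omitted)."""
    n, n_steps = A.shape[0], U.shape[1]
    Z = np.zeros((n, n_steps))
    z = np.zeros(n)
    for t in range(n_steps):
        u = U[:, t]
        J = A + sum(u[j] * B[j] for j in range(len(B)))
        z = z + dt * (J @ z + C @ u)
        Z[:, t] = z
    return Z

regions = ["OFA", "FFA", "V5f", "STS", "AMY"]   # row = target, column = source
n = len(regions)
A = np.full((n, n), 0.05) - 0.55 * np.eye(n)    # full endogenous coupling, self-decay
U = np.zeros((4, 300))                          # rows: dyn faces, stat faces, dyn fear, stat fear
U[0, 20:75] = U[2, 20:75] = 1.0                 # one dynamic fearful-face block
B = [np.zeros((n, n)) for _ in range(4)]
B[2][2, 4] = B[2][3, 4] = 0.4                   # dynamic fear boosts AMY->V5f, AMY->STS
B[3][1, 4] = B[3][1, 2] = 0.4                   # static fear boosts AMY->FFA, V5f->FFA
C = np.zeros((n, 4))
C[0, 0] = C[0, 1] = C[2, 0] = 1.0               # faces drive OFA; dynamic faces also drive V5f
Z = neuronal_states(A, B, C, U)                 # (5 regions x 300 time steps)
```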

Table 1. Model variants comprising the model comparison.

We compared models at the level of individual models and as model families according to their relative log-evidences and posterior probabilities, under the assumption that our healthy participants possessed the same connectivity architecture (Stephan et al., 2010). We estimated the log-evidence of each model for each participant by computing the free energy, a lower bound on the log-evidence that balances model fit against model complexity (Friston et al., 2003; Penny, 2012). The log-evidences, summed over participants, were compared as a generalization of the Bayes factor (Kass and Raftery, 1995) and expressed as posterior probabilities (under uniform priors over models, the normalized evidence of a model is also its posterior probability).
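
In this fixed-effects scheme, the group-level posterior over models is simply a softmax of the per-subject log-evidences summed over subjects. A minimal sketch with made-up free-energy values (13 subjects, 508 models, one model gaining about 3 nats per subject):

```python
import numpy as np

def model_posteriors(log_evidence):
    """Fixed-effects Bayesian model selection: sum log-evidences over
    subjects, then normalize under uniform model priors."""
    summed = log_evidence.sum(axis=0)
    summed -= summed.max()               # subtract max for numerical stability
    p = np.exp(summed)
    return p / p.sum()

rng = np.random.default_rng(1)
F = rng.normal(0.0, 1.0, size=(13, 508))  # stand-in free energies
F[:, 0] += 3.0                            # model 0 wins ~3 nats per subject
post = model_posteriors(F)
print(f"posterior probability of model 0: {post[0]:.3f}")
```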

Results

ROI specification

Using the localizer runs, we identified conventional face-selective areas (Kanwisher et al., 1997), including the right OFA, FFA, and an area in the posterior STS, by contrasting faces versus objects and random-dot patterns. We also used the localizer runs to identify motion-sensitive areas that showed greater responses to dynamic relative to static facial expressions. Figure 1a illustrates face selectivity and facial motion effects in the right hemisphere using a conventional (statistical parametric mapping) group-level, whole-brain analysis. Consistent with previous studies of facial motion (Schultz and Pilz, 2009; Trautmann et al., 2009; Pitcher et al., 2011; Foley et al., 2012; Schultz et al., 2012), we found bilateral areas sensitive to facial motion (V5f) in the vicinity of hMT+/V5 (right MNI: 50, −62, 6; left MNI: −50, −70, 6). We also detected motion sensitivity to faces in the right posterior STS (MNI: 48, −34, 0), an area whose biological motion sensitivity has been well documented (Giese and Poggio, 2003) and that has been shown to respond more to dynamic than static faces (Schultz and Pilz, 2009; Trautmann et al., 2009; Pitcher et al., 2011; Foley et al., 2012; Schultz et al., 2012). No motion sensitivity was observed in the vicinity of OFA or FFA, and no face selectivity was observed in hMT+/V5, even at liberal (uncorrected) thresholds (p < 0.05). In contrast, we observed overlapping face-selective and motion-sensitive voxels in the right STS (Fig. 1a,b). The amygdala also showed face selectivity in the group-level analyses (Fig. 1c).

Figure 1.

Motion sensitivity to facial stimuli. a, Group-level statistical parametric map for the 13 participants used in ROI and connectivity analyses. Voxels showing significant effects from the localizer runs at p < 0.001 (uncorrected) are projected on an inflated cortical surface of the right hemisphere in MNI space. Green represents voxels sensitive to facial motion; red represents face-selective voxels; yellow represents their overlap. Motion sensitivity to faces without face selectivity is visible in V5f, whereas motion sensitivity to faces and face selectivity overlap in the STS. b, Face selectivity and motion sensitivity to faces in a representative participant. c, Voxels from localizer run data showing face selectivity in bilateral amygdala at p < 0.001 uncorrected. d, Voxels from main experiment run data showing significant differences between all faces and Fourier-scrambled patterns in bilateral amygdala at p < 0.005 uncorrected.

The localizer results for individual participants were used to define functional ROIs in the right hemisphere for OFA, FFA, STS, and V5f, given the well-known right hemisphere dominance in face perception (Kanwisher et al., 1997). We also defined the right amygdala anatomically. We defined our STS ROIs using the peak face selectivity to maintain continuity with previous research, which has largely defined this region of the STS based on face-selective voxels (Haxby et al., 2000). All four ROIs could be defined in 13 of the participants. All of these participants showed overlapping facial motion and face-selective areas in the posterior STS.

Group-level ROI analyses

Using ANOVA on our localizer run data, we tested for motion (dynamic, static) × category (face, object, random-dot pattern) interactions and, when the interaction was not significant, we report main effects. OFA was motion sensitive and face selective (Fig. 2a) and showed main effects of motion (F(1,60) = 21.27; p < 0.001) and category (F(1,60) = 81.98; p < 0.001). Significant motion × category interactions were observed in FFA (Fig. 2b; F(1,60) = 4.98; p = 0.009), V5f (Fig. 2c; F(1,60) = 6.48; p = 0.003), and STS (Fig. 2d; F(1,62) = 6.92; p = 0.002). Post hoc tests (Tukey honest significant difference corrected p < 0.05) showed that these interactions arose because FFA and V5f showed greater motion sensitivity to nonface categories than to faces, whereas STS showed greater motion sensitivity to faces than to nonfaces. STS and V5f showed a significant pairwise difference between dynamic and static faces, whereas FFA did not. In addition, the amygdala (Figs. 1c and 2e) showed face selectivity (F(1,60) = 18.53, p < 0.001).

Figure 2.

Group-level ROI analysis for localizer runs. Mean responses to dynamic and static faces, objects, and random-dot patterns are shown as follows: a, the right OFA; b, the right FFA; c, the right V5f; d, the face-selective area in the STS; e, the amygdala. Error bars indicate SEM. All graphs represent the 13 participants who manifested every ROI.

To summarize the localizer results, dorsal areas showed robust sensitivity to motion in faces, whereas ventral areas, such as FFA, did not. Even though the STS did not show significant motion sensitivity to the low-level motion of random-dot patterns, it was nevertheless sensitive to the more complex forms of motion in faces, possibly reflecting coding of the higher-order motion features present in faces but not in random-dot patterns. Importantly, differences in STS responses between face expressions, such as enhanced responses to fear, are therefore not easily explained by low-level motion differences among dynamic expressions.

We analyzed data from the main experiment runs by extracting responses to faces, compared with Fourier-scrambled patterns, from the independent ROIs described above. Planned comparisons (one-tailed) were used to test whether our ROIs showed increased sensitivity to either dynamic or static fearful faces. Consistent with previous research (Vuilleumier and Pourtois, 2007), the FFA (Fig. 3b) showed greater responses to static fearful faces compared with static nonfearful (disgust + happy) faces (t(12) = 1.82, p = 0.047), with no significant sensitivity to dynamic fearful expressions (p = 0.600). In contrast, and as predicted, areas sensitive to facial motion showed increased sensitivity to dynamic fearful, compared with nonfearful, faces (V5f: t(12) = 2.78, p = 0.008; STS: t(12) = 1.78, p = 0.049), with no significant sensitivity to static expressions (p > 0.403) (Fig. 3c,d). The OFA (Fig. 3a) did not show any significant fear sensitivity (p > 0.144). The average response to faces in the amygdala during the main experiment (compared with Fourier-scrambled patterns) is shown in Figure 1d. Prior functional imaging and electrophysiological studies in the human and macaque monkey have established that the amygdala is responsive to fearful expressions as well as multiple other emotional expressions, when presented as static images (Krolak-Salmon et al., 2004; Sergerie et al., 2008; Hadj-Bouziane et al., 2012), and we replicate this finding (static faces > Fourier-scrambled patterns, t(12) = 2.3, p = 0.02, one-tailed) (Fig. 3e), with no significant differences between emotions (p = 0.37). Responses in the amygdala to dynamic faces also showed no main effect of expression (p = 0.37; Fig. 3e). Responses to Fourier phase-scrambled patterns were near zero in all conditions for all ROIs and showed no significant effects of expression (p > 0.22).

Figure 3.

Group-level ROI analysis for main experiment runs. a, Mean responses to dynamic and static disgust, happy, and fearful facial expressions in the right OFA. Responses are assessed relative to Fourier-scrambled pattern baseline. b, Mean responses in the right FFA. c, Mean responses in the right V5f. d, Mean responses in the face-selective area in the STS. e, Mean responses in the amygdala. *p < 0.05, enhanced responses to fearful expressions for either dynamic or static expressions. Error bars indicate SEM. All graphs represent the 13 participants who manifested every ROI.

Connectivity models of fear sensitivity

Our ROI analysis established fear-sensitive responses in separate visual areas, depending on whether facial expressions were dynamic or static. We used DCM to address the mechanisms underlying these distinct fear-sensitive responses. DCM explains ROI time series by estimating "coupling" parameters that relate the activity (a hidden state variable) in a source area to the rate of change of activity in a target area. Parameters can be added or removed to specify hypothetical connectivity architectures. These parameters include the following: (1) coupling to exogenous stimulus inputs, which enable perturbation of hidden neuronal states by stimulus presentation (faces); (2) endogenous connections, which reflect directed coupling among areas, averaged over experimental conditions; and (3) bilinear modulations, which reflect changes in coupling induced by an experimental factor, in our case, dynamic or static fear (Friston et al., 2003). Our model space considered different combinations of these bilinear parameters to identify connections that were modulated by either dynamic or static fear (see below). Once a model is specified, its parameters are optimized to best explain the data using standard variational (Bayesian) techniques. The model evidence summarizes a model's ability to accurately predict the data in a way that accounts for model complexity, and it allows different models to be compared in terms of their posterior probability (Penny et al., 2004). We also performed "model family" comparisons, which test for evidence for specific architectural features of interest by aggregating the posterior probabilities of the models that share that feature (Penny et al., 2010).
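
A family comparison thus amounts to summing the model posteriors over each family's members, as sketched below under uniform priors over models (Penny et al., 2010 also describe priors that correct for unequal family sizes, which this sketch omits; the membership labels are illustrative):

```python
import numpy as np

def family_posterior(model_post, family_labels):
    """Posterior probability of each family = sum of its members'
    model posteriors (uniform priors over models assumed)."""
    return {f: model_post[family_labels == f].sum()
            for f in np.unique(family_labels)}

rng = np.random.default_rng(2)
model_post = rng.dirichlet(np.ones(508))   # stand-in model posteriors
# e.g. label 1 = models with dynamic-fear modulation of amygdala connections
labels = rng.integers(0, 2, size=508)
print(family_posterior(model_post, labels))
```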

We specified a comprehensive model space covering plausible alternatives for explaining increased fear-sensitive cortical responses to dynamic and static fearful faces. Our models systematically varied which connections, projecting to FFA, were modulated by static fear and which connections, projecting to V5f and STS, were modulated by dynamic fear. To address the possibility of a subcortical input to the amygdala, such as from the superior colliculus via the pulvinar nucleus (Morris et al., 1999), we also considered models with or without exogenous (stimulus-bound) inputs to the amygdala. Finally, to address a previous study showing a feedforward organization among face-selective areas and no connectivity between STS and FFA (Furl et al., 2013), we included models with this "sparse" connectivity structure, as well as models with full connectivity. All 508 combinations of the aforementioned model variants were included in Bayesian model comparison to find the model features that best explained the data.

When all 508 models were individually compared, we found one highly likely model whose posterior probability was 0.93 (Fig. 4a); all competing models had posterior probabilities of <0.068. This optimal model possessed full endogenous connectivity, exogenous inputs only to OFA and V5f (with no extra input to the amygdala), modulation of the connections from the amygdala to V5f and STS by dynamic fear, and modulation of the connections from the amygdala to FFA and from V5f to FFA by static fear. Thus, this model confirmed our hypotheses that the amygdala mediates fear sensitivity in visual areas via backwards connections and that the mode of presentation (i.e., dynamic vs static faces) determines the regional selectivity of this top-down effect. Although the contribution of V5f, in addition to the amygdala, was not predicted, it is consistent with findings that motion-sensitive areas contribute to perception of static expressions (Furl et al., 2012).

Figure 4.

Connectivity analysis results. a, The optimal model. This model was evaluated for the 13 participants with all ROIs. Exogenous inputs (of dynamic or static faces) are indicated. Gray arrows indicate endogenous connections; green arrows indicate connections modulated by static fear; blue arrows indicate connections modulated by dynamic fear. b, Relative log-evidences for two model family comparisons. The posterior probability of the winning family is shown above its bar. "Amy input" tests for evidence favoring an exogenous input to the amygdala. "Full connectivity" tests for evidence favoring models with full endogenous connectivity versus sparse models. c, Model family comparisons testing modulation by dynamic fear of different possible connections projecting to V5f and STS. d, Model family comparisons testing modulation by static fear of different possible connections projecting to FFA. amy, Amygdala.

We also confirmed key attributes of this connectivity pattern using model family comparisons. Posterior probabilities were close to 1 favoring (1) the 234 models with no extra exogenous amygdala input versus models that had this input (Fig. 4b), (2) the 450 models with full endogenous connectivity compared with sparse models (Fig. 4b), and (3) the 127 models that satisfied our a priori hypothesis, with fear modulation on all connections from amygdala to FFA, V5f, and STS, compared with models without these features. Figure 4c shows four more model family comparisons, each testing models in which a region's connections to other regions were modulated by dynamic fear versus models in which they were not. There was little evidence for families with dynamic fear modulation on connections originating in OFA (244 models), FFA (240 models), or mutual modulation between V5f and STS (274 models). However, the posterior probability was close to 1 for connections originating in amygdala (238 models). Figure 4d shows that there was little evidence for families possessing static fear modulation on connections originating in OFA (290 models) or STS (198 models). In contrast, there was a 0.99 posterior probability of static fear modulation on connections originating in amygdala (268 models) and a 0.92 posterior probability of static fear modulation on connections originating in V5f (270 models).

Overall, our model comparisons provided very strong evidence for a model where dynamic fear modulated the connections from amygdala to V5f and STS, the dorsal temporal facial motion areas that showed enhanced responses to dynamic fearful expressions. In the same model, static fear modulated the connections from amygdala to FFA, the face-selective ventral area that showed enhanced responses to static fearful expressions.

Postscanning behavioral measures

We assessed the validity of the facial and control stimuli presented during the fMRI experiment by obtaining behavioral data after scanning. To maintain continuity across results, we describe behavioral data for the 12 participants who were also included in the ROI and DCM analyses (one participant lacked behavioral data). All findings are reported using repeated-measures ANOVAs and post hoc pairwise Tukey honest significant difference range tests at p < 0.05.

For localizer run stimuli, participants rated the "motion intensity" (from 1 to 9) of dynamic faces (mean ± SE, 5.28 ± 0.25), objects (mean ± SE, 6.54 ± 0.36), and patterns (mean ± SE, 7.25 ± 0.34); these ratings showed a main effect of category (F(2,55) = 13.64, p < 0.0001), with a significantly lower rating for dynamic faces than for dynamic random-dot patterns. For main experiment run faces (Fig. 5a), participants reported more motion intensity for veridical than for Fourier phase-scrambled dynamic faces, yielding a main effect (F(1,22) = 29.39, p < 0.0002), but there were no expression differences or interaction. Participants also performed speeded classifications of the faces from the main experiment runs, followed immediately by a 1–9 rating of emotional intensity. For correct classifications, neither motion, nor expression, nor their interaction affected emotional intensity ratings (Fig. 5b). For both dynamic and static faces, participants showed a higher hit rate (F(1,55) = 7.59, p = 0.002) and d′ (F(1,55) = 37.70, p < 0.001) for happy expressions than for disgust or fearful expressions. Fearful expressions were classified least accurately, with a lower d′ than both disgust and happy expressions (Fig. 5c). Happy expressions were also classified faster (F(1,55) = 5.82, p < 0.005; Fig. 5d) than fearful or disgust expressions.
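
For reference, d′ for an expression category can be computed from hit and false-alarm counts as Z(hit rate) − Z(false-alarm rate). A minimal sketch with made-up counts (the log-linear correction, which keeps extreme proportions of 0 or 1 finite, is an assumption, since the paper does not state a correction):

```python
import numpy as np
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = Z(hit rate) - Z(false-alarm rate), log-linear corrected."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# e.g. one participant's "happy" classifications (illustrative counts)
print(d_prime(hits=22, misses=2, false_alarms=3, correct_rejections=45))
```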

Figure 5.

Postscanning behavioral results for faces shown in main experiment runs. a, Motion intensity ratings for facial videos and Fourier phase-scrambled videos of disgust, happy, and fearful expressions. b, Mean emotional intensity ratings of veridical dynamic and static disgust, happy, and fearful expressions. c, d′ classification performance. d, Reaction times (ms) for correct expression classifications. Error bars indicate SEM. All graphs represent the 12 participants who manifested every ROI and had behavioral data.

Overall, our behavioral findings suggest that, at least at the level of conscious report, the different facial expressions did not differ much in perceived motion or emotional intensity. Thus, the emotion-enhanced responses we observed in FFA, V5f, and STS are not likely to result from heightened emotional intensity in fearful expressions, relative to happy and disgust expressions. We also replicated numerous previous findings of accuracy and reaction time advantages for static happy expressions over static fearful expressions (Sweeny et al., 2013), and here we show that these advantages extend to dynamic expressions.

Discussion

We have shown that which temporal visual areas exhibit enhanced responses to emotional facial expressions is determined by whether or not the expressions contain motion. For dynamic facial expressions, we found increased sensitivity to fearful faces in dorsal temporal lobe areas sensitive to facial motion (V5f, STS). In contrast, for static expressions, there was increased sensitivity to fearful faces in a more ventral area, the face-selective FFA. These data were better explained by connectivity models in which dynamic and static facial fear modulated the backwards connections from amygdala to dorsal and ventral temporal areas, respectively, than by a large number of alternative connectivity architectures. For many years, it has been speculated that emotion-sensitive responses in occipitotemporal visual areas might arise because of the influence of backwards connections from the amygdala (Morris et al., 1998). We have used DCM to explicitly confirm this hypothesis in healthy human participants, and we further demonstrate that the amygdala influences specific visual areas in a context-sensitive fashion. Indeed, the areas targeted by the amygdala appear to be those best suited for processing the information in the stimulus (i.e., motion or static form).

Our model selection compared models in which amygdala connections influenced cortical areas against numerous models with alternative forms of connectivity. We found clear evidence for one model, in which the amygdala influenced ventral and dorsal temporal areas via backwards connections. There has been little causal evidence for such an amygdala influence in the healthy human brain, although there are suggestive reductions in cortical emotion sensitivity in amygdala-lesioned macaques (Hadj-Bouziane et al., 2012) and human epilepsy patients with amygdala and hippocampal sclerosis (Vuilleumier et al., 2004). Other indirect conclusions have been based on response timing (Krolak-Salmon et al., 2004; Sabatinelli et al., 2009). In the current study, we measured effective connectivity, which does not preclude polysynaptic mediation via "relay areas," such as frontal cortex (Lim et al., 2009) or the pulvinar (Pessoa and Adolphs, 2010). However, human dissection (ffytche et al., 2005; Martino et al., 2011) and DTI (Catani et al., 2003) show profuse connectivity between amygdala and visual areas, and tract-tracing results in the monkey (Amaral and Price, 1984) show prevalent backwards connections. In the human, amygdala feedback could therefore be propagated by direct white matter tracts to occipitotemporal cortex.

Our results suggest that the amygdala does more than feed back to visual cortex and that it may have a contextual role in visual coding. Specifically, the amygdala feedback may target brain areas to enhance encoding of the visual elements of a stimulus that best predict fear. When faces were dynamic, the amygdala selectively targeted V5f and STS, areas thought to encode motion information (Schultz and Pilz, 2009; Trautmann et al., 2009; Pitcher et al., 2011; Foley et al., 2012; Schultz et al., 2012). The posterior STS may contribute to perceiving changeable facial attributes, such as dynamic facial expression and eye gaze (Haxby et al., 2000), and may rely on motion representations via connections with motion-sensitive area hMT+/V5 (O'Toole et al., 2002; Calder, 2011). These proposals, and our results, dovetail with a sizeable literature on biological motion responses in the posterior STS (Giese and Poggio, 2003) and with research in the monkey implicating motion-sensitive areas in expression representation (Furl et al., 2012). Thus, when faces were dynamic, the amygdala enhanced fear responses specifically in visual areas well suited for representing the dynamics of the fearful expressions. Motion-sensitive areas would provide the definitive visual information about fearful expressions presented as videos.

In contrast, when faces were static, the amygdala produced fear sensitivity in the FFA, an area that shows relatively little motion sensitivity. Indeed, the FFA is thought to be more specialized for representing "static-based" (O'Toole et al., 2002) or "invariant" (Haxby et al., 2000) facial information, which may include representations of facial form, shape, or structure (Calder and Young, 2005; Calder, 2011). Although it is sometimes supposed that representations in the FFA are limited to facial identity perception (Haxby et al., 2000), our findings, together with a considerable body of literature (Ganel et al., 2005; Fox et al., 2009b; Cohen Kadosh et al., 2010; Harris et al., 2012), suggest that FFA can also contribute to expression perception. In this case, the FFA (rather than dorsal areas) would provide the definitive visual information about fearful expressions from static images. In sum, our results are best explained if the amygdala optimized encoding of behaviorally relevant information by selectively targeting the areas best suited for representing the information available in the stimulus, whether motion or static form.

The enhanced visual responses we observed presumably reflect elaborated visual coding of information, such as form or motion, and this elaborated coding may have behavioral consequences. The fearful expressions of others contain visual information that indicates danger. The amygdala may guide the visual system to prioritize encoding of the visual information (form or motion) that best predicts such aversive events. Indeed, there is a vast literature on animal fear conditioning suggesting that the amygdala plays a role in learning which preceding sensory cues predict adverse events (Dolan, 2002; Maren and Quirk, 2004). Similar amygdala-based learning and conditioning mechanisms operate in the human when learning from others' fear (Olsson and Phelps, 2007). Moreover, fear-related events can also lead to emotion-related memory enhancements that are amygdala-dependent (Phelps, 2004). Our data, together with this literature, suggest that the amygdala controls encoding and prediction of aversive events based on the individual visual elements of a stimulus (form and motion).

Because we used a comprehensive DCM space, we can also report two further findings. First, our model space explored the manner in which the amygdala initially receives visual information, which is currently a topic of active debate (Vuilleumier and Pourtois, 2007; Pessoa and Adolphs, 2010). We considered models with an exogenous input to the amygdala, which was intended to account for a subcortical route to the amygdala (Morris et al., 1999; Rudrauf et al., 2008). Models with this amygdala input were suboptimal compared with models in which the amygdala received inputs from OFA, FFA, V5f, and STS, without any other exogenous influences. Several studies suggest, however, that a subcortical input might be more apparent when faces are presented at low spatial frequencies, peripherally, subliminally, or unattended (Morris et al., 1999; Anderson et al., 2003; Winston et al., 2003; Williams et al., 2004). Second, we observed static fear modulation on the connection from V5f to FFA, a finding that might relate to perception of implied motion in static images (Kourtzi and Kanwisher, 2000; Senior et al., 2000). In a previous study (Furl et al., 2012), we found that static facial expressions could be decoded from motion-sensitive areas in the macaque, despite limited mean responses to static expressions. Similarly, here we found weak responses to static faces in V5f, yet responses in this area influenced responses to static faces in FFA.

This new perspective on amygdala function also introduces new research questions concerning the mechanisms that the amygdala uses to specify its cortical targets. For example, cortical areas may be targeted within the amygdala presynaptically or, alternatively, postsynaptic neurons within the cortical areas may render themselves more receptive to amygdala influence. There may be short-term synaptic plasticity that alters the connection strengths between cortical areas and the amygdala, depending on activity in both areas. Also, oscillatory phase synchrony has been hypothesized as a mechanism for gating or routing information transmission in the brain (Salinas and Sejnowski, 2001). These possibilities are difficult to test using hemodynamic measures, however, and so suggest new research avenues for electrophysiology in monkeys.

In conclusion, we have shown that dynamic facial expressions evoke fear-sensitive responses in dorsal temporal areas sensitive to visual motion but that static expressions evoke fear-sensitive responses in a ventral temporal area, the FFA. Fear-sensitive responses in both dorsal and ventral areas were best explained by a connectivity model where top-down influences from the amygdala were modulated by fear. This model provides strong evidence, from the healthy human brain, for the long-standing speculation that augmented visual responses to emotional stimuli are caused by amygdala feedback (Morris et al., 1998). Our model further elaborates our understanding of amygdala function by showing that the amygdala can flexibly enhance fear responses in specific brain areas that are best suited for representing definitive stimulus information. Our study, the first to apply extensive connectivity modeling to fMRI responses to dynamic and static faces, yields a new perspective on how the amygdala controls the visual system and speaks to novel research avenues.

Footnotes

  • This work was supported by the United Kingdom Economic and Social Research Council Grant RES-062-23-2925 to N.F. and the Medical Research Council Grant MC_US_A060_5PQ50 to A.J.C. and Grant MC_US_A060_0046 to R.N.H. We thank Christopher Fox for supplying the dynamic object stimuli and James Rowe and Francesca Carota for contributing useful comments.

  • The authors declare no competing financial interests.

  • Correspondence should be addressed to Dr. Nicholas Furl, Medical Research Council Cognition and Brain Sciences Unit, 15 Chaucer Road, Cambridge, CB2 7EF, United Kingdom. nick.furl@mrc-cbu.cam.ac.uk

References

  1. Amaral DG, Price JL (1984) Amygdalo-cortical projections in the monkey (Macaca fascicularis). J Comp Neurol 230:465–496.
  2. Anderson AK, Christoff K, Panitz D, De Rosa E, Gabrieli JD (2003) Neural correlates of the automatic processing of threat facial signals. J Neurosci 23:5627–5633.
  3. Brett M, Penny WD, Kiebel SJ (2003) Introduction to random field theory. In: Human brain function, Ed 2 (Frackowiak RSJ, Friston KJ, Frith C, Dolan R, Price CJ, Zeki S, Ashburner J, Penny WD, eds). San Diego: Academic.
  4. Calder AJ (2011) Does facial identity and facial expression recognition involve separate visual routes? In: The Oxford handbook of face perception (Calder AJ, Rhodes G, Johnson M, Haxby JV, eds). Oxford: Oxford UP.
  5. Calder AJ, Young AW (2005) Understanding recognition of facial identity and facial expression. Nat Rev Neurosci 6:641–651.
  6. Catani M, Jones DK, Donato R, ffytche DH (2003) Occipito-temporal connections in the human brain. Brain 126:2093–2107.
  7. Cohen Kadosh K, Henson RN, Cohen Kadosh R, Johnson MH, Dick F (2010) Task-dependent activation of face-sensitive cortex: an fMRI adaptation study. J Cogn Neurosci 22:903–917.
  8. Dolan RJ (2002) Emotion, cognition, and behavior. Science 298:1191–1194.
  9. Foley E, Rippon G, Thai NJ, Longe O, Senior C (2012) Dynamic facial expressions evoke distinct activation in the face perception network: a connectivity analysis study. J Cogn Neurosci 24:507–520.
  10. Fox CJ, Iaria G, Barton JJ (2009a) Defining the face processing network: optimization of the functional localizer in fMRI. Hum Brain Mapp 30:1637–1651.
  11. Fox CJ, Moon SY, Iaria G, Barton JJ (2009b) The correlates of subjective perception of identity and expression in the face network: an fMRI adaptation study. Neuroimage 44:569–580.
  12. Friston KJ, Harrison L, Penny W (2003) Dynamic causal modelling. Neuroimage 19:1273–1302.
  13. Furl N, Hadj-Bouziane F, Liu N, Averbeck BB, Ungerleider LG (2012) Dynamic and static facial expressions decoded from motion-sensitive areas in the macaque monkey. J Neurosci 32:15952–15962.
  14. Furl N, Coppola R, Averbeck BB, Weinberger DR (2013) Cross-frequency power coupling between face-selective areas. Cereb Cortex, advance online publication. doi:10.1093/cercor/bht097.
  15. Ganel T, Valyear KF, Goshen-Gottstein Y, Goodale MA (2005) The involvement of the "fusiform face area" in processing facial expression. Neuropsychologia 43:1645–1654.
  16. Giese MA, Poggio T (2003) Neural mechanisms for the recognition of biological movements. Nat Rev Neurosci 4:179–192.
  17. Hadj-Bouziane F, Liu N, Bell AH, Gothard KM, Luh WM, Tootell RB, Murray EA, Ungerleider LG (2012) Amygdala lesions disrupt modulation of functional MRI activity evoked by facial expression in the monkey inferior temporal cortex. Proc Natl Acad Sci U S A 109:E3640–E3648.
  18. Harris RJ, Young AW, Andrews TJ (2012) Morphing between expressions dissociates continuous from categorical representations of facial expression in the human brain. Proc Natl Acad Sci U S A 109:21164–21169.
  19. Haxby JV, Hoffman EA, Gobbini MI (2000) The distributed human neural system for face perception. Trends Cogn Sci 4:223–233.
  20. Kanwisher N, McDermott J, Chun MM (1997) The fusiform face area: a module in human extrastriate cortex specialized for face perception. J Neurosci 17:4302–4311.
  21. Kass RE, Raftery AE (1995) Bayes factors. J Am Stat Assoc 90:773–795.
  22. Kolster H, Peeters R, Orban GA (2010) The retinotopic organization of the human middle temporal area MT/V5 and its cortical neighbors. J Neurosci 30:9801–9820.
  23. Kourtzi Z, Kanwisher N (2000) Activation in human MT/MST by static images with implied motion. J Cogn Neurosci 12:48–55.
  24. Krolak-Salmon P, Hénaff MA, Vighetto A, Bertrand O, Mauguière F (2004) Early amygdala reaction to fear spreading in occipital, temporal, and frontal cortex: a depth electrode ERP study in human. Neuron 42:665–676.
  25. Lim SL, Padmala S, Pessoa L (2009) Segregating the significant from the mundane on a moment-to-moment basis via direct and indirect amygdala contributions. Proc Natl Acad Sci U S A 106:16841–16846.
  26. Maren S, Quirk GJ (2004) Neuronal signalling of fear memory. Nat Rev Neurosci 5:844–852.
  27. Martino J, De Witt Hamer PC, Vergani F, Brogna C, de Lucas EM, Vázquez-Barquero A, García-Porrero JA, Duffau H (2011) Cortex-sparing fiber dissection: an improved method for the study of white matter anatomy in the human brain. J Anat 219:531–541.
  28. Mechelli A, Price CJ, Henson RN, Friston KJ (2003) Estimating efficiency a priori: a comparison of blocked and randomized designs. Neuroimage 18:798–805.
  29. Morris JS, Friston KJ, Büchel C, Frith CD, Young AW, Calder AJ, Dolan RJ (1998) A neuromodulatory role for the human amygdala in processing emotional facial expressions. Brain 121:47–57.
  30. Morris JS, Öhman A, Dolan RJ (1999) A subcortical pathway to the right amygdala mediating "unseen" fear. Proc Natl Acad Sci U S A 96:1680–1685.
  31. Olsson A, Phelps EA (2007) Social learning of fear. Nat Neurosci 10:1095–1102.
  32. O'Toole AJ, Roark DA, Abdi H (2002) Recognizing moving faces: a psychological and neural synthesis. Trends Cogn Sci 6:261–266.
  33. Penny WD (2012) Comparing dynamic causal models using AIC, BIC and free energy. Neuroimage 59:319–330.
  34. Penny WD, Stephan KE, Mechelli A, Friston KJ (2004) Comparing dynamic causal models. Neuroimage 22:1157–1172.
  35. Penny WD, Stephan KE, Daunizeau J, Rosa MJ, Friston KJ, Schofield TM, Leff AP (2010) Comparing families of dynamic causal models. PLoS Comput Biol 6:e1000709.
  36. Pessoa L, Adolphs R (2010) Emotion processing and the amygdala: from a 'low road' to 'many roads' of evaluating biological significance. Nat Rev Neurosci 11:773–783.
  37. Phelps EA (2004) Human emotion and memory: interactions of the amygdala and hippocampal complex. Curr Opin Neurobiol 14:198–202.
  38. Pitcher D, Dilks DD, Saxe RR, Triantafyllou C, Kanwisher N (2011) Differential selectivity for dynamic versus static information in face-selective cortical regions. Neuroimage 56:2356–2363.
  39. Rudrauf D, David O, Lachaux JP, Kovach CK, Martinerie J, Renault B, Damasio A (2008) Rapid interactions between the ventral visual stream and emotion-related structures rely on a two-pathway architecture. J Neurosci 28:2793–2803.
  40. Sabatinelli D, Lang PJ, Bradley MM, Costa VD, Keil A (2009) The timing of emotional discrimination in human amygdala and ventral visual cortex. J Neurosci 29:14864–14868.
  41. Sabatinelli D, Fortune EE, Li Q, Siddiqui A, Krafft C, Oliver WT, Beck S, Jeffries J (2011) Emotional perception: meta-analyses of face and natural scene processing. Neuroimage 54:2524–2533.
  42. Salinas E, Sejnowski TJ (2001) Correlated neuronal activity and the flow of neuronal information. Nat Rev Neurosci 2:539–550.
  43. Schultz J, Pilz KS (2009) Natural facial motion enhances cortical responses to faces. Exp Brain Res 194:465–475.
  44. Schultz J, Brockhaus M, Bülthoff HH, Pilz KS (2013) What the human brain likes about facial motion. Cereb Cortex 23:1167–1178.
  45. Senior C, Barnes J, Giampietro V, Simmons A, Bullmore ET, Brammer M, David AS (2000) The functional neuroanatomy of implicit-motion perception or representational momentum. Curr Biol 10:16–22.
  46. Sergerie K, Chochol C, Armony JL (2008) The role of the amygdala in emotional processing: a quantitative meta-analysis of functional neuroimaging studies. Neurosci Biobehav Rev 32:811–830.
  47. Stephan KE, Penny WD, Moran RJ, den Ouden HE, Daunizeau J, Friston KJ (2010) Ten simple rules for dynamic causal modeling. Neuroimage 49:3099–3109.
  48. Sweeny TD, Suzuki S, Grabowecky M, Paller KA (2013) Detecting and categorizing fleeting emotions in faces. Emotion 13:76–91.
  49. Trautmann SA, Fehr T, Herrmann M (2009) Emotions in motion: dynamic compared to static facial expressions of disgust and happiness reveal more widespread emotion-specific activations. Brain Res 1284:100–115.
  50. van der Schalk J, Hawk ST, Fischer AH, Doosje B (2011) Moving faces, looking places: the Amsterdam Dynamic Facial Expressions Set (ADFES). Emotion 11:907–920.
  51. Van Oostende S, Sunaert S, Van Hecke P, Marchal G, Orban GA (1997) The kinetic occipital (KO) region in man: an fMRI study. Cereb Cortex 7:690–701.
  52. Vuilleumier P, Pourtois G (2007) Distributed and interactive brain mechanisms during emotion face perception: evidence from functional neuroimaging. Neuropsychologia 45:174–194.
  53. Vuilleumier P, Richardson MP, Armony JL, Driver J, Dolan RJ (2004) Distant influences of amygdala lesion on visual cortical activation during emotional face processing. Nat Neurosci 7:1271–1278.
  54. Williams MA, Morris AP, McGlone F, Abbott DF, Mattingley JB (2004) Amygdala responses to fearful and happy facial expressions under conditions of binocular suppression. J Neurosci 24:2898–2904.
  55. Winston JS, Vuilleumier P, Dolan RJ (2003) Effects of low-spatial frequency components of fearful faces on fusiform cortex activity. Curr Biol 13:1824–1829.