Abstract
This study addresses the neuronal representation of aversive sounds that are perceived as unpleasant. Functional magnetic resonance imaging in humans demonstrated responses in the amygdala and auditory cortex to aversive sounds. We show that the amygdala encodes both the acoustic features of a stimulus and its valence (perceived unpleasantness). Dynamic causal modeling of this system revealed that evoked responses to sounds are relayed to the amygdala via the auditory cortex. While acoustic features modulate effective connectivity from the auditory cortex to the amygdala, valence modulates effective connectivity from the amygdala to the auditory cortex. These results support a complex (recurrent) interaction between the auditory cortex and amygdala: object-level analysis in the auditory cortex precedes the assignment of emotional valence in the amygdala, which in turn influences the representation of salient information in the auditory cortex.
Introduction
Certain sounds, such as scraping sounds (e.g., fingernails on a blackboard), are perceived as highly unpleasant. A number of studies in humans have shown higher activation in the amygdala (Mirz et al., 2000; Zald and Pardo, 2002) and auditory cortex (Fecteau et al., 2007; Viinikainen et al., 2011) in response to unpleasant sounds relative to neutral sounds. In the present work, we address the following three key questions: (1) What does the activity in the amygdala and auditory cortex represent in response to unpleasant sounds? (2) Does the amygdala receive direct subcortical auditory inputs, or are they relayed through the auditory cortex? (3) How do acoustic features and valence modulate the coupling between the amygdala and the auditory cortex?
For emotional stimuli, acoustic features covary with emotional valence: the spectrotemporal complexity of a stimulus with negative (or positive) valence differs from that of a neutral stimulus. It is therefore not clear whether the observed response in the amygdala reflects the acoustic features, the valence per se, or both. In this work, by explicitly modeling the spectrotemporal features of sounds and using these features, together with the valence of sounds, as explanatory variables, we disambiguate the two dimensions of the response, decomposing it into components uniquely explained by the acoustic features and by valence.
Using a classical conditioning paradigm in rodents, LeDoux and colleagues (1984, 1990b) have argued that the conditioned aversive stimulus (typically a pure tone) reaches the amygdala via a fast “second” auditory pathway from the auditory thalamus to the amygdala. Whether the processing of complex aversive sounds might follow the subcortical route to the amygdala is not known. Although it has been argued that “more complex (aversive) stimuli would require cortical processing” (Phelps and LeDoux, 2005), empirical evidence for the route followed by complex aversive stimuli in humans is lacking.
Converging evidence from both the visual (Lane et al., 1997; Lang et al., 1998) and auditory (Plichta et al., 2011) domains shows that the activity of sensory cortex is modulated by emotional stimuli. The source of these modulations in the sensory cortex is thought to be the amygdala (Morris et al., 1998; Pessoa et al., 2002). This is based on the observations that activity in the amygdala correlates with cortical responses (Morris et al., 1998) and that the amygdala projects extensively to the sensory cortex (Amaral and Price, 1984). However, the observed correlation does not establish a causal influence of the amygdala. Moreover, the way in which the coupling between the amygdala and auditory cortex is influenced by low-level acoustic features and valence is not known.
In this study, we used event-related functional magnetic resonance imaging (fMRI) to measure neuronal responses to sounds that varied over a large range in the degree of their unpleasantness. Using conventional general linear model (GLM) analysis, together with an analysis of causal interactions between the amygdala and the auditory cortex based on dynamic causal modeling (DCM) and Bayesian model selection, we show the following: (1) the amygdala encodes both the acoustic features and the valence of aversive sounds, (2) information is relayed to the amygdala via the auditory cortex, and (3) while the acoustic features modulate the forward coupling (from the auditory cortex to the amygdala), valence modulates the backward connectivity from the amygdala to the auditory cortex.
Materials and Methods
Participants.
Sixteen healthy subjects (7 females, age range 22–35 years) with no prior history of neurological or psychiatric disorder participated in the study. All subjects completed a consent form and were paid for their participation. Subjects were informed that they would be listening to unpleasant sounds in the MRI scanner but were not told about the type of unpleasant sounds or the overall aim of the study.
Sound stimuli and extraction of acoustic features.
The stimuli consisted of a set of 74 sounds (each ∼2 s in duration). The choice of sounds was based on our previous work (Kumar et al., 2008), in which a group of 50 subjects rated the unpleasantness of these sounds. These sounds, which were also analyzed to determine acoustic features relevant to perceived unpleasantness, were not categorically aversive or nonaversive; rather, their perceived unpleasantness varied continuously from high to low. The sounds included highly unpleasant sounds [scraping sounds (chalk scratched on a blackboard, a knife scraped over a bottle) and animal cries] and less unpleasant sounds (e.g., bubbling water).
To identify the features of the sounds that may be salient for perceived unpleasantness, we used a model of the auditory system to determine the spectrotemporal representations that correspond to the representation in the auditory cortex. The model of the auditory system includes (1) peripheral processing, which decomposes time domain sound signals into a two-dimensional time–frequency representation; and (2) central processing, which decomposes the two-dimensional representation into ripples with different spectral and temporal modulation frequencies (for a detailed description of the model, see Shamma, 2003; Kumar et al., 2008). In our previous study, we found that such a representation of sounds in the spectral frequency (F) and temporal modulation frequency (f) space can predict the perceived unpleasantness of sounds. To determine how the perceived unpleasantness and the BOLD signal vary with spectrotemporal features, we evaluated (for each sound) the value of spectral frequency and temporal modulation frequency that corresponds to maximum energy in the F–f space.
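As a rough illustration of this feature-extraction step, the following Python sketch approximates the two stages: the peripheral stage by a log-magnitude spectrogram, and the central stage by a Fourier transform of each frequency channel's envelope over time, after which the coordinates of maximum energy in the F–f plane are read off. This is not the cortical model of Shamma (2003), which uses a cochlear filterbank and ripple analysis; the window lengths and the simplified two-stage structure here are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import spectrogram

def peak_modulation_features(x, fs):
    """Locate the spectral frequency (F) and temporal modulation
    frequency (f) of maximum energy for one sound (simplified sketch)."""
    # Peripheral stage: time-frequency decomposition (~25 ms windows, 10 ms hop).
    freqs, times, S = spectrogram(x, fs=fs, nperseg=int(0.025 * fs),
                                  noverlap=int(0.015 * fs))
    env = np.log(S + 1e-12)                  # compressive nonlinearity
    env -= env.mean(axis=1, keepdims=True)   # remove DC per channel

    # Central stage: temporal modulation spectrum of each channel.
    frame_rate = 1.0 / (times[1] - times[0])
    mod = np.abs(np.fft.rfft(env, axis=1))   # channels x modulation freqs
    mod_freqs = np.fft.rfftfreq(env.shape[1], d=1.0 / frame_rate)

    # Coordinates of maximum energy in the F-f plane.
    i, j = np.unravel_index(np.argmax(mod), mod.shape)
    return freqs[i], mod_freqs[j]
```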
MRI data collection.
All imaging data were collected on a Siemens 3 tesla Allegra head-only MRI scanner. Stimuli were presented in an event-related paradigm with interstimulus intervals jittered between 1.0 and 1.6 s. Each sound was repeated four times during the experiment. The experiment also included 90 null events. In the scanner, subjects rated each sound on a scale from 1 (least unpleasant) to 5 (highly unpleasant). Ratings were recorded via button presses on a five-button box. MRI images were acquired continuously (TR, 2.73 s; TE, 30 ms; 42 slices covering the whole brain; flip angle, 90°; isotropic voxel size, 2 mm with 1 mm gap; matrix size, 128 × 128) with a sequence optimized for the amygdala (Weiskopf et al., 2006). After the fMRI run, a high-resolution (1 × 1 × 1 mm) T1-weighted structural MRI scan was acquired for each subject.
MRI data analysis.
Data for three subjects had to be discarded because of technical problems in registering the ratings given by the subjects. MRI data were analyzed using SPM8 (http://www.fil.ion.ucl.ac.uk/spm). After discarding the first two dummy images to allow for T1 relaxation effects, images were realigned to the first volume. The realigned images were normalized to stereotactic space and smoothed with an isotropic Gaussian kernel of 8 mm full-width at half maximum. After preprocessing, statistical analysis used a general linear (convolution) model (GLM). The design matrix for this analysis consisted of stimulus functions encoding the stimulus onsets convolved with a hemodynamic response function and four parametric regressors, in which the stimulus onsets were modulated by (1) spectral frequency (F), (2) temporal modulation frequency (f), (3) the interaction between the spectral frequency and temporal modulation frequency (F × f), and (4) the rating of perceived unpleasantness (valence). The regressors were orthogonalized such that the variance explained by valence was orthogonal to that explained by the acoustic features. A high-pass filter with a cutoff frequency of 1/128 Hz was applied to remove low-frequency variations in the BOLD signal. The GLM for each subject was estimated, and the contrasts of parameter estimates for individual subjects were entered into second-level t tests to form statistical parametric maps, implementing a whole-brain random-effects analysis. The (display) threshold in the amygdala was lowered to p < 0.005 (uncorrected) to highlight the patterns of activity in the amygdala that correlate with acoustic features and ratings of unpleasantness. These responses were overlaid on the amygdala maps from the anatomy toolbox (Eickhoff et al., 2005), available in SPM8.
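The serial orthogonalization of the parametric modulators can be made concrete with a short sketch. The Python fragment below (illustrative only; SPM performs an equivalent step internally, and the full design matrix also includes the HRF-convolved onset regressor, which is omitted here) residualizes each modulator against those preceding it, so that shared variance is credited to the earlier regressor. The per-trial values are synthetic.

```python
import numpy as np

def orthogonalize(X):
    """Serially orthogonalize design-matrix columns: each column is
    residualized against all columns to its left, so shared variance
    is assigned to the earlier regressor."""
    X = X.astype(float).copy()
    for j in range(1, X.shape[1]):
        Q = X[:, :j]
        beta, *_ = np.linalg.lstsq(Q, X[:, j], rcond=None)
        X[:, j] -= Q @ beta
    return X

# Hypothetical per-trial modulator values: F, f, their interaction, valence.
# Ordering matters: valence comes last, so it retains only variance not
# already explained by the acoustic features.
n_trials = 296                        # 74 sounds x 4 repetitions
rng = np.random.default_rng(0)
F = rng.normal(size=n_trials)
f = rng.normal(size=n_trials)
valence = 0.4 * F - 0.2 * F * f + rng.normal(size=n_trials)

mods = np.column_stack([F, f, F * f, valence])
mods -= mods.mean(axis=0)             # mean-center each modulator
mods_orth = orthogonalize(mods)
```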
Connectivity analysis: dynamic causal modeling.
The central idea behind DCM (Friston et al., 2003) is to identify causal interactions between two or more brain areas. The term “causal” in DCM refers to how the activity of one brain area changes the response of another area (Friston, 2009). A distinctive feature of DCM, in contrast to other methods of connectivity analysis (e.g., Granger causality), is that it employs a generative or forward model of how the observed data (the BOLD response in the present case) were generated. The generative model used in DCM for fMRI has two parts. The first part models the dynamics of neural activity by the following bilinear differential equation:

$$\dot{z}(t) = \Bigl(A + \sum_{j=1}^{m} u_j(t)\, B^{(j)}\Bigr)\, z(t) + C\, u(t)$$

where z is an n-dimensional state vector (with one state variable per region), t is continuous time, and u_j is the j-th experimental input (i.e., the stimulus functions above).
The above equation has three sets of parameters. The first set, matrix A of size (n × n), models the average connection strengths between the regions. These parameters represent the influence that one region has over another in the absence of any external manipulation. The second set, the matrices B^(j) (each n × n), are known as modulatory parameters and model the change in connection strength induced by the j-th stimulus function. These parameters are therefore input-specific and are also referred to as bilinear parameters. The third set, matrix C (n × m, with one column per input), models the direct influence of the stimulus functions on the regions. Conventional general linear model analysis is based on the assumption that any stimulus has a direct influence on a region. DCM can therefore be regarded as more general, with the general linear model being the special case in which the coupling parameters (the first and second sets, A and B) are assumed to be zero.
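To make the state equation concrete, here is a minimal Python sketch that integrates it with a forward Euler scheme. This illustrates only the neural part of the model (DCM additionally inverts a hemodynamic model and estimates parameters with variational Bayes); the two-region layout, parameter values, and input timing are invented for the example, with the convention that row = target region and column = source region.

```python
import numpy as np

def dcm_neural_dynamics(A, B, C, U, dt=0.05):
    """Euler integration of dz/dt = (A + sum_j u_j(t) B[j]) z + C u(t).
    A: (n, n) fixed connectivity; B: (m, n, n) modulatory matrices;
    C: (n, m) driving-input weights; U: (T, m) stimulus functions."""
    T, m = U.shape
    n = A.shape[0]
    z = np.zeros(n)
    Z = np.zeros((T, n))
    for t in range(T):
        J = A + np.einsum('j,jkl->kl', U[t], B)   # effective connectivity at t
        z = z + dt * (J @ z + C @ U[t])
        Z[t] = z
    return Z

# Toy example: region 0 = auditory cortex, region 1 = amygdala.
# Input 0 (sound onsets) drives the auditory cortex; input 1 (an
# acoustic-feature modulator) strengthens the forward connection.
A = np.array([[-1.0, 0.1],
              [ 0.4, -1.0]])
B = np.zeros((2, 2, 2))
B[1, 1, 0] = 0.3                       # input 1 modulates AC -> amygdala
C = np.array([[1.0, 0.0],
              [0.0, 0.0]])
U = np.zeros((200, 2))
U[20:40, :] = 1.0                      # a 1 s burst of stimulation
Z = dcm_neural_dynamics(A, B, C, U)
```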
The second part of the generative model converts neural activity into BOLD responses. This is done using a hemodynamic model (Friston et al., 2000) of neurovascular coupling. The combined set of parameters from both parts is then estimated using variational Bayes (Friston et al., 2003) to give the posterior density over the parameters and the model evidence. The model evidence is used to select the best model(s) from a set of models.
The relative evidence for two models, or two sets of models, is generally computed using a Bayes factor (BF) (Kass and Raftery, 1995):

$$\mathrm{BF}_{12} = \frac{p_1}{p_2}$$

where p1 and p2 are the posterior probabilities of models 1 and 2, respectively (under uninformative priors over models, the posterior odds equal the Bayes factor). Kass and Raftery (1995) proposed the following rules for assessing these odds ratios: BF 1–3, weak evidence for model 1 compared with model 2; BF 3–20, positive evidence; BF 20–150, strong evidence; and BF >150, very strong evidence.
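In code, the comparison reduces to exponentiating a difference of log evidences and reading the result off the Kass–Raftery scale. A minimal sketch (the function names and example values are ours, not from any DCM toolbox):

```python
import numpy as np

def bayes_factor(log_ev1, log_ev2):
    """Bayes factor for model 1 over model 2 from log model evidences."""
    return np.exp(log_ev1 - log_ev2)

def kass_raftery(bf):
    """Interpret a Bayes factor on the Kass and Raftery (1995) scale."""
    if bf < 3:
        return "weak"
    if bf < 20:
        return "positive"
    if bf < 150:
        return "strong"
    return "very strong"

# e.g., the M2-vs-M1 comparison reported in Results: BF ~ 35 -> "strong".
print(kass_raftery(bayes_factor(-100.0, -103.55)))
```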
Volumes of interest for DCM analysis.
We chose four volumes of interest for the DCM analysis: right amygdala (38, −6, −24), left amygdala (−22, −2, −12), right auditory cortex (48, −14, −12), and left auditory cortex (−50, 6, −6). These areas and their coordinates were based on the group-level (random-effects) GLM analysis. BOLD activity in the right amygdala and the left auditory cortex was positively correlated with ratings of unpleasantness. BOLD responses in the left amygdala and the right auditory cortex were correlated with the interaction between the spectral frequency and temporal modulation frequency. For each subject, we chose subject-specific maxima that were closest to, and fell within the same anatomical region as, the group-level maxima. Time series of activity for the different regions were summarized by the first principal component of all voxels lying within 4 mm of the subject-specific maxima. For group-level model comparison, we used a random-effects analysis (Stephan et al., 2009) as implemented in SPM8.
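As an illustration of this summary step, the sketch below extracts the first principal component of the voxel time series within a 4 mm radius of a peak. It is a simplified stand-in for SPM's VOI extraction (which also adjusts for confounds); the array layout is an assumption.

```python
import numpy as np

def voi_timeseries(data, coords, peak, radius=4.0):
    """First principal component of all voxel time series within
    `radius` mm of a peak. data: (T, V) time x voxels; coords: (V, 3)
    voxel positions in mm; peak: (3,) peak coordinate in mm."""
    mask = np.linalg.norm(coords - peak, axis=1) <= radius
    Y = data[:, mask]
    Y = Y - Y.mean(axis=0)                 # center each voxel's time series
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    pc1 = U[:, 0] * s[0]                   # first principal component
    # The sign of a PC is arbitrary; align it with the mean signal.
    if np.corrcoef(pc1, Y.mean(axis=1))[0, 1] < 0:
        pc1 = -pc1
    return pc1
```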
Results
Relationship between acoustic features and perceived unpleasantness
The set of 74 sounds presented during the experiment was analyzed for acoustic features using a biologically realistic model of the auditory system (see Materials and Methods, above). Figure 1 shows examples of the spectrotemporal representation for six of the 74 sounds. This representation is in a space with dimensions of spectral frequency (F, y-axis) and temporal modulation frequency (f, x-axis). The mean unpleasantness rating for each of these sounds is also shown (in the top right corner of each plot). It can be seen that sounds with high unpleasantness have high spectral frequencies and low temporal modulation frequencies.
We analyzed, using regression analysis, the relationship between the ratings and the spectrotemporal features. Since rating is an ordinal variable (a categorical variable whose levels can be ordered) and the spectral and temporal modulation frequencies are continuous variables, we used ordinal regression (with a logit model). Specifically, we estimated the following model:

$$y^{*} = \beta_1 F + \beta_2 f + \beta_3 (F \times f) + \varepsilon, \qquad \text{rating} = k \;\text{ if }\; \alpha_{k-1} < y^{*} \le \alpha_k$$

where F is the spectral frequency, f is the temporal modulation frequency, F × f is the interaction between the two, ε is the (logistically distributed) error, and the α_k are category thresholds. The analysis showed that the regression coefficients for the spectral frequency, β1 = 2.31, and the interaction term, β3 = −0.40, were statistically significant (p < 0.01). The regression coefficient for the temporal modulation term was not statistically significant. The acoustic features explained ∼19% of the variance of the rating variable.
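For readers who want to reproduce this kind of fit, the following is a minimal Python sketch using the OrderedModel class from statsmodels (distr='logit' gives the proportional-odds logit model). The data are synthetic, with effect sizes loosely borrowed from the reported coefficients purely so the example produces a comparable pattern; none of the numbers come from the actual stimulus set.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 74                                    # one observation per sound
F = rng.normal(size=n)                    # standardized spectral frequency
f = rng.normal(size=n)                    # standardized temporal modulation
latent = 2.31 * F - 0.40 * (F * f) + rng.logistic(size=n)

# Discretize the latent variable into a 1-5 unpleasantness rating.
rating = np.digitize(latent, bins=[-2.0, -0.5, 0.5, 2.0]) + 1

X = pd.DataFrame({"F": F, "f": f, "Fxf": F * f})
model = OrderedModel(rating, X, distr="logit")
res = model.fit(method="bfgs", disp=False)
print(res.summary())                      # betas for F, f, F x f + thresholds
```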
GLM analysis
In this analysis, we determined the brain areas in which BOLD activity varies as a function of acoustic features and ratings of unpleasantness. The design matrix comprised five regressors: a stimulus-onset regressor and four parametric modulators, namely (1) spectral frequency (F), (2) temporal modulation frequency (f), (3) the interaction between F and f, and (4) ratings of unpleasantness (valence). To distinguish areas that correlate with acoustic features from those that correlate with valence, the regressors were orthogonalized, so that the variance explained by, for example, the interaction term was orthogonal to that explained by the spectral frequency (F) and temporal modulation frequency (f). Similarly, the variance explained by valence was orthogonal to the variance explained by all the acoustic features. The analyses that follow therefore disambiguate the effects of acoustic features and valence (in the sense that any neuronal responses explained by valence cannot be explained by acoustic features, and vice versa).
Neuronal responses to acoustic features
Response in the amygdala
No response was observed in the amygdala that correlated with either spectral frequency or temporal modulation frequency alone. There was, however, a significant correlation of the BOLD response in the amygdala with the interaction between the spectral frequency and the temporal modulation frequency. Figure 2 (red areas) shows significant group (n = 13) responses overlaid on amygdala probability maps (Eickhoff et al., 2005) when testing for the interaction between spectral frequency F and temporal modulation frequency f. The interaction effect was observed in the amygdala bilaterally [−22, −2, −12; t(12) = 4.98; 24, −8, −18; t(12) = 4.13; p < 0.001 (uncorrected)]. The cluster of activity in the left amygdala is shared between the superficial (57%) and basolateral (29%) nuclei of the amygdala. The cluster in the right amygdala is mostly in the basolateral amygdala (79%), but a part (21%) also lies in the superficial nucleus.
Response in the auditory cortex
The BOLD activity that correlated with spectral frequency and temporal modulation frequency in the auditory cortex is shown in Figure 3, A and B, respectively. With a decrease in spectral frequency (Fig. 3A), responses were observed bilaterally in anterolateral Heschl's gyrus, extending to the planum temporale. A decrease in temporal modulation frequency elicited responses bilaterally in the anterior part of the superior temporal gyrus (STG)/upper bank of the STS (Fig. 3B).
Activity in the right STG [48, −14, −12; t(12) = 5.17; p < 0.001 (uncorrected); Fig. 4A] correlated with the interaction between spectral frequency and temporal modulation frequency. Other areas that correlated with acoustic features of the sound stimuli are summarized in Table 1.
Neuronal responses to valence
Response in the amygdala
Activity was observed in the right amygdala [38, −6, −24; t(12) = 3.96; p < 0.001 (uncorrected); Fig. 2, blue areas] that correlated positively with ratings of unpleasantness. This cluster of activity is located mostly (88%) in the basolateral nucleus of the amygdala. No activity was observed in the left amygdala, even when the threshold was lowered to p = 0.01 (uncorrected).
Connectivity analysis using dynamic causal modeling
BOLD activity in the auditory cortex and the amygdala was highly correlated, indicating a high degree of functional connectivity. To understand the interactions between the auditory cortex and the amygdala, we assessed the directed effective connectivity underlying these correlations using dynamic causal modeling. Specifically, we asked the following questions:
How does stimulus information reach the amygdala?
We tested alternative models based on direct inputs to the amygdala and inputs via the auditory cortex (Fig. 5A). No subcortical structure of the auditory system (e.g., the thalamus) was included in the models because no reliable activity was detected in these structures, possibly because of their small size or brainstem motion (Poncelet et al., 1992). In the first model (M1, direct), the stimulus is received directly by the amygdala, which then drives the auditory cortex. In the second model (M2, via auditory cortex), the stimulus is first processed in the auditory cortex, which then drives the amygdala. In the third model (M3, both), the amygdala and the auditory cortex are driven by the stimulus independently. The connectivity of all three models is the same; the only difference is the location of the driving inputs. These models were estimated for 13 subjects and compared using random-effects Bayesian model comparison. The model exceedance probabilities of the three models are shown in Figure 5B. These results show that the model in which the stimulus first reaches the auditory cortex, which then drives the amygdala (model M2), is the best model (exceedance probability = 0.97). The Bayes factor for the best model (M2) compared with the next-best model (M1) is ∼35, which implies strong evidence (see Materials and Methods, Connectivity analysis: dynamic causal modeling, above) for model M2.
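The exceedance probabilities reported here and below can be obtained by Monte Carlo sampling from the Dirichlet posterior over model frequencies that the random-effects procedure of Stephan et al. (2009) estimates. The sketch below shows only this final sampling step (SPM8 implements the full procedure); the Dirichlet parameters in the example are invented.

```python
import numpy as np

def exceedance_probabilities(alpha, n_samples=100_000, seed=0):
    """P(model k is more frequent in the population than all others),
    estimated by sampling model frequencies r ~ Dirichlet(alpha) and
    counting how often each model has the largest frequency."""
    rng = np.random.default_rng(seed)
    r = rng.dirichlet(alpha, size=n_samples)   # (n_samples, n_models)
    winners = np.argmax(r, axis=1)
    return np.bincount(winners, minlength=len(alpha)) / n_samples

# Hypothetical Dirichlet counts for three models after fitting 13 subjects
# (priors of 1 per model plus subject counts); a dominant alpha[1] yields
# an exceedance probability near 1 for the second model.
print(exceedance_probabilities(np.array([2.0, 11.0, 3.0])))
```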
Which pathway, from the auditory cortex to the amygdala or vice versa, is modulated by the acoustic features?
To answer this question, we created a set of four models (Fig. 6). In the first model (M1, none), a null model, the acoustic features have no modulatory effect in either direction. In the second model (M2, backward), only the backward connections from the amygdala to the auditory cortex are modulated by the acoustic features (Fig. 6, modulated connections are marked by red dots). In the third (M3, forward) and fourth (M4, reciprocal) models, the forward connections from the auditory cortex to the amygdala, or both the forward and backward connections, are modulated by acoustic features. Areas in the models are fully connected, and the driving inputs enter at the auditory cortex. Specifically, the modulatory term is the interaction between the spectral frequency and the temporal modulation frequency, which was significant in the behavioral analysis and predicts BOLD activity in both the amygdala and the auditory cortex. The exceedance probabilities of the models are shown in Figure 7A. In this case, the third model (M3), in which the forward connections from the auditory cortex to the amygdala are modulated by acoustic features, has the highest probability (0.64). Post hoc tests on the parameters of the best model showed that all modulatory influences were statistically significant. The Bayes factor for the best model (M3) compared with the next-best model (M4) is 3.05, which implies positive evidence for model M3 (see Materials and Methods, Connectivity analysis: dynamic causal modeling, above).
Which pathway, from the auditory cortex to the amygdala or vice versa, is modulated by valence?
The structure of the models in this comparison is the same as in Figure 6, but here the modulation is by valence rather than acoustic features. The exceedance probabilities of the models (random-effects analysis, 13 subjects) are shown in Figure 7B. The model in which the backward connections from the amygdala to the auditory cortex are modulated by perceived unpleasantness has the highest probability (0.83). Analysis of the parameters of the best model showed that all modulatory influences were statistically significant. The Bayes factor for the best model (M2) compared with the next-best model (M1) is 6.91, which implies positive evidence for model M2.
Discussion
A number of previous studies (Mirz et al., 2000; Zald and Pardo, 2002) have implicated the amygdala in the perception of aversive sounds. In this paper, using conventional GLM analysis and effective connectivity analysis with DCM, we answer three questions that are important in building a detailed model of how aversive percepts are formed: (1) What does the amygdala encode? (2) How does the stimulus reach the amygdala? (3) How does the amygdala interact with the auditory cortex?
What does the amygdala encode?
One model of amygdala function suggests that it encodes the value of stimuli both external and internal to an organism (Morrison and Salzman, 2010). The results of most previous studies that have implicated the amygdala in the processing of emotional information are confounded by a lack of control over low-level sensory features. In this work, we distinguished areas of the amygdala that process acoustic features from those that process valence by explicitly modeling the sensory features of stimuli and valence and using them as explanatory variables in the fMRI analysis. Our results demonstrate that acoustic features of stimuli are encoded in the amygdala. This is consistent with the few studies in the literature that have examined the encoding of sensory features in the amygdala. A study in rodents (Bordi and LeDoux, 1992) has shown that neurons in the amygdala are tuned to high frequencies (>10 kHz) relevant to negative affect (e.g., distress calls). Similarly, Du et al. (2009) measured frequency-following responses to a chatter sound in rats. For nonauditory stimuli, Kadohisa et al. (2005) showed that a detailed representation of food stimulus features, such as viscosity, fat texture, and temperature, exists in the amygdala.
Although much is known about the roles played by different nuclei of the amygdala in animals (LeDoux, 2000), many details are not available in humans. Thanks to the availability of amygdala maps (Eickhoff et al., 2005), recent studies in humans (Ball et al., 2007) have started to tease apart the contributions of different nuclei of the amygdala. In our data, the distribution of responses across nuclei shows that both the basolateral and superficial nuclei of the amygdala encode the acoustic features necessary for attributing valence. This is in agreement with animal models of amygdala function, in which the basolateral nucleus acts as a gateway for sensory information to the amygdala. Less is known about the role of the superficial nucleus in humans. One study (Ball et al., 2007), however, showed that this nucleus responds to auditory input.
Our results show that the amygdala encodes not only the low-level acoustic features that determine valence but also the valence itself. This is in agreement with a number of neuroimaging studies, in normal subjects and in psychopathology, that implicate the amygdala in the subjective experience of negative affect. In the auditory domain, although a number of studies show activity in the amygdala in response to unpleasant sounds (Phillips et al., 1998; Morris et al., 1999; Mirz et al., 2000; Sander and Scheich, 2001; Zald and Pardo, 2002), these studies have not specifically examined the relation between amygdala activity and the subjective experience of emotions. However, one study (Blood and Zatorre, 2001) using music as emotional stimuli showed that activity of the right amygdala was negatively correlated with increasing chills experienced by subjects when they listened to certain pieces of music. In studies using nonauditory stimuli, Zald and Pardo (1997) showed that responses in the left amygdala correlated positively with subjective ratings of the aversiveness of odors. Ketter et al. (1996) observed greater regional cerebral blood flow in the left amygdala in response to procaine-induced fear, which correlated with the intensity of fear experienced by individual subjects. In psychopathology, responses in the amygdala correlated with the negative affect experienced by depressed patients (Abercrombie et al., 1998).
How does the stimulus reach the amygdala?
The auditory input to the amygdala has been studied extensively in rodents (Aggleton et al., 1980; LeDoux et al., 1984, 1990a; Romanski and LeDoux, 1993). It is well known that the basolateral complex of the amygdala, which acts as a sensory interface of the amygdala, receives inputs both from the auditory thalamus (LeDoux et al., 1984, 1990a) and from association areas of the auditory cortex (Aggleton et al., 1980; Romanski and LeDoux, 1993). These studies show that aversive stimuli can reach the amygdala via the auditory thalamus or the auditory cortex. To determine how the amygdala receives aversive input in humans, we compared three alternative models. In the first model, the stimulus representation passes directly to the amygdala and thence to the auditory cortex. Since the pathway from the auditory thalamus to the amygdala is thought to provide fast but imprecise inputs to the amygdala, this model includes the possibility that the direct input to the amygdala comes from the thalamus. In the second model, the amygdala does not receive a direct, fast, and imprecise input but is driven by input that has been processed by the auditory cortex. In the third model, the amygdala receives both a direct input and the processed input from the auditory cortex. Our results provide evidence for the second model. This is consistent with the idea that the type of aversive stimuli used in the present study (i.e., complex sounds) is first processed and decoded in the auditory cortex before an emotional response can be elaborated. For example, an animal cry (signaling the presence of a dangerous animal) may have a time and frequency domain structure related to the size of the animal. To decode the size of the animal from the acoustic structure, the stimuli need to be processed to a high level in the auditory cortex (von Kriegstein et al., 2006) before affective evaluation in the amygdala (Rolls, 2007). Evidence from visual studies (Mormann et al., 2011) shows responses in the amygdala to a specific category of objects (e.g., pictures of animals), arguing for a high level of processing before affective value is assigned.
Models of the role of amygdala in emotional processing (Phelps and LeDoux, 2005) postulate that the cortico-amygdala pathway is used for processing of complex emotional stimuli. However, to the best of our knowledge, there is no empirical evidence for this. Our effective connectivity analysis provides empirical evidence that the cortico-amygdala pathway is needed for emotional analysis of aversive sounds.
How does the amygdala interact with the auditory cortex?
In a complex and rapidly changing environment, adaptive behavior requires that sensory information is extracted and processed more efficiently for stimuli that are emotionally salient. This requires that representations of emotionally salient stimuli in the sensory cortex are given a higher weighting than those of less emotionally salient stimuli. In this study, we observed that activity of the auditory cortex was modulated by both the acoustic features and the perceived unpleasantness of the stimuli. In particular, activity in the left and right STG was modulated as a function of perceived unpleasantness and acoustic features, respectively. This is consistent with the few studies in the auditory domain reporting greater activation in the auditory cortex for negative (Grandjean et al., 2005) and for both negative and positive (Plichta et al., 2011) stimuli. Representation of valence-related information in the auditory cortex has also been shown in single-neuron recording studies in monkeys (Brosch et al., 2011; Scheich et al., 2011). These studies show that not only can the activity of the auditory cortex be modulated by valence (as in the present study), but the activity of neurons in the auditory cortex can also reflect reward-related information (e.g., size of reward, expected reward, and mismatch between expected and received reward) in the absence of auditory stimulation in a behavioral task. A possible source relaying this valence-related information to the auditory cortex is the amygdala.
Using DCM, we tested how the coupling between the amygdala and the auditory cortex is modulated as a function of perceived unpleasantness and acoustic features. We created a set of four models: (1) a null model with no modulatory effect in either direction, (2) a model in which only the backward connections from the amygdala to the auditory cortex are modulated, (3) a model in which only the forward connections from the auditory cortex to the amygdala are modulated, and (4) a model in which both forward and backward connections are modulated (by valence or acoustic features, depending on the comparison). Our results show a dissociation between the modulatory effects of valence and acoustic features: while valence modulates the backward connections from the amygdala to the auditory cortex, the acoustic features modulate the forward connections from the auditory cortex to the amygdala. This is consistent with a current model of amygdala function (Mitchell and Greening, 2012), which postulates that the amygdala augments the representation of emotionally salient stimuli in the sensory cortex (much as the frontoparietal network does for mundane stimuli) to make them accessible to consciousness. The evidence for this role of the amygdala is based on anatomical connections from the amygdala to the sensory cortex (Amaral and Price, 1984), functional connectivity studies (Morris et al., 1998; Tschacher et al., 2010), and lesion studies (Anderson and Phelps, 2001; Rotshtein et al., 2001; Vuilleumier et al., 2004).
Conclusions
Based on our analysis of the brain response to unpleasant sounds, the overall model of how the brain processes unpleasant sounds can be summarized as follows: the stimulus is first processed to a high level in the auditory cortex (STG), which precedes the assignment of valence in the amygdala. The amygdala, in turn, modulates the auditory cortex in accordance with the valence of the sounds.
Since the current study used only unpleasant sound stimuli, the above model may be valid only for the processing of such stimuli. There is evidence that brain responses differ for stimuli of positive and negative valence. For example, negative stimuli are perceived as more salient (Hansen and Hansen, 1988), and electrophysiological studies (Carretié et al., 2001, 2004; Smith et al., 2003) show stronger and faster responses to negative stimuli than to positive stimuli (a negativity bias). Functional imaging studies show that different networks of brain regions may be involved in the processing of positive and negative stimuli (Aldhafeeri et al., 2012). Whether the above model holds for stimuli with positive valence, and for other negative-affect stimuli (e.g., negative words), needs to be tested in future studies.
Footnotes
- Received April 11, 2012.
- Revision received August 7, 2012.
- Accepted August 9, 2012.
This work was supported by funding from the Wellcome Trust.
- Correspondence should be addressed to Sukhbinder Kumar, Wellcome Centre for Neuroimaging, 12 Queen Square, London WC1N 3BG, UK. sukhbinder.kumar@ncl.ac.uk
- Copyright © 2012 the authors 0270-6474/12/3214184-09$15.00/0
This article is freely available online through the J Neurosci Open Choice option.