Abstract
Reading words in a second language spontaneously activates native language translations in the human bilingual mind. Here, we show that the emotional valence of a word presented in English constrains unconscious access to its Chinese translation. We asked native speakers of Chinese fluent in English to indicate whether or not pairs of English words were related in meaning while monitoring their brain electrical activity. Unbeknownst to the participants, some of the word pairs concealed a sound repetition when translated into Chinese. Remarkably, English words with a negative valence such as “violence” did not automatically activate their Chinese translation, even though we observed the expected sound repetition priming effect for positive and neutral words, such as “holiday” and “theory.” These findings show that emotion conveyed by words determines language activation in bilinguals, where potentially disturbing stimuli trigger inhibitory mechanisms that block access to the native language.
Introduction
More than half of the world's population uses words in two or more languages on a daily basis. It has long been debated whether bilingual individuals access word forms from long-term memory in a language-selective fashion. Some studies have shown that reading, listening, and speaking in the second language automatically activates lexical representations in the first language (Dijkstra et al., 1998; Spivey and Marian, 1999; Costa et al., 2000; Thierry and Wu, 2007; Martin et al., 2009; Wu and Thierry, 2010, 2012). Others have suggested that bilinguals can “switch off” nontarget language representations and process words in the target language without observable interference from the other (Rodriguez-Fornells et al., 2002; Elston-Güttler et al., 2005; FitzPatrick and Indefrey, 2010).
Although emotional content is a fundamental aspect of language-based communication, its influence on bilingual lexical access has been largely overlooked so far. Experimental and introspective data (e.g., interviews) have shown that affective words afford less emotional resonance in the second language than in the first. For instance, swear words can be perceived as more offensive in the first language. In the same vein, bilingual speakers often report being less embarrassed to discuss private topics in their second than in their first language (Schrauf, 2000; Harris et al., 2003, 2006; Marian and Kaushanskaya, 2008; Aycicegi and Harris, 2009). These findings suggest that emotional content affects conscious language selection in bilinguals.
Here, to unravel unconscious interactions between the processing of emotional content and bilingual lexical access, we tested the impact of word affective valence on spontaneous translation in bilinguals. We collected behavioral and electrophysiological data in a group of Chinese–English bilinguals while they decided whether or not English target words were semantically related to preceding prime words varying in affective valence. Participants were unaware that some prime–target pairs involved words that shared a sound when translated into Chinese, despite being unrelated in meaning or form in English (Table 1). Our previous studies found implicit priming effects induced by this hidden manipulation, indicating that bilinguals automatically activate first language translations when processing the meaning of words in their second language (Thierry and Wu, 2007; Wu and Thierry, 2010). In the present study, the affective valence of prime words was manipulated independently of the hidden character repetition. Thus, in three of the experimental conditions, the prime word had positive, negative, or neutral valence, and in all three conditions the prime shared a sound with the (neutral) target word via Chinese translation. Critically, primes and targets in these three conditions were unrelated in meaning, so participants systematically responded “unrelated.” Since reaction times and event-related brain potentials (ERPs) were measured in response to the neutral target word in all cases, this design allowed us to investigate the effect of affective valence on language access rather than the effect of affective processing per se. We also included control conditions featuring pairs of neutral words without a hidden Chinese sound repetition, as well as meaning-related word pairs as fillers, to which participants responded “related.” The filler trials ensured that participants engaged with the semantic judgment task; they were not included in the statistical analysis.
If the translation effect demonstrated previously (Thierry and Wu, 2007; Wu and Thierry, 2010) is automatic, emotional valence should have no bearing on language-nonselective lexical access. In other words, whether the prime word is emotional or neutral, the priming effect induced by hidden character repetition in Chinese translations should be of similar magnitude.
Materials and Methods
Participants.
Forty-five participants (15 native English speakers, 15 native Chinese speakers, and 15 Chinese–English bilinguals) gave written consent to take part in the experiment, which was approved by the ethics committee of Bangor University. Participants were matched for age (18–25 years), gender (9 females and 6 males in each group), level of education (undergraduate and Master's students), and handedness (right) across groups. They all had normal or corrected-to-normal vision and self-reported normal hearing. The English control participants were monolingual speakers of English. The Chinese control participants were students who had recently arrived in the UK to attend an English course. They had limited English training and no previous experience of living in an English-speaking environment. The bilingual participants were first exposed to English at the age of 12 years. At the time of testing, they were studying at a British university and had lived in the United Kingdom for a mean of 20.5 months (±5.27). All bilinguals used English in their everyday life and had an English proficiency score of 6.5 as measured by the International English Language Testing System (IELTS; www.ielts.org/test_takers_information/what_is_ielts.aspx). The IELTS covers four fundamental language skills (reading, listening, writing, and speaking). The maximum score is 9; the majority of candidates obtain between 4 and 7, and 6.5 is the level most English-speaking institutions require of nonnative speakers.
Stimuli.
Participants viewed two blocks of 90 word pairs presented in a pseudorandomized order. Using the MRC psycholinguistic database, the English words were matched across conditions for variables known or expected to affect bilingual lexical access and ERPs in visual word processing in general: lexical frequency, word concreteness, number of letters, and grammatical status (Hauk and Pulvermüller, 2004; Hauk et al., 2006; Sunderman and Kroll, 2006; Thierry and Wu, 2007; Baten et al., 2010) (ps > 0.1 for all pairwise comparisons). The Chinese translations of the English stimuli were also controlled for lexical frequency and concreteness (Liu et al., 2007). The affective valence and arousal values were based on ratings in both English (Bradley and Lang, 1999) and Chinese (Yi-Niu et al., 2008). The negative words did not contain swear words or taboo words, which are often language-specific and do not have reliable translations (such words are encountered far more often in oral communication and may, therefore, have a specific developmental pattern in terms of context and period of acquisition). Differences in affective valence were highly significant between positive, negative, and neutral word pairs (ps < 0.001 for all pairwise comparisons). There were also significant differences in arousal values between negative and positive words on the one hand and neutral words on the other (ps < 0.001), but there was no difference in arousal values between positive and negative words (p > 0.1). To verify the Chinese translations of the English words, an independent group of 10 Chinese postgraduate and undergraduate students from Bangor University performed a translation task. These participants were randomly drawn from the same population as the bilingual participants tested in the study to minimize differences attributable to levels of proficiency and everyday use of English. A modified first-translation task (Tokowicz and Kroll, 2007) was used, in which participants were instructed, upon seeing an English word, to say aloud as quickly as possible the first translation that came to mind, while voice response time and accuracy were measured. In all conditions, participants provided >90% of the intended translations, and there were no significant differences between conditions for either response latency (F(3,42) = 0.21, p > 0.1) or accuracy (F(3,42) = 1.13, p > 0.1).
Galvanic skin response.
We collected skin conductance measures as a supplementary index of arousal from an independent group of 10 Chinese–English bilinguals, drawn from the same population as those tested in the ERP experiment. Electrodermal activity was recorded and analyzed using a GSR100C and AcqKnowledge software (Biopac System). Two electrodes were attached to the index and middle fingers of the participants' dominant hand. A modified version of the semantic relatedness judgment task was administered. The prime word, which carried the emotional manipulation, was presented for 5 s, during which the physiological response could manifest itself. A 10 s intertrial interval enabled skin conductance to return to the baseline. Participants were engaged in a semantic relatedness judgment task rather than explicit evaluation of the emotional content of words to ensure that the galvanic skin response (GSR) was measured in a similar context as that of the ERP experiment.
The amplitude of phasic skin conductance [i.e., the skin conductance response (SCR)] was calculated by subtracting tonic skin conductance [i.e., the skin conductance level (SCL)] from the maximum score during the 5 s presentation of the prime word (Hugdahl, 1995). To distinguish stimulus-specific from stimulus-nonspecific electrodermal responses, a poststimulus time window from 1 to 5 s was selected and GSRs appearing outside of this time window were not included in the analysis. To reduce individual variability in SCL, we followed Harris et al.'s (2003) procedure and divided the SCR amplitude by the SCL to generate a ratio expressing the extent of the skin conductance increase from baseline. Trials that were flat or showed a decline in skin conductance were excluded from analysis, since this can be a sign of skin conductance levels having not returned to baseline before the onset of the trial. We also excluded trials in which the experimenter made a note of possible artifacts (e.g., participants' mouth movements due to coughing or sneezing). An average of 32 trials (∼13%) was excluded. Electrodermal responses for positive (0.061 ± 0.026) and negative (0.063 ± 0.022) words differed minimally from those for neutral (0.047 ± 0.027) and unrelated (0.05 ± 0.033) words. None of the responses differed significantly from one another (ps > 0.1 in all comparisons). This finding is consistent with previous studies showing that GSR does not differ reliably between neutral, positive, and negative words [except in the case of taboo words, swear words, and reprimands (Harris et al., 2003; Harris, 2004)]. Therefore, potential effects of the emotional manipulation on implicit lexical translation would have to arise from affective valence rather than from differences in arousal between experimental conditions.
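For readers who wish to reproduce the ratio described above, it amounts to (maximum conductance in the 1–5 s poststimulus window − SCL) / SCL. The following minimal sketch illustrates this computation; it is not the analysis code used in the study, and the sampling rate, array layout, and estimation of the tonic level from prestimulus samples are assumptions made for illustration only.

```python
# Illustrative sketch (not the original analysis code) of the baseline-normalized
# skin conductance response: ratio = (max in 1-5 s poststimulus window - SCL) / SCL.
# Sampling rate, trial layout, and SCL estimate are assumptions.
import numpy as np

def scr_ratio(trial, fs=100, prestim_s=1.0, window_s=(1.0, 5.0)):
    """trial: conductance samples for one trial, starting prestim_s before prime onset."""
    onset = int(prestim_s * fs)
    scl = trial[:onset].mean()                                   # tonic level (SCL)
    seg = trial[onset + int(window_s[0] * fs):
                onset + int(window_s[1] * fs)]                   # 1-5 s poststimulus window
    scr = seg.max() - scl                                        # phasic amplitude (SCR)
    if scr <= 0:
        return None                                              # flat or declining trial: excluded
    return scr / scl                                             # increase relative to baseline
```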
Procedure.
English and Chinese words were presented at the center of a screen in 12-point Times New Roman font. To avoid eye movements, no English word had more than 11 letters and all Chinese translations featured two Chinese characters (i.e., two syllables). After a prestimulus interval of 500 ms, the first word was flashed for 500 ms at fixation, followed by the second word after a variable interstimulus interval of 300, 400, or 500 ms. No word was repeated. Participants were instructed to indicate whether the second word was related in meaning to the first by pressing keys set under the left and right index fingers. Response sides were fully counterbalanced between blocks and participants.
ERP recording.
Electrophysiological data were recorded in reference to Cz at a rate of 1 kHz from 64 Ag/AgCl electrodes placed according to the extended 10–20 convention. Impedances were kept <5 kΩ. Electroencephalographic activity was bandpass filtered on-line between 0.1 and 200 Hz and refiltered off-line with a 25 Hz, low-pass, zero-phase-shift digital filter. Eye blinks were mathematically corrected (Gratton et al., 1983), and epochs with remaining artifacts were manually rejected. There was a minimum of 30 valid epochs per condition in every subject. Epochs ranged from −100 to 800 ms relative to the onset of the prime word and from −100 to 1000 ms relative to the onset of the target word. Baseline correction was performed in reference to prestimulus activity, and individual averages were digitally re-referenced to the global average reference. ERP data were collected simultaneously with behavioral data.
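As a rough illustration of the two off-line steps described above (re-referencing to the global average and prestimulus baseline correction), the sketch below operates on an assumed epochs array; the array layout, names, and parameters are ours and do not represent the processing pipeline actually used.

```python
# Sketch, not the authors' pipeline: average re-referencing and baseline correction
# for epochs of shape (n_epochs, n_channels, n_samples), sampled at 1 kHz,
# with each epoch starting 100 ms before stimulus onset (all assumptions).
import numpy as np

def rereference_and_baseline(epochs, fs=1000, prestim_ms=100):
    # Re-reference each time point to the mean across all electrodes (global average)
    epochs = epochs - epochs.mean(axis=1, keepdims=True)
    # Subtract the mean of the prestimulus interval from every channel and epoch
    n_base = int(prestim_ms * fs / 1000)
    baseline = epochs[:, :, :n_base].mean(axis=2, keepdims=True)
    return epochs - baseline
```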
ERP data analysis.
Mean peak amplitudes were measured in temporal windows defined a priori and checked against the Global Field Power measured across the scalp (Picton et al., 2000). Mean ERP amplitudes between 450 and 650 ms for the prime word were subjected to a repeated-measures ANOVA with affective valence (negative/positive/neutral/control) and electrode (63 levels) as within-subject factors, and group as the between-subject factor (native English controls/Chinese–English bilinguals/native Chinese controls) to test whether prime word valence may have been perceived differently in the different groups of participants. Mean ERP amplitudes between 300 and 500 ms for the target words were subjected to a repeated-measures ANOVA with character repetition (positive repeated/negative repeated/neutral repeated/unrepeated) and electrode (63 levels) as within-subject factors in each group separately to test whether character repetition in Chinese was affected by emotional valence. We used Greenhouse–Geisser correction where applicable.
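The dependent measure entered into these ANOVAs is the mean amplitude within the a priori window (e.g., 300–500 ms after target onset) for each subject, condition, and electrode. A hedged sketch of that extraction is given below, under the same assumed array layout as above; names and parameters are illustrative only.

```python
# Sketch: per-subject mean amplitude in a predefined window, one value per electrode,
# computed from baseline-corrected epochs of a single condition
# (assumed shape: n_trials x n_channels x n_samples, 1 kHz, 100 ms prestimulus).
import numpy as np

def mean_window_amplitude(epochs, fs=1000, prestim_ms=100, win_ms=(300, 500)):
    start = int((prestim_ms + win_ms[0]) * fs / 1000)
    stop = int((prestim_ms + win_ms[1]) * fs / 1000)
    erp = epochs.mean(axis=0)                # average over trials -> subject ERP
    return erp[:, start:stop].mean(axis=1)   # mean amplitude per electrode in the window
```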
Results
Analysis of behavioral data showed no effect of the concealed repetitions in Chinese translations on reaction times or error rates in either group of participants (ps > 0.1; Fig. 1). Moreover, both reaction times and error rates were highly comparable between Chinese–English bilinguals and English control participants (ps > 0.1). The Chinese control participants tested in Chinese, however, were overall slower than the native English participants (F(1,28) = 21.25; p < 0.001).
ERPs elicited by target words were subjected to a repeated-measures ANOVA within each participant group. In the Chinese–English bilinguals, we found a significant main effect of affective valence between 300 and 500 ms after stimulus onset (F(3,42) = 10.59, p < 0.05; Fig. 2). This effect resembled the classic N400 effect, known to reflect lexical–semantic integration and to be particularly sensitive to repetition (Kutas and Hillyard, 1984; Rugg, 1985; Luck et al., 1996; Liu et al., 2003). Post hoc analyses revealed that the condition in which the prime word had a negative valence failed to modulate ERP amplitude compared with the unrelated control condition (p > 0.1), whereas positive and neutral primes elicited the predicted N400 modulations (both ps < 0.05). Furthermore, the ERP in the negative valence condition differed significantly from the ERPs elicited in both the positive and neutral conditions (ps < 0.05). Both control groups showed a pattern of ERP variations different from that of the Chinese–English bilinguals: the native English control participants were unaffected by the hidden factor in Chinese translations (main effect: F(3,42) = 0.54, p > 0.1), and the native Chinese control participants tested using a Chinese version of the experiment showed repetition priming for all three affective valence conditions (main effect: F(3,42) = 8.43, p < 0.05; pairwise comparisons: all ps < 0.05).
To check for potential effects of emotional valence on the processing of the prime, we also analyzed ERPs elicited by prime words by means of a 4 (valence, within-subject) × 3 (group, between-subject) repeated-measures ANOVA. In all three groups, emotional prime words elicited a late positive component (LPC) peaking at ∼500 ms post-prime onset (main effect: F(3,126) = 55.07, p < 0.001; Fig. 3), known to index attention capture during the processing of emotional stimuli (Naumann et al., 1992; Bradley and Lang, 2007; Hajcak et al., 2009; Hinojosa et al., 2010). Critically, there was no significant group effect (F(2,42) = 0.85, p > 0.1) or interaction between emotional valence and participant group (F(6,126) = 0.72, p > 0.1).
Discussion
Using an implicit translation–priming paradigm, the present study examined the unconscious effects of affective valence on lexical access during second language processing. As previously shown, the behavioral performance of Chinese–English bilinguals provided no evidence of access to Chinese translations while reading English words (Thierry and Wu, 2007; Wu and Thierry, 2010, 2012), and there was also no effect of the affective valence manipulation. The overall increase in reaction times in native Chinese compared with native English control participants suggests that the high proportion (60%) of Chinese word pairs with overt sound form repetitions but no semantic relatedness rendered the task substantially more difficult for the Chinese control participants.
However, analysis of the ERPs revealed that Chinese–English bilinguals making meaning relatedness judgments on English word pairs were affected by concealed sound form repetitions in Chinese translations. Surprisingly, this effect was found only when the English words had positive or neutral affective valence; no such effect was found for English words with negative affective valence. In other words, reading negative words in the second language fails to automatically activate translation equivalents in the native language (a novel effect of native lexical access suppression), whereas reading positive and neutral words leads to language coactivation.
Testing native English participants who had no knowledge of Chinese ruled out the possibility that differences between the negative valence condition and the other two conditions originated in spurious semantic or lexical differences between experimental conditions, since these participants showed no effect of the Chinese translations regardless of the affective valence of the English words. Moreover, the native Chinese control participants showed priming effects of overt sound form repetition in all three emotion conditions without significant interactions. This is consistent with the interpretation of the ERPs observed in the Chinese–English bilinguals as a valence-specific effect on automatic native language translation, rather than a generic effect of affective valence on word processing.
One possible explanation for the interaction between affective valence and implicit repetition priming observed here in bilinguals is that the manipulation of affective valence might have instigated concomitant variations in attention capture by negative and positive prime words (MacKay et al., 2004; Calvo and Castillo, 2005). However, this is unlikely for two reasons. First, arousal ratings were matched between the positive and negative words used in the experiment; therefore, an attentional bias cannot merely be attributed to differences in arousal. Second, Chinese native speakers tested in Chinese showed no effect of emotional valence; therefore, if differential attentional capture between positive and negative words were involved, its contribution was too weak to affect sound repetition priming, at least when participants were presented overtly with stimuli in their native language. Nevertheless, to test whether native lexical access suppression may have been attention-dependent, we analyzed ERPs elicited by the prime words. This analysis showed no interaction between emotional valence and participant group (ps > 0.1); that is, the LPC did not differ between positive and negative conditions in any of the groups. This result suggests that the emotional conditions yielded comparable levels of attentional capture and rules out an interpretation of the native lexical access suppression effect as attention-driven.
The mental operations underlying second language processing are traditionally considered to be fundamentally determined by age of acquisition and/or level of proficiency (Kroll and Stewart, 1994). For instance, differences in emotional resonance between the two languages of bilinguals have been linked to the fact that emotional referents for words acquired in the first language are encountered earlier in life than those in the second language (Bond and Lai, 1986; Pavlenko, 1999; Dewaele and Pavlenko, 2002). This explanation, however, cannot account for the current findings, because it is unclear why, in late second-language learners, negative and positive words should be acquired in systematically different contexts, in different periods of life, or mastered at different levels. Consequently, differences in emotional resonance between languages cannot account for valence-specific effects observed in a second language.
Moreover, the effect of affective valence on bilingual lexical access arose spontaneously in a semantic processing context requiring no explicit emotional judgment (e.g., an emotional rating task). Therefore, the suppression of native language translation for negative words cannot be a result of top-down effects driven by emotion-specific processing of languages. A more parsimonious explanation is that emotional processing interacts with language access in a preventative manner, automatically repressing the full realization of semantic integration when the targeted meaning is potentially distressing. Such a cognitive suppression mechanism may involve interactions between the limbic system (e.g., amygdala, medial temporal lobe) and the caudate nucleus, which has also been shown to be critically involved in inhibitory control during language switching in bilinguals (Crinion et al., 2006; Abutalebi and Green, 2007; Ali et al., 2010).
We contend that the present findings break new ground as regards emotion–cognition interactions. So far, insights into the spontaneous role of emotion in human cognition have been limited to basic cognitive processes such as attention, memory, vision, and motor control (Dolan, 2002; Pessoa, 2008). Here, we establish that emotional processing unconsciously interacts with the cognitive mechanisms underlying language comprehension. Future studies will explore the interindividual and cultural variability of this effect.
Footnotes
This work was supported by the Economic and Social Research Council (RES-000-23-0095, to G.T.) and the European Research Council (ERC-StG-209704, to Y.J.W. and G.T.). We thank Paul Downing, Angela Friederici, Cathy Price, Steve Tipper, Oliver Turnbull, and Marilyn Vihman for advice and comments on this manuscript.
The authors declare no competing financial interests.
Correspondence should be addressed to Prof. Guillaume Thierry, School of Psychology, University of Wales, LL57 2AS Bangor, United Kingdom. g.thierry@bangor.ac.uk
This article is freely available online through the J Neurosci Open Choice option.