Innocent intentions: A correlation between forgiveness for accidental harm and neural activity☆
Introduction
Classic moral dilemmas often require an observer to judge whether it is permissible to harm one innocent person to save many. For example, is it permissible to push a man off a bridge so that his body will stop a trolley from running over five other people? Competition between emotional aversion to committing harm (e.g., pushing the man) and abstract reasoning, in this case utilitarian reasoning about maximizing aggregate welfare (e.g., five lives are worth more than one), gives rise to the ‘dilemma’ and to characteristic neural response profiles (Greene et al., 2001, Greene et al., 2004). These results have led to two-process theories of moral judgment (Cushman et al., 2006, Greene et al., 2004, Haidt, 2001, Hsu et al., 2008): implicit, automatic processes lead observers to reject emotionally aversive harms, while explicit, controlled processes support abstract reasoning and cognitive control.
Here, we extend two-process theories by considering a third factor upon which many moral judgments depend: the agent's mental state. When we evaluate an action, be it killing one or letting many die, harming or helping, breaking the law, breaking a promise, or breaking fast with the wrong sorts of people, we consider the agent's mental state at the time of her action. Did she know what she was doing? Did she act intentionally or accidentally? Observers judge intentional harms as worse than accidental harms (e.g., Cushman, 2008). Observers are even sensitive to more subtle mental state distinctions, judging harms intended as necessary means to an end to be worse than harms that are merely foreseen as side-effects of one's action (Borg et al., 2006, Cushman et al., 2006, Hauser et al., 2007, Mikhail, 2007).
Observers differ in the degree to which they take mental states into account for moral judgments. For example, children 5 years old and younger rely primarily on the action's observable outcomes (Hebble, 1971, Piaget, 1965/1932, Shultz et al., 1986, Yuill, 1984, Yuill and Perner, 1988, Zelazo et al., 1996). Children are particularly unlikely to mitigate blame for accidental harms, and even judge accidental harms to be worse than failed attempts to harm (e.g., Baird & Astington, 2004). Not until they are 6 or 7 years old do children begin to make moral judgments that depend substantially on beliefs (Baird and Astington, 2004, Baird and Moses, 2001, Darley and Zanna, 1982, Fincham and Jaspars, 1979, Karniol, 1978, Shultz et al., 1986, Yuill, 1984) and to integrate the distinct outcome and mental state features of actions (Grueneich, 1982, Weiner, 1995, Zelazo et al., 1996). There is also evidence that even adult observers differ in the extent to which they exculpate an agent for accidentally causing harm, and the extent to which they appeal to mental state factors in doing so (e.g., Cohen and Rozin, 2001, Nichols and Ulatowski, 2007).
In the current study, we investigated the neural correlates of individual differences in moral judgments that depend on agents' beliefs about whether or not they will cause harm. Consider a case in which an agent mistakes some poisonous white substance for sugar and, as a result, accidentally makes her friend sick by putting the poisonous substance in her coffee. Here, the agent believes falsely that her action will be harmless, and it is her false belief that leads her to cause harm in spite of innocent intentions. Nevertheless, observers may disagree about the amount of blame that she deserves. Young children, and even some adults, may consider the agent very morally blameworthy for making her friend sick, in spite of her innocent intentions.
The neural mechanisms for reasoning about beliefs (or, more generally, mental states) have been investigated in a series of recent functional magnetic resonance imaging (fMRI) studies. These studies reveal a consistent group of brain regions for mental state reasoning in non-moral contexts: the medial prefrontal cortex, right and left temporo-parietal junction, and precuneus (Ciaramidaro et al., 2007, Fletcher et al., 1995, Gallagher et al., 2000, Gobbini et al., 2007, Ruby and Decety, 2003, Saxe and Kanwisher, 2003, Vogeley et al., 2001). Of these regions, the right temporo-parietal junction (RTPJ) in particular appears to be selective for belief attribution (Aichhorn et al., 2006, Fletcher et al., 1995, Gallagher et al., 2000, Gobbini et al., 2007, Perner et al., 2006, Saxe and Wexler, 2005). For example, the response in the RTPJ is high when subjects read stories about a character's thoughts, beliefs, or knowledge, but low during stories containing other socially relevant information, for example, a character's physical or cultural traits, or even internal sensations such as hunger (Saxe & Powell, 2006).
Recently, we have also investigated the neural basis of belief reasoning in moral contexts (Young et al., 2007, Young and Saxe, 2008, Young and Saxe, in press). While in the scanner, participants read stories about a protagonist, and made moral judgments about the protagonist's actions. During the story, participants read two kinds of morally relevant information: (1) the protagonist's belief (e.g., that the powder was sugar) and (2) the reality (e.g., that the powder was poison). We investigated the neural response while participants initially processed these pieces of information. We found that the response in the RTPJ and precuneus was higher while participants read about beliefs than about other facts, independent of the order in which belief and non-belief facts were presented (Young & Saxe, 2008). However, this initial encoding response did not distinguish between negative and neutral beliefs (e.g., that the powder was poison versus sugar), between true and false beliefs, or between negative and neutral outcomes. In the current paper, we investigated a different question: namely, which brain region's response predicts people's use of belief information during the moral judgment itself?
We predicted that participants' use of belief information to make moral judgments would be correlated with the recruitment of specific brain regions associated with mental state reasoning. More specifically, we predicted that higher activation in these brain regions would be associated with less blame (or more exculpation) for accidental harm, and more blame for attempted harm. Given prior evidence for its selectivity, we specifically predicted that these patterns would be observed in the RTPJ.
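To make the logic of this individual-differences analysis concrete, the sketch below correlates a per-subject neural measure with per-subject behavioral judgments, using both parametric and nonparametric tests (cf. Chen & Popovich, 2002). This is a minimal illustration, not the authors' analysis code: the variable names are ours, and the data are simulated stand-ins for ROI percent signal change and mean blame ratings.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 15  # participants in the study

# Stand-in per-subject values; real inputs would be each subject's RTPJ
# percent signal change and mean blame rating for accidental harms.
rtpj_psc = rng.normal(0.3, 0.1, size=n)
blame_accidental = 4.0 - 5.0 * rtpj_psc + rng.normal(0.0, 0.2, size=n)

# Predicted pattern: higher RTPJ response, less blame (negative correlation).
r, p = stats.pearsonr(rtpj_psc, blame_accidental)
rho, p_rho = stats.spearmanr(rtpj_psc, blame_accidental)
print(f"Pearson r = {r:.2f} (p = {p:.3g}); Spearman rho = {rho:.2f}")
```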
Methods
Fifteen right-handed, neurologically normal adults (aged 18–22 years; 8 women, 7 men) participated in the study for payment. All participants were native English speakers, had normal or corrected-to-normal vision, and gave written informed consent in accordance with the requirements of the Internal Review Board at MIT. Participants were scanned at 3T (at the MIT scanning facility in Cambridge, MA) using twenty-six 4-mm-thick near-axial slices covering the whole brain. Standard echoplanar imaging
FMRI analysis
MRI data were analyzed using SPM2 (http://www.fil.ion.ucl.ac.uk/spm) and custom software. Each subject's data were motion corrected and normalized onto a common brain space (Montreal Neurological Institute, MNI, template). Data were smoothed using a Gaussian filter (full width half maximum = 5 mm) and high-pass filtered during analysis. A slow event-related design was used and modeled using a boxcar regressor to estimate the hemodynamic response for each condition. An event was defined as a single
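The modeling step described here can be sketched as follows. This is a rough illustration of a boxcar GLM, assuming a canonical double-gamma HRF and invented event onsets; it is not the study's actual SPM2 batch, and all names and values are ours.

```python
import numpy as np
from scipy.stats import gamma

TR, n_scans = 2.0, 150  # illustrative acquisition parameters

def hrf(t, p1=6.0, p2=16.0, ratio=1.0 / 6.0):
    """Canonical double-gamma hemodynamic response function (SPM-like shape)."""
    return gamma.pdf(t, p1) - ratio * gamma.pdf(t, p2)

def boxcar_regressor(onsets, duration):
    """Boxcar for one condition's events, convolved with the HRF."""
    box = np.zeros(n_scans)
    for onset in onsets:
        box[int(onset // TR): int((onset + duration) // TR)] = 1.0
    return np.convolve(box, hrf(np.arange(0.0, 32.0, TR)))[:n_scans]

# Design matrix: one convolved boxcar per condition plus a constant term.
X = np.column_stack([
    boxcar_regressor([20, 80, 200], duration=10),   # condition A (invented onsets)
    boxcar_regressor([50, 140, 260], duration=10),  # condition B (invented onsets)
    np.ones(n_scans),
])

y = np.random.default_rng(1).normal(size=n_scans)   # stand-in voxel time course
beta, *_ = np.linalg.lstsq(X, y, rcond=None)        # per-condition response estimates
```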
Theory of mind localizer experiment
A whole-brain random effects analysis of the data replicated the results of previous studies using the same task (Saxe & Kanwisher, 2003), revealing a higher BOLD response during mental state stories than during physical representation stories in the RTPJ, LTPJ, dorsal (D), middle (M), and ventral (V) MPFC, and precuneus (PC) (p < 0.001, uncorrected, k > 20). These regions of interest (ROIs) were identified in individual subjects at the same threshold (Fig. 2, Table 1): RTPJ (identified in 15 of 15
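The height-and-extent thresholding used to define these ROIs (p < 0.001, k > 20 voxels) can be sketched as below. The t-map here is simulated noise for illustration; in practice it would be each subject's mental > physical contrast image.

```python
import numpy as np
from scipy import ndimage, stats

t_map = np.random.default_rng(2).normal(size=(64, 64, 26))  # stand-in contrast t-map
t_crit = stats.t.ppf(1 - 0.001, df=14)      # height threshold for p < 0.001, one-tailed

suprathreshold = t_map > t_crit
labels, n_clusters = ndimage.label(suprathreshold)  # group contiguous voxels into clusters
sizes = ndimage.sum(suprathreshold, labels, index=np.arange(1, n_clusters + 1))

# Keep only clusters exceeding the k > 20 extent threshold.
roi_mask = np.isin(labels, np.flatnonzero(sizes > 20) + 1)
```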
Discussion
FMRI findings have indicated that specific brain regions, including especially the RTPJ, support the ability to attribute beliefs to agents in both moral (e.g., Young et al., 2007, Young and Saxe, 2008, Young and Saxe, in press) and non-moral contexts (e.g., Saxe and Kanwisher, 2003, Perner et al., 2006). Behavioral studies have revealed that moral judgments depend significantly on mental state attribution; judgments of moral blame in particular depend on both the mental state (e.g., belief) of
Moral universals and individual differences
Contemporary moral psychology often emphasizes the robustness of moral judgments to cultural and demographic differences: people are sensitive to the same moral principles independent of gender, age, ethnicity, and religion (e.g., O’Neill and Petrinovich, 1998, Petrinovich et al., 1993, Hauser et al., 2007). For example, the majority of subjects across cultures and demographic groups judge that it is permissible to turn a trolley away from five people and onto one person instead but
Accidents versus attempts
One open question concerning these results is: why was the response in the RTPJ correlated with the use of beliefs for moral judgments of accidental harms but not attempted harms? One possibility is that we simply had less power to detect the correlation in the attempted harm condition, because there was less variance across participants in moral judgments of attempted harms. An alternative, however, is that there are meaningful differences in the cognitive processes involved in using belief
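The power point can be illustrated with a small simulation (ours, not from the paper): when between-subject variance in the judgments is restricted, as it was for attempted harms, the observable brain-behavior correlation is attenuated even if the underlying relationship is unchanged.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 15  # participants

# Wide (accidents) vs. restricted (attempts) variance in the judgment signal,
# with measurement noise held constant across conditions.
for judgment_sd in (1.0, 0.2):
    rs = []
    for _ in range(2000):
        neural = rng.normal(size=n)                                   # e.g., RTPJ response
        judgment = -judgment_sd * neural + rng.normal(0.0, 0.5, size=n)
        rs.append(stats.pearsonr(neural, judgment)[0])
    print(f"judgment signal SD {judgment_sd}: mean observed r = {np.mean(rs):.2f}")
```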
Other neural and cognitive processes
The predicted correlation between exculpatory moral judgment and neural response was observed in the RTPJ. This result is consistent with prior research suggesting that while other regions, including the LTPJ and MPFC, support moral cognition (e.g., Greene et al., 2004) and social cognition (e.g., Mitchell et al., 2006, Saxe and Wexler, 2005), the RTPJ may be more selective for representing beliefs both in non-moral contexts for the purpose of predicting and explaining behavior (e.g., Saxe and
Conclusions
In sum, the level of activation in a specific brain region for mental state reasoning, the RTPJ, tracks individual differences in exculpation. Moral judgment therefore depends not just on domain-general mechanisms for abstract reasoning, cognitive control, and emotional responding, but also on distinct neural substrates for interpreting the minds of moral agents. The results may have implications for normative models of moral cognition and theory of mind, as well as for
Acknowledgments
Many thanks to Josh Greene, Susan Carey, Fiery Cushman, and Jason Mitchell for their helpful comments, and Jon Scholz for his technical support. This project was supported by the Athinoula A. Martinos Center for Biomedical Imaging. L.Y. was supported by the NSF. R.S. was supported by MIT and the John Merck Scholars program.
References (65)
- Aichhorn et al. (2006). Do visual perspective tasks need theory of mind? Neuroimage.
- Baird & Astington (2004). The role of mental state understanding in the development of moral cognition and moral action. New Directions for Child and Adolescent Development.
- Baird & Moses (2001). Do preschoolers appreciate that identical actions may be motivated by different intentions? Journal of Cognition and Development.
- Blair (1996). Brief report: Morality in the autistic child. Journal of Autism and Developmental Disorders.
- Blakemore (2008). The social brain in adolescence. Nature Reviews Neuroscience.
- Borg et al. (2006). Consequences, action, and intention as factors in moral judgments: An fMRI investigation. Journal of Cognitive Neuroscience.
- Cacioppo et al. (1984). The efficient assessment of need for cognition. Journal of Personality Assessment.
- Chen & Popovich (2002). Correlation: Parametric and nonparametric measures. Sage.
- Ciaramelli et al. (2007). Selective deficit in personal moral judgment following damage to ventromedial prefrontal cortex. Social Cognitive and Affective Neuroscience.
- Ciaramidaro et al. (2007). The intentional network: How the brain reads varieties of intentions. Neuropsychologia.
- Cohen & Rozin (2001). Religion and the morality of mentality. Journal of Personality and Social Psychology.
- Cushman (2008). Crime and punishment: Distinguishing the roles of causal and intentional analysis in moral judgment. Cognition.
- Cushman et al. (2006). The role of conscious reasoning and intuitions in moral judgment: Testing three principles of harm. Psychological Science.
- Darley & Zanna (1982). Making moral judgments. American Scientist.
- Epstein et al. (1996). Individual differences in intuitive-experiential and analytical-rational thinking styles. Journal of Personality and Social Psychology.
- Farrow et al. (2001). Investigating the functional anatomy of empathy and forgiveness. Neuroreport.
- Fincham & Jaspars (1979). Attribution of responsibility to the self and other in children and adults. Journal of Personality and Social Psychology.
- Fletcher et al. (1995). Other minds in the brain: A functional imaging study of “theory of mind” in story comprehension. Cognition.
- Gallagher et al. (2000). Reading the mind in cartoons and stories: An fMRI study of ‘theory of mind’ in verbal and nonverbal tasks. Neuropsychologia.
- Gobbini et al. (2007). Two takes on the social brain: A comparison of theory of mind tasks. Journal of Cognitive Neuroscience.
- Gogtay et al. (2004). Dynamic mapping of human cortical development during childhood through early adulthood. Proceedings of the National Academy of Sciences.
- Greene et al. (2004). The neural bases of cognitive conflict and control in moral judgment. Neuron.
- Kliemann et al. (2008). The influence of prior record on moral judgment. Neuropsychologia.
- Knobe (2005). Theory of mind and moral cognition: Exploring the connections. Trends in Cognitive Sciences.
- Mitchell et al. (2006). Dissociable medial prefrontal contributions to judgments of similar and dissimilar others. Neuron.
- O’Neill & Petrinovich (1998). A preliminary cross-cultural study of moral intuitions. Evolution and Human Behavior.
- Poldrack (2006). Can cognitive processes be inferred from neuroimaging data? Trends in Cognitive Sciences.
- Saxe & Kanwisher (2003). People thinking about thinking people: The role of the temporo-parietal junction in “theory of mind”. Neuroimage.
- Saxe & Wexler (2005). Making sense of another mind: The role of the right temporo-parietal junction. Neuropsychologia.
- Saxe et al. (2006). Divide and conquer: A defense of functional localizers. Neuroimage.
- Vogeley et al. (2001). Mind reading: Neural mechanisms of theory of mind and self-perspective. Neuroimage.
- Young & Saxe (2008). The neural basis of belief encoding and integration in moral judgment. Neuroimage.
☆ This study was carried out at the Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology.