Brain Research

Volume 1202, 2 April 2008, Pages 99-108

Research Report
A computational model of risk, conflict, and individual difference effects in the anterior cingulate cortex

https://doi.org/10.1016/j.brainres.2007.06.080

Abstract

The error likelihood effect in anterior cingulate cortex (ACC) has recently been shown to be a special case of an even more general risk prediction effect, which signals both the likelihood of an error and the potential severity of its consequences. Surprisingly, these error likelihood and anticipated consequence effects are strikingly absent in risk-taking individuals. Conversely, conflict effects in ACC were found to be stronger in these same individuals. Here we show that the error likelihood computational model can account for individual differences in error likelihood, predicted error consequence, and conflict effects in ACC with no changes from the published version of the model. In particular, the model accounts for the counterintuitive inverse relationship between conflict and error likelihood effects as a function of the ACC learning rate in response to errors. As the learning rate increases, ACC learns more effectively from mistakes, which increases risk prediction effects at the expense of conflict effects. Thus, the model predicts that individuals with faster error-based learning in ACC will be more risk-averse and show greater ACC error likelihood effects but smaller ACC conflict effects. Furthermore, the model suggests that apparent response conflict effects in ACC may actually consist of two related effects: increased error likelihood and a greater number of simultaneously cued responses, whether or not the responses are mutually incompatible. The results clarify the basic computational mechanisms of learned risk aversion and may have broad implications for predicting and managing risky behavior in healthy and clinical populations.

Introduction

The anterior cingulate cortex (ACC) has recently been the focus of intense investigation as it has been implicated in several aspects of higher cognitive function. It is critically involved in performance monitoring and cognitive control (Blakemore et al., 1998, Botvinick et al., 1999, Braver et al., 2001, Carter et al., 1998, Carter et al., 2001, Gehring and Knight, 2000, Kerns et al., 2004, Liddle et al., 1992, MacDonald et al., 2000, Menon et al., 2001, Nordahl et al., 2001, Scheffers and Coles, 2000, Ullsperger and von Cramon, 2001, van Veen et al., 2001).

Performance monitoring is essential to theories of executive control, in which a central executive or supervisory attentional system takes control when it detects that undesirable outcomes may arise if control is handled only by automated processes or schema (Norman and Shallice, 1986). Initially, the ACC was described as showing greater activity for errors relative to correct trials, first with single-cell recording studies in animals (Gemba et al., 1986) and then as the probable source of the error-related negativity (ERN) in humans (Dehaene et al., 1994, Gehring et al., 1990, Gehring et al., 1993, Hohnsbein et al., 1989). In the last decade, an influential model of performance monitoring has been proposed which postulates that the ACC detects response conflict (Carter et al., 1998). In this account, when two mutually incompatible response processes are active, the ACC detects the state of conflict and drives control processes to resolve the internal conflict and facilitate appropriate behavior. This allows individuals to suppress prepotent, automatic responses and instead generate more appropriate responses to achieve current goals. Subsequent computational modeling and brain imaging studies have provided accounts of how the ACC monitors a variety of measures of performance, including incompatible response cues (Botvinick et al., 2001, Brown et al., 2007), errors (Holroyd and Coles, 2002), and error likelihood (Brown and Braver, 2005) and subsequently activates corresponding cognitive control processes (Kerns et al., 2004). Individual human neurons have also been recorded showing responses to attention-demanding tasks, with some showing selectivity to high conflict trials (Davis et al., 2000, Davis et al., 2005).

More recently, the ACC has also been studied for its key role in decision-making under risk. One prominent recent study (de Martino et al., 2006) examined the neural mechanisms of framing effects (Kahneman and Tversky, 1984), in which a given net monetary increase can be cast either as a gain (i.e., getting to keep some percentage of an initial endowment) or a loss (i.e., having to return some complementary percentage of an initial endowment). The results (de Martino et al., 2006) showed greater ACC activity when subjects made decisions that were framed as involving a loss vs. a gain, even when the final net increase was the same in both conditions. The results suggest that ACC is sensitive to imminent perceived losses. In contrast, substance abusers show a unique hypoactivity in ACC relative to controls, and this correlates with an increased tendency to make risky decisions (Brown and Braver, submitted for publication, Fishbein et al., 2005, Forman et al., 2004). Substance abusers may be overly sensitive to anticipated reward relative to anticipated punishment, as they tend to engage in risky behavior despite the adverse consequences (Fishbein et al., 2005, Yechiam et al., 2005). ACC activity may also be associated with error avoidance. Conditions in which subjects avoid engaging in a task to avoid errors elicit greater ACC activity than conditions of actual error commission (Magno et al., 2006). Similarly, greater ACC activity has been found when animals and humans change their task set to avoid errors (Bush et al., 2000, Shima and Tanji, 1998). Other evidence suggests that the ACC signals the amount of instrumental effort needed to attain the goal associated with a stimulus (Walton et al., 2004). Conversely, long-term overactivity of ACC in obsessive-compulsive disorder leads to inappropriate, excessive effort to avoid mistakes (Gehring et al., 2000, Hajcak and Simons, 2002).
Given the above, the ACC is a particularly promising area for the study of risk avoidance in decision-making.

On the basis of computational modeling and fMRI results, we recently proposed the error likelihood hypothesis of ACC, i.e., that ACC activity will be proportional to the perceived likelihood of an error (Brown and Braver, 2005). The error likelihood hypothesis was implemented as a computational model, and subsequent fMRI results were consistent with the predictions of the error likelihood model but could not be accounted for solely by the response conflict model. Thus, the work suggested a reinterpretation of empirically observed response conflict effects as reflecting not a computation of response conflict per se, but rather a greater perceived likelihood of an error. Consistent with this proposed reinterpretation, stimulus cues that activate mutually incompatible responses are generally associated with higher error rates versus cues that activate compatible responses. Consequently, in this paper we make a distinction between response incongruent (RI) effects (van Veen et al., 2001) and response conflict (RC). For the present purposes, we define RI effects as the empirical observation of significant differences between incompatible versus compatible response cues. We reserve the term RC to refer to the theoretical explanation of RI effects as reflecting a computation of response conflict.

The error likelihood computational model (Fig. 1) works as follows (Brown and Braver, 2005). First, model inputs to the ACC represent not pure perceptual signals but rather motor responses driven by particular input stimuli (SR inputs). There is scant evidence for direct visual inputs to ACC, which seems to receive more inputs from frontal and association areas (Barbas, 1988, Vogt and Pandya, 1987). Past studies have shown predominantly motor rather than pure stimulus effects in ACC (van Veen et al., 2001), and ACC seems to be modulated especially when outcomes are contingent on chosen responses (Walton et al., 2004). In contrast to pure visual cells, motor cells that drive actions in response to specific sensory cues have been found in premotor cortex of monkeys (Boussaoud and Wise, 1993). Cells in these frontal regions are more likely to provide input to the model ACC (Vogt and Pandya, 1987). Thus, ACC responses are not postulated to occur merely in response to perceptual processing of visual stimuli (e.g., under passive viewing or fixed response conditions). Instead, the model ACC will respond to visual cues to the extent that they drive particular responses. The SR inputs provide signals both to the model ACC and to the final common response pathway in the response layer. The distinction between stimulus and SR units parallels a similar distinction between perception and category layer units in an earlier model of performance monitoring (Holroyd et al., 2005). In that model, only the category layer cells that both responded to stimuli and drove particular corresponding responses were also able to provide signals to the performance monitor (Holroyd et al., 2005), as is the case in the present model. In the present model, the response layer differs from the SR input layer in that the model response layer cells represent the intention to make a particular response, regardless of what stimuli drive the response.
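The layered architecture described above can be sketched as a single feedforward pass. The layer sizes, weight matrices, and rectified-linear activations below are illustrative assumptions for exposition, not the published model equations; the essential point is that the model ACC reads from SR units rather than from raw stimuli.

```python
import numpy as np

def acc_model_step(stimulus, sr_weights, acc_weights, resp_weights):
    """One feedforward pass through a simplified version of the
    architecture: stimuli drive stimulus-response (SR) units, which
    project both to the model ACC and to the final common response
    pathway. All activations here are toy linear/rectified stand-ins."""
    sr = np.maximum(0.0, sr_weights @ stimulus)  # SR units: responses cued by stimuli
    acc = acc_weights @ sr                       # ACC monitors SR (not raw stimulus) activity
    response = resp_weights @ sr                 # response layer: intention to act
    return sr, acc, response

# Two stimuli (left/right arrow), two SR units, two responses, four ACC units
sr_w = np.eye(2)
acc_w = np.full((4, 2), 0.1)
resp_w = np.eye(2)
sr, acc, resp = acc_model_step(np.array([1.0, 0.0]), sr_w, acc_w, resp_w)
```

Because the ACC layer receives only SR activity, a stimulus that drives no response produces no model ACC activity, consistent with the passive-viewing point above.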

When an error occurs, the error signal trains a random subset of ACC cells to learn to respond preferentially to the inputs that were active when the error occurred. Thus, if the model has a more frequent experience of errors in certain conditions, then a greater number of ACC cells will respond to situations that share stimulus–response features with the higher error conditions. This is the essence of how the computational model yields the error likelihood effect.
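This error-driven recruitment of ACC cells can be sketched as follows. The layer sizes, the 20% recruitment fraction, and the Hebbian-style weight update are hypothetical choices for illustration; only the structure (errors strengthen weights from active inputs onto a random subset of ACC units, scaled by a learning rate) reflects the mechanism described above.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ACC, N_INPUT = 50, 4          # hypothetical layer sizes
W = np.zeros((N_ACC, N_INPUT))  # input -> ACC weights, learned from errors
LEARNING_RATE = 0.1             # the parameter varied across "individuals"

def train_on_error(W, sr_input, fraction=0.2):
    """On each error, a random subset of model ACC units strengthens
    its weights from the currently active stimulus-response inputs."""
    subset = rng.random(W.shape[0]) < fraction
    W[subset] += LEARNING_RATE * sr_input   # Hebbian-style update
    return W

# Simulate repeated errors in a "high error likelihood" context (input 0 active)
high_el = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(100):
    W = train_on_error(W, high_el)

# The ACC response to the error-associated cue now exceeds the response
# to a cue never paired with errors -- the error likelihood effect
acc_high = W @ high_el
acc_low = W @ np.array([0.0, 1.0, 0.0, 0.0])
print(acc_high.sum() > acc_low.sum())   # True
```

Lowering `LEARNING_RATE` weakens this differential response, which is the manipulation used in the individual-differences simulations below.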

In the original paper describing the error likelihood model (Brown and Braver, 2005), we tested the model with a change signal task, a modified version of the stop signal task (Husain et al., 2003, Logan and Cowan, 1984, Murthy et al., 2001). Briefly, subjects were presented with an arrow that pointed either left or right and had to press a button (left or right) that corresponded to the arrow direction. In two-thirds of the trials (the “Go” conditions), the subjects simply responded to the arrow. The remaining one-third of trials constituted the “Change” condition. In these trials, there was a brief change signal delay (CSD) before a second arrow appeared – larger and pointing in the opposite direction – which served as the change signal. The appearance of the change signal served as an instruction for subjects to cancel their response to the first arrow, if possible, and instead substitute the opposite response to the second arrow. Both arrows remained visible until a response was executed or a 1000 ms response deadline (from first arrow onset) was reached. In this way, a comparison of the Change vs. Go conditions (correct trials only) yields a measure of RI effects (i.e., Change > Go). Additionally, subjects often commit errors on Change trials, by responding to the Go cue (first arrow) even when a change signal is presented. Thus, a comparison of correct vs. error responses in the Change condition yields a measure of error effects (i.e., Error > Correct).
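The trial logic can be summarized in a toy sketch, assuming a simple "point of no return" rule; the 50 ms ballistic phase and the timings are hypothetical parameters, not values from the published task.

```python
def run_trial(change_trial, csd_ms, go_rt_ms, deadline_ms=1000):
    """Outcome of one change-signal trial under a toy 'point of no
    return' rule: the response to the first arrow can be cancelled
    only if the change signal arrives before the response becomes
    ballistic (assumed here to happen 50 ms before execution)."""
    if not change_trial:
        return "go_correct" if go_rt_ms <= deadline_ms else "miss"
    point_of_no_return_ms = go_rt_ms - 50   # hypothetical ballistic phase
    if csd_ms < point_of_no_return_ms:
        return "change_correct"   # response cancelled, opposite one made
    return "change_error"         # responded to the Go cue anyway
```

Under this rule, longer CSDs yield more Change errors, which is the handle the task uses to control error likelihood.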

In addition to RI and error effects, the color of the arrow stimuli was varied randomly across trials to signal whether the trial was of high or low error likelihood. The color cue relationship to error likelihood was not directly conveyed to participants and thus could only be learned through experience. Error likelihood effects were obtained by controlling error rates through manipulation of the CSD. On high error likelihood trials, the CSD was kept long (and dynamically adjusted on a trial-by-trial basis), which made it difficult to cancel the initial response to the Go cue (since a putative “point of no return” in response generation had already been reached). Conversely, on low error likelihood trials, the CSD was kept short (and also dynamically adjusted) to make it easier to cancel the initiated response. Because of the association of arrow color with errors on Change trials, the color cue information could serve as an indication of error likelihood at the onset of a trial and thus could be present even on Go trials (which were not subject to RI or errors). Moreover, since change signals were equally likely in the high and low error likelihood conditions, high vs. low error likelihood effects were independent of RI effects. The key finding of Brown and Braver (2005) was that, in human subjects, fMRI revealed greater ACC activity in the high than the low error likelihood correct, Go trials (high/go > low/go). This effect could not be accounted for by existing RC models.
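The trial-by-trial CSD adjustment can be sketched as a staircase procedure. The step sizes below are illustrative, not the published values; the point is that asymmetric steps let the same mechanism hold the two color conditions at different error rates, since at equilibrium the error rate converges to roughly step_up / (step_up + step_down).

```python
def adjust_csd(csd_ms, error, step_up, step_down):
    """Staircase adjustment of the change signal delay (CSD):
    a correct Change trial lengthens the CSD (harder next time);
    an error shortens it. Asymmetric steps yield different
    equilibrium error rates for the two color conditions."""
    return max(0, csd_ms - step_down) if error else csd_ms + step_up

# High error likelihood: symmetric steps -> errors on roughly half of Change trials
csd_high = adjust_csd(400, error=False, step_up=50, step_down=50)
# Low error likelihood: small upward steps -> errors kept rare
csd_low = adjust_csd(100, error=False, step_up=10, step_down=90)
```

Because the color cue, not the CSD itself, is visible at trial onset, the learned color–error association is what carries error likelihood information onto Go trials.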

In a follow-up study, we further developed the error likelihood computational model to explore effects of the magnitude of the expected error signal. We found that the model made a striking prediction: that ACC activity will be proportional to the product of perceived error likelihood and the predicted magnitude of the error consequences, should an error occur (Brown and Braver, in press). We refer to this further model prediction as the expected risk hypothesis of ACC, namely that ACC predicts both the likelihood and potential severity of errors. We tested this second prediction of the model in a corresponding fMRI study, which used an incentive variant of the change signal task to manipulate the magnitude of error consequences (Brown and Braver, in press). In this task variant, correct trials result in a specified monetary reward, and error trials yield a reduced monetary reward. The magnitude of the reward reduction in error trials relative to correct trials operationalizes the error consequence magnitude. Details of the tasks can be found in Brown and Braver (2005) and Brown and Braver (in press); see also Experimental procedures below. The results of this study confirmed ACC sensitivity to both perceived error likelihood and consequence magnitude (Brown and Braver, in press). However, the findings also revealed a striking further effect, namely that error likelihood and predicted error consequence magnitude effects were notably absent in more risk-taking individuals, despite intact RI and error effects.
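The expected risk hypothesis reduces to a simple product at the level of the predicted signal. The function below is an illustrative scaling of that relationship, not the published network implementation, in which the product emerges from learned activity.

```python
def expected_risk(p_error, consequence_magnitude):
    """Expected risk hypothesis (illustrative): predicted ACC activity
    scales with the product of perceived error likelihood and the
    predicted magnitude of the error's consequences."""
    return p_error * consequence_magnitude

# Same reward reduction, different likelihoods -> different predicted signal
high_el_signal = expected_risk(0.5, 100)
low_el_signal = expected_risk(0.05, 100)
```

One implication of the product form is that a rare error with severe consequences can produce the same predicted signal as a frequent error with mild consequences.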

These results are particularly noteworthy because recent work (Nieuwenhuis et al., 2007) has called into question the existence of the error likelihood effect, due to a failure to replicate the primary findings of Brown and Braver (2005). The results of our follow-up study, which revealed a high degree of individual variability in ACC error likelihood effects, may provide an account of this failure to replicate in other work (Nieuwenhuis et al., 2007). Specifically, it may be the case that ACC activity plays an important role in trait risk aversion. Thus, substantial variability in ACC activity may be observed across individuals that significantly vary in this trait. Our latest work (Brown and Braver, in press) also raises its own apparent challenge to the error likelihood computational model. Specifically, in some areas of ACC, RI effects were actually stronger rather than weaker in risk-taking individuals. Moreover, in a particular region of ACC the error likelihood and RI effects were negatively correlated across individuals, such that high error likelihood effects were linked to lower RI effects. This negative correlation appeared to be directly explained by individual variation in trait risk aversion. This set of findings seems puzzling. If ACC learns to predict the risk associated with a behavior and drive risk avoidance, and if RI effects are associated with an increased perceived likelihood of an error, then how can RI effects be stronger in individuals who are less risk-averse?

To resolve this conundrum, we return to the error likelihood computational model as published (Brown and Braver, 2005) to explore whether individual differences in error likelihood, risk prediction, and RI effects in ACC, as well as trait differences in risk aversion, can be simulated in a unified way as variations in a single underlying parameter. The simulations focused on individual differences in the strength and speed of error-based learning within the ACC. We hypothesized that there may be a strong causal relationship between the strength of error-based learning within the ACC and the tendency towards risk-taking behavior. Specifically, we predicted that smaller learning rates in the error likelihood model ACC would lead to smaller error likelihood effects, which in humans correlate with greater risk-taking. Nonetheless, it was not clear whether or how impaired learning from errors would impact RI effects. To examine this issue, we conducted two sets of simulations using the previously published computational model. The model had the exact architecture and parameters used in previous simulations, save for manipulation of the error-based learning rate within the ACC. The first simulation examined how variation in learning rate affected the relationship between error likelihood and RI effects. The second simulation examined how variation in learning rate affected the relationship between error magnitude and RI effects (when holding error likelihood constant). Surprisingly, the simulation results show that the error likelihood computational model predicts a counterintuitive tradeoff between RI and risk prediction effects (both error magnitude and error likelihood) that is modulated by the ability to learn from previous errors. This pattern seems to confirm and explain the otherwise surprising fMRI findings that we obtained in our recent experimental study (Brown and Braver, in press).
In what follows, we present these simulation results, their fit to the experimental data, and their implications for understanding the nature of neural and cognitive processes involved in decision making under risk.


Error likelihood simulation

The left panels of Fig. 2 show the results of the first simulation examining error likelihood effects as a function of ACC learning rate compared against the human fMRI data obtained in Brown and Braver (in press). Fig. 2A shows model ACC activity during task performance under high vs. low learning rate conditions. As can be seen, in the high learning rate model (which corresponds to the exact learning rate parameters used in the previous simulations (Brown and Braver, 2005)) both error

Discussion

Overall, it is not very surprising that reductions in the ACC error likelihood effect are observed when learning from errors is impaired. Such a finding is consistent with earlier model simulations, in which error signals are necessary to train the ACC to respond more strongly to situations in which errors have occurred more frequently in the past, which leads to the error likelihood effect. It is perhaps more surprising that ACC RI effects are actually greater in subjects with lower error

Experimental procedure

The computational simulations described in this paper focus on the change signal task (Brown and Braver, 2005), along with a variant in which incentives are utilized and manipulated (Brown and Braver, in press). We began with the previously published computational model of error likelihood effects in ACC (Brown and Braver, 2005). In this model (Fig. 1), ACC activity for a given trial is measured as the average activity over all ACC units and over all time points in a trial, i.e., a spatial and

Acknowledgments

The authors thank J. Cohen for helpful discussions. Supported by ONR N00014-99-1-0715. JWB was supported by a NARSAD Young Investigator Award and the Sidney R. Baer, Jr. Foundation. TSB was supported by P50 MH64445 and R01 MH66088.

References (70)

  • M. Ullsperger et al. Subprocesses of performance monitoring: a dissociation of error processing and response competition revealed by event-related fMRI and ERPs. NeuroImage (2001)
  • V. van Veen. Anterior cingulate cortex, conflict monitoring, and levels of processing. NeuroImage (2001)
  • H. Barbas. Anatomic organization of basoventral and mediodorsal visual recipient prefrontal regions in the rhesus monkey. J. Comp. Neurol. (1988)
  • M.M. Botvinick. Conflict monitoring versus selection-for-action in anterior cingulate cortex. Nature (1999)
  • M.M. Botvinick. Conflict monitoring and cognitive control. Psychol. Rev. (2001)
  • D. Boussaoud et al. Primate frontal cortex: effects of stimulus and movement. Exp. Brain Res. (1993)
  • T.S. Braver. Anterior cingulate cortex and response conflict: effects of frequency, inhibition, and errors. Cereb. Cortex (2001)
  • J.W. Brown et al. Learned predictions of error likelihood in the anterior cingulate cortex. Science (2005)
  • J. Brown, T.S. Braver, in press. Risk prediction and aversion by anterior cingulate cortex. Cogn. Aff. Behav....
  • J. Brown. How the basal ganglia use parallel excitatory and inhibitory learning pathways to selectively respond to unexpected rewarding cues. J. Neurosci. (1999)
  • C.S. Carter. Anterior cingulate cortex, error detection, and the online monitoring of performance. Science (1998)
  • C.S. Carter. Anterior cingulate cortex activity and impaired self-monitoring of performance in patients with schizophrenia: an event-related fMRI study. Am. J. Psychiatr. (2001)
  • K.D. Davis. Human anterior cingulate cortex neurons modulated by attention-demanding tasks. J. Neurophysiol. (2000)
  • K.D. Davis. Human anterior cingulate cortex neurons encode cognitive and emotional demands. J. Neurosci. (2005)
  • B. de Martino. Frames, biases, and rational decision-making in the human brain. Science (2006)
  • S. Dehaene. Localization of a neural system for error detection and compensation. Psychol. Sci. (1994)
  • B.A. Eriksen et al. Effects of noise letters upon the identification of a target letter in a nonsearch task. Percept. Psychophys. (1974)
  • D. Farrington et al. Some benefits of dichotomization in psychiatric and criminological research. Crim. Behav. Ment. Health (2000)
  • M.J. Frank et al. A mechanistic account of striatal dopamine function in human cognition: psychopharmacological studies with cabergoline and haloperidol. Behav. Neurosci. (2006)
  • M.J. Frank. By carrot or by stick: cognitive reinforcement learning in parkinsonism. Science (2004)
  • W.J. Gehring et al. Prefrontal–cingulate interactions in action monitoring. Nat. Neurosci. (2000)
  • W.J. Gehring. The error-related negativity: an event-related potential accompanying errors. Psychophysiology (1990)
  • W.J. Gehring. A neural system for error detection and compensation. Psychol. Sci. (1993)
  • W.J. Gehring. Action-monitoring dysfunction in obsessive-compulsive disorder. Psychol. Sci. (2000)
  • A. Hodgkin et al. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. (1952)