The core feature of an economic exchange is a decision to trade one good for another, based on a comparison of relative value. Economists have long recognized, however, that the value an individual ascribes to a good during decision making (i.e., their relative willingness to trade for that good) does not always map onto the reward they actually experience. Here, we show that experienced value and decision value are represented in distinct regions of ventromedial prefrontal cortex (VMPFC) during the passive consumption of rewards. Participants viewed two categories of rewards—images of faces that varied in their attractiveness and monetary gains and losses—while being scanned using functional magnetic resonance imaging. An independent market task, in which participants exchanged some of the money that they had earned for brief views of attractive faces, determined the relative decision value associated with each category. We found that activation of anterior VMPFC increased with increasing experienced value, but not decision value, for both reward categories. In contrast, activation of posterior VMPFC predicted each individual's relative decision value for face and monetary stimuli. These results indicate not only that experienced value and decision value are represented in distinct regions of VMPFC, but also that decision value signals are evident even in the absence of an overt choice task. We conclude that decisions are made by comparing neural representations of the value of different goods encoded in posterior VMPFC in a common, relative currency.
In economic exchanges, individuals sacrifice their capital to obtain goods, presumably because the value of the good outweighs its cost. There has been substantial progress toward identifying the neural mechanisms of valuation (Delgado et al., 2000; Breiter et al., 2001; Knutson et al., 2001b; O'Doherty et al., 2001; Knutson et al., 2003). Key brain regions, as identified by primate electrophysiology and human neuroimaging, include the ventral striatum (vSTR) for learning about reward contingencies (Knutson and Cooper, 2005; Schultz, 2006) and the ventromedial prefrontal cortex (VMPFC) for evaluating reward outcomes (Blair et al., 2006; Padoa-Schioppa and Assad, 2008).
Relatively less is known about how these and related brain regions interact in the course of economic exchange. Across a number of recent neuroscience studies, participants have traded money for subsequently received goods (Knutson et al., 2007; Plassmann et al., 2007; Hare et al., 2008, 2009; Knutson et al., 2008; De Martino et al., 2009). For example, when hungry participants placed bids on food items, activation in VMPFC was modulated by the subjective desirability of the food (Plassmann et al., 2007). Thus, converging evidence indicates that VMPFC carries signals that support economic exchange, particularly in the representation of decision value. Such studies have heretofore estimated the value of goods (e.g., electronics, food, or money) from decisions made before consumption. This approach precludes separation of the neural representations of the value of a good during decision making [decision value, or goal value, in the terminology of Hare et al. (2008)] from the value derived from its consumption (experienced value).
Social rewards, such as photographs of attractive people, provide an ideal good for disentangling neurometric measurements of decision value and experienced value. The experienced value derived from images of attractive people occurs immediately at presentation, and thus can be measured during a neuroimaging session (Aharon et al., 2001; O'Doherty et al., 2003; Winston et al., 2007). Indeed, heterosexual males will trade small amounts of money or expend effort to view photographs of attractive females (Hayden et al., 2007).
We hypothesized that individuals' idiosyncratic VMPFC responses to passively experienced rewards would predict their subsequent economic exchanges. To test this hypothesis, we collected fMRI data from heterosexual male adults in a multimodal reward task offering two types of rewards: photographs of female faces varying in attractiveness and images indicating receipt of money (see Fig. 1A). In a subsequent economic exchange task, participants traded money to view novel faces; they could spend more money to view a more attractive face, or less money to view a less attractive face (see Fig. 1B). From the rate of exchange, we estimated each participant's relative decision value for monetary and social rewards. Our analyses addressed three key and unanswered questions: First, do distinct regions within VMPFC signal a good's experienced value and its decision value? Second, are neural representations of decision values computed even in the absence of decision making? Third, are these representations independent of the type of reward?
Materials and Methods
Twenty-six self-reported heterosexual male participants completed the study (age range, 18–28 years; mean age, 21.8 years). Data from three participants were excluded before analyses because of excessive head movement (>4 mm) or experimental noncompliance, leaving 23 individuals in the final sample. Participants were given a $40 endowment, earned an additional $15 to $25 based on the passively viewed monetary stimuli, and spent an average of $4.31 to view attractive faces. All participants gave written informed consent under a protocol approved by the Institutional Review Board of Duke University Medical Center.
Stimuli and tasks.
In the multimodal reward task (Fig. 1A), participants were presented with a succession of images drawn from two categories: photographs of U.S. currency and photographs of female faces. The currency photographs ($5, $2, or $1) were shown either in normal color (indicating a gain) or tinted red (indicating a loss). Photographs of young adult women were cropped to show only the face and were resized to uniform dimensions.
Before the main fMRI study, photographs of young adult women were rated for attractiveness (on a 10-point scale) by a cohort of heterosexual, young adult males (N = 16) who did not participate in subsequent studies. To remove individual bias in the use of the response scale, ratings were normalized by converting to z-scores for each participant and then averaged across all raters. We excluded from our stimulus set 83 photographs whose variability across raters was more than 2 SDs above the average for all photographs. We parsed the distribution of photographs into four distinct categories of attractiveness (supplemental Fig. 1, supplemental Table 1, available at www.jneurosci.org as supplemental material).
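The rating normalization, exclusion, and binning steps above can be sketched as follows. This is an illustrative Python reconstruction, not the authors' original code; the function and variable names are our own, and the equal-size binning rule is an assumption.

```python
import numpy as np

def select_and_bin_photos(ratings, n_bins=4):
    """Normalize attractiveness ratings, drop high-variability photos,
    and bin the remainder into attractiveness categories.

    ratings : (n_raters, n_photos) array of 10-point-scale ratings.
    Returns (kept_indices, category), with category values in 1..n_bins.
    """
    # Z-score within each rater to remove individual response-scale bias
    z = (ratings - ratings.mean(axis=1, keepdims=True)) / ratings.std(axis=1, keepdims=True)

    # Mean normalized rating and across-rater variability for each photo
    mean_rating = z.mean(axis=0)
    variability = z.std(axis=0)

    # Exclude photos whose rater variability is > 2 SDs above the average
    cutoff = variability.mean() + 2 * variability.std()
    kept = np.where(variability <= cutoff)[0]

    # Parse the remaining photos into n_bins categories (1 = least attractive)
    ranks = mean_rating[kept].argsort().argsort()
    category = ranks * n_bins // len(kept) + 1
    return kept, category
```

Z-scoring within rater before averaging ensures that a rater who uses only the top of the 10-point scale does not dominate the group mean.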
Participants first completed five 60-trial runs of the multimodal reward task. On each trial, a single image appeared at fixation for 2 s, followed by a 2 s fixation interval, providing a minimum stimulus onset asynchrony of 4 s. Face and monetary photographs appeared equally often and in randomized order. The face stimuli were equally likely to be drawn from each of the four attractiveness categories. So that each run resulted in a net monetary gain, two-thirds of the monetary images indicated gains and one-third indicated losses. No response was required to either type of stimulus. To ensure attention to the experimental stimuli, participants pressed a button upon the appearance of an infrequent (<5% of all trials) small yellow border around the edge of the images. At the end of the session, each participant rolled a die to select one of the runs and received the earnings from that run as a supplement to the endowment (range: $15 to $25).
Next, participants completed an economic exchange task (Fig. 1B). On each trial, participants chose between two options, each involving the sacrifice of a monetary payment (1–12 cents) for the opportunity to view a face from a known attractiveness category (1- to 4-stars for increasing attractiveness). Both payment and attractiveness varied randomly across trials within uniform distributions, with the constraints that the two face options always differed in attractiveness and that the more attractive face always carried the greater monetary cost. Participants thus decided whether to sacrifice a greater amount of money to view a more attractive face. After a 4 s decision window, the screen went blank for a variable anticipation interval of 2–4 s before a face from the selected category was presented for 2 s. The intertrial interval ranged from 2 to 6 s. Participants made between 75 and 150 decisions during the economic exchange task.
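The trial constraints can be made concrete with a short sketch. The data structure below is hypothetical; it only mirrors the two constraints stated above (the faces always differ in attractiveness, and the more attractive face always carries the greater cost).

```python
import random

def make_exchange_trial(rng=None):
    """Generate one economic-exchange trial.

    Each trial offers two options drawn uniformly at random; the two faces
    always differ in attractiveness (1-4 stars) and the more attractive face
    always carries the greater monetary cost (1-12 cents).
    Returns [(stars, cost), (stars, cost)] ordered by increasing attractiveness.
    """
    rng = rng or random.Random()
    low_star, high_star = sorted(rng.sample(range(1, 5), 2))
    low_cost, high_cost = sorted(rng.sample(range(1, 13), 2))
    # Pair the greater cost with the more attractive face
    return [(low_star, low_cost), (high_star, high_cost)]
```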
Both tasks were programmed using the Psychophysics Toolbox version 2.54 (Brainard, 1997). Stimuli were projected onto a screen at the back of the scanner bore, and participants viewed the stimuli through mirrored goggles. Responses were recorded using an MRI-compatible button box.
Following the main experiment, participants rated the attractiveness of the presented female faces. These postexperiment ratings were nearly identical to preliminary ratings from the independent group of participants that had been used for establishing attractiveness categories (mean r = 0.82 ± 0.07), indicating that our mapping of stimuli to categories was appropriate for our participant sample.
Estimating decision value.
From our economic exchange task, we calculated two measures of decision value for each participant: the proportion of trials in which the participant sacrificed money to view a more attractive face, and the willingness to pay (WTP) for a one-star increment in attractiveness. Prior studies have calculated WTP from decisions to procure a good, usually within the context of an incentive-compatible auction (cf. Plassmann et al., 2007). Here, we identified the minimum relative cost (i.e., the price difference between the two options) that was sufficient to predict an economic exchange (i.e., a trial in which the participant chose the more expensive/attractive option). We estimated WTP using an iterative technique that determined the amount of money (stepping from 0 to 20 cents in 1-cent increments) that best predicted the actual decisions made by that subject.
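A minimal sketch of this grid search follows, assuming each trial is summarized by the price difference between the two options, the star difference, and whether the participant chose the more attractive face. The per-star normalization and the exact prediction rule are our assumptions, not details taken from the original analysis.

```python
def estimate_wtp(trials, max_cents=20):
    """Estimate willingness to pay (WTP) per one-star attractiveness increment.

    trials : iterable of (price_diff_cents, star_diff, chose_attractive) tuples.
    Sweeps candidate thresholds from 0 to max_cents in 1-cent steps; a
    threshold t predicts an exchange whenever the extra cost per star is <= t.
    Returns the threshold that correctly predicts the most observed choices.
    """
    trials = list(trials)
    best_thresh, best_correct = 0, -1
    for thresh in range(max_cents + 1):
        correct = sum(
            ((price / stars) <= thresh) == chose
            for price, stars, chose in trials
        )
        if correct > best_correct:
            best_thresh, best_correct = thresh, correct
    return best_thresh
```

For a hypothetical participant who consistently pays up to 3 cents per star, the sweep recovers a WTP of 3 cents.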
fMRI data acquisition.
Functional MRI data sensitive to blood-oxygenation-level-dependent contrast were acquired using a novel spiral-in sensitivity encoding sequence (acceleration factor = 2), as implemented on a 3 tesla General Electric scanner with an eight-channel receiver. We chose this sequence to reduce susceptibility artifacts and recover signal in ventral frontal regions of interest (Pruessmann et al., 2001; Truong and Song, 2008). Each participant performed five runs, each consisting of 122 volumes [repetition time (TR): 2 s; echo time (TE): 27 ms; voxel dimensions: 4 × 4 × 4 mm; 30 slices; 64 × 64 matrix; field of view (FOV): 256 mm; flip angle: 60°]. The first eight volumes of each run were discarded to allow for magnetic stabilization. High-resolution anatomical images were acquired to aid in normalization and coregistration (T1-weighted three-dimensional inversion-recovery prepared gradient-recalled echo sequence; TR: 7.2 ms; TE: 2.9 ms; voxel dimensions: 1 × 1 × 2 mm; 256 × 224 matrix; FOV: 256 mm; flip angle: 12°).
fMRI data analysis.
Analyses were conducted using the FSL analysis package (Smith et al., 2004). We corrected for head motion by realigning the time series to the middle volume (Jenkinson et al., 2002). Nonbrain material was removed using the brain extraction tool (Smith, 2002). Differences in slice acquisition time were corrected using Fourier-space phase-shifting. Spatial smoothing used a Gaussian kernel of full-width-half-maximum 6 mm. The entire four-dimensional dataset was grand-mean intensity normalized using a single multiplicative factor. To remove low-frequency drift in the MR signal, we used a high-pass temporal filter (Gaussian-weighted least-squares straight line fitting, with σ = 50 s). Functional data were registered to stereotaxic space [Montreal Neurological Institute (MNI)] (Jenkinson and Smith, 2001).
Before applying a general linear model, we removed noise in our preprocessed data using independent components analysis (ICA). We first conducted a probabilistic ICA using MELODIC (Beckmann and Smith, 2004). We next applied an automated technique that classified noise components using three heuristics. First, we classified all components according to the frequency at which maximum power was observed, and we removed those within the top 10% of that classification distribution (i.e., those with power at the highest frequencies). Second, to identify components associated with transient and discontinuous changes in the MR signal (i.e., those associated with fluctuations in the scanner hardware), we calculated the maximum change for each component within a 6 s moving window and discarded those components in the top 10% of that distribution. Third, we used the motion parameter estimates (three axes of rotation and three directions of translation) from our preprocessing steps to identify the correlation of each component with head motion, removing the 10% with the greatest correlation with one or more head motion parameters. Together, these three steps removed 981 components (27.74%). We note that although the choice of a 90th percentile cutoff is arbitrary, our use of a fixed, a priori criterion and automated component removal eliminates the possibility of subjective bias in preprocessing.
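These three heuristics can be sketched in Python. This is illustrative only: the original pipeline operated on MELODIC component time courses, and the exact metric definitions below (peak-power frequency, windowed range, maximum absolute motion correlation) are our assumptions about reasonable implementations of each criterion.

```python
import numpy as np

def flag_noise_components(timecourses, motion_params, tr=2.0, window_s=6.0, pct=90):
    """Flag ICA components as noise; each heuristic rejects the top 10%:
      1. peak spectral power at high frequencies,
      2. large signal change within a 6 s moving window (scanner transients),
      3. strong correlation with any of six head-motion parameters.

    timecourses   : (n_components, n_timepoints) array
    motion_params : (n_timepoints, 6) rotations and translations
    Returns a boolean mask of components to remove.
    """
    n_comp, n_tp = timecourses.shape

    # 1. Frequency at which each component's power spectrum peaks (skip DC)
    freqs = np.fft.rfftfreq(n_tp, d=tr)
    power = np.abs(np.fft.rfft(timecourses, axis=1)) ** 2
    peak_freq = freqs[power[:, 1:].argmax(axis=1) + 1]

    # 2. Maximum signal change within a moving window
    w = max(int(window_s / tr), 1)
    max_jump = np.array([
        max(tc[i:i + w].max() - tc[i:i + w].min() for i in range(n_tp - w + 1))
        for tc in timecourses
    ])

    # 3. Strongest absolute correlation with any motion parameter
    motion_r = np.array([
        max(abs(np.corrcoef(tc, motion_params[:, j])[0, 1]) for j in range(6))
        for tc in timecourses
    ])

    # Union of the top (100 - pct)% of components on each criterion
    noise = np.zeros(n_comp, dtype=bool)
    for metric in (peak_freq, max_jump, motion_r):
        noise |= metric >= np.percentile(metric, pct)
    return noise
```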
Statistical analyses proceeded in three stages. First, denoised functional data were analyzed using a general linear model with local autocorrelation correction (Woolrich et al., 2001). For each run, we set up separate statistical models for monetary and social rewards. The monetary reward model comprised six regressors that corresponded to each amount ($5, $2, $1, −$1, −$2, and −$5). The social reward model comprised four regressors representing each level of attractiveness (1-star, 2-star, 3-star, 4-star). A nuisance regressor modeled the target-detection component of the task. All regressors consisted of unit impulses convolved with a canonical hemodynamic response function. Our key contrasts involved bidirectional comparisons of high versus low monetary value ($5 and $2 gains vs $5 and $2 losses) and of high versus low facial attractiveness (4-star vs 1-star). Secondary contrasts examined the responses to all face images and to all monetary images, independently. We then combined data across runs, for each subject, using a fixed-effects model, and combined data across subjects using a mixed-effects model (Beckmann et al., 2003; Woolrich et al., 2004). Measures of percentage signal change were obtained by converting the mean parameter estimates and normalizing to the lowest reward level in each model; this provides interpretable, albeit independent, baselines for both face and monetary rewards.
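The regressor construction described above (unit impulses convolved with a canonical hemodynamic response function) can be sketched as follows. The double-gamma HRF parameters below are conventional defaults, not values taken from this study.

```python
import numpy as np
from math import gamma as gamma_fn

def canonical_hrf(t):
    """Canonical double-gamma HRF (peak at ~5-6 s, undershoot at ~15-16 s)."""
    peak = t ** 5 * np.exp(-t) / gamma_fn(6)
    undershoot = t ** 15 * np.exp(-t) / gamma_fn(16)
    return peak - undershoot / 6.0

def build_regressor(onset_vols, n_vols, tr=2.0):
    """Unit impulses at stimulus-onset volumes, convolved with the HRF."""
    impulses = np.zeros(n_vols)
    impulses[list(onset_vols)] = 1.0
    hrf = canonical_hrf(np.arange(0, 32, tr))  # 32 s kernel sampled at the TR
    return np.convolve(impulses, hrf)[:n_vols]
```

One such regressor would be built per condition (each monetary amount, each attractiveness level) and entered as a column of the design matrix.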
Except where noted, all z-statistic (Gaussianized t) images were thresholded using clusters determined by z > 2.3 and a corrected cluster-significance threshold of p < 0.05 (Worsley, 2001). Statistical overlay images were created using MRIcron (Rorden et al., 2007) and anatomical labels for local maxima were obtained from the Talairach Client (Lancaster et al., 2000). All coordinates in the manuscript are reported in MNI space.
Results
During the multimodal reward task, heterosexual male adults passively viewed two types of images, presented in a randomized sequence: photographs of female faces and photographs of currency. Because of the potentially confounding effects associated with reward anticipation (Knutson et al., 2001b; O'Doherty et al., 2002; Knutson et al., 2003), participants viewed all images passively, with no response required for their delivery. To ensure vigilance, participants performed a simultaneous target detection task by pressing a button to the infrequent (<5% of all trials) appearance of a small yellow border around the image. Overall target detection accuracy was very good (mean 91 ± 2%, across participants) and was not significantly different between the two classes of images.
Activation in anterior VMPFC reflects experienced value
We analyzed fMRI data from the multimodal reward task using independent models for the monetary rewards and for the social rewards. Collapsing across all regressors for each reward type revealed main effects in expected regions: images of attractive faces evoked activation in the fusiform face area and lateral occipital cortex, whereas images of monetary gains and losses evoked activation in the intraparietal sulci and cuneus (supplemental Figs. 2 and 3; supplemental Tables 1 and 2, available at www.jneurosci.org as supplemental material).
To identify brain regions whose activation was modulated by the experienced value of monetary rewards, we contrasted fMRI responses to large monetary gains (defined as $5 and $2) with responses to large monetary losses (defined as −$5 and −$2). We refer to this contrast as our metric of monetary value. We found that two subregions within the VMPFC and adjacent frontopolar cortex responded significantly more to monetary gains than to losses (Fig. 2A; supplemental Table 3, available at www.jneurosci.org as supplemental material). A confirmatory analysis modeling monetary value as a linear parametric effect revealed activation in a similar region within anterior VMPFC (supplemental Fig. 4; supplemental Table 4, available at www.jneurosci.org as supplemental material). Moreover, a post hoc analysis that used neither a minimum cluster size for significance nor a correction for multiple comparisons [similar to prior studies using small-volume correction; e.g., Knutson et al. (2007) and Winston et al. (2007)] revealed increased activation in vSTR to monetary gains compared with losses (see supplemental Fig. 5, available at www.jneurosci.org as supplemental material). No regions exhibited significantly greater activation to monetary losses than to monetary gains.
We next identified regions whose activation was modulated by the attractiveness of social rewards, by comparing activation to images of attractive faces (from the 4-star category) to images of unattractive faces (from the 1-star category). We refer to this contrast as social value. Regions of the vSTR and VMPFC responded significantly more to attractive faces compared with unattractive faces (Fig. 2A; supplemental Table 5, available at www.jneurosci.org as supplemental material). A confirmatory analysis modeled experienced value of the face images using a single linear effect and found significant activation in similar regions (supplemental Fig. 6; supplemental Table 6, available at www.jneurosci.org as supplemental material). The inverse contrast (unattractive faces > attractive faces) did not reveal any significant activation at our statistical threshold.
A conjunction analysis revealed that the only region exhibiting a significant response to both forms of rewards was located within the anterior VMPFC (aVMPFC) (Fig. 2A). Activation within this region resembled a monotonic concave function (similar to a utility curve) for both increasing monetary value (Fig. 2B) and increasing social value (Fig. 2C).
Activation in posterior VMPFC reflects decision value
We next tested our core hypothesis that neurometric measures of subjective value—as obtained outside the decision context—could predict subsequent decisions. Rather than obtaining potentially unreliable self-report measures for value, which could reflect many factors other than the true valuation of the different modalities, we used a choice task that involved transactions between faces and money. Consistent with models of revealed preferences (Samuelson, 1938), economic exchanges between goods provide a measure of their relative valuation. On each trial of the economic exchange task, subjects chose whether to sacrifice more money to view a more attractive face. We observed considerable interindividual variability in participants' willingness to exchange money for viewing a more-attractive image (supplemental Fig. 7, available at www.jneurosci.org as supplemental material); the proportion of such exchanges ranged from 0.01 to 0.91 across participants (M = 0.34; SD = 0.25). We also quantified WTP for a one-star increase in attractiveness (see Materials and Methods for details). WTP ranged from 0 to 6.4 cents/star (M = 1.41; median = 0.33) and correlated strongly with the proportion of exchanges (r = 0.74; ρ = 0.91; p < 0.001 for both tests).
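For reference, the two correlation measures reported here (Pearson's r and Spearman's ρ) can be computed as in the numpy sketch below. This simplified implementation uses plain ranks and does not assign mid-ranks to ties, unlike standard statistical packages.

```python
import numpy as np

def pearson_and_spearman(x, y):
    """Pearson r and Spearman rho between two paired measures, e.g., each
    participant's WTP and proportion of economic exchanges.
    Note: rank computation here ignores ties (no mid-ranks)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]
    # Spearman rho is the Pearson correlation of the rank-transformed data
    rx = x.argsort().argsort()
    ry = y.argsort().argsort()
    rho = np.corrcoef(rx, ry)[0, 1]
    return r, rho
```

Reporting both measures is useful when, as here, the behavioral distribution is skewed: rho is insensitive to any monotonic transformation of the data.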
We introduced each participant's proportion of exchanges—a revealed-preference measure of relative value for faces over money—as a covariate in the across-participants analysis of the multimodal reward task, which was conducted earlier in the experimental session. Thus, we used the prior fMRI data to predict subsequent economic decisions. Strikingly, the difference between experienced social value and experienced monetary value [(4-star minus 1-star) minus (large gains minus large losses)] in the posterior VMPFC (pVMPFC) was a strong positive predictor of exchange rate. Those subjects who exhibited the greatest response to social value compared with monetary value within pVMPFC were most likely to sacrifice money for the opportunity to see a more attractive face (Fig. 3A). Conversely, participants with the weakest response to social value compared with monetary value within pVMPFC were least likely to sacrifice money to view a more attractive image. This relationship was robust throughout our entire range of decision preferences (Fig. 3B; for proportion of economic exchanges: r = 0.72; for mean WTP: r = 0.74). No other brain region (including aVMPFC; supplemental Figs. 8, 9, available at www.jneurosci.org as supplemental material) exhibited a significant relationship between neurometric value sensitivity and exchange rate, at our correction threshold.
Discussion
Our results provide new insight into the neural substrates of economic value. We found that distinct regions within VMPFC track distinct aspects of economic value: aVMPFC tracks experienced value for both social and monetary rewards, whereas pVMPFC tracks the relative decision value between reward categories (supplemental Fig. 10, available at www.jneurosci.org as supplemental material). Information predictive of decision value was observed in pVMPFC even though rewards were received passively. Such a result provides strong evidence that VMPFC encodes multiple value signals that are simultaneously and obligatorily computed during the experience of rewards.
Considerable prior neuroimaging and electrophysiological work has implicated VMPFC in the assignment of value to environmental stimuli (Kringelbach and Rolls, 2004; McClure et al., 2004; Padoa-Schioppa and Assad, 2008). Anterior VMPFC, in particular, has been reported to encode outcome probability in monetary decision tasks (Knutson et al., 2005) and to track rewarding outcomes across different reward modalities (O'Doherty et al., 2001; O'Doherty et al., 2002; Knutson et al., 2003). Of note, we found significant vSTR activation only to the face stimuli, not the monetary stimuli, in apparent conflict with prior work suggesting this region responds similarly for social and monetary rewards (Izuma et al., 2008). This may reflect a bias within vSTR activation toward anticipation of rewards (Knutson et al., 2001a), or it may reflect our use of strict cluster correction. When we relaxed our cluster correction threshold, which effectively simulates the small-volume correction approach common in studies of the striatum, we found robust vSTR activation to monetary rewards. Thus, aVMPFC may play an integrative role in the experience of rewarding stimuli, perhaps through indirect input from prediction error signals computed within vSTR (Hare et al., 2008) and other reward-related regions.
Yet, while activation in aVMPFC scaled with increasing experienced value for each reward category, it was unpredictive of the relative valuation of those categories, as measured in a subsequent decision-making task. Instead, information predictive of decision value was found in pVMPFC, a region that has been recently associated with willingness to pay during active decision making (Plassmann et al., 2007; Hare et al., 2008; De Martino et al., 2009; Hare et al., 2009) and with the combined representation of expected value and reward magnitude (Rolls et al., 2008). We note that our centroid of pVMPFC activation closely matches that reported in prior experiments (Plassmann et al., 2007), which provides converging evidence for that region's role in valuation.
However, our work differs from these prior studies in several important ways. First, we related neurometric measures of value to psychometric measures of value obtained in a separate task, providing an initial proof-of-concept that value computations in VMPFC are present even in the absence of overt decision making. Second, the hedonic value from our face images was obtained within the scanning session, allowing us to create parametric measures of activation during the consumption of stimuli—and thus to dissociate experienced and decision value. Third, our subjects viewed only novel images of unknown individuals, saw those images only once, and made subsequent decisions based only on category ratings (e.g., 4-star). Given these design features, our results provide compelling evidence that pVMPFC activation reflects the relative tradeoffs between two subjectively valued continua—a necessary condition for the existence of a neural common currency (Montague and Berns, 2002).
The existence of a common neurometric scale for value—whether in VMPFC (cf. Kable and Glimcher, 2007; Rangel et al., 2008) or elsewhere—would facilitate decision making by providing a means for comparison of different goods, services, and even abstract experiences. Yet, our data also indicate that the region encoding a common currency for decision value (pVMPFC) may be distinct from those supporting experienced value (aVMPFC and vSTR). We note, however, that value continua for monetary and social rewards may have different properties, in that the former reflects changes from a neutral baseline, whereas the latter may contain both positive and negative elements in the absence of a true baseline. Nevertheless, an intriguing possibility—one strongly supported by our data—is that multiple value signals exist simultaneously within ventral prefrontal cortex, each contributing to distinct aspects of experience and behavior. We speculate that neurological disorders characterized by pathological decision making (e.g., anorexia, obsessive compulsive disorder, sex addiction) could result from dysfunction of neuronal populations that code the decision value for specific types of rewards.
This research was supported by National Institutes of Health Grants NS-41328 (to A.W.S.), EY-13496 (to M.L.P.), and MH-70685 (to S.A.H.). B.Y.H. was supported by Kirschstein NRSA DA-23338. We thank Karli Watson for helpful comments on the manuscript, Jason Chen for assistance with task programming, and Amy Winecoff for assistance with figure construction.
Correspondence should be addressed to Scott A. Huettel, Box 90999, Duke University, Durham, NC 27708.