Journal of Neuroscience
Featured Article | Articles, Behavioral/Cognitive

Neural Signatures of Value Comparison in Human Cingulate Cortex during Decisions Requiring an Effort-Reward Trade-off

Miriam C. Klein-Flügge, Steven W. Kennerley, Karl Friston and Sven Bestmann
Journal of Neuroscience 28 September 2016, 36 (39) 10002-10015; DOI: https://doi.org/10.1523/JNEUROSCI.0292-16.2016
Author affiliations:
1 Sobell Department of Motor Neuroscience and Movement Disorders, UCL Institute of Neurology, University College London, London WC1N 3BG, United Kingdom (M.C.K.-F., S.W.K., S.B.)
2 Department of Experimental Psychology, Oxford University, Oxford OX1 3UD, United Kingdom (M.C.K.-F.)
3 Wellcome Trust Centre for Neuroimaging, UCL Institute of Neurology, University College London, London WC1N 3BG, United Kingdom (K.F.)

Abstract

Integrating costs and benefits is crucial for optimal decision-making. Although much is known about decisions that involve outcome-related costs (e.g., delay, risk), many of our choices are attached to actions and require an evaluation of the associated motor costs. Yet how the brain incorporates motor costs into choices remains largely unclear. We used human fMRI during choices involving monetary reward and physical effort to identify brain regions that serve as a choice comparator for effort-reward trade-offs. By independently varying both options' effort and reward levels, we were able to identify the neural signature of a comparator mechanism. A network involving supplementary motor area and the caudal portion of dorsal anterior cingulate cortex encoded the difference in reward (positively) and effort levels (negatively) between chosen and unchosen choice options. We next modeled effort-discounted subjective values using a novel behavioral model. This revealed that the same network of regions involving dorsal anterior cingulate cortex and supplementary motor area encoded the difference between the chosen and unchosen options' subjective values, and that activity was best described using a concave model of effort-discounting. In addition, this signal reflected how precisely value determined participants' choices. By contrast, separate signals in supplementary motor area and ventromedial prefrontal cortex correlated with participants' tendency to avoid effort and seek reward, respectively. This suggests that the critical neural signature of decision-making for choices involving motor costs is found in human cingulate cortex and not ventromedial prefrontal cortex as typically reported for outcome-based choice. Furthermore, distinct frontal circuits seem to drive behavior toward reward maximization and effort minimization.

SIGNIFICANCE STATEMENT The neural processes that govern the trade-off between expected benefits and motor costs remain largely unknown. This is striking because energetic requirements play an integral role in our day-to-day choices and instrumental behavior, and a diminished willingness to exert effort is a characteristic feature of a range of neurological disorders. We use a new behavioral characterization of how humans trade off reward maximization with effort minimization to examine the neural signatures that underpin such choices, using BOLD MRI neuroimaging data. We find the critical neural signature of decision-making, a signal that reflects the comparison of value between choice options, in human cingulate cortex, whereas two distinct brain circuits drive behavior toward reward maximization or effort minimization.

  • cingulate cortex
  • cost-benefit decision making
  • fMRI
  • motor cost
  • physical effort
  • value comparison

Introduction

Cost-benefit decisions are a central aspect of flexible goal-directed behavior. One particularly well-studied neural system concerns choices where costs are tied to the reward outcomes (e.g., risk, delay) (Kable and Glimcher, 2007; Boorman et al., 2009; Philiastides et al., 2010). Much less is known about choices tied to physical effort costs, despite their ubiquitous presence in human and animal behavior. The intrinsic relationship between effort and action may engage neural circuits distinct from those involved in other value-based choice computations.

There is growing consensus that different types of value-guided decisions are underpinned by distinct neural systems, depending on the type of information that needs to be processed (e.g., Rudebeck et al., 2008; Camille et al., 2011b; Kennerley et al., 2011; Pastor-Bernier and Cisek, 2011; Rushworth et al., 2012). For example, activity in the ventromedial prefrontal cortex (vmPFC) carries a signature of choice comparison (chosen-unchosen value) for decisions between abstract goods or when costs are tied to the outcome (Kable and Glimcher, 2007; Boorman et al., 2009; FitzGerald et al., 2009; Philiastides et al., 2010; Hunt et al., 2012; Kolling et al., 2012; Clithero and Rangel, 2014; Strait et al., 2014). By contrast, such value difference signals are found more dorsally in medial frontal cortex when deciding between exploration versus exploitation (Kolling et al., 2012).

Choices requiring the evaluation of physical effort rest on representations of the required actions and their energetic costs, and thus likely require an evaluation of the internal state of the agent. This is distinct from choices based solely on reward outcomes (Rangel and Hare, 2010). Indeed, the proposed network for evaluating motor costs comprises brain regions involved in action planning and execution, including the cingulate cortex, putamen, and supplementary motor area (SMA) (Croxson et al., 2009; Kurniawan et al., 2010; Prévost et al., 2010; Burke et al., 2013; Kurniawan et al., 2013; Bonnelle et al., 2016). Neurons in anterior cingulate cortex (ACC) encode information about rewards, effort costs, and actions (Matsumoto et al., 2003; Kennerley and Wallis, 2009; Luk and Wallis, 2009; Hayden and Platt, 2010), and integrate this information into an economic value signal (Hillman and Bilkey, 2010; Hosokawa et al., 2013). Moreover, lesions to ACC profoundly impair choices of effortful options and between action values (Walton et al., 2003, 2006, 2009; Schweimer and Hauber, 2005; Kennerley et al., 2006; Rudebeck et al., 2006, 2008; Camille et al., 2011b).

While these studies highlight the importance of motor-related structures in representing effort information, it remains unclear whether computations in these regions are indeed related to comparing effort values (or effort-discounted net values), the essential neural signature, which would implicate these areas in decision making. Indeed, these regions could simply represent effort, which is then passed onto other regions for value comparison processes. A number of questions thus arise. First, is information about reward and effort compared in separate neural structures, or is this information fed to a region that compares options based on their integrated value? Second, do regions that preferably encode reward or effort have a direct influence on determining choice? Finally, assuming separate neural systems are present for influencing choices based on reward versus effort, how does the brain arbitrate between these signals when reward and effort information support opposing choices?

Here we used a task designed to identify signatures of a choice comparison for effort-based decisions in humans using fMRI and to test whether different neural circuits “drive” choices toward reward maximization versus energy minimization. We show that the neural substrates of effort-based choice are distinct from those computing outcome-related choices: well-known reward and effort circuits centered on vmPFC and SMA bias choices to be more driven by benefits or motor costs, respectively, with a region in cingulate cortex integrating cost and benefit information and comparing options based on these integrated subjective values.

Materials and Methods

Participants.

Twenty-four participants with no history of psychiatric or neurological disease, and with normal or corrected-to-normal vision, took part in this study (mean age 28 ± 1 years, age range 19–38 years, 11 females). All participants gave written informed consent and consent to publish before the start of the experiment; the study was approved by the local research ethics committee at University College London (1825/003) and conducted in accordance with the Declaration of Helsinki. Participants were reimbursed with £15 for their time; in addition, they accumulated average winnings of £7.16 ± 0.11 during each of the two blocks of the task (the maximum winnings per block were scaled to £8; the resulting average total pay was £29.32). Three participants were excluded from the analysis: one for failing to stay awake during scanning and two for excessive head movements (summed movement within a run exceeding 40 mm in any direction). All analyses were performed on the remaining 21 participants.

Behavioral task.

Participants received both written and oral task instructions. They were asked to make a series of choices between two options, which independently varied in required grip force (effort) and reward magnitude (see Fig. 1A). The reward magnitude was shown as a number (range: 10–40 points; approximately corresponding to pence) and required force levels were indicated as the height of a horizontal bar (range: 20%–80% of the participant's maximum grip force).

Each trial comprised an offer, response, and outcome phase; a subset of 30% of trials also contained an effort production phase. During the offer phase, participants decided which option to choose but they were not yet able to indicate their response. There were two trial types (50% each): ACT (action) and ABS (abstract). In ACT trials, the two choice options were presented to the left and right of fixation, and thus in a horizontal or action space configuration in which the side of presentation directly related to the hand with which to choose that option. In ABS trials, choice options were shown above and below fixation, and thus in a vertical or goods space arrangement that did not reveal the required action. In both conditions, stimuli were presented close to the center of the screen and participants did not need to move their eyes to inspect them. To maximally distinguish the hemodynamic response from the offer and response phase, the duration of the offer phase varied between 4 and 11 s (Poisson distributed; mean 6 s).

The response phase started when the fixation cross turned red. In ACT trials, the arrangement of the two choice options remained the same; in ABS trials, the two options at the top and bottom were switched to the left and right of fixation (with a 50/50% chance), thus revealing the required action mapping. Choices were indicated by a brief squeeze of a grip device (see below for details) on the corresponding side (maximum response time: 3 s; required force level: 35% of maximum voluntary contraction [MVC]). ACT and ABS trials were merged for all analyses because no significant differences were found for the tests reported in this manuscript.

On 70% of trials, no effort was required: as soon as participants indicated their choice, the unchosen option disappeared, and the message “no force” was displayed for 500 ms. The next trial commenced after a variable delay (intertrial interval: 2–13 s; Poisson distributed; mean: 5 s). On the remaining 30% of trials, a power grip of 12 s was required (effort). Again, the unchosen option disappeared, but now a thermometer appeared centrally and displayed the target force level of the chosen option. Participants were given online visual feedback about the applied force level using changing fluid levels in the thermometer. On successful application of the required force for at least 80% of the 12 s period, a green tick appeared (500 ms; outcome phase; delay preceding outcome: 0.5–1.5 s uniform) and the reward magnitude of the chosen option was added to the total winnings. Otherwise, the total winnings remained unchanged (red cross: 500 ms). Because participants were almost always successful in applying the required force on effort trials (accuracy: 99.30 ± 0.004%; only 4 participants made any mistakes), there was no confound between effort level and risk/reward expectation.

The sensitivity of the grip device was manipulated between trials (high or low). A high gain meant that the grippers were twice as sensitive as for a low gain, and thus the same force deviation doubled the rate of change in the thermometer's fluid level. While this manipulation was introduced to study interactions between mental and physical effort, none of our behavioral or fMRI analyses revealed any significant effects of gain during the choice phase, which is the focus of the present paper.

To summarize, our task involved several important features: (1) as our aim was to specifically examine value comparison mechanisms during effort-based choice, we manipulated both options' values and thus the expected values of the two offers had to be computed and compared online in each trial, unlike in previous experiments (Croxson et al., 2009; Kurniawan et al., 2010, 2013; Prévost et al., 2010; Burke et al., 2013; Bonnelle et al., 2016); (2) the decision process and the resulting motor response were separated in time (see Fig. 1A). This enabled us to examine the value comparison in the absence of, and not confounded with, processes related to action execution; (3) both reward and effort levels were varied parametrically rather than in discrete steps, and orthogonally to each other, thereby granting high sensitivity for the identification of effort and reward signals, respectively; (4) efforts were only realized on a subset of trials, ensuring that decisions were not influenced by fatigue (Klein-Flügge et al., 2015). Importantly, however, at the time of choice, participants did not know whether a given trial was real or hypothetical; therefore, the optimal strategy was to treat each trial as potentially real; and (5) the duration of the grip on effort trials (12 s) had been determined in pilot experiments and ensured that force levels were factored into the choice process. Moreover, the fixed duration of grip force also meant that effort costs were not confounded with temporal costs.

Scanning procedure.

Before scanning, force levels were adjusted to each individual's grip strength using a grip calibration. Participants were seated in front of a computer monitor and held a custom-made grip device in both hands. Each participant's baseline (no grip) and MVC were measured over a period of 3 s, separately for both hands. The measured values were used to define individual force ranges (0%–100%) for each hand, which were then used in the behavioral task, both prescanning and during scanning.

Before entering the scanner, participants completed a training session consisting of one block of the behavioral task (112 trials, ∼30 min). This gave them the opportunity to experience different force levels and to become familiar with the task. Importantly, it also ensured that decisions made subsequently in the scanner would not be influenced by uncertainty about the difficulty of the displayed force levels. In the scanner, participants completed two blocks of the task (overall task duration ∼60 min; 224 choices).

Generation of choice stimuli.

Because our main question related to the encoding of value difference signals during effort-based choices, the generation of suitable choice stimuli was a key part of the experimental design. Choice options were identical for every individual and were chosen such that they would minimize the correlation between the fMRI regressors for chosen and unchosen effort, reward magnitude, and value (obtained mean correlations after scanning: effort: −0.23; reward magnitude: 0.11; value: 0.43; see Fig. 1C). We also ensured that left and right efforts, reward magnitudes, and values were decorrelated to be able to identify action value signals (effort: 0.28; reward magnitude: 0.05; value: 0.07). We simulated several individuals using a previously suggested value function for effort-based choice (Prévost et al., 2010). Stimuli were optimized with the following additional constraints: either the efforts or the reward magnitudes had to differ by at least 0.1 on each trial, the range of efforts and reward magnitudes was [0.2 to 0.8] × MVC or 0–50 points, respectively, and the overall expected value for both hands was comparable. Furthermore, in 85% of trials, the larger reward was paired with the larger effort level, and the smaller reward with the smaller effort level, making the choice hard, but on 15% of trials, the larger reward was associated with the smaller effort level (“no-brainer”). The two choice sets that minimized the correlations between our regressors of interest were used for the fMRI experiment. A third stimulus set was saved for the behavioral training before scanning.
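The core of this stimulus-selection procedure can be sketched as a rejection-sampling loop: draw many candidate choice sets and keep the one that minimizes the largest absolute correlation among variables that will later become fMRI regressors. The sketch below uses hypothetical draw counts and omits the paper's choice simulations and additional constraints (minimum differences, "no-brainer" proportion, matched hand values); it illustrates only the core idea.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_choice_set(n_trials=112):
    """Draw one candidate set: two efforts (fraction of MVC) and two rewards per trial."""
    efforts = rng.uniform(0.2, 0.8, size=(n_trials, 2))
    rewards = rng.uniform(0.0, 1.0, size=(n_trials, 2))  # 0-50 points scaled to [0, 1]
    return efforts, rewards

def max_abs_corr(efforts, rewards):
    """Largest absolute correlation between the two options' efforts and rewards."""
    return max(abs(np.corrcoef(efforts[:, 0], efforts[:, 1])[0, 1]),
               abs(np.corrcoef(rewards[:, 0], rewards[:, 1])[0, 1]))

# Keep the candidate set with the smallest worst-case correlation
best_set, best_corr = None, np.inf
for _ in range(500):
    candidate = sample_choice_set()
    c = max_abs_corr(*candidate)
    if c < best_corr:
        best_set, best_corr = candidate, c
```

In the actual design the criterion also covered chosen/unchosen regressors, which requires simulating choices with a discounting model as described above.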

Preliminary fMRI analyses revealed a bias in our stimuli that we had initially overlooked: in the last third of trials of the second block, the overall offer value ((magnitude1/effort1 + magnitude2/effort2)/2) decreased steadily, leading to skewed contrast estimates. The last 40 trials were therefore discarded from all analyses.

We refer to choices in this study as “effort-based” to highlight the distinction from purely outcome/reward-based choices or choices involving other types of costs (e.g., delay-based). But of course, in our task, all choices were effort- as well as reward-based.

Recordings of grip strength.

The grippers were custom-made and consisted of two force transducers (FSG15N1A, Honeywell) placed between two molded plastic bars (see also Ward and Frackowiak, 2003). A continuous recording of the differential voltage signal, proportional to the exerted force, was acquired, fed into a signal conditioner (CED 1902, Cambridge Electronic Design), digitized (CED 1401, Cambridge Electronic Design), and fed into the computer running the stimulus presentation. This enabled us, during effort trials, to give online feedback about the exerted force using the thermometer display.

Behavioral analysis.

To examine which task variables affected participants' choice behavior, a logistic regression was fitted to participants' choices (1 = RH; 0 = LH) using the following nine regressors: a RH-LH bias (constant term); condition (ABS or ACT); gain (high or low); LH-effort on previous trial; RH-effort on previous trial; reward magnitude left; reward magnitude right; effort left; effort right. t tests performed across participants on the obtained regression coefficients were adjusted for multiple comparisons using Bonferroni correction. Because only reward magnitudes and efforts influenced behavior significantly (see Results), the logistic regression models performed for the analysis of the neural data below (Eqs. 2, 3) only contained these variables (or their amalgamation into combined value).
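The structure of this analysis can be sketched as follows: build the nine-regressor design matrix and fit the logistic regression. The data below are entirely synthetic (choices simulated to depend only on rewards and efforts, with made-up weights), and the fit uses plain gradient ascent rather than any particular statistics package.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000  # hypothetical trial count for illustration

# Nine regressors, named as in the text (synthetic values)
X = np.column_stack([
    np.ones(n),                  # RH-LH bias (constant term)
    rng.integers(0, 2, n),       # condition (ABS=0, ACT=1)
    rng.integers(0, 2, n),       # gain (low=0, high=1)
    rng.uniform(0.2, 0.8, n),    # LH effort on previous trial
    rng.uniform(0.2, 0.8, n),    # RH effort on previous trial
    rng.uniform(0.0, 1.0, n),    # reward magnitude left
    rng.uniform(0.0, 1.0, n),    # reward magnitude right
    rng.uniform(0.2, 0.8, n),    # effort left
    rng.uniform(0.2, 0.8, n),    # effort right
])

# Simulate choices (1 = right hand) driven by reward and effort only
logit = 4 * (X[:, 6] - X[:, 5]) - 4 * (X[:, 8] - X[:, 7])
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(float)

# Fit the logistic regression by gradient ascent on the log-likelihood
w = np.zeros(X.shape[1])
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ w))
    w += 0.1 * X.T @ (y - p) / n
```

With choices generated this way, the recovered weights are positive for right reward and left effort, negative for left reward and right effort, and near zero for the nuisance regressors, mirroring the pattern of significant coefficients reported in the Results.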

To examine the influence of reward and effort on participants' choice behavior in more depth, we tested whether participants indeed weighed up effort against reward, and whether they treated reward and effort as continuous variables. If reward and effort compete for their influence on choice, then the influence of effort should become larger as the reward difference becomes smaller, and vice versa. Thus, we performed a median split of our trials according to the absolute difference in reward (or effort) between the two choice options. We then calculated the likelihood of choosing an option as a function of its effort (reward) level, separately for the two sets of trials. Effort (reward) values were distributed across 10 bins with equal spacing; this binning was independent of the effort (reward) level of the alternative option. For statistical comparisons, we fitted a slope for each participant to the mean of all bins. t tests were performed on the resulting four slopes testing for the influence of (1) effort in trials with small reward difference, (2) effort in trials with large reward difference, (3) reward in trials with small effort difference, and (4) reward in trials with large effort difference. We report uncorrected p values, but all conclusions hold when correcting for six comparisons (1–4 against zero; 1 vs 2; 3 vs 4).
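A minimal version of this median-split analysis, run on synthetic choices from a made-up reward-and-effort chooser rather than participant data, looks like this:

```python
import numpy as np

rng = np.random.default_rng(2)

def binned_choice_curve(effort, chosen, n_bins=10):
    """P(option chosen) per effort bin, plus a least-squares slope across bins."""
    edges = np.linspace(effort.min(), effort.max(), n_bins + 1)
    idx = np.clip(np.digitize(effort, edges) - 1, 0, n_bins - 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p = np.array([chosen[idx == b].mean() for b in range(n_bins)])
    slope = np.polyfit(centers, p, 1)[0]
    return p, slope

# Synthetic chooser: prefers high reward, avoids high effort
n = 20000
eL, eR = rng.uniform(0.2, 0.8, n), rng.uniform(0.2, 0.8, n)
rL, rR = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
p_left = 1 / (1 + np.exp(-(4 * (rL - rR) - 4 * (eL - eR))))
chose_left = (rng.uniform(size=n) < p_left).astype(float)

# Median split on the absolute reward difference between the two options
small_dr = np.abs(rL - rR) < np.median(np.abs(rL - rR))
_, slope_small = binned_choice_curve(eL[small_dr], chose_left[small_dr])
_, slope_large = binned_choice_curve(eL[~small_dr], chose_left[~small_dr])
# If reward and effort compete, effort matters more when rewards are similar:
# slope_small should be more negative than slope_large
```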

We also tested for effects of fatigue: the above logistic regression suggested that choices were not affected by whether or not the previous trial required the production of effort, as shown previously in this task (Klein-Flügge et al., 2015). More detailed analyses examined the percentage of trials in which the higher effort option was chosen (running average across 20 trials), and participants' performance in reaching and maintaining the required force. The latter was measured as the time point when 10 consecutive samples were above the force criterion (shorter times indicating that the target force was reached sooner), and as the percentage of time out of 12 s that participants were at criterion, respectively. For all measures, we compared the first and last third of trials. Here we report the comparison between the first and last third across the entire experiment; separate analyses, using the first and last third of just the first or the second block, revealed identical results. There were no effects of fatigue: in all cases, participants either improved or stayed unchanged (percentage higher effort chosen: first third, 60.56 ± 1.94%; last third, 60.93 ± 2.79%, p = 0.69, t(20) = −0.40; reaching the force threshold: first third, 0.83 ± 0.04 s; last third, 0.76 ± 0.03 s; p = 0.01, t(20) = 2.82; maintaining the force above threshold: first third, 92.49 ± 0.47%; last third, 93.51 ± 0.28%; p = 0.01, t(20) = −2.84).

To derive participants' subjective values for the offers presented on each trial, we developed an effort discounting model (Klein-Flügge et al., 2015). This model has been shown to provide better fits than the hyperbolic model previously suggested for effort discounting (Prévost et al., 2010), both here and in our previously published work (Klein-Flügge et al., 2015). Crucially, its shape is initially concave, unlike a hyperbolic function, allowing for smaller devaluations of value for effort increases at weak force levels and steeper devaluations at higher force levels, which is intuitive for effort discounting and biologically plausible. Our model takes the following form:

V = M × [1 − (1/(1 + e^(−k(C − p))) − 1/(1 + e^(kp)))]

where V is the subjective value, C is the effort cost, M is the reward magnitude, and k and p are free parameters; the second term inside the parentheses shifts the sigmoid so that V = M at zero effort. C and M are scaled between 0 and 1, corresponding to 0% and 100% MVC, and 0 and 50 points, respectively. A simple logistic regression on the difference in subjective values between choice options was then used to fit participants' choices; in other words, the following function (softmax rule) was used to transform the subjective values V1 and V2 of the two options offered on each trial into the probability of choosing option 1:

P(choose option 1) = 1/(1 + e^(−βV(V1 − V2)))

The free parameters (slope k, turning point p, and softmax precision parameter βV) were fitted using the Variational Laplace algorithm (Penny et al., 2003; Friston et al., 2007). This is a Bayesian estimation method that incorporates Gaussian priors over model parameters and uses a Gaussian approximation to the posterior density. The parameters of the posterior are iteratively updated using a gradient ascent approach with an adaptive step size. Importantly, the algorithm also provides the free energy F, an approximation to the model evidence, that is, the probability of obtaining the observed choice data given the model.
To maximize our chances of finding global, rather than local, maxima with this gradient ascent algorithm, parameter estimation was repeated over a grid of initialization values, with eight initializations per parameter. The optimal set of parameters (i.e., that obtained from the initialization that resulted in the maximal free energy) was used for modeling subjective values in the fMRI data. For our BOLD analyses, the most relevant parameter was βV. It reflects the weight (i.e., strength) with which participants' choices are driven by subjective value rather than noise; it is also often referred to as precision or inverse softmax temperature. Fitting ACT and ABS, or high and low gain trials, separately did not lead to any significant differences between conditions (paired t tests on parameter estimates between conditions, all p > 0.3) and did not improve the model evidence (paired t tests on the model evidence; fitting conditions separately or not: p = 0.82; fitting gain separately or not: p = 0.63). Trials were therefore pooled for model fitting. Once fitted, the performance of our new model was compared with that of the hyperbolic model and two parameter-free models (difference: reward − effort; quotient: reward/effort), as described by Klein-Flügge et al. (2015), using a formal model comparison.
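In code, the two equations of this model (sigmoidal effort discounting and the softmax choice rule) can be written as below; the Variational Laplace fitting itself is not reproduced here, and the scaling conventions follow the text:

```python
import numpy as np

def subjective_value(M, C, k, p):
    """Sigmoidal effort discounting (after Klein-Fluegge et al., 2015).

    M: reward magnitude scaled to [0, 1]; C: effort cost scaled to [0, 1];
    k: slope; p: turning point. The discount is shifted so that V = M at
    zero effort, giving an initially concave devaluation of reward.
    """
    discount = 1.0 / (1.0 + np.exp(-k * (C - p))) - 1.0 / (1.0 + np.exp(k * p))
    return M * (1.0 - discount)

def p_choose_option1(V1, V2, beta):
    """Softmax (logistic) rule on the subjective value difference,
    with beta the precision / inverse temperature."""
    return 1.0 / (1.0 + np.exp(-beta * (V1 - V2)))
```

For example, with k and p placing the turning point at mid-range efforts, value losses between 0% and 10% MVC are smaller than between 40% and 50% MVC, which is the concavity property described above.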

fMRI data acquisition and preprocessing.

The fMRI methods followed standard procedures (e.g., Klein-Flügge et al., 2013): T2*-weighted echo-planar images (EPI) with BOLD contrast were acquired using a 12-channel head coil on a 3 tesla Trio MRI scanner (Siemens). A special sequence was used to minimize signal dropout in the OFC region (Weiskopf et al., 2006); it included a TE of 30 ms, a tilt of 30° relative to the rostrocaudal axis, and a local z-shim with a moment of −0.4 mT/m ms applied to the OFC region. To achieve whole-brain coverage, we used 45 transverse slices of 2 mm thickness, with an interslice gap of 1 mm and an in-plane resolution of 3 × 3 mm, and collected slices in ascending order. This led to a TR of 3.15 s. In each session, a maximum of 630 volumes were collected (∼33 min), and the first five volumes of each block were discarded to allow for T1 equilibration effects. A single T1-weighted structural image with 1 mm3 voxel resolution was acquired and coregistered with the EPI images to permit anatomical localization. A fieldmap with dual echo-time images (TE1 = 10 ms, TE2 = 14.76 ms, whole-brain coverage, voxel size 3 × 3 × 3 mm) was obtained for each subject to allow for correction of the geometric distortions induced in the EPIs at high field strength (Andersson et al., 2001).

During the EPI acquisition, we also obtained several physiological measures. The cardiac pulse was recorded using an MRI-compatible pulse oximeter (model 8600 F0, Nonin Medical), and thoracic movement was monitored using a custom-made pneumatic belt positioned around the abdomen. The pneumatic pressure changes were converted into an analog voltage using a pressure transducer (Honeywell International) before digitization, as reported previously (Hutton et al., 2011).

Preprocessing and statistical analyses were performed using SPM8 (Wellcome Trust Centre for Neuroimaging, London; www.fil.ion.ucl.ac.uk/spm). Image preprocessing consisted of realignment of images to the first volume, distortion correction using fieldmaps, slice time correction, conservative independent component analysis to identify and remove obvious artifacts (using MELODIC in Fmrib's Software Library; http://fsl.fmrib.ox.ac.uk/), coregistration with the structural scan, normalization to a standard MNI template, and smoothing using an 8 mm FWHM Gaussian kernel.

Data analysis: GLM.

The first GLM (GLM1) included 12 main event regressors. The offer phase was described using onsets for (1) ACT trials preparing a left response, (2) ACT trials preparing a right response, and (3) ABS trials. All three events were modeled using durations of 2 s and were each associated with four parametric modulators: the reward magnitude and effort of the chosen and unchosen option. Crucially, these four parametric modulators competed to explain common variance during the estimation, rather than being serially orthogonalized (in other words, we implicitly tested for effects that were unique to each parametric explanatory variable). The response phase was described using four regressors for “no force” trials (1 s duration) and four regressors for effort production trials (12 s duration): (4–7) no force ACTleft, ACTright, ABSleft, ABSright; (8–11) effort production left (low gain), left (high gain), right (low gain), right (high gain). Finally, the outcome was modeled as a single regressor because the proportion of trials in which efforts were not produced successfully was negligible (median: 0; mean: 0.43 ± 0.22 trials; only 4 of 21 participants had any unsuccessful trials).
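The consequence of entering correlated parametric modulators jointly, without serial orthogonalization, can be illustrated with an ordinary least-squares toy example: each beta then reflects only the variance unique to that regressor, and shared variance is credited to neither. The data below are made up and stand in for any pair of correlated modulators (e.g., chosen reward and chosen effort):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500

# Two correlated parametric modulators
x1 = rng.standard_normal(n)
x2 = 0.6 * x1 + 0.8 * rng.standard_normal(n)  # correlated with x1

# Simulated signal driven by x1 only
y = 2.0 * x1 + 0.0 * x2 + 0.1 * rng.standard_normal(n)

# Joint (non-orthogonalized) fit: betas reflect unique contributions
X = np.column_stack([np.ones(n), x1, x2])
betas = np.linalg.lstsq(X, y, rcond=None)[0]
# betas[1] recovers ~2 and betas[2] ~0; serial orthogonalization would instead
# have credited the shared variance to whichever regressor was entered first
```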

In addition to event regressors, a total of 23 nuisance regressors were included to control for motion and physiological effects of no interest. First, to account for motion-related artifacts that had not been eliminated in rigid-body motion correction, the six motion regressors obtained during realignment were included. Second, to remove variance accounted for by cardiac and respiratory responses, a physiological noise model was constructed using an in-house MATLAB toolbox (The MathWorks) (Hutton et al., 2011). Models for cardiac and respiratory phase and their aliased harmonics were based on RETROICOR (Glover et al., 2000). The model for changes in respiratory volume was based on Birn et al. (2006). This resulted in 17 physiological regressors in total: 10 for cardiac phase, 6 for respiratory phase, and 1 for respiratory volume.

The parameters of the hemodynamic response function were modified to obtain a double-gamma hemodynamic response function, with the standard settings in Fmrib's Software Library (http://fsl.fmrib.ox.ac.uk/): delay to response 6 s, delay to undershoot 16 s, dispersion of response 2.5 s, dispersion of undershoot 4 s, ratio of response to undershoot 6, length of kernel 32 s.
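Assuming FSL's convention of parameterizing each gamma by its mean lag ("delay") and standard deviation ("dispersion"), the quoted settings define a kernel roughly like the sketch below (for illustration only; the actual SPM/FSL implementations differ in detail):

```python
import math
import numpy as np

def gamma_pdf(t, mean, sd):
    """Gamma density parameterized by its mean and standard deviation."""
    shape = (mean / sd) ** 2
    scale = sd ** 2 / mean
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    pos = t > 0
    out[pos] = (t[pos] ** (shape - 1) * np.exp(-t[pos] / scale)
                / (math.gamma(shape) * scale ** shape))
    return out

def double_gamma_hrf(tr=3.15, length=32.0,
                     delay=6.0, undershoot_delay=16.0,
                     dispersion=2.5, u_dispersion=4.0, ratio=6.0):
    """Double-gamma HRF sampled at the TR, with the settings quoted in the text:
    positive response minus a scaled, delayed undershoot; peak-normalized."""
    t = np.arange(0, length, tr)
    h = (gamma_pdf(t, delay, dispersion)
         - gamma_pdf(t, undershoot_delay, u_dispersion) / ratio)
    return h / h.max()
```

Sampled finely, the kernel peaks a few seconds after onset and shows the characteristic late undershoot around 16 s.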

The second GLM (GLM2) was identical to the first, except that the four parametric regressors (reward magnitude and effort of the chosen and unchosen option) were replaced by the subjective model-derived values of the chosen and unchosen option. This allowed us to identify regions encoding the difference in subjective value between the offers.

Three further GLMs were fitted to the data to test whether the values derived from the sigmoidal model provide the best explanation of the measured BOLD signals. These GLMs were identical to GLM2, except that the parametric regressors for the values of the chosen and unchosen option derived from the sigmoidal model were replaced by (1) the values derived from a hyperbolic model (GLM3), (2) the values derived from a parameter-free difference “reward − effort” (GLM4), or (3) the values derived from a parameter-free quotient “reward/effort” (GLM5).
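The four value parameterizations compared in GLM2-GLM5 might be sketched as follows. The sigmoidal form, with slope k and turning point p and a normalization so that zero effort yields the full reward, is an assumption modeled on Klein-Flügge et al. (2015); the exact form the authors fitted may differ:

```python
import numpy as np

def value_sigmoid(M, E, k=4.0, p=0.5):
    """Sigmoidal effort discounting (cf. Klein-Flügge et al., 2015).
    The discount factor falls sigmoidally with effort E and is rescaled
    so that V = M at E = 0; this normalization is assumed here."""
    raw = 1.0 / (1.0 + np.exp(k * (E - p)))
    at_zero = 1.0 / (1.0 + np.exp(-k * p))
    return M * raw / at_zero

def value_hyperbolic(M, E, k=1.0):
    """Hyperbolic discounting (GLM3), as proposed for effort by
    Prévost et al. (2010)."""
    return M / (1.0 + k * E)

def value_difference(M, E):
    """Parameter-free 'reward - effort' (GLM4)."""
    return M - E

def value_quotient(M, E):
    """Parameter-free 'reward / effort' (GLM5); assumes E > 0."""
    return M / E
```

For p > 0 the sigmoidal discount curve is initially concave: early effort increments cost little subjective value, later ones cost much more, matching the behavior described in the Results.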

Identifying signatures of choice computation.

Our first aim was to identify brain regions with BOLD signatures of choice computation (see Fig. 2A). We therefore first identified brain regions that fulfilled two criteria (GLM1): (1) the BOLD signal correlated negatively with the difference in effort between the chosen and unchosen options; and (2) the BOLD signal correlated positively with the difference in reward magnitude between the chosen and unchosen options. Together, these two signals form the basis of a value difference signal, because effort contributes negatively and reward magnitude positively to overall value. Previous work, using predictions derived from a biophysical cortical attractor network, has demonstrated that at the level of large neural populations, as measured using human neuroimaging techniques such as fMRI or MEG, the characteristic signature of a choice comparison process is a value difference signal (Hunt et al., 2012). The responses predicted for harder and easier choices differ because the speed of the network computations varies as a function of choice difficulty (e.g., faster for a high value difference). Thus, an area at the formal conjunction of the two contrasts described by criteria (1) and (2) would carry the relevant signatures for computing a subjective value difference signal, a cardinal requirement for guiding choice. Importantly, while we reasoned that the choice computations in our specific task should follow similar principles as in Hunt et al. (2012), we expected this computation to occur in different regions because it would be based on the integration of a different type of decision cost. For completeness, an additional analysis (see Fig. 5) also identified brain regions significant in the inverse contrast (i.e., a conjunction of positive effort and negative reward magnitude difference) (Wunderlich et al., 2009; Hare et al., 2011).

Regions of interest (ROIs) and extraction of time courses.

For whole-brain analyses, we used an FWE cluster-corrected threshold of p < 0.05 (with a cluster-defining threshold of p < 0.01 and a cluster extent threshold of 10 voxels). For a priori ROI analyses, we used a small-volume-corrected FWE cluster-level threshold of p < 0.05 in spheres of 5 mm around previously reported coordinates, namely, in left and right putamen ([±26, −8, −2]) (Croxson et al., 2009), SMA ([4, −6, 58]) (Croxson et al., 2009), and vmPFC ([−6, 48, −8]) (Boorman et al., 2009).
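A 5 mm spherical small-volume mask around a seed coordinate can be sketched as follows; the function name, grid size, and isotropic 3 mm voxel size are illustrative assumptions:

```python
import numpy as np

def sphere_mask(shape, center_vox, radius_mm, voxel_size_mm=3.0):
    """Boolean mask of all voxels within radius_mm of a center voxel.

    Isotropic voxels are assumed; distances are computed in millimeters
    so the mask approximates a 5 mm small-volume sphere.
    """
    grid = np.indices(shape).reshape(3, -1).T        # all voxel coordinates
    d = np.linalg.norm((grid - center_vox) * voxel_size_mm, axis=1)
    return (d <= radius_mm).reshape(shape)

mask = sphere_mask((30, 30, 30), np.array([15, 15, 15]), 5.0)
```

With 3 mm voxels, a 5 mm radius captures the center voxel, its 6 face neighbors (3 mm away), and its 12 edge neighbors (~4.24 mm away).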

BOLD time series were extracted from the preprocessed data of the identified regions by averaging the time series of all voxels that were significant at p < 0.001 (uncorrected). Time series were up-sampled with a resolution of 315 ms (1/10 × TR) and split into trials for visual illustration of the described effects (e.g., see Fig. 2B).
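The up-sampling and trial-wise epoching might be sketched as below, assuming linear interpolation and a TR of 3.15 s (implied by the 315 ms = TR/10 resolution); the function name and epoch window are illustrative:

```python
import numpy as np

def upsample_and_epoch(ts, tr=3.15, factor=10, onsets=(), window=15.0):
    """Up-sample a voxel-average time series to TR/10 resolution and
    cut it into peri-onset epochs (illustrative sketch)."""
    n = len(ts)
    t_orig = np.arange(n) * tr
    dt = tr / factor                              # 315 ms for tr = 3.15 s
    t_up = np.arange(0.0, t_orig[-1], dt)
    ts_up = np.interp(t_up, t_orig, ts)           # linear interpolation
    n_samp = int(window / dt)
    epochs = [ts_up[int(o / dt): int(o / dt) + n_samp]
              for o in onsets if int(o / dt) + n_samp <= len(ts_up)]
    return np.array(epochs)

sig = np.sin(np.arange(100) * 0.3)                # toy "BOLD" trace
ep = upsample_and_epoch(sig, onsets=[10.0, 50.0, 120.0])
```

Averaging `ep` across its first axis would give the kind of trial-locked mean time course plotted in Figure 2B.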

At the suggestion of one reviewer, the two main analyses (the conjunction of reward and inverse effort difference described above, and the value difference contrast described below) were repeated in FSL using FLAME 1, because of differences between SPM and FSL in controlling for false positives when using cluster-level corrections (Eklund et al., 2015). For this control analysis, we imported the preprocessed (unsmoothed) images into FSL. We then used FSL's default smoothing kernel of 5 mm and a cluster-forming threshold of z > 2.3 (corresponding to p < 0.01; the default in FSL). The obtained results are overlaid in Figure 2A, D.

Encoding of subjective value.

We next asked whether BOLD signal changes in the regions identified using the abovementioned conjunction could indeed be described by the subjective values derived from our custom-made behavioral model. We thus performed a whole-brain contrast, identifying regions encoding the difference in subjective value between the chosen and unchosen option (GLM2; see Fig. 2D). To test whether the BOLD signal was better explained by subjective value as modeled using the sigmoidal function or three alternative models (hyperbolic; “difference”: reward − effort; “quotient”: reward/effort; GLM3-GLM5; see Fig. 3B), we calculated the difference between the value difference maps obtained on the first level for each participant (sigmoid vs hyperbolic; sigmoid vs difference; sigmoid vs quotient; see Fig. 3C). A standard second-level t test was performed on the three resulting difference images and statistical significance evaluated as usual.
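The model comparison on the BOLD data, per-subject difference images followed by a second-level one-sample t test, can be sketched as follows; the subject and voxel counts and the simulated first-level maps are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_sub, n_vox = 20, 500

# Hypothetical first-level value-difference beta maps per participant,
# one set per value model (simulated so the sigmoidal model fits better)
beta_sigmoid = rng.normal(0.5, 1.0, (n_sub, n_vox))
beta_hyperbolic = rng.normal(0.2, 1.0, (n_sub, n_vox))

# Second level: one-sample t test on the per-subject difference images
diff = beta_sigmoid - beta_hyperbolic
tvals, pvals = stats.ttest_1samp(diff, 0.0, axis=0)
```

Voxels where `tvals` is reliably positive are those where the sigmoidal value regressor explains the signal better than the alternative, which is the logic behind Figure 3C.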

Relating neural and behavioral effects of value difference.

If the regions identified as encoding value difference are indeed involved in the choice computation and, as a result, inform behavior, then the BOLD value signal should systematically relate to behavioral measures of choice performance (Jocham et al., 2012; Kolling et al., 2012). To test this, we used the behavioral measure of the effect of value difference, βV, derived from the logistic regression analysis above (Eq. 2). Importantly, before fitting βV, model-derived subjective values were scaled to [0, 1] for all participants, so that any difference in the fitted regression coefficient βV indicated how strongly value difference influenced behavioral choices in a given participant. βV reflects how consistently participants choose the subjectively more valuable option; in other words, this parameter captures how strongly value, rather than noise, determines choice behavior. To examine whether the size of the neural value difference signal carried behavioral relevance, the behavioral weights βV were then used as a covariate for the value difference contrast in a second-level group analysis. At the whole-brain level, we thus identified regions where the encoding of value difference was significantly modulated by how strongly participants' choices were driven by subjective value (see Fig. 2F). This analysis was restricted to the regions that encoded value difference at the first level. For illustration of the effect, the neural signature of value difference (regression coefficients for chosen vs unchosen value at the peak time of 6 s) was plotted against βV (see Fig. 2F).

Reward maximization versus effort minimization.

In our task, reward maximization is in conflict with effort minimization in almost all trials, because the option with the higher reward value is also associated with the higher effort level. To capture the separate behavioral influences of reward and effort for each participant, another logistic regression analysis was conducted, but now both the difference in offer magnitudes and the difference in efforts were entered into the design matrix, rather than just their combination into value as in Equation 2, as follows:

P(choose option 1) = 1 / (1 + exp(−[βM(M1 − M2) + βE(E1 − E2)])) (Eq. 3)

Here, βM is the weight or precision with which the reward magnitude difference (M1 − M2) influences choice, and βE is the weight (precision) with which the effort difference (E1 − E2) influences choice.
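A maximum-likelihood fit of this two-weight logistic regression can be sketched on simulated choices; the trial counts, value ranges, and "true" weights below are illustrative, not the participants' data:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 400

# Hypothetical trial-wise differences between option 1 and option 2
dM = rng.uniform(-5, 5, n)          # reward-magnitude difference M1 - M2
dE = rng.uniform(-1, 1, n)          # effort difference E1 - E2

# Simulate a reward-seeking, effort-averse participant
beta_true = np.array([1.0, -3.0])   # [betaM, betaE]
p_choose1 = 1.0 / (1.0 + np.exp(-(beta_true[0] * dM + beta_true[1] * dE)))
choice1 = rng.random(n) < p_choose1

def neg_log_lik(beta):
    """Negative log-likelihood of the choices under Eq. 3."""
    z = beta[0] * dM + beta[1] * dE
    p = np.clip(1.0 / (1.0 + np.exp(-z)), 1e-9, 1 - 1e-9)
    return -np.sum(np.where(choice1, np.log(p), np.log(1 - p)))

fit = minimize(neg_log_lik, x0=np.zeros(2))
beta_M, beta_E = fit.x
```

The fitted `beta_M` and `beta_E` play the role of the per-participant weights later used as second-level covariates.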

Next, to identify which brain regions might bias the choice computation either toward reward or away from physical effort, we performed two independent tests. First, we used the behaviorally defined weights for effort, −βE, as a covariate on the second level, to identify regions where the encoding of effort difference scales with how “effort averse” participants were. In such regions, a larger difference between chosen and unchosen effort signals would indicate that participants avoid efforts more strongly (see Fig. 4B). Based on prior work, we had a priori hypotheses about effort preferences being guided by SMA and putamen (e.g., Croxson et al., 2009; Kurniawan et al., 2010, 2013; Burke et al., 2013). Therefore, we used a small-volume correction (p < 0.05) around previously established coordinates (putamen [±26, −8, −2]; SMA [4, −6, 58]) (Croxson et al., 2009). Second, in an analogous fashion, we used the behavioral weights for reward magnitude, βM, as a covariate on the second level to identify regions where the encoding of reward magnitude difference scales with how reward-seeking participants are. In brain regions thus identified, a larger BOLD signal difference between chosen and unchosen reward signals would imply that participants place a stronger weight on reward maximization in their choices (see Fig. 4A). Based on prior work, we expected reward magnitude comparisons to occur in vmPFC (e.g., Kable and Glimcher, 2007; Boorman et al., 2009; Philiastides et al., 2010). Therefore, we used a small-volume correction (p < 0.05) around previously established coordinates [−6, 48, −8] (Boorman et al., 2009).

We further characterized the relationship between participants' effort sensitivity and BOLD signal changes by asking whether the neural encoding of effort difference relates to the individual distortions captured in the parameters k and p of the effort discounting function. For each trial, we compared the true effort difference between the chosen and unchosen option with the modeled subjective effort difference between the chosen and unchosen option. We took the sum of the absolute error from the best linear fit between these two variables as an index of how well our initial GLM captured subjective distortions in the evaluation of effort. We used this measure as an additional regressor for our second-level analysis, in addition to βE (these two regressors are uncorrelated: r = −0.27, p = 0.24). This approach had the advantage that it combined subjective effort distortions driven by both p and k into a single parameter relevant for the effort comparison (correlation of the summed errors with k: r = 0.9646, p < 0.001; with p: r = 0.60, p = 0.0043).
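The per-participant distortion index, the summed absolute error around the best linear fit between true and subjective effort differences, might be computed as in this sketch (function name and toy data are illustrative):

```python
import numpy as np

def distortion_index(true_eff_diff, subj_eff_diff):
    """Sum of absolute residuals around the best linear fit between the
    true and model-derived (subjective) effort differences, a sketch of
    the per-participant distortion index described in the text."""
    slope, intercept = np.polyfit(true_eff_diff, subj_eff_diff, 1)
    fitted = slope * true_eff_diff + intercept
    return np.sum(np.abs(subj_eff_diff - fitted))

rng = np.random.default_rng(4)
true_diff = rng.uniform(-1, 1, 100)
idx_linear = distortion_index(true_diff, 2 * true_diff + 0.1)  # undistorted
idx_curved = distortion_index(true_diff, true_diff ** 3)       # distorted
```

A participant whose discounting is close to linear yields an index near zero; the more curved (distorted) the subjective mapping, the larger the index.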

Results

Human participants made choices between options with varying rewards and physical efforts (force grips; Fig. 1A). Our main aim was to identify areas carrying neural signatures of value comparison; such signatures can be absent when all decision variables favor the same option (Hunt et al., 2012). Therefore, for the majority of decisions, larger rewards were paired with larger efforts, so that reward maximization competed with energy minimization and the reward and effort of each option had to be combined into an integrated subjective value to derive a choice. We first tested whether both the size of the reward and the associated effort of each choice option had an impact on participants' choice behavior. A logistic regression showed that participants' choices were indeed guided by the reward magnitude and effort of both options (left reward: t(20) = −9.71, Cohen's d = −4.34, p = 4.28e-08; right reward: t(20) = 8.89, Cohen's d = 3.98, p = 1.44e-07; left effort: t(20) = 7.56, Cohen's d = 3.38, p = 2.79e-06; right effort: t(20) = −8.37, Cohen's d = −3.74, p = 2.79e-06; Fig. 1B). As expected, larger rewards and smaller effort costs attracted choices. Overall, participants chose the higher effort option on 48 ± 2% of trials.

Figure 1.

Task and behavior. A, Human participants chose between two options associated with varying reward magnitude (numbers) and physical effort (bar height translates into force, Offer). Once the fixation cross turned red (Response), participants were allowed to indicate their choice. Thus, the time of choice computation was separable in time from the motor response. Following a response, the effort had to be realized on an unpredictable 30% of trials (top). On these trials, participants had to produce a 12 s power grip at a strength proportional to the bar height of the chosen option. Force levels were adjusted to individuals' maximum force at the start of the experiment. Participants received feedback about successful performance of the grip (99% accuracy), and the rewards collected on successful trials were added to the total winnings. On 70% of trials (bottom), no effort was required and the next trial commenced (intertrial interval [ITI]). B, Participants' choices were driven by both options' reward magnitude and effort level, showing that all dimensions of the outcome were taken into account for computing a choice. Benefits and costs had opposite effects: larger efforts discouraged, and larger reward magnitudes encouraged, the choice of an option (standard errors: ± SEM). C, Correlations between left (L), right (R), chosen (C), and unchosen (U) effort levels (e) and reward magnitudes (r) show that the regressors of interest were sufficiently decorrelated in our design. D, Effort has a strong effect on choice in trials with small reward differences, but no effect when the reward difference is large (green panels; median split on reward difference; effort binned for visualization). Similarly, reward has a stronger effect in trials with small effort differences compared with trials with large effort differences (blue panels). This shows that participants indeed trade off effort against reward, and confirms that reward has a stronger and opposite effect compared with effort (red slope), as shown in B. Black lines indicate individual participants and suggest that reward and effort were treated as continuous variables.

Next, we examined whether effort was weighed up against reward; if this was the case, the influence of effort (reward) on the participant's choice would become stronger as the reward (effort) difference between the options becomes smaller. Indeed, effort had a bigger impact on choice in trials with a small compared with a large reward difference (median split; Fig. 1D, green panels; difference in slopes: t(20) = −18.06, p = 7.51e-14; small reward difference only: slope = −1.23 ± 0.11; t(20) = −11.51, p = 2.82e-10; large reward difference only: slope = −0.26 ± 0.12, t(20) = −1.95, p = 0.066). The same was true for reward: its impact on choice behavior was greater in trials with a small compared with a large effort difference (Fig. 1D, blue panels; difference in slopes: t(20) = 11.95, p = 1.46e-10; small effort difference only: slope = 1.65 ± 0.05, t(20) = 36.03, p = 1.14e-19; large effort difference only: slope = 0.38 ± 0.12, t(20) = 3.08, p = 0.0059). This analysis also confirmed that effort and reward were treated as continuous variables.
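The median-split logic can be sketched on simulated choices; for brevity, linear-probability slopes stand in for the logistic slopes used in the paper, and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1000
dM = rng.uniform(-5, 5, n)      # reward-magnitude difference
dE = rng.uniform(-1, 1, n)      # effort difference

# Simulated chooser: reward-seeking (weight 1.0), effort-averse (weight -3.0)
p = 1.0 / (1.0 + np.exp(-(1.0 * dM - 3.0 * dE)))
chose1 = (rng.random(n) < p).astype(float)

# Median split on |reward difference|: effort should matter mainly
# when the rewards are close (slope of choice on effort per half)
close = np.abs(dM) < np.median(np.abs(dM))
slope_close = np.polyfit(dE[close], chose1[close], 1)[0]
slope_far = np.polyfit(dE[~close], chose1[~close], 1)[0]
```

When rewards are similar, the effort slope is strongly negative; when one reward clearly dominates, the effort slope shrinks toward zero, mirroring the pattern in Figure 1D.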

Given that behavior was guided by the costs as well as the benefits associated with the two choice options, we next asked whether any brain region encoded both effort and reward in a reference frame consistent with choice. Our main aim was to identify neural signatures of the choice computation: any brain region comparing the values of the two choice options should be sensitive to information about both costs and benefits. Recent work using a biophysically realistic attractor network model (Wang, 2002) suggests that the mass activity of a region computing a choice should reflect the difference between the values of the two choice options (Hunt et al., 2012). In our task, a region comparing the options should hence encode (1) the inverse difference between chosen and unchosen efforts and (2) the (positive) difference between chosen and unchosen rewards. We therefore computed the formal conjunction of these two contrasts, a conservative test asking whether any region is significant in both comparisons. This test focused on the decision phase, which was separated in time from the motor response (Fig. 1A). We identified a cluster of activation in the SMA and in the caudal portion of dorsal ACC (dACC), on the border of the anterior and posterior rostral cingulate zones (RCZa, RCZp) and area 24 (Neubert et al., 2015) (Fig. 2A; p < 0.05 cluster-level FWE-corrected; dACC peak coordinate: [−6, 11, 34], t(1,40) = 4.02; SMA peak coordinate: [−9, −7, 58], t(1,40) = 5.29). No other regions reached FWE cluster-corrected significance (p < 0.05). Notably, we did not identify any activations in the vmPFC, a region commonly identified in reward-related value computations, even at lenient statistical thresholds (p < 0.01, uncorrected). Replication of this conjunction analysis in FSL, performed at the suggestion of one reviewer, yielded comparable results, with only dACC and SMA reaching cluster-level corrected significance (Fig. 2A, green overlays). The two difference signals for effort and reward are illustrated for the BOLD time series extracted from the dACC cluster in Figure 2B.

Figure 2.

Neural signatures of effort choice comparison in SMA and dACC. A, As a marker of choice computation, we identified regions encoding (1) the difference between the chosen and unchosen reward magnitudes and (2) the inverse difference between the chosen and unchosen effort levels. The conjunction of both contrasts in SPM (shown at p < 0.001 uncorrected) revealed the SMA and a region in the caudal portion of dACC (both FWE-corrected p < 0.05). Cluster-level corrected results obtained from FSL's Flame 1 (z > 2.3, p < 0.05) are overlaid in green to confirm this finding. B, For illustration purposes, the two opposing difference signals are shown for the dACC cluster on the right (standard errors: ± SEM). C, A custom-built sigmoidal model was fitted to participants' choices to obtain individual effort discounting curves (gray; red represents group mean). In the model, the subjective value of an option's reward (y-axis, represented in %) is discounted with increasing effort levels (x-axis). This allowed inferring the subjective values ascribed to choice options and modeling of subjective value in the BOLD data. D, The difference in subjective value between the chosen and unchosen option, as derived from the behavioral effort discounting model in C, was encoded in a similar network of regions as the combined difference in reward magnitude and effort shown in A, including caudal dACC, SMA, bilateral putamen, and insula (shown at p < 0.001 uncorrected as obtained with SPM; cluster-level corrected FSL results overlaid in green for z > 2.7, p < 0.05). E, The subjective value difference signal extracted from the dACC is shown for illustration (standard errors: ± SEM). F, Left, Regions encoding subjective value as in D but where the strength of this signal additionally correlated with the extent to which value difference guided behavior (inverse softmax temperature βV; shown at p < 0.01 uncorrected; only the dACC survives cluster-level FWE-correction at p < 0.05). Right, Illustration of the correlation in dACC for visual display purposes only. The stronger the BOLD difference between the chosen and unchosen option in this region, the more precisely participants' choices are guided by value (βV). This suggests that the dACC's value signal computed at the time of choice is relevant for guiding choices. G, Regions where the encoding of effort difference correlates, across subjects, with a marker for the individual level of effort distortion as captured by the parameters k and p of the modeled discount function. The better an individual's subjectively experienced effort was captured in the GLM (i.e., the less distorted their discount function), the stronger the inverse effort difference signal in caudal dACC and SMA (light blue represents p < 0.001 uncorrected; dark blue represents p < 0.005 uncorrected; dACC/SMA survive cluster-level FWE-correction at p < 0.05). This suggests that dACC and SMA encode effort difference in the way it subjectively influences the choice.

These results raise the question of whether and how effort and reward are combined into an integrated value for each option, a prerequisite for testing whether any brain region encodes the comparison between subjective option values. Although established models exist to examine how participants compute compound values for uncertain/risky rewards (prospect theory) (Kahneman and Tversky, 1979; Tversky and Kahneman, 1992) and delayed rewards (hyperbolic) (Mazur, 1987; Laibson, 1997; Frederick et al., 2002), it remains unclear how efforts and rewards are combined into a subjective measure of value. We performed several behavioral experiments to develop a behavioral model that can formally describe the range of effort discounting behaviors observed in healthy populations (Klein-Flügge et al., 2015). One key feature of this model is that it can accommodate cases when increases in effort at lower effort levels have a comparatively small effect on value, compared with increases in effort at higher effort levels (i.e., concave discounting).

When we fitted this model to the choices recorded during the scanning session, participants' behavior was indeed best captured by an initially concave discounting shape (initially concave in 16 of 21 participants; Fig. 2C), consistent with previous work (Klein-Flügge et al., 2015) and the intuition that effort increases are less noticeable at lower levels of effort compared with higher levels of effort.

Using the individual model fits, we then directly tested for neural signatures consistent with a value comparison between the subjective values of the two choice options. This is a slightly less conservative test than the formal conjunction of effort and reward magnitude difference described above, but we note that this test revealed a highly consistent pattern of results. We found strong evidence for a network consisting of the SMA (peak: [−9, −7, 58], t(1,20) = 8.64), caudal portion of dACC (peak: [−3, 11, 34], t(1,20) = 7.1), and bilateral putamen (several peaks: left [−33, −13, 4], t(1,20) = 4.96 and [−33, −10, −2], t(1,20) = 5.28; right [33, −1, −2], t(1,20) = 4.96) to encode the (positive) difference in subjective value between the chosen and unchosen options (Fig. 2D; all cluster-level FWE-corrected; p < 0.05). Again, comparable results were obtained using FSL (Fig. 2D, green overlays). This network resembled regions previously described for the evaluation of physical effort but was clearly distinct from the neural system associated with decisions about goods involving the vmPFC (Kable and Glimcher, 2007; Boorman et al., 2009; FitzGerald et al., 2009; Philiastides et al., 2010; Hunt et al., 2012; Kolling et al., 2012; Clithero and Rangel, 2014; Strait et al., 2014).

To validate our choice of behavioral discounting model, we performed a formal model comparison and found that the sigmoidal model provided a better explanation of choice behavior than (convex) hyperbolic discounting, previously proposed for effort discounting (Prévost et al., 2010), and two parameter-free descriptions of value “reward minus effort” and “reward divided by effort” (model exceedance probability: xp = 1; mean of posterior distribution: mp_sigm = 0.75; mp_hyp = 0.05; mp_diff = 0.16; mp_div = 0.04; Fig. 3A). On average, the sigmoidal model correctly predicted 88 ± 1% of choices. To examine whether our measure of value derived from the sigmoidal model also best predicted the BOLD signal, we recalculated the value difference contrasts in an analogous way, this time modeling value using a hyperbolic or one of the two parameter-free models. The resulting whole-brain maps similarly highlighted SMA and dACC (surviving cluster-level FWE-corrected, p < 0.05 for the hyperbolic and difference models, not significant for the quotient model; Fig. 3B). But importantly, direct statistical comparison showed that the neural signal in these regions was significantly better explained by the values derived from the sigmoidal model (cluster-level FWE-corrected, p < 0.05 for the difference and quotient models; sigmoidal vs hyperbolic: SMA peak [−3, −7, 61], t(1,19) = 3.95; sigmoidal vs difference: dACC peak [−6, 11, 34], t(1,19) = 3.28; SMA peak [−6, −7, 58], t(1,19) = 5.33; sigmoidal vs quotient: dACC peak [−6, 11, 34], t(1,19) = 4.77; SMA peak [−6, −7, 61], t(1,19) = 6.72; Fig. 3C). This suggests that the BOLD signal aligns with the subjective experience of effort-discounted value, which was best captured using the sigmoidal model.

Figure 3.

Model-derived value describes choice signals more accurately than model-free value. A, Bayesian model comparison for value modeled using the sigmoidal model, hyperbolic model, and two parameter-free descriptions of value: reward minus effort and reward divided by effort. The sigmoidal model captures choice behavior best (standard errors: ± SEM). B, Comparison of the BOLD response to value difference for the behavioral sigmoidal model (red; like Fig. 2D), the hyperbolic model (purple), and the parameter-free descriptions of value (blue: reward − effort; green: reward/effort; all shown at p < 0.001 uncorrected). The three alternative contrasts reveal a similar network, albeit less strongly. C, Crucially, the sigmoidal model provides a significantly better description of the BOLD signal in SMA, extending into caudal dACC, compared with all other models. Purple represents sigmoidal versus hyperbolic. Blue represents sigmoidal versus parameter-free subtraction. Green represents sigmoidal versus parameter-free division (shown at p < 0.001 uncorrected).

A crucial question is whether the observed value difference signal bears any behavioral relevance for choice, rather than potentially being a mere byproduct of a choice computation elsewhere. In the former case, one would expect that the encoding of subjective value difference relates to the strength, or “weight,” with which subjective value difference influenced behavior across participants (Jocham et al., 2012; Kolling et al., 2012; Khamassi et al., 2015). Such a behavioral weight was derived for each participant using a logistic regression on the normalized model-derived subjective values. The resulting parameter estimate is the same as the inverse softmax temperature or precision and reflects how consistently participants choose the subjectively more valuable option (see βV in Eq. 2). The only region that was significant in this second-level test and also encoded value difference at the first level was the dACC (Fig. 2F; cluster-level FWE-corrected, p < 0.05; peak [−3, 11, 31], t(1,19) = 3.71). In other words, dACC encoded value difference on average across the group, and participants who exhibited a larger BOLD value difference signal in the dACC were also more consistent in choosing the subjectively better option (larger βV); this relationship is illustrated in Figure 2F.

To further probe whether the identified network of regions evaluates the choice options in a subjective manner, we examined the relationship, across participants, between the subjective "distortion" of effort described by the parameters k and p of the individual effort discount function and the BOLD signal related to the effort difference. We calculated a measure describing how much the true effort difference deviated, overall across trials, from the subjectively experienced effort difference. This "distortion" regressor correlated with k (r = 0.9646, p < 0.001) and p (r = 0.60, p = 0.0043), but not βE (r = −0.27, p = 0.24), and was used as a second-level covariate for the effort difference contrast. GLM1 contained the efforts shown on the screen and thus should have captured the subjectively experienced effort better in participants with smaller effort distortions (i.e., with discounting closer to linear). Thus, in regions related to the comparison of subjective effort or effort-integrated value, we expected participants with smaller effort distortions to show a stronger negative effort difference signal. Indeed, we found such a positive second-level correlation with the BOLD signal in dACC and SMA, supporting the notion that effort difference is encoded in these regions in the way it subjectively influences the choice (Fig. 2G; cluster-level FWE-corrected, p < 0.05; dACC peak [−3, 14, 31], t(1,19) = 5.01, global maximum; SMA peak [−6, −7, 61], t(1,19) = 4.08).

ACC has access to information from motor structures (Selemon and Goldman-Rakic, 1985; Dum and Strick, 1991; Morecraft and Van Hoesen, 1992, 1998; Kunishio and Haber, 1994; Beckmann et al., 2009) previously linked to evaluating effort (Croxson et al., 2009; Kurniawan et al., 2010, 2013; Burke et al., 2013), and prefrontal regions known to be involved in reward processing, such as the vmPFC and OFC (Padoa-Schioppa and Assad, 2006; Kennerley and Wallis, 2009; Levy and Glimcher, 2011; Rudebeck and Murray, 2011; Klein-Flügge et al., 2013; Chau et al., 2015; Stalnaker et al., 2015). We thus reasoned that the ACC may be a key node for the type of effort-based choice assessed in the present task. To further test this hypothesis, we sought to identify regions that mediate between reward maximization versus effort minimization in our task.

To this end, we first extracted two separate behavioral weights reflecting participants' tendency to seek reward and avoid effort. These behavioral parameters were derived from a logistic regression with two regressors explaining how much choices were guided by the difference in reward magnitude and the difference in effort level between options (βM and βE in Eq. 3). This is distinct from using just one regressor for the combined subjective value difference as done above (βV). Across participants, we then first identified brain regions where the encoding of chosen versus unchosen reward magnitude correlated with the weight, βM, with which choices were influenced by the difference in reward between the chosen and unchosen option. Second, we performed the equivalent test for effort (i.e., we identified regions where the neural encoding of chosen vs unchosen effort correlated with the weight, −βE, with which behavior was guided by the difference in effort between the chosen and unchosen option). The two tests revealed two distinct networks of regions. First, the vmPFC encoded reward magnitude difference across subjects as a function of how much participants' choices were driven by the difference in reward between the options (SVC FWE-corrected cluster-level p = 0.037; peak [−6, 44, −8], t(1,19) = 2.87; Fig. 4A). Unlike in many other tasks (Kable and Glimcher, 2007; Boorman et al., 2009; FitzGerald et al., 2009; Philiastides et al., 2010; Hunt et al., 2012; Kolling et al., 2012; Clithero and Rangel, 2014; Strait et al., 2014), the vmPFC BOLD signal did not correlate with chosen reward or reward difference on average in the group. However, reward difference signals were on average positive for participants whose choices were more strongly driven by reward magnitudes (median split; Fig. 4A). At the whole-brain level, the correlation of the behavioral reward weight, βM, and BOLD reward difference encoding did not reveal any activations using our FWE cluster-level corrected criterion of p < 0.05. Using a lenient exploratory threshold (p = 0.01, uncorrected), we identified a small number of other regions, including the posterior cingulate cortex bilaterally and visual cortex (Fig. 4A), but crucially no clusters in motor, supplementary motor, or striatal regions.

Figure 4.

Distinct circuits bias choices toward reward maximization or effort minimization. A, Regions where the encoding of reward magnitude difference varied as a function of the behavioral weight participants placed on reward (βM; top: shown at p < 0.01 uncorrected). This showed that the BOLD signal in vmPFC (SVC FWE-corrected, p < 0.05) reflected the difference between chosen and unchosen reward more strongly in participants who also placed a stronger weight on maximizing reward (top, bottom left). Although we could not identify an average reward difference coding in vmPFC across the group, the subset of participants who placed a stronger weight on reward (larger βM; median split, ellipse) did encode the difference between the chosen and unchosen reward magnitudes (bottom right). This suggests that vmPFC might bias choices toward reward maximization (standard errors: ± SEM). B, A very distinct network of regions, including the SMA and putamen (both SVC FWE-corrected, p < 0.05), encoded effort difference as a function of participants' behavioral effort weight (βE; shown at p < 0.01 uncorrected). This system was active more strongly in participants who tried to more actively avoid higher efforts and has often been associated with effort evaluation. It might counteract the vmPFC circuit shown in A to achieve effort minimization, which is in constant conflict with reward maximization in our task. Correlation plots (bottom) are only shown for visual illustration of the effects for a priori ROIs; no statistical analyses were performed on these data.

By contrast, a network of motor regions, including SMA and putamen, encoded effort difference as a function of the individual behavioral effort weight −βE (Fig. 4B; SVC FWE-corrected cluster-level SMA: p = 0.048, peak [3, −7, 58], t(1,19) = 2.59; left putamen: p = 0.035, peak [−27, −4, −5], t(1,19) = 3.39; right putamen: no suprathreshold voxels). In other words, these regions encoded the difference in effort between the chosen and unchosen options more strongly in participants whose choices were negatively influenced by large effort differences (i.e., participants who were more sensitive to effort costs). At the whole-brain FWE cluster-level-corrected threshold (p < 0.05), no regions were detected in this contrast. At an exploratory threshold (p = 0.01, uncorrected), this contrast also highlighted regions in the brainstem, primary motor cortex, thalamus, and dorsal striatum (Fig. 4B): regions previously implicated in evaluating motor costs and in recruiting resources in anticipation of effort (Croxson et al., 2009; Burke et al., 2013; Kurniawan et al., 2013), but clearly distinct from the vmPFC/posterior cingulate cortex network identified in the equivalent test for reward above.

Together, our data show that two distinct networks, centered on vmPFC and SMA/putamen respectively, encode the reward and effort differences as a function of how much these variables influence the final choice. Yet only the caudal portion of dACC encodes the difference in overall subjective value as a function of how much overall value influences choice. This region of dACC could therefore mediate between reward maximization and effort minimization, which appear to occur in separate neural circuits.
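The across-participant logic of these analyses, in which we ask whether the strength of a region's difference encoding scales with the corresponding behavioral weight, can be illustrated with a minimal sketch. The data below are synthetic and the simple Pearson correlation stands in for the voxelwise group-level covariate analysis actually used; all variable names are illustrative.

```python
import numpy as np
from scipy import stats

# Synthetic per-subject quantities (illustrative only):
# beta_M   : each participant's behavioral weight on reward
# bold_beta: that participant's first-level GLM coefficient for the
#            (chosen - unchosen) reward difference in an ROI
rng = np.random.default_rng(0)
n_subjects = 20
beta_M = rng.normal(1.0, 0.5, n_subjects)
bold_beta = 0.6 * beta_M + rng.normal(0.0, 0.4, n_subjects)

# The group-level question: does difference encoding scale with the
# behavioral weight across participants?
r, p = stats.pearsonr(beta_M, bold_beta)
print(f"r = {r:.2f}, p = {p:.3f}")
```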

Functionally distinct subregions of medial PFC

For completeness, we also tested whether any areas encode an opposite value difference signal (i.e., the inverse of the conjunction analysis and of the subjective value difference contrast performed above), reflecting the evidence against the chosen option and thus one notion of decision difficulty. This did not reveal any regions at our conservative (cluster-level FWE-corrected) threshold in either test. At a more lenient exploratory threshold (p = 0.01 uncorrected), a single common cluster in medial PFC (pre-SMA/area 9) was identified (Fig. 5), in agreement with previous reports of negative value difference signals in this region (Wunderlich et al., 2009; Hare et al., 2011). Importantly, the location of this activation was clearly distinct from the caudal dACC region found to encode a positive value difference (Fig. 2). Here, by contrast, value difference signals did not correlate with the strength with which subjective value difference influenced behavior across participants (βV; no suprathreshold voxels at p = 0.01 uncorrected), suggesting that this region's functions during choice are separate from those that bias behavior.

Figure 5.

Opposite coding of relative choice value in dorsal medial frontal cortex. A, Regions where the BOLD signal encodes an inverse rather than a positive difference between chosen and unchosen reward magnitudes and a positive rather than an inverse difference between chosen and unchosen effort (i.e., the exact inverse of the conjunction shown in Fig. 2A). The only region detected at a lenient threshold (p = 0.01 uncorrected; no regions survive FWE correction) is a nearby but anatomically distinct region in medial prefrontal cortex previously suggested to serve as a choice comparator (Wunderlich et al., 2009; Hare et al., 2011). B, However, unlike in the caudal portion of dACC (see Fig. 2F), the BOLD signal in this region does not relate to behavior.

Discussion

Choices requiring the consideration of motor costs are ubiquitous in everyday life. Unlike other types of choices, they require knowledge of the current state of the body and its available energy resources in order to weigh physical costs against potential benefits. How this trade-off might be implemented neurally remains largely unknown.

Here, we identified a region in the caudal part of dACC as the key brain region carrying the requisite signatures for effort-based choice: it represented the costs and benefits of the chosen option relative to the alternative, integrated effort and reward into a combined subjective value signal, computed the subjective value difference between the chosen and alternative options, and its activity correlated with the degree to which participants' choices were driven by value.

ACC integrates effort and reward information

Work from several lines of research suggests that ACC may be a key region for performing cost-benefit integration during effort-based choice. For example, lesions to ACC (but not OFC) result in fewer choices of a high-effort/high-reward over a low-effort/low-reward option; yet such animals still choose the larger reward when effort costs for both options are equated, implying that ACC is not essential when decisions can be solved on the basis of reward alone (Walton et al., 2003, 2009; Schweimer and Hauber, 2005; Rudebeck et al., 2006; Floresco and Ghods-Sharifi, 2007). BOLD responses in human ACC reflect the integrated value of effort-based options even in the absence of choice (Croxson et al., 2009). Further, single neurons in ACC encode information about both effort and reward (Shidara and Richmond, 2002; Kennerley et al., 2009, 2011) and integrate costs and benefits into a value signal (Hillman and Bilkey, 2010; Hosokawa et al., 2013; Hunt et al., 2015). ACC thus appears to play a critical role in integrating effort and reward information to derive the subjective value of performing a particular action.

ACC encodes a choice comparison signal

However, from the aforementioned work, it remained unclear whether the cost-benefit values of different choice options are actually compared in ACC, or whether reward and effort are compared in separate neural structures and the competition is resolved between areas. When one choice option is kept constant, the value of the changing option correlates perfectly with the value difference between the options (Kurniawan et al., 2010; Prévost et al., 2010; Bonnelle et al., 2016), which precludes distinguishing between valuation and value comparison processes. The same is true when only one option is offered and accepted or rejected (Bonnelle et al., 2016). Here, we varied both options' values from trial to trial, which allowed us to identify a choice comparison signal in the ACC, and thus the essential neural signature implicating this area in decision making. First, we show that a region in the caudal portion of dACC encodes separate difference signals for effort and reward. The direction of these difference signals aligns with their respective effects on value, with effort decreasing and reward increasing an option's overall value. Second, we demonstrate a comparison signal between integrated option values. We used a novel behavioral model (Klein-Flügge et al., 2015) to characterize each participant's individual tendency to discount reward given the level of motor costs. Using the resultant model-derived subjective values, we identified the dACC as a region encoding a combined value difference signal. Indeed, our model provided a better characterization of the BOLD signal than other models of effort discounting, and dACC activity was related to individuals' "distortions" of effort. This resolves an important question by showing that effort and reward information are indeed brought together within a single region to inform choice.
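The logic of relating trial-by-trial option values to choices can be sketched with a simple softmax choice model fit by maximum likelihood. This is a generic linear-weights stand-in, not the sigmoidal effort-discounting model of Klein-Flügge et al. (2015); the simulated data and all variable names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, M1, M2, E1, E2, chose_1):
    # Each option's value is beta_M * reward - beta_E * effort;
    # choice probabilities follow a softmax on the value difference.
    beta_M, beta_E = params
    dv = beta_M * (M1 - M2) - beta_E * (E1 - E2)
    p1 = 1.0 / (1.0 + np.exp(-dv))          # P(choose option 1)
    p = np.where(chose_1, p1, 1.0 - p1)
    return -np.sum(np.log(np.clip(p, 1e-10, 1.0)))

# Simulated trials: both options' rewards (M) and efforts (E) vary from
# trial to trial, so the value difference is decorrelated from the value
# of either single option.
rng = np.random.default_rng(1)
n = 300
M1, M2 = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
E1, E2 = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
true_bM, true_bE = 4.0, 2.5
dv_true = true_bM * (M1 - M2) - true_bE * (E1 - E2)
chose_1 = rng.random(n) < 1.0 / (1.0 + np.exp(-dv_true))

# Maximum-likelihood fit of the per-participant weights.
fit = minimize(neg_log_likelihood, x0=[1.0, 1.0],
               args=(M1, M2, E1, E2, chose_1), method="Nelder-Mead")
beta_M_hat, beta_E_hat = fit.x
print(f"beta_M ~ {beta_M_hat:.2f}, beta_E ~ {beta_E_hat:.2f}")
```

Fitting such a model per participant yields the individual reward and effort weights (βM, βE) that the across-participant BOLD analyses then use as covariates.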

Finally, this value comparison signal also varied as a function of how much value influenced choices across participants. This result further strengthens the idea that the dACC plays a crucial role in guiding choice, rather than merely representing effort or reward information. In our task, no other region exhibited similar dynamics, even at lenient thresholds.

Influences from “effort” and “reward” circuits

Nevertheless, an important question remains: do the regions that preferentially encode reward or effort have any influence on choice? To examine this, we looked for regions whose activity explained participants' tendency to avoid effort or to seek reward. This analysis revealed two distinct circuits. Whereas signals in vmPFC reflected the relative benefits as a function of how reward-driven participants' choices were, a network more commonly linked to action selection and effort evaluation (Croxson et al., 2009; Kurniawan et al., 2010, 2013; Prévost et al., 2010; Burke et al., 2013; Bonnelle et al., 2016), including SMA and putamen, encoded relative effort as a function of how much participants tried to avoid energy expenditure. It will be of interest for future work to examine how these circuits interact, and how different modulatory systems contribute to this interplay (see, e.g., Varazzani et al., 2015). This question should also be extended to situations in which different costs coincide or different strategies compete (for one recent example, see Burke et al., 2013), or in which information about effort and reward has to be learned (Skvortsova et al., 2014; Scholl et al., 2015).

Converging evidence for multiple decision circuits

Our results contribute to an emerging literature demonstrating multiple decision systems in the brain that are flexibly recruited depending on the type of decision (Rushworth et al., 2012). One well-studied system concerns choices in which costs are directly tied to outcomes (e.g., risk, delay). During this type of choice, vmPFC encodes the difference between the chosen and unchosen options' cost-benefit value (Kable and Glimcher, 2007; Boorman et al., 2009; Philiastides et al., 2010; Hunt et al., 2012; Kolling et al., 2012), consistent with the decision impairments observed after vmPFC lesions (Noonan et al., 2010; Camille et al., 2011a, c). Other types of choices, however, rely on other networks (Kolling et al., 2012; Hunt et al., 2014; Wan et al., 2015). In the present study, decisions required the integration of motor costs, and we show that here dACC, rather than vmPFC, plays the more central role. vmPFC did not encode overall value or the difference in value between the options in our task; in our hands, vmPFC carried no information about effort costs, consistent with previous proposals (Prévost et al., 2010; Skvortsova et al., 2014).

Functionally dissociable anatomical subregions of mPFC

The dACC location identified here is distinct from a more anterior and dorsal region in medial frontal cortex (in or near pre-SMA) where BOLD encodes the opposite signal: a negative value difference (Wunderlich et al., 2009; Hare et al., 2011). It is also more posterior than a dACC region involved in foraging choices (Kolling et al., 2012). The cluster of activation identified here extends from the cingulate gyrus dorsally into the lower bank of the cingulate sulcus, a region sometimes also referred to as midcingulate cortex (MCC) (Procyk et al., 2016) or the rostral cingulate zone (Ridderinkhof et al., 2004). According to a recent connectivity-based parcellation, our activation lies on the border of areas RCZa (34%), RCZp (33%), and area 24 (48%) (Neubert et al., 2015). While it shares some voxels with the human cingulate motor regions (Amiez and Petrides, 2014), most of our cluster is more ventral, in the gyral portion of ACC (for a discussion of functionally dissociable activations in ACC, see also Kolling et al., 2016).

Relevance for disorders of motivation

Our findings in the dACC speak to an important line of research showing deficits in effort-based decision making in a number of disorders, including depression, negative-symptom schizophrenia, and apathy (Levy and Dubois, 2006; Cléry-Melin et al., 2011; Treadway et al., 2012, 2015; Fervaha et al., 2013; Gold et al., 2013; Hartmann et al., 2014; Pizzagalli, 2014; Yang et al., 2014; Bonnelle et al., 2015). Patients with these disorders often show a reduced ability to initiate effortful actions to obtain reward. Crucially, they also exhibit abnormalities in ACC and basal ganglia circuits, as well as in other regions processing information about autonomic state, including the amygdala and some brainstem structures (Drevets et al., 1997; Botteron et al., 2002; Levy and Dubois, 2006). Furthermore, individuals with greater behavioral apathy scores show enhanced recruitment of precisely the circuits implicated in the present study, including SMA and cingulate cortex, when deciding to initiate effortful behavior (Bonnelle et al., 2016). This is interesting because apathy correlates with increased effort sensitivity (βE) (Bonnelle et al., 2016), and we found that individuals with increased effort sensitivity showed enhanced recruitment of SMA and brainstem regions for encoding the effort difference (Fig. 4B). In other words, when committing to a larger (relative) effort, these circuits were more active in people who were more sensitive to effort. As discussed by Bonnelle et al. (2016), we cannot infer cause and effect, but it is possible that the neural balance between activations in reward and effort systems differs in individuals with greater sensitivity to effort (such as apathetic individuals). This may explain why these individuals avoid effortful options more often than others. It also provides a possible link between this network's specific role in effort-based choice and its functional contribution to everyday behavior.

Footnotes

  • This work was supported by Wellcome Trust 086120/Z/08/Z to M.C.K.-F., 096689/Z/11/Z to S.W.K., and 088130/Z/09/Z and 091593/Z/10/Z to K.F., and European Research Council ActSelectContext 260424 to S.B. We thank Tim Behrens, Laurence Hunt, Matthew Rushworth, Marco Wittmann, and all our laboratory members for helpful discussions on the data; and the imaging and IT teams at the FIL and Sobell Department for their support with data acquisition and handling.

  • The authors declare no competing financial interests.

  • This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 International license, which permits unrestricted use, distribution, and reproduction in any medium provided that the original work is properly attributed.

  • Correspondence should be addressed to Dr. Miriam C. Klein-Flügge, Department of Experimental Psychology, University of Oxford, 9 South Parks Road, Oxford OX1 3UD, UK. miriam.klein-flugge{at}psy.ox.ac.uk

This article is freely available online through the J Neurosci Author Open Choice option.

References

  1. Amiez C, Petrides M (2014) Neuroimaging evidence of the anatomo-functional organization of the human cingulate motor areas. Cereb Cortex 24:563–578, doi:10.1093/cercor/bhs329, pmid:23131805.
  2. Andersson JL, Hutton C, Ashburner J, Turner R, Friston K (2001) Modeling geometric deformations in EPI time series. Neuroimage 13:903–919, doi:10.1006/nimg.2001.0746, pmid:11304086.
  3. Beckmann M, Johansen-Berg H, Rushworth MF (2009) Connectivity-based parcellation of human cingulate cortex and its relation to functional specialization. J Neurosci 29:1175–1190, doi:10.1523/JNEUROSCI.3328-08.2009, pmid:19176826.
  4. Birn RM, Diamond JB, Smith MA, Bandettini PA (2006) Separating respiratory-variation-related fluctuations from neuronal-activity-related fluctuations in fMRI. Neuroimage 31:1536–1548, doi:10.1016/j.neuroimage.2006.02.048, pmid:16632379.
  5. Bonnelle V, Veromann KR, Burnett Heyes S, Lo Sterzo E, Manohar S, Husain M (2015) Characterization of reward and effort mechanisms in apathy. J Physiol Paris 109:16–26, doi:10.1016/j.jphysparis.2014.04.002, pmid:24747776.
  6. Bonnelle V, Manohar S, Behrens T, Husain M (2016) Individual differences in premotor brain systems underlie behavioral apathy. Cereb Cortex 26:807–819, doi:10.1093/cercor/bhv247, pmid:26564255.
  7. Boorman ED, Behrens TE, Woolrich MW, Rushworth MF (2009) How green is the grass on the other side? Frontopolar cortex and the evidence in favor of alternative courses of action. Neuron 62:733–743, doi:10.1016/j.neuron.2009.05.014, pmid:19524531.
  8. Botteron KN, Raichle ME, Drevets WC, Heath AC, Todd RD (2002) Volumetric reduction in left subgenual prefrontal cortex in early onset depression. Biol Psychiatry 51:342–344, doi:10.1016/S0006-3223(01)01280-X, pmid:11958786.
  9. Burke CJ, Brünger C, Kahnt T, Park SQ, Tobler PN (2013) Neural integration of risk and effort costs by the frontal pole: only upon request. J Neurosci 33:1706–1713, doi:10.1523/JNEUROSCI.3662-12.2013, pmid:23345243.
  10. Camille N, Griffiths CA, Vo K, Fellows LK, Kable JW (2011a) Ventromedial frontal lobe damage disrupts value maximization in humans. J Neurosci 31:7527–7532, doi:10.1523/JNEUROSCI.6527-10.2011, pmid:21593337.
  11. Camille N, Tsuchida A, Fellows LK (2011b) Double dissociation of stimulus-value and action-value learning in humans with orbitofrontal or anterior cingulate cortex damage. J Neurosci 31:15048–15052, doi:10.1523/JNEUROSCI.3164-11.2011, pmid:22016538.
  12. Chau BK, Sallet J, Papageorgiou GK, Noonan MP, Bell AH, Walton ME, Rushworth MF (2015) Contrasting roles for orbitofrontal cortex and amygdala in credit assignment and learning in macaques. Neuron 87:1106–1118, doi:10.1016/j.neuron.2015.08.018, pmid:26335649.
  13. Cléry-Melin ML, Schmidt L, Lafargue G, Baup N, Fossati P, Pessiglione M (2011) Why don't you try harder? An investigation of effort production in major depression. PLoS One 6:e23178, doi:10.1371/journal.pone.0023178, pmid:21853083.
  14. Clithero JA, Rangel A (2014) Informatic parcellation of the network involved in the computation of subjective value. Soc Cogn Affect Neurosci 9:1289–1302, doi:10.1093/scan/nst106, pmid:23887811.
  15. Croxson PL, Walton ME, O'Reilly JX, Behrens TE, Rushworth MF (2009) Effort-based cost-benefit valuation and the human brain. J Neurosci 29:4531–4541, doi:10.1523/JNEUROSCI.4515-08.2009, pmid:19357278.
  16. Drevets WC, Price JL, Simpson JR Jr, Todd RD, Reich T, Vannier M, Raichle ME (1997) Subgenual prefrontal cortex abnormalities in mood disorders. Nature 386:824–827, doi:10.1038/386824a0, pmid:9126739.
  17. Dum RP, Strick PL (1991) The origin of corticospinal projections from the premotor areas in the frontal lobe. J Neurosci 11:667–689, pmid:1705965.
  18. Eklund A, Nichols T, Knutsson H (2015) Can parametric statistical methods be trusted for fMRI based group studies? arXiv:1511.01863. Available at: http://arxiv.org/abs/1511.01863. Accessed May 26, 2016.
  19. Fervaha G, Foussias G, Agid O, Remington G (2013) Neural substrates underlying effort computation in schizophrenia. Neurosci Biobehav Rev 37:2649–2665, doi:10.1016/j.neubiorev.2013.09.001, pmid:24035741.
  20. FitzGerald TH, Seymour B, Dolan RJ (2009) The role of human orbitofrontal cortex in value comparison for incommensurable objects. J Neurosci 29:8388–8395, doi:10.1523/JNEUROSCI.0717-09.2009, pmid:19571129.
  21. Floresco SB, Ghods-Sharifi S (2007) Amygdala-prefrontal cortical circuitry regulates effort-based decision making. Cereb Cortex 17:251–260, doi:10.1093/cercor/bhj143, pmid:16495432.
  22. Frederick S, Loewenstein G, O'Donoghue T (2002) Time discounting and time preference: a critical review. J Econ Lit 40:351–401, doi:10.1257/jel.40.2.351.
  23. Friston K, Mattout J, Trujillo-Barreto N, Ashburner J, Penny W (2007) Variational free energy and the Laplace approximation. Neuroimage 34:220–234, doi:10.1016/j.neuroimage.2006.08.035, pmid:17055746.
  24. Glover GH, Li TQ, Ress D (2000) Image-based method for retrospective correction of physiological motion effects in fMRI: RETROICOR. Magn Reson Med 44:162–167, doi:10.1002/1522-2594(200007)44:1<162::AID-MRM23>3.0.CO;2-E, pmid:10893535.
  25. Gold JM, Strauss GP, Waltz JA, Robinson BM, Brown JK, Frank MJ (2013) Negative symptoms of schizophrenia are associated with abnormal effort-cost computations. Biol Psychiatry 74:130–136, doi:10.1016/j.biopsych.2012.12.022, pmid:23394903.
  26. Hare TA, Schultz W, Camerer CF, O'Doherty JP, Rangel A (2011) Transformation of stimulus value signals into motor commands during simple choice. Proc Natl Acad Sci U S A 108:18120–18125, doi:10.1073/pnas.1109322108, pmid:22006321.
  27. Hartmann MN, Hager OM, Reimann AV, Chumbley JR, Kirschner M, Seifritz E, Tobler PN, Kaiser S (2014) Apathy but not diminished expression in schizophrenia is associated with discounting of monetary rewards by physical effort. Schizophr Bull 41:503–512, doi:10.1093/schbul/sbu102, pmid:25053653.
  28. Hayden BY, Platt ML (2010) Neurons in anterior cingulate cortex multiplex information about reward and action. J Neurosci 30:3339–3346, doi:10.1523/JNEUROSCI.4874-09.2010, pmid:20203193.
  29. Hillman KL, Bilkey DK (2010) Neurons in the rat anterior cingulate cortex dynamically encode cost-benefit in a spatial decision-making task. J Neurosci 30:7705–7713, doi:10.1523/JNEUROSCI.1273-10.2010, pmid:20519545.
  30. Hosokawa T, Kennerley SW, Sloan J, Wallis JD (2013) Single-neuron mechanisms underlying cost-benefit analysis in frontal cortex. J Neurosci 33:17385–17397, doi:10.1523/JNEUROSCI.2221-13.2013, pmid:24174671.
  31. Hunt LT, Kolling N, Soltani A, Woolrich MW, Rushworth MF, Behrens TE (2012) Mechanisms underlying cortical activity during value-guided choice. Nat Neurosci 15:470–476, doi:10.1038/nn.3017, pmid:22231429.
  32. Hunt LT, Dolan RJ, Behrens TE (2014) Hierarchical competitions subserving multi-attribute choice. Nat Neurosci 17:1613–1622, doi:10.1038/nn.3836, pmid:25306549.
  33. Hunt LT, Behrens TE, Hosokawa T, Wallis JD, Kennerley SW (2015) Capturing the temporal evolution of choice across prefrontal cortex. eLife e11945.
  34. Hutton C, Josephs O, Stadler J, Featherstone E, Reid A, Speck O, Bernarding J, Weiskopf N (2011) The impact of physiological noise correction on fMRI at 7T. Neuroimage 57:101–112, doi:10.1016/j.neuroimage.2011.04.018, pmid:21515386.
  35. Jocham G, Hunt LT, Near J, Behrens TE (2012) A mechanism for value-guided choice based on the excitation-inhibition balance in prefrontal cortex. Nat Neurosci 15:960–961, doi:10.1038/nn.3140, pmid:22706268.
  36. Kable JW, Glimcher PW (2007) The neural correlates of subjective value during intertemporal choice. Nat Neurosci 10:1625–1633, doi:10.1038/nn2007, pmid:17982449.
  37. Kahneman D, Tversky A (1979) Prospect theory: analysis of decision under risk. Econometrica 47:263–291, doi:10.2307/1914185.
  38. Kennerley SW, Wallis JD (2009) Evaluating choices by single neurons in the frontal lobe: outcome value encoded across multiple decision variables. Eur J Neurosci 29:2061–2073, doi:10.1111/j.1460-9568.2009.06743.x, pmid:19453638.
  39. Kennerley SW, Walton ME, Behrens TE, Buckley MJ, Rushworth MF (2006) Optimal decision making and the anterior cingulate cortex. Nat Neurosci 9:940–947, doi:10.1038/nn1724, pmid:16783368.
  40. Kennerley SW, Dahmubed AF, Lara AH, Wallis JD (2009) Neurons in the frontal lobe encode the value of multiple decision variables. J Cogn Neurosci 21:1162–1178, doi:10.1162/jocn.2009.21100, pmid:18752411.
  41. Kennerley SW, Behrens TE, Wallis JD (2011) Double dissociation of value computations in orbitofrontal and anterior cingulate neurons. Nat Neurosci 14:1581–1589, doi:10.1038/nn.2961, pmid:22037498.
  42. Khamassi M, Quilodran R, Enel P, Dominey PF, Procyk E (2015) Behavioral regulation and the modulation of information coding in the lateral prefrontal and cingulate cortex. Cereb Cortex 25:3197–3218, doi:10.1093/cercor/bhu114, pmid:24904073.
  43. Klein-Flügge MC, Barron HC, Brodersen KH, Dolan RJ, Behrens TE (2013) Segregated encoding of reward-identity and stimulus-reward associations in human orbitofrontal cortex. J Neurosci 33:3202–3211, doi:10.1523/JNEUROSCI.2532-12.2013, pmid:23407973.
  44. Klein-Flügge MC, Kennerley SW, Saraiva AC, Penny WD, Bestmann S (2015) Behavioral modeling of human choices reveals dissociable effects of physical effort and temporal delay on reward devaluation. PLoS Comput Biol 11:e1004116, doi:10.1371/journal.pcbi.1004116, pmid:25816114.
  45. Kolling N, Behrens TE, Mars RB, Rushworth MF (2012) Neural mechanisms of foraging. Science 336:95–98, doi:10.1126/science.1216930, pmid:22491854.
  46. Kolling N, Behrens T, Wittmann MK, Rushworth M (2016) Multiple signals in anterior cingulate cortex. Curr Opin Neurobiol 37:36–43, doi:10.1016/j.conb.2015.12.007, pmid:26774693.
  47. Kunishio K, Haber SN (1994) Primate cingulostriatal projection: limbic striatal versus sensorimotor striatal input. J Comp Neurol 350:337–356, doi:10.1002/cne.903500302, pmid:7533796.
  48. Kurniawan IT, Seymour B, Talmi D, Yoshida W, Chater N, Dolan RJ (2010) Choosing to make an effort: the role of striatum in signaling physical effort of a chosen action. J Neurophysiol 104:313–321, doi:10.1152/jn.00027.2010, pmid:20463204.
  49. Kurniawan IT, Guitart-Masip M, Dayan P, Dolan RJ (2013) Effort and valuation in the brain: the effects of anticipation and execution. J Neurosci 33:6160–6169, doi:10.1523/JNEUROSCI.4777-12.2013, pmid:23554497.
  50. Laibson D (1997) Golden eggs and hyperbolic discounting. Q J Econ 112:443–478, doi:10.1162/003355397555253.
  51. Levy DJ, Glimcher PW (2011) Comparing apples and oranges: using reward-specific and reward-general subjective value representation in the brain. J Neurosci 31:14693–14707, doi:10.1523/JNEUROSCI.2218-11.2011, pmid:21994386.
  52. Levy R, Dubois B (2006) Apathy and the functional anatomy of the prefrontal cortex-basal ganglia circuits. Cereb Cortex 16:916–928, doi:10.1093/cercor/bhj043, pmid:16207933.
  53. Luk CH, Wallis JD (2009) Dynamic encoding of responses and outcomes by neurons in medial prefrontal cortex. J Neurosci 29:7526–7539, doi:10.1523/JNEUROSCI.0386-09.2009, pmid:19515921.
  54. Matsumoto K, Suzuki W, Tanaka K (2003) Neuronal correlates of goal-based motor selection in the prefrontal cortex. Science 301:229–232, doi:10.1126/science.1084204, pmid:12855813.
  55. Mazur J (1987) An adjusting procedure for studying delayed reinforcement. In: Quantitative analyses of behavior, Vol 5: The effect of delay and of intervening events on reinforcement value (Commons M, Mazur J, Nevin J, Rachlin H, eds). Hillsdale, NJ: Erlbaum.
  56. Morecraft RJ, Van Hoesen GW (1992) Cingulate input to the primary and supplementary motor cortices in the rhesus monkey: evidence for somatotopy in areas 24c and 23c. J Comp Neurol 322:471–489, doi:10.1002/cne.903220403, pmid:1383283.
  57. Morecraft RJ, Van Hoesen GW (1998) Convergence of limbic input to the cingulate motor cortex in the rhesus monkey. Brain Res Bull 45:209–232, doi:10.1016/S0361-9230(97)00344-4, pmid:9443842.
  58. Neubert FX, Mars RB, Sallet J, Rushworth MF (2015) Connectivity reveals relationship of brain areas for reward-guided learning and decision making in human and monkey frontal cortex. Proc Natl Acad Sci U S A 112:E2695–E2704, doi:10.1073/pnas.1410767112, pmid:25947150.
  59. Noonan MP, Walton ME, Behrens TE, Sallet J, Buckley MJ, Rushworth MF (2010) Separate value comparison and learning mechanisms in macaque medial and lateral orbitofrontal cortex. Proc Natl Acad Sci U S A 107:20547–20552, doi:10.1073/pnas.1012246107, pmid:21059901.
  60. Padoa-Schioppa C, Assad JA (2006) Neurons in the orbitofrontal cortex encode economic value. Nature 441:223–226, doi:10.1038/nature04676, pmid:16633341.
  61. Pastor-Bernier A, Cisek P (2011) Neural correlates of biased competition in premotor cortex. J Neurosci 31:7083–7088, doi:10.1523/JNEUROSCI.5681-10.2011, pmid:21562270.
  62. Penny W, Kiebel S, Friston K (2003) Variational Bayesian inference for fMRI time series. Neuroimage 19:727–741, doi:10.1016/S1053-8119(03)00071-5, pmid:12880802.
  63. Philiastides MG, Biele G, Heekeren HR (2010) A mechanistic account of value computation in the human brain. Proc Natl Acad Sci U S A 107:9430–9435, doi:10.1073/pnas.1001732107, pmid:20439711.
  64. Pizzagalli DA (2014) Depression, stress, and anhedonia: toward a synthesis and integrated model. Annu Rev Clin Psychol 10:393–423, doi:10.1146/annurev-clinpsy-050212-185606, pmid:24471371.
  65. Prévost C, Pessiglione M, Météreau E, Cléry-Melin ML, Dreher JC (2010) Separate valuation subsystems for delay and effort decision costs. J Neurosci 30:14080–14090, doi:10.1523/JNEUROSCI.2752-10.2010, pmid:20962229.
  66. Procyk E, Wilson CR, Stoll FM, Faraut MC, Petrides M, Amiez C (2016) Midcingulate motor map and feedback detection: converging data from humans and monkeys. Cereb Cortex 26:467–476, doi:10.1093/cercor/bhu213, pmid:25217467.
  67. Rangel A, Hare T (2010) Neural computations associated with goal-directed choice. Curr Opin Neurobiol 20:262–270, doi:10.1016/j.conb.2010.03.001, pmid:20338744.
  68. Ridderinkhof KR, Ullsperger M, Crone EA, Nieuwenhuis S (2004) The role of the medial frontal cortex in cognitive control. Science 306:443–447, doi:10.1126/science.1100301, pmid:15486290.
  69. Rudebeck PH, Murray EA (2011) Balkanizing the primate orbitofrontal cortex: distinct subregions for comparing and contrasting values. Ann N Y Acad Sci 1239:1–13, doi:10.1111/j.1749-6632.2011.06267.x, pmid:22145870.
  70. Rudebeck PH, Walton ME, Smyth AN, Bannerman DM, Rushworth MF (2006) Separate neural pathways process different decision costs. Nat Neurosci 9:1161–1168, doi:10.1038/nn1756, pmid:16921368.
  71. Rudebeck PH, Behrens TE, Kennerley SW, Baxter MG, Buckley MJ, Walton ME, Rushworth MF (2008) Frontal cortex subregions play distinct roles in choices between actions and stimuli. J Neurosci 28:13775–13785, doi:10.1523/JNEUROSCI.3541-08.2008, pmid:19091968.
  72.
    1. Rushworth MF,
    2. Kolling N,
    3. Sallet J,
    4. Mars RB
    (2012) Curr Opin Neurobiol, Valuation and decision-making in frontal cortex: one or many serial or parallel systems? Available at: http://www.ncbi.nlm.nih.gov/pubmed/22572389. Accessed May 27, 2012.
  73. ↵
    1. Scholl J,
    2. Kolling N,
    3. Nelissen N,
    4. Wittmann MK,
    5. Harmer CJ,
    6. Rushworth MF
    (2015) The good, the bad, and the irrelevant: neural mechanisms of learning real and hypothetical rewards and effort. J Neurosci 35:11233–11251, doi:10.1523/JNEUROSCI.0396-15.2015, pmid:26269633.
    OpenUrlAbstract/FREE Full Text
  74. ↵
    1. Schweimer J,
    2. Hauber W
    (2005) Involvement of the rat anterior cingulate cortex in control of instrumental responses guided by reward expectancy. Learn Mem 12:334–342, doi:10.1101/lm.90605, pmid:15930509.
    OpenUrlAbstract/FREE Full Text
  75. ↵
    1. Selemon LD,
    2. Goldman-Rakic PS
    (1985) Longitudinal topography and interdigitation of corticostriatal projections in the rhesus monkey. J Neurosci 5:776–794, pmid:2983048.
    OpenUrlAbstract
  76. ↵
    1. Shidara M,
    2. Richmond BJ
    (2002) Anterior cingulate: single neuronal signals related to degree of reward expectancy. Science 296:1709–1711, doi:10.1126/science.1069504, pmid:12040201.
    OpenUrlAbstract/FREE Full Text
  77. ↵
    1. Skvortsova V,
    2. Palminteri S,
    3. Pessiglione M
    (2014) Learning to minimize efforts versus maximizing rewards: computational principles and neural correlates. J Neurosci 34:15621–15630, doi:10.1523/JNEUROSCI.1350-14.2014, pmid:25411490.
    OpenUrlAbstract/FREE Full Text
  78. ↵
    1. Stalnaker TA,
    2. Cooch NK,
    3. Schoenbaum G
    (2015) What the orbitofrontal cortex does not do. Nat Neurosci 18:620–627, doi:10.1038/nn.3982, pmid:25919962.
    OpenUrlCrossRefPubMed
  79. ↵
    1. Strait CE,
    2. Blanchard TC,
    3. Hayden BY
    (2014) Reward value comparison via mutual inhibition in ventromedial prefrontal cortex. Neuron 82:1357–1366, doi:10.1016/j.neuron.2014.04.032, pmid:24881835.
    OpenUrlCrossRefPubMed
  80. ↵
    1. Treadway MT,
    2. Bossaller NA,
    3. Shelton RC,
    4. Zald DH
    (2012) Effort-based decision-making in major depressive disorder: a translational model of motivational anhedonia. J Abnorm Psychol 121:553–558, doi:10.1037/a0028813, pmid:22775583.
    OpenUrlCrossRefPubMed
  81. ↵
    1. Treadway MT,
    2. Peterman JS,
    3. Zald DH,
    4. Park S
    (2015) Impaired effort allocation in patients with schizophrenia. Schizophr Res 161:382–385, doi:10.1016/j.schres.2014.11.024, pmid:25487699.
    OpenUrlCrossRefPubMed
  82. ↵
    1. Tversky A,
    2. Kahneman D
    (1992) Advances in prospect theory: cumulative representation of uncertainty. J Risk Uncertain 5:297–323, doi:10.1007/BF00122574.
    OpenUrlCrossRef
  83. ↵
    1. Varazzani C,
    2. San-Galli A,
    3. Gilardeau S,
    4. Bouret S
    (2015) Noradrenaline and dopamine neurons in the reward/effort trade-off: a direct electrophysiological comparison in behaving monkeys. J Neurosci 35:7866–7877, doi:10.1523/JNEUROSCI.0454-15.2015, pmid:25995472.
    OpenUrlAbstract/FREE Full Text
  84. ↵
    1. Walton ME,
    2. Bannerman DM,
    3. Alterescu K,
    4. Rushworth MF
    (2003) Functional specialization within medial frontal cortex of the anterior cingulate for evaluating effort-related decisions. J Neurosci 23:6475–6479, pmid:12878688.
    OpenUrlAbstract/FREE Full Text
  85. ↵
    1. Walton ME,
    2. Kennerley SW,
    3. Bannerman DM,
    4. Phillips PE,
    5. Rushworth MF
    (2006) Weighing up the benefits of work: behavioral and neural analyses of effort-related decision making. Neural Netw 19:1302–1314, doi:10.1016/j.neunet.2006.03.005, pmid:16949252.
    OpenUrlCrossRefPubMed
  86. ↵
    1. Walton ME,
    2. Groves J,
    3. Jennings KA,
    4. Croxson PL,
    5. Sharp T,
    6. Rushworth MF,
    7. Bannerman DM
    (2009) Comparing the role of the anterior cingulate cortex and 6-hydroxydopamine nucleus accumbens lesions on operant effort-based decision making. Eur J Neurosci 29:1678–1691, doi:10.1111/j.1460-9568.2009.06726.x, pmid:19385990.
    OpenUrlCrossRefPubMed
  87. ↵
    1. Wan X,
    2. Cheng K,
    3. Tanaka K
    (2015) Neural encoding of opposing strategy values in anterior and posterior cingulate cortex. Nat Neurosci 18:752–759, doi:10.1038/nn.3999, pmid:25894290.
    OpenUrlCrossRefPubMed
  88. ↵
    1. Wang XJ
    (2002) Probabilistic decision making by slow reverberation in cortical circuits. Neuron 36:955–968, doi:10.1016/S0896-6273(02)01092-9, pmid:12467598.
    OpenUrlCrossRefPubMed
  89. ↵
    1. Ward NS,
    2. Frackowiak RS
    (2003) Age-related changes in the neural correlates of motor performance. Brain 126:873–888, doi:10.1093/brain/awg071, pmid:12615645.
    OpenUrlAbstract/FREE Full Text
  90. ↵
    1. Weiskopf N,
    2. Hutton C,
    3. Josephs O,
    4. Deichmann R
    (2006) Optimal EPI parameters for reduction of susceptibility-induced BOLD sensitivity losses: a whole-brain analysis at 3 T and 1.5 T. Neuroimage 33:493–504, doi:10.1016/j.neuroimage.2006.07.029, pmid:16959495.
    OpenUrlCrossRefPubMed
  91. ↵
    1. Wunderlich K,
    2. Rangel A,
    3. O'Doherty JP
    (2009) Neural computations underlying action-based decision making in the human brain. Proc Natl Acad Sci U S A 106:17199–17204, doi:10.1073/pnas.0901077106, pmid:19805082.
    OpenUrlAbstract/FREE Full Text
  92. ↵
    1. Yang XH,
    2. Huang J,
    3. Zhu CY,
    4. Wang YF,
    5. Cheung EF,
    6. Chan RC,
    7. Xie GR
    (2014) Motivational deficits in effort-based decision making in individuals with subsyndromal depression, first-episode and remitted depression patients. Psychiatry Res 220:874–882, doi:10.1016/j.psychres.2014.08.056, pmid:25262638.
    OpenUrlCrossRefPubMed
Keywords

  • cingulate cortex
  • cost-benefit decision making
  • fMRI
  • motor cost
  • physical effort
  • value comparison
Copyright © 2021 by the Society for Neuroscience.
JNeurosci Online ISSN: 1529-2401