Research Articles, Behavioral/Cognitive

Effort Reinforces Learning

Huw Jarvis (1,2), Isabelle Stevenson (1,2), Amy Q. Huynh (1,2), Emily Babbage (1,2), James Coxon (1,2) and Trevor T.-J. Chong (1,2,3,4)
Journal of Neuroscience 5 October 2022, 42 (40) 7648-7658; DOI: https://doi.org/10.1523/JNEUROSCI.2223-21.2022
1 Turner Institute for Brain and Mental Health, Monash University, Clayton, Victoria 3800, Australia
2 School of Psychological Sciences, Monash University, Clayton, Victoria 3800, Australia
3 Department of Neurology, Alfred Health, Melbourne, Victoria 3004, Australia
4 Department of Clinical Neurosciences, St Vincent's Hospital, Melbourne, Victoria 3065, Australia

Abstract

Humans routinely learn the value of actions by updating their expectations based on past outcomes – a process driven by reward prediction errors (RPEs). Importantly, however, implementing a course of action also requires the investment of effort. Recent work has revealed a close link between the neural signals involved in effort exertion and those underpinning reward-based learning, but the behavioral relationship between these two functions remains unclear. Across two experiments, we tested healthy male and female human participants (N = 140) on a reinforcement learning task in which they registered their responses by applying physical force to a pair of hand-held dynamometers. We examined the effect of effort on learning by systematically manipulating the amount of force required to register a response during the task. Our key finding, replicated across both experiments, was that greater effort increased learning rates following positive outcomes and decreased them following negative outcomes, which corresponded to a differential effect of effort in boosting positive RPEs and blunting negative RPEs. Interestingly, this effect was most pronounced in individuals who were more averse to effort in the first place, raising the possibility that the investment of effort may have an adaptive effect on learning in those less motivated to exert it. By integrating principles of reinforcement learning with neuroeconomic approaches to value-based decision-making, we show that the very act of investing effort modulates one's capacity to learn, and demonstrate how these functions may operate within a common computational framework.

SIGNIFICANCE STATEMENT Recent work suggests that learning and effort may share common neurophysiological substrates. This raises the possibility that the very act of investing effort influences learning. Here, we tested whether effort modulates teaching signals in a reinforcement learning paradigm. Our results showed that effort resulted in more efficient learning from positive outcomes and less efficient learning from negative outcomes. Interestingly, this effect varied across individuals, and was more pronounced in those who were more averse to investing effort in the first place. These data highlight the importance of motivational factors in a common framework of reward-based learning, which integrates the computational principles of reinforcement learning with those of value-based decision-making.

  • effort
  • learning
  • motivation
  • reinforcement
  • reward
  • reward prediction error

Introduction

Choosing what to do next involves weighing up the value of possible actions based on their expected outcomes (Schall, 2001; Samejima et al., 2005; Rangel et al., 2008; Bartra et al., 2013). According to reinforcement learning theory, the value of an action increases following outcomes that are better than expected, and decreases following those that are worse than expected (Sutton and Barto, 1998; Niv, 2009). Critically, acting on our choices also requires the exertion of effort. However, despite substantial evidence that effort modulates the value of reward, current theories of reinforcement learning do not account for the role of effort in learning.

Recent data suggest that dopamine is fundamental to driving both effort-based decisions and reward-based learning (Berke, 2018). Striatal dopamine plays a critical role in decisions to invest effort; the exertion of effort itself; and the evaluation of choice outcomes (Wise, 2004; Salamone and Correa, 2012). Notably, dopamine also encodes reward prediction errors (RPEs), teaching signals that represent the difference between predicted and actual reward (Montague et al., 1996; Schultz et al., 1997; Watabe-Uchida et al., 2017). Emerging evidence suggests that the dopaminergic signals involved in learning and effort overlap during choice behavior (Berke, 2018). For example, transient fluctuations in dopamine activity, classically recognized as RPEs, have been found to play a key role in shaping the vigor of movement (Hamid et al., 2016; Howe and Dombeck, 2016; Coddington and Dudman, 2018; da Silva et al., 2018; Hughes et al., 2020). Moreover, a recent behavioral finding in humans suggests RPEs themselves may invigorate responses (Jarvis, 2019; Sedaghat-Nejad et al., 2019).

Behavioral data indicate that effort modulates reward valuation both before and after choices are made. When prospectively evaluating a course of action, humans and other animals typically choose the least effortful option to achieve an outcome (Hull, 1943; Bitgood and Dukes, 2006; Kool et al., 2010; Cos et al., 2011; Ranganathan et al., 2013; Shadmehr et al., 2016). For example, suppose we wished to appreciate sweeping views from atop a mountain – most of us would prefer to take the chairlift rather than physically climb to the peak. The aversiveness of effort has been quantified by a large body of experimental work across species showing that effort reduces the subjective value of a prospective reward, a phenomenon known as “effort discounting” (Aberman and Salamone, 1999; Walton et al., 2006; Botvinick et al., 2009; Kurniawan et al., 2010; Prévost et al., 2010; Chong et al., 2015, 2017; McGuigan et al., 2019; Atkins et al., 2020).

In contrast, when an action is evaluated retrospectively, effort tends to inflate the value of a realized reward (Aronson and Mills, 1959; Alessandri et al., 2008; L. Wang et al., 2017). For example, animals tend to prefer outcomes that have previously been associated with more effortful behavior (Klein et al., 2005; Singer et al., 2007; Tsukamoto et al., 2017). Returning to our earlier example, these frameworks predict that the views atop the mountain would be more rewarding if we chose to climb to the peak rather than take the chairlift. In humans, this phenomenon is often interpreted in the context of “cognitive dissonance,” whereby the application of effort is thought to be justified retrospectively by inflating the value of its outcome (Festinger, 1957).

Taken together, these separate lines of research have shown that effort can modulate both the predicted value of a future reward, as well as the observed value of a realized reward, the very two signals that define the RPE in classical reinforcement learning models. This suggests that effort has the potential to modulate learning itself (Tanaka et al., 2021). Importantly, however, this prediction has not been empirically tested in humans. Here, across two experiments, we investigated how effort modulates learning by requiring participants to exert predefined levels of physical force in a reinforcement learning paradigm. Given the close relationship between effort and reward signals in the brain (Salamone and Correa, 2002; Guitart-Masip et al., 2014; Berke, 2018; Tanaka et al., 2021), we predicted that learning would be contingent on RPEs shaped by both reward feedback and the amount of effort exerted. Indeed, in light of recent suggestions that effort and reward signals operate within a common computational framework (Berke, 2018; Sedaghat-Nejad et al., 2019; Jenkins and Walton, 2020), we hypothesized that related computations of effort costs might discount value before choice, and modulate learning after choice.

Materials and Methods

Participants

Participants were young, healthy male and female adults recruited and tested at Monash University in Melbourne, Australia. They reported normal or corrected-to-normal vision, no history of neurologic or psychiatric disorder, and no musculoskeletal injuries to the upper limbs. We tested 94 participants in experiment 1 (77 female; mean age 20 years) and 46 in experiment 2 (28 female; mean age 21.9 years). The experimental protocols were approved by Monash University's Human Research Ethics Committee, and informed consent was obtained from all participants before testing.

Experimental design and statistical analyses

At the core of both experiments was a standard reinforcement learning paradigm. The critical difference from previous reinforcement learning studies is that participants were required to apply prespecified levels of physical force to a pair of hand-held dynamometers to register their responses. We examined the effect of effort on learning by systematically manipulating the amount of force they needed to exert. The primary distinction between experiments 1 and 2 was whether these prespecified force levels were manipulated across separate experimental blocks (experiment 1) or within individual blocks (experiment 2).

In both experiments, participants were presented on each trial with a pair of abstract stimuli on the left and right of the screen, and were required to select whichever they believed to be more rewarding, based on probabilistic feedback delivered after each trial. The probability of obtaining a reward on selecting a stimulus was 0.7 for a highly rewarded stimulus and 0.3 for a poorly rewarded stimulus. These contingencies periodically reversed over the course of the experiments, and the reversals were not signaled. We randomized the left/right location of the two stimuli on each trial. On rewarded trials, a “smiley face” appeared, accompanied by a positively-valenced auditory tone; on nonrewarded trials, a “sad face” appeared with a negatively-valenced tone. Participants had a maximum of 2 s to register a response on each trial, otherwise a “Too slow!” message was displayed for 1 s and the next trial began. Participants were incentivized by the opportunity to increase their remuneration based on their performance.

Participants registered their choices by applying a prespecified level of physical force to the corresponding left/right dynamometer (SS25LA, BIOPAC Systems). Force levels were standardized for each participant before testing by measuring their maximum voluntary contraction (MVC), which was defined for each hand as the maximum force generated from three ballistic contractions. Both experiments were run in Psychtoolbox (Brainard, 1997) implemented in MATLAB (R2018a, MathWorks), and presented on a monitor at a viewing distance of ∼60 cm.

Experiment 1

Participants completed two counterbalanced blocks of 180 trials. In a “control” block, participants only needed to apply a negligible amount of force (>5% MVC) to register their choices. In a separate “effort” block, choices required a greater amount of force. The precise amount of force required in this “effort” block was systematically varied across three experimental groups (>18%, >31%, or >44% MVC in separate groups of n = 32, 31, and 31, respectively). Block order (“control” vs “effort”) was counterbalanced across participants, and each block was preceded by 15 practice trials to familiarize participants with the force requirements. On any given trial, one of the two stimuli had a 0.7 probability of being rewarded, and the other a 0.3 probability. These reward contingencies reversed when participants reached a cumulative accuracy of 70% (after a minimum of 10 trials), or else after every 16 trials (Park et al., 2010; Schlagenhauf et al., 2013).
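For concreteness, this reversal schedule can be sketched in a few lines. The Python sketch below is our own illustration, not the authors' Psychtoolbox/MATLAB code; it reads “cumulative accuracy” as accuracy over trials since the last reversal, and substitutes a random agent for the participant:

```python
import numpy as np

rng = np.random.default_rng(0)
p_reward = np.array([0.7, 0.3])   # reward probabilities for the two stimuli
correct = []                      # True if the currently better stimulus was chosen
trials_since_reversal = 0

for t in range(180):
    choice = rng.integers(2)                     # random stand-in for a participant
    rewarded = rng.random() < p_reward[choice]   # probabilistic binary feedback
    correct.append(choice == np.argmax(p_reward))
    trials_since_reversal += 1

    # Reverse contingencies at >=70% accuracy (after a minimum of 10 trials),
    # or unconditionally after 16 trials since the last reversal.
    accuracy = np.mean(correct[-trials_since_reversal:])
    if (trials_since_reversal >= 10 and accuracy >= 0.7) or trials_since_reversal >= 16:
        p_reward = p_reward[::-1]                # unsignaled reversal
        trials_since_reversal = 0
```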

Experiment 2

Experiment 2 was similar to experiment 1 except that effort requirements were manipulated within an experimental block, rather than across blocks. The same two stimuli were presented across all trials in experiment 2. One of these stimuli was designated the “low effort” stimulus, and required only a negligible amount of force to be selected (>5% MVC). The other was designated the “high effort” stimulus, which required a higher amount of force (>44% MVC). These stimulus-effort mappings remained constant for the duration of the experiment, and participants were explicitly informed about the identity of the low and high effort stimuli before the experimental task began.

To reinforce these stimulus-effort mappings, participants performed a preliminary block of 50 trials in which they were cued to generate the force corresponding to either the low or high effort stimulus (randomly determined). Participants then received binary feedback (correct vs incorrect) about whether they generated the correct amount of force (5–44% of MVC for the low effort stimulus, or >44% MVC for the high effort stimulus). To proceed to the experimental block, participants had to apply the correct force on at least 20 of the final 25 trials. All participants achieved this on their first attempt, with the exception of one participant, who achieved this on their second attempt.

The experimental task comprised two blocks of 150 trials, preceded by a practice block of 50 trials. As in experiment 1, participants were required to choose between the two stimuli on offer; the key difference was that, here, every trial also required them to weigh the effort needed to select each stimulus. Although the effort required to select a given stimulus was fixed, the value of each stimulus varied periodically. On any given trial, both stimuli could have a high probability of being rewarded (0.7); both could have a low probability of reward (0.3); or one could be superior to the other (0.7 vs 0.3). These contingencies changed after every 12 trials according to a pseudorandomized sequence that ensured the number of transitions and trials involving each combination of reward contingencies was the same for all participants (Manohar et al., 2021).

Data analysis

In both experiments, we first tested whether effort affected learning by examining its effect on win-stay and lose-switch behavior. In experiment 1, we fitted a generalized linear model (GLM) to test whether the probability of choosing the same stimulus after a positive outcome varied as a function of the amount of Effort, E, exerted on trial t:

$$P(\text{stay}_{t+1} \mid \text{win}_t) = \beta_0 + \beta_1 E_t + \beta_2 \text{Block} + \beta_3 (\text{Block} \times E_t).$$

Effort in these models was defined as the peak force amplitude on each trial (as a proportion of the participant's MVC). We included Block as a dummy variable, as well as the Block × Effort interaction. Participants were modeled as a random effect. Separately, we examined the effect of the exerted Effort on the probability of choosing the alternative stimulus after a negative outcome:

$$P(\text{switch}_{t+1} \mid \text{loss}_t) = \beta_0 + \beta_1 E_t + \beta_2 \text{Block} + \beta_3 (\text{Block} \times E_t).$$

We analyzed the effect of Effort on behavior in experiment 2 using similar GLMs, but without the effect of Block (given that the effort manipulation was within blocks):

$$P(\text{stay}_{t+1} \mid \text{win}_t) = \beta_0 + \beta_1 E_t$$

$$P(\text{switch}_{t+1} \mid \text{loss}_t) = \beta_0 + \beta_1 E_t.$$

We excluded one participant from experiment 1 whose relative accuracy was >3 SDs above the group mean. Statistical analyses were performed in R using RStudio (version 1.1.447; RStudio Team, 2016). GLMs were fitted using the lme4 package (Bates et al., 2015), and plots were created using the ggplot2 package (Wickham, 2016).
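To illustrate the structure of these analyses, the sketch below fits a simplified, fixed-effects analogue of the win-stay GLM on toy data. Note that the reported analysis used lme4 in R with participants as a random effect; the data frame, generative weights, and column names here are invented for the demonstration:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy data: one row per rewarded trial t, with 'stay' indicating whether the
# same stimulus was chosen again on trial t + 1.
rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "effort": rng.uniform(0.05, 0.44, n),  # peak force as a proportion of MVC
    "block": rng.integers(0, 2, n),        # dummy variable: 0 = control, 1 = effort
})
linpred = 0.5 + 1.0 * df["effort"]         # arbitrary generative weights for the demo
df["stay"] = (rng.random(n) < 1 / (1 + np.exp(-linpred))).astype(float)

# Logistic GLM with the Block x Effort interaction, as in the equations above.
fit = smf.logit("stay ~ effort * block", data=df).fit(disp=0)
print(fit.params)
```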

Computational modeling

To formally test how effort modulates learning on a trial-by-trial basis, we considered a family of reinforcement learning models based on the traditional Rescorla–Wagner model (Rescorla and Wagner, 1972). At the core of the Rescorla–Wagner model is the RPE, δ, which updates the expected value v of a stimulus on trial t. δ is the difference between the reward acquired, r, and the reward expected based on the current stimulus value, v, and is scaled by a learning rate parameter α:

$$v_{t+1} = v_t + \alpha \cdot \delta_t$$

$$\delta_t = r_t - v_t.$$

Our primary goal was to examine the effect of effort on modulating RPEs. To do so, we first constrained α between 0 and 1 by defining it as a sigmoidal function of a subject-specific signal gain parameter γ:

$$\text{M1:}\quad \alpha_t = \frac{1}{1 + e^{-\gamma}}.$$
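A minimal sketch of this baseline model (M1), with illustrative parameter values of our choosing:

```python
import numpy as np

def baseline_learning_rate(gamma):
    """M1: the learning rate is a sigmoid of the signal gain parameter gamma."""
    return 1.0 / (1.0 + np.exp(-gamma))

def rescorla_wagner_update(v, r, alpha):
    """Move the value v toward the reward r by the RPE, scaled by alpha."""
    delta = r - v                  # reward prediction error
    return v + alpha * delta, delta

alpha = baseline_learning_rate(gamma=0.0)           # gamma = 0 gives alpha = 0.5
v, delta = rescorla_wagner_update(v=0.5, r=1.0, alpha=alpha)
print(v, delta)                                     # 0.75, 0.5
```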

We then compared this baseline model (M1) against six alternative models that hypothesized distinct effects of effort on γ (Fig. 1).

Figure 1.

Candidate computational models of how effort modulates learning. A, Schematic diagrams of positive (above the dotted line) and negative (below the dotted line) RPE signals. Black lines depict RPEs unaffected by effort. Orange lines show how effort alters RPEs according to each respective model. B, At the core of each computational model is a modified Rescorla–Wagner model in which the learning rate α is a sigmoidal function of a subject-specific signal gain parameter γ. We compared this baseline model with alternative models that hypothesized distinct effects of trial-by-trial effort E(t) on signal gain γ, scaled by a subject-specific effort parameter k. C, We tested the identifiability of our learning models by simulating a reinforcement learning paradigm with effortful responses. We performed 50 simulations, yielding a model recovery accuracy ≥0.88 for all models.

Signal shift model (M2)

Previous studies suggest that the mere act of investing effort increases the overall value of acquired rewards (Klein et al., 2005; Alessandri et al., 2008; Syed et al., 2016; L. Wang et al., 2017). This work suggests that effort should have a directional (positive) effect on RPEs, such that it should increase the amplitude of RPEs that are positive, and blunt those that are negative. We tested this prediction in a signal shift model (M2), which examined the effect of effort E on the learning rate α. This model assumed a linear effect of effort on the signal gain parameter γ:

$$\text{M2:}\quad \alpha_t = \begin{cases} \dfrac{1}{1 + e^{-(\gamma + k \cdot E_t)}}, & \text{if } r_t = 1 \\[1.5ex] \dfrac{1}{1 + e^{-(\gamma - k \cdot E_t)}}, & \text{if } r_t = 0. \end{cases}$$

E is the peak amplitude of force applied on each trial (as a proportion of MVC), and k is a subject-specific parameter that captures individuals' sensitivity to effort. We let k take positive or negative values, such that k>0 denotes that effort shifts RPEs in a positive direction as predicted (i.e., boosts positive and blunts negative RPEs) and k<0 denotes the opposite (i.e., boosts negative and blunts positive RPEs).

Signal enhancement model (M3)

An alternative possibility is that effort amplifies RPEs in response to both positive and negative rewards (i.e., regardless of the valence of the outcome). Normative accounts of choice behavior describe effort as the cost of investing limited energy into one action at the expense of other candidate actions: an “opportunity cost” (Niv et al., 2007; Shadmehr et al., 2019). Effort exertion would potentially offset such costs if it enhanced learning rates independent of the outcome. We tested this in a signal enhancement model (M3) by again estimating the learning rate α as a function of both the subject-specific signal gain γ and the amount of effort exerted E. However, in this model, effort modulates signal gain symmetrically, such that k > 0 denotes that effort boosts RPEs (i.e., increases the amplitude of both positive and negative RPEs), and k < 0 denotes that effort blunts RPEs (i.e., decreases the amplitude of both positive and negative RPEs):

$$\text{M3:}\quad \alpha_t = \frac{1}{1 + e^{-(\gamma + k \cdot E_t)}}.$$

Models incorporating separate learning rates for positive and negative RPEs (M4–M7)

M2 and M3 both assume that the magnitude of any effort-related effect on signal gain γ is equivalent for positive and negative RPEs. However, several studies have suggested that the neural processes underpinning positive and negative RPEs may be at least partially dissociable (Bayer and Glimcher, 2005; Cools et al., 2008; Matsumoto and Hikosaka, 2009; Collins and Frank, 2014; Westbrook et al., 2020). Consequently, we fit a further family of models that aimed to decouple the effect of effort on different reward outcomes.

First, we fit two additional models to test whether effort modulates the learning rate only in response to positive RPEs (positive RPE model, M4), or only in response to negative RPEs (negative RPE model, M5); learning on trials of the other valence follows the effort-independent baseline of M1:

$$\text{M4:}\quad \alpha_t = \frac{1}{1 + e^{-(\gamma + k \cdot E_t)}}, \quad \text{if } r_t = 1$$

$$\text{M5:}\quad \alpha_t = \frac{1}{1 + e^{-(\gamma + k \cdot E_t)}}, \quad \text{if } r_t = 0.$$

In addition, to examine whether effort had a weighted effect on learning as a function of RPE sign, we also fit a more complex model with separate effort parameters for positive (kpos) and negative (kneg) RPEs [dual learning rate (effort) model, M6]:

$$\text{M6:}\quad \alpha_t = \begin{cases} \dfrac{1}{1 + e^{-(\gamma + k_{pos} \cdot E_t)}}, & \text{if } r_t = 1 \\[1.5ex] \dfrac{1}{1 + e^{-(\gamma + k_{neg} \cdot E_t)}}, & \text{if } r_t = 0. \end{cases}$$

Finally, we included a model that assumes asymmetrical learning from positive and negative RPEs that is entirely independent of effort exertion [dual learning rate (no effort) model, M7]. This model includes separate parameters for positive (γpos) and negative (γneg) RPEs, in the absence of an effort (k) parameter:

$$\text{M7:}\quad \alpha_t = \begin{cases} \dfrac{1}{1 + e^{-\gamma_{pos}}}, & \text{if } r_t = 1 \\[1.5ex] \dfrac{1}{1 + e^{-\gamma_{neg}}}, & \text{if } r_t = 0. \end{cases}$$
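To make the model space concrete, the sketch below restates the learning-rate rules of M1–M7 as a single Python function. This is our own summary of the equations above, not the authors' MATLAB implementation, and the keyword defaults are placeholders:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def learning_rate(model, r, E, gamma=0.0, k=0.0, k_pos=0.0, k_neg=0.0,
                  gamma_pos=0.0, gamma_neg=0.0):
    """Trial-wise learning rate alpha under each candidate model.
    r is the reward outcome (1 or 0); E is peak force as a proportion of MVC."""
    if model == "M1":  # baseline: effort-independent
        return sigmoid(gamma)
    if model == "M2":  # signal shift: effort pushes RPEs in a positive direction
        return sigmoid(gamma + k * E) if r == 1 else sigmoid(gamma - k * E)
    if model == "M3":  # signal enhancement: effort scales both RPE signs alike
        return sigmoid(gamma + k * E)
    if model == "M4":  # effort modulates positive RPEs only
        return sigmoid(gamma + k * E) if r == 1 else sigmoid(gamma)
    if model == "M5":  # effort modulates negative RPEs only
        return sigmoid(gamma) if r == 1 else sigmoid(gamma + k * E)
    if model == "M6":  # separate effort weights for each RPE sign
        return sigmoid(gamma + k_pos * E) if r == 1 else sigmoid(gamma + k_neg * E)
    if model == "M7":  # dual learning rates, no effort term
        return sigmoid(gamma_pos) if r == 1 else sigmoid(gamma_neg)
    raise ValueError(f"unknown model: {model}")
```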

For all models, we used a softmax function to calculate choice probabilities, whereby the probability P of choosing a given stimulus sc on each trial t was estimated based on its value v, relative to that of the nonchosen stimulus sn. An inverse temperature parameter β was included to capture individual differences in choice stochasticity (Luce, 1959):

$$P_t(s_c) = \frac{e^{\beta v_t(s_c)}}{e^{\beta v_t(s_c)} + e^{\beta v_t(s_n)}}.$$
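For two options, this softmax reduces to a logistic function of the value difference; the short sketch below is our formulation, numerically equivalent to the equation above:

```python
import numpy as np

def choice_probability(v_chosen, v_unchosen, beta):
    """Softmax probability of choosing the stimulus with value v_chosen.
    Written as a logistic of the value difference for numerical stability."""
    return 1.0 / (1.0 + np.exp(-beta * (v_chosen - v_unchosen)))

print(choice_probability(v_chosen=0.7, v_unchosen=0.3, beta=3.0))  # ~0.77
```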

We constrained the signal gain parameter γ to the range [−5, 5], which allowed for effectively the full range of learning rates (0.01 ≤ α ≤ 0.99) while preventing extreme values close to the asymptotes. We used flat priors for all parameters. The best-fitting model parameters for each participant were found using maximum likelihood estimation, and we compared overall model fits based on the Akaike Information Criterion (AIC; Akaike, 1974). We also quantified the relative likelihood that the winning model best accounted for choice behavior compared with the others in the model space (i.e., Akaike weights):

$$w_i = \frac{e^{-0.5 \cdot \Delta_i(\text{AIC})}}{\sum_{m=1}^{M} e^{-0.5 \cdot \Delta_m(\text{AIC})}},$$

where wi is the Akaike weight of model i; Δi(AIC) is the difference in AIC between model i and the winning model; and M is the number of models in the space. Computational modeling was performed in MATLAB (R2018a, MathWorks).
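Akaike weights follow mechanically from a vector of AIC scores, as in this minimal sketch (the AIC values are made up for illustration):

```python
import numpy as np

def akaike_weights(aic):
    """Relative likelihood of each model given its AIC."""
    delta = np.asarray(aic) - np.min(aic)  # difference from the winning model
    w = np.exp(-0.5 * delta)
    return w / w.sum()

print(akaike_weights([1000.0, 1223.3, 1025.4]))  # the winning model dominates
```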

To ensure that each of these models was uniquely identifiable, we performed a model recovery analysis based on synthetic data. We simulated behavior on a reinforcement learning task in which agents registered their choices by exerting effort. On each simulation, we randomly sampled subject-specific parameter values from a plausible range, as well as each agent's effort on each trial. We performed 50 simulations, each of which generated data from 100 learning agents making 200 choices each. This analysis revealed that our procedure was able to correctly identify the true generative model with an accuracy of ≥0.88 for each model (Fig. 1C).
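The recovery logic compresses into a short, self-contained example pitting just the baseline (M1) and signal shift (M2) models against each other; the reported analysis spanned all seven models, with 100 agents making 200 choices per simulation. All generative parameter values below are arbitrary choices for the demonstration:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def neg_log_lik(params, choices, rewards, efforts, shift):
    """Negative log-likelihood of a choice sequence under M1 (shift=False)
    or the signal shift model M2 (shift=True)."""
    gamma, beta = params[0], params[1]
    k = params[2] if shift else 0.0
    v = np.zeros(2)
    nll = 0.0
    for c, r, E in zip(choices, rewards, efforts):
        p = sigmoid(beta * (v[c] - v[1 - c]))     # softmax choice probability
        nll -= np.log(max(p, 1e-12))
        sign = 1.0 if r == 1 else -1.0
        alpha = sigmoid(gamma + sign * k * E)     # k = 0 recovers the baseline
        v[c] += alpha * (r - v[c])                # Rescorla-Wagner update
    return nll

# Simulate one agent from the signal shift model (arbitrary "true" parameters).
true_gamma, true_beta, true_k = 0.5, 3.0, 1.5
p_rew, v = np.array([0.7, 0.3]), np.zeros(2)
choices, rewards, efforts = [], [], []
for t in range(200):
    E = rng.uniform(0.05, 0.44)                   # trial-by-trial effort
    p0 = sigmoid(true_beta * (v[0] - v[1]))
    c = 0 if rng.random() < p0 else 1
    r = 1.0 if rng.random() < p_rew[c] else 0.0
    sign = 1.0 if r == 1 else -1.0
    v[c] += sigmoid(true_gamma + sign * true_k * E) * (r - v[c])
    choices.append(c); rewards.append(r); efforts.append(E)

# Fit both models by maximum likelihood and compare AIC = 2*n_params + 2*NLL.
fit_m1 = minimize(neg_log_lik, [0.0, 1.0], args=(choices, rewards, efforts, False))
fit_m2 = minimize(neg_log_lik, [0.0, 1.0, 0.0], args=(choices, rewards, efforts, True))
aic_m1 = 2 * 2 + 2 * fit_m1.fun
aic_m2 = 2 * 3 + 2 * fit_m2.fun
print("recovered model:", "signal shift" if aic_m2 < aic_m1 else "baseline")
```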

Finally, we tested the reliability of the parameter estimates from the winning model in both experiments by performing a parameter recovery analysis. We generated data from the best-fitting parameter estimates for each participant, and sampled trial-by-trial effort exertion from a Gamma distribution fit using maximum likelihood estimation to the observed distribution of peak amplitudes for each subject. We then repeated our model-fitting procedure on these synthetic data and quantified the reliability of the parameter estimates as the rank-order correlation between the generative “true” value and the recovered value.
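A sketch of the two main ingredients of this procedure, Gamma-distributed effort resampling and rank-order reliability, with toy numbers standing in for the refitting step:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Observed per-trial peak force amplitudes for one subject (toy data).
observed_effort = rng.gamma(shape=8.0, scale=0.04, size=300)

# Fit a Gamma distribution by maximum likelihood (location pinned at zero),
# then sample synthetic trial-by-trial effort from it for data generation.
shape, loc, scale = stats.gamma.fit(observed_effort, floc=0)
synthetic_effort = stats.gamma.rvs(shape, loc=loc, scale=scale,
                                   size=300, random_state=rng)

# Reliability of a parameter = Spearman rank correlation between generative
# ("true") values and those recovered by refitting the synthetic data.
true_k = rng.normal(0.8, 0.3, size=46)
recovered_k = true_k + rng.normal(0.0, 0.25, size=46)  # stand-in for refit output
rho, p = stats.spearmanr(true_k, recovered_k)
print(rho, p)
```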

Results

Experiment 1

Effort increased the tendency to repeat rewarded choices

The aim of experiment 1 was to examine the effect of effort on learning, independent of any individual preferences to exert effort itself. First, we compared the effect of effort on choice accuracy across the three (low, medium and high) effort groups. For each participant, this effect was computed as the proportion of trials in which the more rewarded stimulus was selected in the effort block relative to the control block. A one-way ANOVA revealed a significant effect of effort (F(1,91) = 6.22, p = 0.014; Fig. 2C), which was primarily driven by greater relative accuracy in the high vs the low effort group (p = 0.045, Bonferroni-corrected).

Figure 2.

Experiment 1 behavioral results. A, Participants made a series of choices between two stimuli by applying physical force to a pair of hand-held dynamometers. Following each choice, the chosen stimulus was displayed along with a probabilistic reward outcome (smiley or sad face). Each participant completed one effort block and one control block. B, Mean peak amplitudes in control blocks (gray) and effort blocks (blue). The effort required to register choices (dotted lines) was negligible in the control block (5% MVC), and higher in the effort block (18%, 31%, or 44% MVC for separate low, medium, and high effort groups, respectively). C, Raincloud plots of accuracy in the effort block relative to the control block for low, medium, and high effort groups. Effort group was a significant predictor of relative accuracy (p = 0.014). D, Model estimates of trial-by-trial win-stay (left) and lose-switch (right) probabilities with 95% confidence intervals (shaded area). Effort increased the tendency for participants to choose the same stimulus again following a reward (p = 0.021), and reduced the tendency to switch to the alternative stimulus following no reward (p = 0.014); *p < 0.05.

We investigated whether this effect on accuracy could be explained by differences in choice strategy. A GLM examining win-stay behavior found that trial-by-trial Effort was associated with a higher probability of choosing the same stimulus following a positive reward outcome (β = 1.08, SE = 0.47, p = 0.021; Fig. 2D). Neither the simple effect of Block nor the Block × Effort interaction was a significant predictor (both p > 0.22). An analogous GLM on lose-switch behavior showed that effort was associated with a lower probability of switching following a negative reward outcome (β = −0.84, SE = 0.34, p = 0.014; Fig. 2D). Again, neither Block nor the Block × Effort interaction was a significant predictor (both p > 0.11).

Together, these analyses suggest that effort may have a directional effect on reward signals during learning, boosting positive outcomes (promoting win-stay behavior), and blunting negative outcomes (reducing lose-switch behavior).

Computational models demonstrated that effort boosted positive and blunted negative RPEs

To test our key prediction that effort modulates learning, we compared a baseline Rescorla–Wagner model against alternative models that hypothesized distinct effects of effort on RPEs (Fig. 1). Our model comparisons revealed that the signal shift model provided the best fit to the observed data, with an AIC 223.32 units better than the baseline model (Fig. 3A), and an Akaike weight in excess of 0.99 across the model space. Critically, k values derived from the model were significantly greater than zero (0.76 ± 0.28; Wilcoxon signed-rank test, W = 2964, p = 0.006; Fig. 3B), demonstrating that the overall effect of effort was to boost positive RPEs, and blunt negative RPEs. These results provide a computational explanation for our earlier finding that greater effort was associated with more prominent win-stay behavior and less prominent lose-switch behavior. Post hoc comparisons revealed that this effect was driven by the medium effort group (1.48 ± 0.44, W = 408, p = 0.005, Bonferroni-corrected), rather than the low or high effort groups (both p > 0.16). A parameter recovery analysis confirmed that all parameters were reliably recoverable (parameter estimation reliability, γ = 0.86, β = 0.94, k = 0.67; p < 0.001 for all parameters).

Figure 3.

Experiment 1 computational modeling results. A, The signal shift model (orange) provided the most parsimonious account of the observed choice data based on AIC scores. B, Group average k values (mean ± SE) derived from the winning signal shift model. k values were significantly greater than 0 (p = 0.006), demonstrating that effort tended to boost positive and blunt negative RPEs. This effect was driven by the medium effort group (p = 0.005). C, RPEs from a representative participant (#74). Relative to baseline (black bars), effort boosted positive RPEs (top) and blunted negative RPEs (bottom). D, Choice probability averaged across all participants on each trial. The signal shift model (orange) was able to predict the observed choice data (black; SE shaded); **p < 0.01.

Notably, the winning signal shift model provided a superior fit compared with the dual learning rate (no effort) model (ΔAIC = 25.39), indicating that effort was critical in increasing learning rates for positive RPEs, and reducing learning rates for negative RPEs. To further confirm that this result was not simply because of a differential effect of positive and negative RPEs on value-updating independent of effort, we ran a post hoc permutation test in which we randomly shuffled the effort exerted across trials for each participant. On each permutation, we compared model fits for the signal shift model based on the empirical data against the permuted data. Across 100 permutations, the empirical data resulted in superior model fits relative to the permuted data in every case (proportion = 1.0; Z = 57.14, p < 0.001), confirming that effort played a critical role in modulating learning rate in response to positive and negative RPEs.
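The permutation logic can be sketched as a continuation of the model recovery example above (which defines neg_log_lik and the simulated choices, rewards, and efforts); this is our illustration of the procedure, not the authors' code:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

def signal_shift_nll(effort_seq):
    """Best negative log-likelihood of the signal shift model for a given
    trial-by-trial effort sequence (neg_log_lik defined in the sketch above)."""
    return minimize(neg_log_lik, [0.0, 1.0, 0.0],
                    args=(choices, rewards, list(effort_seq), True)).fun

empirical = signal_shift_nll(efforts)
permuted = [signal_shift_nll(rng.permutation(efforts)) for _ in range(100)]

# Proportion of permutations on which the intact effort sequence fits better
# (lower NLL); in the reported data, the empirical fit won on every permutation.
print(np.mean(empirical < np.array(permuted)))
```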

In summary, experiment 1 revealed that effort reinforced learning by shifting the RPEs in a positive direction. Notably, on every trial of this experiment, the effort required to select either stimulus was identical. This allowed us to focus solely on how effort affects the capacity to learn from choice outcomes, independent of an individual's aversion toward effort. An important question that remains is how learning is modulated by the willingness of an individual to invest effort in the first place. We addressed this question in a second experiment that was similar to the first, but in which the two stimuli presented on every trial were associated with different effort requirements. This allowed us to test the capacity of the winning signal shift model to account for effort on both the prospective and retrospective valuation of reward.

Experiment 2

Experiment 2 was similar to experiment 1, with the key difference that the two stimuli presented on each trial required different amounts of effort to select (“low,” >5% MVC, vs “high,” >44% MVC; Fig. 4A,B). For each participant, these mappings remained constant for the duration of the task, but their reward contingencies systematically changed over time. Thus, decisions on each trial required individuals to consider both the effort costs of each stimulus (known in advance) and its potential rewards (learned during the task).

Figure 4.

Experiment 2 behavioral results. A, Participants made a series of choices between two stimuli by applying physical force to a pair of hand-held dynamometers. One stimulus required negligible effort to select (low effort stimulus; 5% MVC) and the other required greater effort to select (high effort stimulus; 44% MVC). Participants therefore had to balance an aversion to the high effort stimulus against their desire to maximize reward. B, Mean peak amplitudes for the low effort (light blue) and high effort (dark blue) stimulus. C, Participants chose the low effort stimulus more often than the high effort stimulus (p = 0.003). D, There was no difference in choice accuracy between the low and high effort stimulus (p = 0.34). E, Model estimates of trial-by-trial win-stay (left) and lose-switch (right) probabilities with 95% confidence intervals (shaded area). Greater effort led to reduced win-stay behavior (p < 0.001) and increased lose-switch behavior (p < 0.001), reflecting an aversion to selecting the high effort stimulus; **p < 0.01. n.s. = Not significant.

Choices between low and high effort stimuli demonstrated effort aversion

Participants displayed an overall aversion to effort, shown by a higher proportion of choices for the low compared with the high effort stimulus (0.53 ± 0.01, d = 0.45, t(45) = 3.1, p = 0.003; Fig. 4C). This did not affect overall choice accuracy (mean difference = 0.01 ± 0.01, d = 0.13, t(45) = 0.97, p = 0.34, Fig. 4D). Greater effort was associated with reduced win-stay behavior (β = −0.88, SE = 0.17, p < 0.001; Fig. 4E), and increased lose-switch behavior (β = 0.46, SE = 0.1, p < 0.001; Fig. 4E). Note that this contrasts with experiment 1, in which effort increased win-stay behavior and reduced lose-switch behavior. This is unsurprising, given that the aversiveness of the high effort stimulus (Fig. 4C) was likely to have obscured any subtler effects of effort on the RPE in a GLM. Therefore, to test whether these two possible effects of effort could be dissociated, we used a similar computational approach to experiment 1.

Computational models indicated that effort discounted value and shifted RPEs in a positive direction

The key difference between experiments 1 and 2 was that participants now had to balance an aversion to the high effort stimulus against their desire to maximize reward. To capture individuals' aversion to effort, we paired the models from experiment 1 (M2–M7) with an effort discounting function, such that the discounted value of each stimulus (v′) was computed as its learned value (v), discounted by the amount of effort required to select it (E), where E = 0.05 for the low effort stimulus and E = 0.44 for the high effort stimulus. Effort was scaled by a separate subject-specific effort discounting parameter (ked), which indexed each individual's aversion to effort:

$$v'_t = v_t - k_{ed} \cdot E_t.$$
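The discounting step itself is a one-liner; a small worked example using the group-average ked reported below (0.32):

```python
def discounted_value(v, E, k_ed):
    """Prospective value: learned value v minus a linear effort cost."""
    return v - k_ed * E

# With equal learned values, the high effort stimulus is worth less at choice:
print(discounted_value(v=0.5, E=0.05, k_ed=0.32))  # low effort:  0.484
print(discounted_value(v=0.5, E=0.44, k_ed=0.32))  # high effort: 0.3592
```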

As in experiment 1, learning rate varied as a function of trial-by-trial effort exertion, scaled by a free parameter capturing the effect of effort on RPEs (krpe). Our model space in experiment 2 also included a baseline model (M1), as well as an effort discounting model (M8) that included an effort discounting function in the absence of any effect of effort on learning.

Model comparisons replicated our earlier findings: the best-fitting model comprised the same signal shift model as in experiment 1, now paired with an effort discounting function to account for choices between the low and high effort stimulus on each trial. This model had an AIC 156.02 units better than the baseline model, and an Akaike weight in excess of 0.99 across the model space (Fig. 5A). Across the group, ked values were significantly greater than zero, demonstrating that effort discounted value before choices were made (0.32 ± 0.05; W = 1000, p < 0.001; Fig. 5B). ked values also correlated with choice bias toward the low effort stimulus (Spearman's ρ = 0.36, p = 0.015; Fig. 5C), confirming that this parameter successfully captured observed effort aversion. In addition, krpe values across the group were significantly greater than 0, showing that effort boosted positive and blunted negative RPEs after choices were made, consistent with experiment 1 (0.63 ± 0.32; W = 833, p = 0.001; Fig. 5B).

Figure 5.

Experiment 2 computational modeling results. A, The effort discounting + signal shift model (orange) provided the most parsimonious account of the observed choice data based on AIC scores. B, Group average ked and krpe values (mean ± SE) derived from the winning model. Parameter values were significantly greater than 0 (both p ≤ 0.001), demonstrating that effort discounted value before choice, while boosting positive and blunting negative RPEs after choice. C, The winning model captured observed effort aversion, demonstrated by a correlation between ked values and choice bias (ρ = 0.36, p = 0.015). D, ked and krpe values were strongly correlated (ρ = 0.70, p < 0.001), suggesting that effort modulated learning to a greater extent in those participants who were more averse to exerting it. E, Effort discounting from a representative participant (#2). Learned value (v) was discounted (v') by the amount of effort required to select each stimulus, scaled by an effort discounting parameter (ked). F, RPEs from a representative participant (#2). Relative to baseline (black bars), effort boosted positive RPEs (top) and blunted negative RPEs (bottom). G, Average choice probability across all participants on each trial. The effort discounting + signal shift model (orange) was able to predict the observed choice data (black; SE shaded). *p < 0.05, **p < 0.01.

A parameter recovery analysis confirmed that all parameter estimates from the winning signal shift model were reliably recoverable (parameter estimation reliability, γ = 0.85, β = 0.91, ked = 0.62, krpe = 0.60, p < 0.001 for all parameters).

Again, we note that the winning signal shift model provided a superior fit to the dual learning rate (no effort) model (ΔAIC = 12.53), confirming that effort played a significant role in modulating RPEs. To verify that the signal shift model was not merely approximating dual learning rates independent of effort requirements, we ran the same post hoc permutation test as in experiment 1, involving 100 permutations with effort data shuffled across trials for each participant. On every one of these permutations, the empirical data provided superior fits to the permuted data (proportion = 1.0; Z = 26.3, p < 0.001), again pointing to a critical role for effort in increasing learning rates for positive RPEs, and reducing learning rates for negative RPEs.

The effect of effort on RPEs was greater in individuals who were more averse to effort

These results demonstrate that an individual's sensitivity to effort results in both a greater aversion to effort (i.e., greater effort discounting, ked), and a greater positive shift in the direction of RPEs (krpe). An obvious question is whether these two effort parameters are related. Indeed, we found a strong correlation between ked and krpe values derived from the winning model (Spearman's ρ = 0.7, p < 0.001; Fig. 5D). This indicates that, across our group of participants, the effect of effort on RPEs was directly related to the degree to which individuals found effort aversive.

Discussion

Recent data suggest that the neural signals mediating effort exertion overlap with those that convey reward value (Hamid et al., 2016; Syed et al., 2016; Sedaghat-Nejad et al., 2019; Hughes et al., 2020). To date, however, it has remained unclear how effort and reward-based learning could co-exist within a common computational framework (Berke, 2018; Jenkins and Walton, 2020; Tanaka et al., 2021). Here, we tested how effort modulates human reinforcement learning. Our key finding was that the exertion of physical effort resulted in a unidirectional increase in the subjective value of a learned outcome. Furthermore, we showed that the extent to which effort reinforced learning was directly proportional to the degree to which an individual was averse to investing that effort. Together, these data demonstrate that learning is shaped not only by rewards, but also by the effort required to obtain them.

Our analysis aimed to reconcile two broad frameworks of value-based decision-making. Reinforcement learning theory stipulates that choices are driven by estimates of expected future reward (Sutton and Barto, 1998), whereas neuroeconomic theories frame decisions as cost-benefit trade-offs (Rangel et al., 2008). By integrating these frameworks, we show that effort influences both the prospective and retrospective valuation of reward, and in turn modulates the RPEs that drive learning. Across two different experiments, we found that effort shifted RPEs in a positive direction on a trial-by-trial basis. This positive shift was independent of the valence of the reward outcome, showing that the overall effect of effort was to boost positive RPEs, and blunt negative RPEs.

These results are in keeping with previous findings that effort exertion influences subsequent choice preferences. For example, humans prefer to view stimuli that have previously been paired with more aversive levels of effort (Klein et al., 2005; Alessandri et al., 2008). Similarly, nonhuman animals show a preference for food rewards that previously required greater effort to obtain (Clement et al., 2000; Friedrich and Zentall, 2004; Johnson and Gallagher, 2011). Here, we present a possible computational basis for these past findings by demonstrating that effort shapes choice preferences by modulating the efficiency with which action values are updated, increasing that efficiency for outcomes that are positive, and reducing it for those that are negative.

This result has interesting implications. Specifically, the net effect of effort exertion in our study gave rise to a pattern of results similar to that described in previous work on reinforcement learning. Several studies have now shown that learning rates are typically higher for positive relative to negative reward outcomes (Frank et al., 2007; den Ouden et al., 2013; Lefebvre et al., 2017; Palminteri et al., 2017). In addition, this learning rate asymmetry seems to be more pronounced following actions in which individuals are more invested (e.g., self-determined vs forced choices; Chambon et al., 2020), which accords with our effort results. Taken together, these results suggest that effort may at least in part contribute to differences in positive and negative learning rates that have been observed across a wide range of tasks, although this remains to be confirmed in future studies.

An important finding in experiment 2 is the link between the computation of prospective effort costs, which discounted value before choice, and realized effort costs (i.e., exertion itself), which reinforced learning after choice. This suggests that, for a given individual, the extent to which effort reinforces learning depends on their sensitivity to effort. One interpretation of this result is that the aversiveness of effort lowers the value of an individual's current state, which in turn increases the relative value of rewards obtained in that state (Zentall and Singer, 2007). Such state-based valuation effects have been inferred previously (Clement et al., 2000; Kacelnik and Marsh, 2002; Zentall and Singer, 2007), including in aversive contexts other than the exertion of effort, such as temporal delay (DiGian et al., 2004; Pompilio and Kacelnik, 2005), reward omission (Friedrich et al., 2005), and hunger (Aw et al., 2009), which points to a more general link between the strength of reinforcement and the motivational state of the individual (Berridge, 2004; McNamara et al., 2012).

We found that the individuals whose learning was most affected by effort were those more averse to investing effort in the first place. This provides an interesting juxtaposition to the psychological concept of effort justification, which is the finding that individuals who are more averse to effort tend to overvalue the outcomes of any such investment (Festinger and Carlsmith, 1959). Theories of effort justification argue that this is driven by the cognitive dissonance that follows the experience of unpleasant levels of effort (Festinger, 1957). In contrast, our data suggest that the augmentation of value by effort may not merely be a consequence of cognitive dissonance – rather, it may be an adaptive mechanism that offsets the potential disadvantage of being less motivated (McNamara et al., 2012).

An extensive body of work has established the importance of dopamine signals in both motivation and learning (Salamone and Correa, 2002; Wise, 2004). The classical account is that these signals operate over different timescales (Schultz, 2007), with motor vigor linked to slow fluctuations in striatal dopamine activity (Niv et al., 2007; Howe et al., 2013; Y. Wang et al., 2021), and learning driven by more rapid, phasic changes in dopamine firing rates (Montague et al., 1996; Schultz et al., 1997). However, recent data have challenged these views by showing that effort exertion may also increase the activity of dopaminergic neurons in phasic bursts (Hamid et al., 2016; Syed et al., 2016; da Silva et al., 2018; Hughes et al., 2020). These neurophysiological data raise the possibility that transient, effort-induced increases in dopamine activity could augment the reward signals that drive learning (Tanaka et al., 2021), a speculation that accords with our computational findings, and with previous studies showing that effort boosts dopamine signals for positive outcomes (Syed et al., 2016) and blunts the reductions in dopamine activity that accompany negative outcomes (Stelly et al., 2019).

A topical issue has been to reconcile the role of dopamine in signaling reward and effort (Gan et al., 2010; Hollon et al., 2014; Hamid et al., 2016; Syed et al., 2016; Skvortsova et al., 2017; Westbrook et al., 2020). In particular, several studies have probed the effect of reward on subsequent effort exertion (Nakamura and Hikosaka, 2006; Beierholm et al., 2013; Chong et al., 2015). An important unanswered question, however, is whether the reverse relationship holds, that is, whether effort can systematically affect learning about reward outcomes. Our study fills this gap by confirming the existence of a robust link between effort and the RPEs that lie at the core of reinforcement learning. Note that this also differs from previous work that has characterized the computational architecture underpinning learning about effort costs (Skvortsova et al., 2014, 2017). Here, we deliberately trained participants on the required levels of effort before testing to minimize learning about effort costs during our experiments, allowing us to focus our analysis on the effect of effort on reward signals. Taken together with earlier studies showing that reward increases the speed of subsequent movements (Milstein and Dorris, 2007; Summerside et al., 2018) and the willingness to exert effort (Chong et al., 2015), our results suggest a strong bidirectional relationship between effort and reward.

In summary, this study contributes to a growing body of work highlighting the importance of motivational factors, such as the willingness to exert effort, in models of reward-based learning (Berridge, 2007; Zhang et al., 2009; McNamara et al., 2012; Collins and Frank, 2014; Berke, 2018; Juechems and Summerfield, 2019; van Swieten and Bogacz, 2020; Tanaka et al., 2021). From a clinical perspective, learning impairments are common across a range of neurologic and psychiatric diseases, including Parkinson's disease (Peterson et al., 2009; Schapira et al., 2017; Chong, 2018), schizophrenia (Waltz et al., 2007; Schlagenhauf et al., 2014), and ADHD (Seidman et al., 2001; Luman et al., 2010). This study thus lays the foundation for future work to test the role of striatal dopamine signals in effort-based learning, and to examine whether effort-based interventions could be applied therapeutically in clinical populations with learning impairments.

Footnotes

  • H.J. is supported by an Australian Government Research Training Program Scholarship. J.C. is supported by the Australian Research Council Grant DP190100772. T.T.-J.C. is supported by Australian Research Council Grants DP 180102383 and DE 180100389, the Judith Jane Mason and Harold Stannett Williams Memorial Foundation, the Brain Foundation, the Society for Mental Health Research, and the Office of Naval Research (Global). We thank Virginia Klink and Veronica Mazur for assisting with data collection and Julian Matthews and Adam Morris for helpful discussions. H.J. and T.T.-J.C. were supported by the Rebecca L. Cooper Medical Research Foundation.

  • The authors declare no competing financial interests.

  • Correspondence should be addressed to Huw Jarvis at huw.jarvis@monash.edu

SfN exclusive license.

References

  1. Aberman JE, Salamone JD (1999) Nucleus accumbens dopamine depletions make rats more sensitive to high ratio requirements but do not impair primary food reinforcement. Neuroscience 92:545–552. doi:10.1016/S0306-4522(99)00004-4
  2. Akaike H (1974) A new look at the statistical model identification. IEEE Trans Automat Contr 19:716–723. doi:10.1109/TAC.1974.1100705
  3. Alessandri J, Darcheville JC, Delevoye-Turrell Y, Zentall TR (2008) Preference for rewards that follow greater effort and greater delay. Learn Behav 36:352–358. pmid:18927058
  4. Aronson E, Mills J (1959) The effect of severity of initiation on liking for a group. J Abnorm Soc Psychol 59:177–181. doi:10.1037/h0047195
  5. Atkins KJ, Andrews SC, Stout JC, Chong TT (2020) Dissociable motivational deficits in pre-manifest Huntington's disease. Cell Rep Med 1:100152. pmid:33377123
  6. Aw J, Holbrook R, De Perera TB, Kacelnik A (2009) State-dependent valuation learning in fish: banded tetras prefer stimuli associated with greater past deprivation. Behav Processes 81:333–336. doi:10.1016/j.beproc.2008.09.002 pmid:18834933
  7. Bartra O, McGuire JT, Kable JW (2013) The valuation system: a coordinate-based meta-analysis of BOLD fMRI experiments examining neural correlates of subjective value. Neuroimage 76:412–427. doi:10.1016/j.neuroimage.2013.02.063 pmid:23507394
  8. Bates D, Mächler M, Bolker B, Walker S (2015) Fitting linear mixed-effects models using lme4. J Stat Softw 67:1–48. doi:10.18637/jss.v067.i01
  9. Bayer HM, Glimcher PW (2005) Midbrain dopamine neurons encode a quantitative reward prediction error signal. Neuron 47:129–141. doi:10.1016/j.neuron.2005.05.020
  10. Beierholm U, Guitart-Masip M, Economides M, Chowdhury R, Duzel E, Dolan R, Dayan P (2013) Dopamine modulates reward-related vigor. Neuropsychopharmacology 38:1495–1503. pmid:23419875
  11. Berke JD (2018) What does dopamine mean? Nat Neurosci 21:787–793. pmid:29760524
  12. Berridge KC (2004) Motivation concepts in behavioral neuroscience. Physiol Behav 81:179–209. doi:10.1016/j.physbeh.2004.02.004
  13. Berridge KC (2007) The debate over dopamine's role in reward: the case for incentive salience. Psychopharmacology (Berl) 191:391–431. pmid:17072591
  14. Bitgood S, Dukes S (2006) Not another step! Economy of movement and pedestrian choice point behavior in shopping malls. Environ Behav 38:394–405. doi:10.1177/0013916505280081
  15. Botvinick MM, Huffstetler S, McGuire JT (2009) Effort discounting in human nucleus accumbens. Cogn Affect Behav Neurosci 9:16–27. pmid:19246324
  16. Brainard DH (1997) The psychophysics toolbox. Spat Vis 10:433–436. pmid:9176952
  17. Chambon V, Théro H, Vidal M, Vandendriessche H, Haggard P, Palminteri S (2020) Information about action outcomes differentially affects learning from self-determined versus imposed choices. Nat Hum Behav 4:1067–1079.
  18. Chong TT (2018) Updating the role of dopamine in human motivation and apathy. Curr Opin Behav Sci 22:35–41. doi:10.1016/j.cobeha.2017.12.010
  19. Chong TT, Bonnelle V, Manohar S, Veromann KR, Muhammed K, Tofaris GK, Hu M, Husain M (2015) Dopamine enhances willingness to exert effort for reward in Parkinson's disease. Cortex 69:40–46. pmid:25967086
  20. Chong TT, Apps M, Giehl K, Sillence A, Grima LL, Husain M (2017) Neurocomputational mechanisms underlying subjective valuation of effort costs. PLoS Biol 15:e1002598. pmid:28234892
  21. Clement TS, Feltus JR, Kaiser DH, Zentall TR (2000) “Work ethic” in pigeons: reward value is directly related to the effort or time required to obtain the reward. Psychon Bull Rev 7:100–106. doi:10.3758/bf03210727 pmid:10780022
  22. Coddington LT, Dudman JT (2018) The timing of action determines reward prediction signals in identified midbrain dopamine neurons. Nat Neurosci 21:1563–1573. pmid:30323275
  23. Collins AG, Frank MJ (2014) Opponent actor learning (OpAL): modeling interactive effects of striatal dopamine on reinforcement learning and choice incentive. Psychol Rev 121:337–366. doi:10.1037/a0037015 pmid:25090423
  24. Cools R, Robinson OJ, Sahakian B (2008) Acute tryptophan depletion in healthy volunteers enhances punishment prediction but does not affect reward prediction. Neuropsychopharmacology 33:2291–2299. doi:10.1038/sj.npp.1301598
  25. Cos I, Bélanger N, Cisek P (2011) The influence of predicted arm biomechanics on decision making. J Neurophysiol 105:3022–3033. pmid:21451055
  26. da Silva JA, Tecuapetla F, Paixao V, Costa RM (2018) Dopamine neuron activity before action initiation gates and invigorates future movements. Nature 554:244–248. pmid:29420469
  27. den Ouden HE, Daw ND, Fernandez G, Elshout JA, Rijpkema M, Hoogman M, Franke B, Cools R (2013) Dissociable effects of dopamine and serotonin on reversal learning. Neuron 80:1090–1100. pmid:24267657
  28. DiGian KA, Friedrich AM, Zentall TR (2004) Discriminative stimuli that follow a delay have added value for pigeons. Psychon Bull Rev 11:889–895. pmid:15732699
  29. Festinger L (1957) A theory of cognitive dissonance. Stanford: Stanford University Press.
  30. Festinger L, Carlsmith JM (1959) Cognitive consequences of forced compliance. J Abnorm Soc Psychol 58:203–210. doi:10.1037/h0041593
  31. Frank MJ, Moustafa AA, Haughey HM, Curran T, Hutchison KE (2007) Genetic triple dissociation reveals multiple roles for dopamine in reinforcement learning. Proc Natl Acad Sci U S A 104:16311–16316. pmid:17913879
  32. Friedrich AM, Zentall TR (2004) Pigeons shift their preference toward locations of food that take more effort to obtain. Behav Processes 67:405–415. pmid:15518990
  33. Friedrich AM, Clement TS, Zentall TR (2005) Discriminative stimuli that follow the absence of reinforcement are preferred by pigeons over those that follow reinforcement. Learn Behav 33:337–342. pmid:16396080
  34. Gan JO, Walton ME, Phillips PE (2010) Dissociable cost and benefit encoding of future rewards by mesolimbic dopamine. Nat Neurosci 13:25–27. pmid:19904261
  35. Guitart-Masip M, Duzel E, Dolan R, Dayan P (2014) Action versus valence in decision making. Trends Cogn Sci 18:194–202. pmid:24581556
  36. Hamid AA, Pettibone JR, Mabrouk OS, Hetrick VL, Schmidt R, Vander Weele CM, Kennedy RT, Aragona BJ, Berke JD (2016) Mesolimbic dopamine signals the value of work. Nat Neurosci 19:117–126. doi:10.1038/nn.4173
  37. Hollon NG, Arnold MM, Gan JO, Walton ME, Phillips PEM (2014) Dopamine-associated cached values are not sufficient as the basis for action selection. Proc Natl Acad Sci U S A 111:18357–18362. doi:10.1073/pnas.1419770111 pmid:25489094
  38. Howe MW, Dombeck DA (2016) Rapid signalling in distinct dopaminergic axons during locomotion and reward. Nature 535:505–510. pmid:27398617
  39. Howe MW, Tierney PL, Sandberg SG, Phillips PE, Graybiel AM (2013) Prolonged dopamine signalling in striatum signals proximity and value of distant rewards. Nature 500:575–579. pmid:23913271
  40. Hughes RN, Bakhurin KI, Petter EA, Watson GDR, Kim N, Friedman AD, Yin HH (2020) Ventral tegmental dopamine neurons control the impulse vector during motivated behavior. Curr Biol 30:2681–2694.e5. pmid:32470362
  41. Hull CL (1943) Principles of behavior. New York: Appleton-Century-Crofts.
  42. Jarvis H (2019) What can saccades reveal about the link between learning and motivation? J Neurosci 39:9274–9276. pmid:31748283
    OpenUrlFREE Full Text
  43. ↵
    1. Jenkins G,
    2. Walton M
    (2020) Dopamine: don't underestimate the force. Curr Biol 30:R824–R826. pmid:32693080
    OpenUrlPubMed
  44. ↵
    1. Johnson AW,
    2. Gallagher M
    (2011) Greater effort boosts the affective taste properties of food. Proc R Soc B 278:1450–1456. doi:10.1098/rspb.2010.1581
    OpenUrlCrossRefPubMed
  45. ↵
    1. Juechems K,
    2. Summerfield C
    (2019) Where does value come from? Trends Cogn Sci 23:836–850. pmid:31494042
    OpenUrlPubMed
  46. ↵
    1. Kacelnik A,
    2. Marsh B
    (2002) Cost can increase preference in starlings. Anim Behav 63:245–250. doi:10.1006/anbe.2001.1900
    OpenUrlCrossRef
  47. ↵
    1. Klein ED,
    2. Bhatt RS,
    3. Zentall TR
    (2005) Contrast and the justification of effort. Psychon Bull Rev 12:335–339. pmid:16082815
    OpenUrlPubMed
  48. ↵
    1. Kool W,
    2. Mcguire JT,
    3. Rosen ZB,
    4. Botvinick MM
    (2010) Decision making and the avoidance of cognitive demand. J Exp Psychol Gen 139:665–682. pmid:20853993
    OpenUrlCrossRefPubMed
  49. ↵
    1. Kurniawan IT,
    2. Seymour B,
    3. Talmi D,
    4. Yoshida W,
    5. Chater N,
    6. Dolan RJ
    (2010) Choosing to make an effort: the role of striatum in signaling physical effort of a chosen action. J Neurophysiol 104:313–321. pmid:20463204
    OpenUrlCrossRefPubMed
  50. ↵
    1. Lefebvre G,
    2. Lebreton M,
    3. Meyniel F,
    4. Bourgeois-Gironde S,
    5. Palminteri S
    (2017) Behavioural and neural characterization of optimistic reinforcement learning. Nat Hum Behav 1:0067. doi:10.1038/s41562-017-0067
    OpenUrlCrossRef
  51. ↵
    1. Luce RD
    (1959) Individual choice behavior. Oxford: Wiley.
  52. ↵
    1. Luman M,
    2. Tripp G,
    3. Scheres A
    (2010) Identifying the neurobiology of altered reinforcement sensitivity in ADHD: a review and research agenda. Neurosci Biobehav Rev 34:744–754. pmid:19944715
    OpenUrlCrossRefPubMed
  53. ↵
    1. Manohar S,
    2. Lockwood P,
    3. Drew D,
    4. Fallon SJ,
    5. Chong TT,
    6. Jeyaretna DS,
    7. Baker I,
    8. Husain M
    (2021) Reduced decision bias and more rational decision making following ventromedial prefrontal cortex damage. Cortex 138:24–37. pmid:33677325
    OpenUrlPubMed
  54. ↵
    1. Matsumoto M,
    2. Hikosaka O
    (2009) Two types of dopamine neuron distinctly convey positive and negative motivational signals. Nature 459:837–841. pmid:19448610
    OpenUrlCrossRefPubMed
  55. ↵
    1. McGuigan S,
    2. Zhou SH,
    3. Brosnan MB,
    4. Thyagarajan D,
    5. Bellgrove MA,
    6. Chong TT
    (2019) Dopamine restores cognitive motivation in Parkinson's disease. Brain 142:719–732. pmid:30689734
    OpenUrlCrossRefPubMed
  56. ↵
    1. McNamara JM,
    2. Trimmer PC,
    3. Houston A
    (2012) The ecological rationality of state-dependent valuation. Psychol Rev 119:114–119. pmid:22022832
    OpenUrlCrossRefPubMed
  57. ↵
    1. Milstein DM,
    2. Dorris MC
    (2007) The influence of expected value on saccadic preparation. J Neurosci 27:4810–4818. pmid:17475788
    OpenUrlAbstract/FREE Full Text
  58. ↵
    1. Montague PR,
    2. Dayan P,
    3. Sejnowski TJ
    (1996) A framework for mesencephalic dopamine systems based on predictive Hebbian learning. J Neurosci 16:1936–1947. pmid:8774460
    OpenUrlAbstract/FREE Full Text
  59. ↵
    1. Nakamura K,
    2. Hikosaka O
    (2006) Role of dopamine in the primate caudate nucleus in reward modulation of saccades. J Neurosci 26:5360–5369. pmid:16707788
    OpenUrlAbstract/FREE Full Text
  60. ↵
    1. Niv Y
    (2009) Reinforcement learning in the brain. J Math Psychol 53:139–154. doi:10.1016/j.jmp.2008.12.005
    OpenUrlCrossRef
  61. ↵
    1. Niv Y,
    2. Daw ND,
    3. Joel D,
    4. Dayan P
    (2007) Tonic dopamine: opportunity costs and the control of response vigor. Psychopharmacology (Berl) 191:507–520. pmid:17031711
    OpenUrlCrossRefPubMed
  62. ↵
    1. Palminteri S,
    2. Lefebvre G,
    3. Kilford EJ,
    4. Blakemore SJ
    (2017) Confirmation bias in human reinforcement learning: evidence from counterfactual feedback processing. PLoS Comput Biol 13:e1005684. pmid:28800597
    OpenUrlCrossRefPubMed
  63. ↵
    1. Park SQ,
    2. Kahnt T,
    3. Beck A,
    4. Cohen MX,
    5. Dolan RJ,
    6. Wrase J,
    7. Heinz A
    (2010) Prefrontal cortex fails to learn from reward prediction errors in alcohol dependence. J Neurosci 30:7749–7753. pmid:20519550
    OpenUrlAbstract/FREE Full Text
  64. ↵
    1. Peterson DA,
    2. Elliott C,
    3. Song DD,
    4. Makeig S,
    5. Sejnowski TJ,
    6. Poizner H
    (2009) Probabilistic reversal learning is impaired in Parkinson's disease. Neuroscience 163:1092–1101. pmid:19628022
    OpenUrlPubMed
  65. ↵
    1. Pompilio L,
    2. Kacelnik A
    (2005) State-dependent learning and suboptimal choice: when starlings prefer long over short delays to food. Anim Behav 70:571–578. doi:10.1016/j.anbehav.2004.12.009
    OpenUrlCrossRef
  66. ↵
    1. Prévost C,
    2. Pessiglione M,
    3. Météreau E,
    4. Cléry-Melin ML,
    5. Dreher JC
    (2010) Separate valuation subsystems for delay and effort decision costs. J Neurosci 30:14080–14090. pmid:20962229
    OpenUrlAbstract/FREE Full Text
  67. ↵
    1. Ranganathan R,
    2. Adewuyi A,
    3. Mussa-Ivaldi FA
    (2013) Learning to be lazy: exploiting redundancy in a novel task to minimize movement-related effort. J Neurosci 33:2754–2760. doi:10.1523/JNEUROSCI.1553-12.2013 pmid:23407935
    OpenUrlAbstract/FREE Full Text
  68. ↵
    1. Rangel A,
    2. Camerer C,
    3. Montague PR
    (2008) A framework for studying the neurobiology of value-based decision making. Nat Rev Neurosci 9:545–556. pmid:18545266
    OpenUrlCrossRefPubMed
  69. ↵
    1. Rescorla RA,
    2. Wagner AR
    (1972) A theory of Pavlovian conditioning: variations on the effectiveness of reinforcement and non-reinforcement. In: Classical conditioning II: current research and theory (Black AH and Prokasy WF, eds.). New York: Appleton-Century-Crofts.
  70. ↵
    RStudio Team (2016) RStudio: Integrated Development for R. RStudio, Inc., Boston, MA. http://www.rstudio.com/.
  71. ↵
    1. Salamone JD,
    2. Correa M
    (2002) Motivational views of reinforcement: implications for understanding the behavioral functions of nucleus accumbens dopamine. Behav Brain Res 137:3–25. pmid:12445713
    OpenUrlCrossRefPubMed
  72. ↵
    1. Salamone JD,
    2. Correa M
    (2012) The mysterious motivational functions of mesolimbic dopamine. Neuron 76:470–485. pmid:23141060
    OpenUrlCrossRefPubMed
  73. ↵
    1. Samejima K,
    2. Ueda Y,
    3. Doya K,
    4. Kimura M
    (2005) Representation of action-specific reward values in the striatum. Science 310:1337–1340. pmid:16311337
    OpenUrlAbstract/FREE Full Text
  74. ↵
    1. Schall JD
    (2001) Neural basis of deciding, choosing and acting. Nat Rev Neurosci 2:33–42. pmid:11253357
    OpenUrlCrossRefPubMed
  75. ↵
    1. Schapira AHV,
    2. Chaudhuri KR,
    3. Jenner P
    (2017) Non-motor features of Parkinson disease. Nat Rev Neurosci 18:435–450. doi:10.1038/nrn.2017.62 pmid:28592904
    OpenUrlCrossRefPubMed
  76. ↵
    1. Schlagenhauf F,
    2. Rapp MA,
    3. Huys QJ,
    4. Beck A,
    5. Wüstenberg T,
    6. Deserno L,
    7. Buchholz HG,
    8. Kalbitzer J,
    9. Buchert R,
    10. Bauer M,
    11. Kienast T,
    12. Cumming P,
    13. Plotkin M,
    14. Kumakura Y,
    15. Grace AA,
    16. Dolan RJ,
    17. Heinz A
    (2013) Ventral striatal prediction error signaling is associated with dopamine synthesis capacity and fluid intelligence. Hum Brain Mapp 34:1490–1499. pmid:22344813
    OpenUrlCrossRefPubMed
  77. ↵
    1. Schlagenhauf F,
    2. Huys QJ,
    3. Deserno L,
    4. Rapp MA,
    5. Beck A,
    6. Heinze HJ,
    7. Dolan R,
    8. Heinz A
    (2014) Striatal dysfunction during reversal learning in unmedicated schizophrenia patients. Neuroimage 89:171–180. pmid:24291614
    OpenUrlCrossRefPubMed
  78. ↵
    1. Schultz W
    (2007) Multiple dopamine functions at different time courses. Annu Rev Neurosci 30:259–288. pmid:17600522
    OpenUrlCrossRefPubMed
  79. ↵
    1. Schultz W,
    2. Dayan P,
    3. Montague PR
    (1997) A neural substrate of prediction and reward. Science 275:1593–1599. pmid:9054347
    OpenUrlAbstract/FREE Full Text
  80. ↵
    1. Sedaghat-Nejad E,
    2. Herzfeld DJ,
    3. Shadmehr R
    (2019) Reward prediction error modulates saccade vigor. J Neurosci 39:5010–5017.
    OpenUrlAbstract/FREE Full Text
  81. ↵
    1. Seidman LJ,
    2. Biederman J,
    3. Monuteaux MC,
    4. Doyle AE,
    5. Faraone SV
    (2001) Learning disabilities and executive dysfunction in boys with attention-deficit/hyperactivity disorder. Neuropsychology 15:544–556. pmid:11761044
    OpenUrlCrossRefPubMed
  82. ↵
    1. Shadmehr R,
    2. Huang HJ,
    3. Ahmed AA
    (2016) A representation of effort in decision-making and motor control. Curr Biol 26:1929–1934. pmid:27374338
    OpenUrlCrossRefPubMed
  83. ↵
    1. Shadmehr R,
    2. Reppert TR,
    3. Summerside EM,
    4. Yoon T,
    5. Ahmed AA
    (2019) Movement vigor as a reflection of subjective economic utility. Trends Neurosci 42:323–336. pmid:30878152
    OpenUrlCrossRefPubMed
  84. ↵
    1. Singer RA,
    2. Berry LM,
    3. Zentall TR
    (2007) Preference for a stimulus that follows a relatively aversive event: contrast or delay reduction? J Exp Anal Behav 87:275–285. doi:10.1901/jeab.2007.39-06 pmid:17465316
    OpenUrlCrossRefPubMed
  85. ↵
    1. Skvortsova V,
    2. Palminteri S,
    3. Pessiglione M
    (2014) Learning to minimize efforts versus maximizing rewards: computational principles and neural correlates. J Neurosci 34:15621–15630. doi:10.1523/JNEUROSCI.1350-14.2014 pmid:25411490
    OpenUrlAbstract/FREE Full Text
  86. ↵
    1. Skvortsova V,
    2. Degos B,
    3. Welter ML,
    4. Vidailhet M,
    5. Pessiglione M
    (2017) A selective role for dopamine in learning to maximize reward but not to minimize effort: evidence from patients with Parkinson's disease. J Neurosci 37:6087–6097. doi:10.1523/JNEUROSCI.2081-16.2017 pmid:28539420
    OpenUrlAbstract/FREE Full Text
  87. ↵
    1. Stelly CE,
    2. Haug GC,
    3. Fonzi KM,
    4. Garcia MA,
    5. Tritley SC,
    6. Magnon AP,
    7. Ramos MAP,
    8. Wanat MJ
    (2019) Pattern of dopamine signaling during aversive events predicts active avoidance learning. Proc Natl Acad Sci U S A 116:13641–13650. pmid:31209016
    OpenUrlAbstract/FREE Full Text
  88. ↵
    1. Summerside EM,
    2. Shadmehr R,
    3. Ahmed AA
    (2018) Vigor of reaching movements: reward discounts the cost of effort. J Neurophysiol 119:2347–2357. pmid:29537911
    OpenUrlCrossRefPubMed
  89. ↵
    1. Sutton RS,
    2. Barto AG
    (1998) Reinforcement learning: an introduction. Cambridge: The MIT Press.
  90. ↵
    1. Syed EC,
    2. Grima LL,
    3. Magill PJ,
    4. Bogacz R,
    5. Brown P,
    6. Walton ME
    (2016) Action initiation shapes mesolimbic dopamine encoding of future rewards. Nat Neurosci 19:34–36. doi:10.1038/nn.4187
    OpenUrlCrossRefPubMed
  91. ↵
    1. Tanaka S,
    2. Taylor JE,
    3. Sakagami M
    (2021) The effect of effort on reward prediction error signals in midbrain dopamine neurons. Curr Opin Behav Sci 41:152–159. doi:10.1016/j.cobeha.2021.07.004
    OpenUrlCrossRef
  92. ↵
    1. Tsukamoto M,
    2. Kohara K,
    3. Takeuchi K
    (2017) Effects of effort and difficulty on human preference for a stimulus: investigation of the within-trial contrast. Learn Behav 45:135–146. pmid:27619982
    OpenUrlPubMed
  93. ↵
    1. van Swieten MMH,
    2. Bogacz R
    (2020) Modeling the effects of motivation on choice and learning in the basal ganglia. PLoS Comput Biol 16:e1007465. pmid:32453725
    OpenUrlPubMed
  94. ↵
    1. Walton ME,
    2. Kennerley SW,
    3. Bannerman DM,
    4. Phillips PE,
    5. Rushworth MF
    (2006) Weighing up the benefits of work: behavioral and neural analyses of effort-related decision making. Neural Netw 19:1302–1314. doi:10.1016/j.neunet.2006.03.005 pmid:16949252
    OpenUrlCrossRefPubMed
  95. ↵
    1. Waltz JA,
    2. Frank MJ,
    3. Robinson BM,
    4. Gold JM
    (2007) Selective reinforcement learning deficits in schizophrenia support predictions from computational models of striatal-cortical dysfunction. Biol Psychiatry 62:756–764. doi:10.1016/j.biopsych.2006.09.042
    OpenUrlCrossRefPubMed
  96. ↵
    1. Wang L,
    2. Zheng J,
    3. Meng L
    (2017) Effort provides its own reward: endeavors reinforce subjective expectation and evaluation of task performance. Exp Brain Res 235:1107–1118. pmid:28120011
    OpenUrlPubMed
  97. ↵
    1. Wang Y,
    2. Toyoshima O,
    3. Kunimatsu J,
    4. Yamada H,
    5. Matsumoto M
    (2021) Tonic firing mode of midbrain dopamine neurons continuously tracks reward values changing moment-by-moment. Elife 10:e63166. doi:10.7554/eLife.63166
    OpenUrlCrossRef
  98. ↵
    1. Watabe-Uchida M,
    2. Eshel N,
    3. Uchida N
    (2017) Neural circuitry of reward prediction error. Annu Rev Neurosci 40:373–394. pmid:28441114
    OpenUrlCrossRefPubMed
  99. ↵
    1. Westbrook A,
    2. van den Bosch R,
    3. Määttä JI,
    4. Hofmans L,
    5. Papadopetraki D,
    6. Cools R,
    7. Frank MJ
    (2020) Dopamine promotes cognitive effort by biasing the benefits versus costs of cognitive work. Science 367:1362–1366. pmid:32193325
    OpenUrlAbstract/FREE Full Text
  100. ↵
    1. Wickham H
    (2016) ggplot2: elegant graphics for data analysis. New York: Springer.
  101. ↵
    1. Wise RA
    (2004) Dopamine, learning and motivation. Nat Rev Neurosci 5:483–494. pmid:15152198
    OpenUrlCrossRefPubMed
  102. ↵
    1. Zentall TR,
    2. Singer RA
    (2007) Within-trial contrast: pigeons prefer conditioned reinforcers that follow a relatively more rather than a less aversive event. J Exp Anal Behav 88:131–149. pmid:17725056
    OpenUrlPubMed
  103. ↵
    1. Zhang J,
    2. Berridge KC,
    3. Tindell AJ,
    4. Smith KS,
    5. Aldridge JW
    (2009) A neural computational model of incentive salience. PLoS Comput Biol 5:e1000437. pmid:19609350
    OpenUrlCrossRefPubMed
Keywords: effort; learning; motivation; reinforcement; reward; reward prediction error
