Journal Club

Shedding Light on the Role of Ventral Tegmental Area Dopamine in Reward

Benjamin T. Saunders and Jocelyn M. Richard
Journal of Neuroscience 14 December 2011, 31 (50) 18195-18197; https://doi.org/10.1523/JNEUROSCI.4924-11.2011

Successful reward seeking requires the interaction of multiple psychological processes, including learning (i.e., creating connections between actions/stimuli and rewarding outcomes), motivation (i.e., wanting to obtain rewarding outcomes), and affect (i.e., liking rewarding outcomes). There are two primary forms of reward learning, instrumental and Pavlovian (Cardinal et al., 2002; Berridge and Robinson, 2003). Instrumental learning involves the establishment of associations between actions and the outcomes they produce: actions are reinforced when they produce rewarding outcomes. In contrast, Pavlovian learning involves the formation of associations between stimuli/cues and the outcomes they predict. These learning processes are dissociable but, critically, Pavlovian learning affects actions as well: environmental stimuli associated with reward acquire motivational value of their own, meaning they can instigate approach behavior, elicit consumption of rewards, reinforce actions, and/or invigorate ongoing actions. Finally, the affective components of rewards shape learning by reinforcing actions and/or by increasing the motivational value of reward-associated stimuli.

Different types of learning and motivational processes that underlie reward seeking are thought to be neurobiologically dissociable (Cardinal et al., 2002; Berridge and Robinson, 2003; Fields et al., 2007). Converging lines of research suggest that dopaminergic neurons within the ventral tegmental area (VTA), via projections onto forebrain structures such as the nucleus accumbens, prefrontal cortex, and amygdala, play a key part in Pavlovian learning and motivation, and in the expression of learned appetitive behaviors in general. Dopamine (DA) is thought to be less important for instrumental learning, however (Fields et al., 2007; Tsai et al., 2009; Flagel et al., 2011; Wassum et al., 2011). Activation of DA neurons has been suggested to have positive reinforcing properties, because pharmacological or electrical stimulation tends to facilitate reward seeking, whereas inhibition or lesion tends to reduce reward seeking (Cheer et al., 2007; Fields et al., 2007). Many studies that address the role of dopamine in instrumental learning fail, however, to adequately dissociate learning from motivation and/or fail to remove Pavlovian cue influences from instrumental reward-seeking paradigms. Collectively, the psychological processes underlying reward seeking are complex, and as such, studies must carefully isolate these processes to determine DA's specific role in reward.

Until recently, technological barriers made it impossible to specifically manipulate DA neurons to test their causal contribution to reward. In a recent issue of The Journal of Neuroscience, Adamantidis et al. (2011) overcame this barrier by using in vivo optogenetic techniques to selectively stimulate DA neurons within the VTA during different phases of a food-seeking task and in a self-stimulation paradigm. To target DA neurons, they infused a virus for Cre-dependent expression of the light-activated channelrhodopsin-2 (ChR2) into the VTA of mice expressing Cre specifically in tyrosine hydroxylase-positive (i.e., DA-producing) neurons. This resulted in the targeted expression of ChR2 in only those cells. A similar procedure was previously shown to produce ChR2 expression in 90% of DA neurons near the injection site, with high specificity (Tsai et al., 2009).

In their first experiment, Adamantidis et al. (2011) tested whether optogenetic activation of VTA DA neurons would bias responding during the acquisition phase of a food-seeking task. Pressing an “active” lever resulted in the simultaneous delivery of a food pellet, de-illumination of a cue light above the lever, and stimulation of DA neurons. Inactive-lever presses were followed by food delivery and de-illumination of a different cue light. Over several training sessions, ChR2 mice developed a preference for pressing the active lever, eventually showing a clear discrimination compared with control mice (Adamantidis et al., 2011, their Fig. 3). The authors concluded that optical activation of VTA DA neurons has positive reinforcing properties.
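
The contingencies of this acquisition phase can be summarized in a brief sketch. The sketch below is an illustration only: the function names (deliver_pellet, extinguish_cue, stimulate_vta_da) and their stub implementations are hypothetical placeholders, not code from Adamantidis et al. (2011); only the contingency logic follows the paper's description as quoted above.

    # Sketch of the acquisition-phase contingencies (illustrative; stubs are hypothetical).
    def deliver_pellet():
        print("food pellet delivered")

    def extinguish_cue(lever):
        print(f"cue light above the {lever} lever turned off")

    def stimulate_vta_da():
        print("optical stimulation of VTA DA neurons delivered")

    def on_lever_press(lever, chr2_expressing):
        """Outcomes of a single lever press during acquisition."""
        deliver_pellet()                  # food follows presses on either lever
        extinguish_cue(lever)             # each lever has its own cue light
        if lever == "active" and chr2_expressing:
            stimulate_vta_da()            # DA stimulation only on active-lever presses

    # Example: an active-lever press by a ChR2-expressing mouse
    on_lever_press("active", chr2_expressing=True)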

We suggest, however, that reinforcement is an inappropriate description of this effect, because there is no evidence for a direct DA contribution to reinforcement. Critically, results from this experiment are difficult to interpret because the design does not allow a clear judgment of what role DA plays in the establishment of the active-lever preference. Given that reward delivery was paired with both an action (the lever press) and a cue (light de-illumination/lever), instrumental and Pavlovian learning are confounded in the task. As discussed above, Pavlovian learning could be expected to increase the mice's motivation to press the paired lever. Optical DA stimulation may therefore have directly reinforced pressing of the active lever, or instead enhanced the motivational value of the light/lever cues. Additionally, DA stimulation might have affected the mice's desire to eat, thus changing the food's value. Interestingly, as a consequence of making more active-lever presses, ChR2 mice earned more food pellets than controls, but did not consume significantly more pellets. This indicates that ChR2 mice left some pellets unconsumed, suggesting that the facilitation of active-lever pressing by optical stimulation of DA neurons was not driven by an enhancement of the affective value or palatability of the reward itself. Nonetheless, the dopamine effect may have been driven at least in part by enhancement of the motivational value of the cues associated with the reward. This is consistent with a recent report showing that systemic dopamine antagonism does not impair instrumental incentive learning, in which the value of a reward is changed after a motivational shift induced by food deprivation (Wassum et al., 2011).

Adamantidis et al. (2011) next examined the ability of optical stimulation of VTA DA neurons to reinstate extinguished food-seeking behavior. In this phase, active-lever presses resulted in optical stimulation, but no food or cue light de-illumination. Interestingly, ChR2 mice significantly renewed pressing the active lever (Adamantidis et al., 2011, their Fig. 4). An intriguing possibility suggested by this result is that through its association with active-lever presses, cue-light de-illumination, or food during acquisition training, the optically evoked DA signal acquired the ability to motivate behavior, effectively becoming a conditioned reinforcer, i.e., something for which ChR2 mice were willing to work. Given that Adamantidis et al. (2011) found that optical stimulation of DA neurons in naive mice was insufficient to reinforce lever presses (see below), we might expect the conditioned reinforcing properties of DA stimulation to eventually extinguish in the absence of primary reward, a possibility that could be tested using a longer reinstatement phase.

In a separate set of experiments, Adamantidis et al. (2011) investigated the ability of optical stimulation of VTA DA neurons to support instrumental responding in the absence of food reward using a procedure similar to electrical intracranial self-stimulation. In this paradigm, active-lever presses were followed only by optical stimulation of the VTA. ChR2 mice did not develop a preference for the active lever (Adamantidis et al., 2011, their Fig. 3). Importantly, this suggests that optical activation of VTA DA neurons in the absence of reward or reward-paired cues does not reinforce behavior. This possibility is especially interesting given that electrical stimulation of the VTA supports robust instrumental responding (Cheer et al., 2007). The authors previously demonstrated that optical activation of VTA DA neurons is sufficient to produce a conditioned place preference (Tsai et al., 2009). However, the place preference paradigm involves Pavlovian learning, rather than instrumental learning. Thus, the results of Tsai et al. (2009) demonstrate that a phasic increase in VTA DA neuron activity is sufficient to form a Pavlovian association and assign motivational value to cues.

Although the data in Adamantidis et al. (2011) suggest that VTA DA activation does not directly reinforce behavior, it may do so under different experimental conditions. Dopamine neurons are frequently treated as a homogeneous population in the context of explaining behavioral effects. Yet increasing evidence suggests that VTA DA neurons are heterogeneous with regard to their neuroanatomical targets, physiological properties, and responses to salient appetitive versus aversive stimuli (Fields et al., 2007; Lammel et al., 2011). This should be considered when interpreting the negative self-stimulation effect reported by Adamantidis et al. (2011). No data are given on the pattern and extent of ChR2 expression in the VTA; thus, it is possible that an insufficient number of DA cells were recruited to produce a signal strong enough to support self-stimulation behavior. Additionally, while different DA projection cells are dispersed throughout the VTA, there is some segregation (Lammel et al., 2011), making the characteristics of virus expression important. It is also notable that the optical stimulation parameters used by Tsai et al. (2009) (50 Hz; 25 pulses; 15 ms width) to produce a conditioned place preference are considerably different from those used by Adamantidis et al. (2011) (25 Hz; 20 pulses; 5 ms width). One possibility, therefore, is that the stimulation parameters used by Adamantidis et al. (2011) are nonoptimal or inappropriate for robust self-stimulation.
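
To put that difference in concrete terms, the two stimulation trains can be compared numerically. This is a back-of-the-envelope calculation from the parameters quoted above only; it assumes evenly spaced pulses and ignores laser power, wavelength, and how often trains were repeated.

    # Rough comparison of the two optical stimulation trains quoted above.
    # Assumes evenly spaced pulses; laser power and train repetition are ignored.
    def train_stats(freq_hz, n_pulses, pulse_ms):
        period_ms = 1000.0 / freq_hz       # inter-pulse interval
        train_ms = n_pulses * period_ms    # approximate train duration
        light_ms = n_pulses * pulse_ms     # total light-on time per train
        duty = pulse_ms / period_ms        # fraction of the train the laser is on
        return train_ms, light_ms, duty

    for label, params in [("Tsai et al. (2009)", (50, 25, 15)),
                          ("Adamantidis et al. (2011)", (25, 20, 5))]:
        train_ms, light_ms, duty = train_stats(*params)
        print(f"{label}: ~{train_ms:.0f} ms train, {light_ms:.0f} ms of light, "
              f"{duty:.1%} within-train duty cycle")
    # Tsai et al. (2009): ~500 ms train, 375 ms of light, 75.0% within-train duty cycle
    # Adamantidis et al. (2011): ~800 ms train, 100 ms of light, 12.5% within-train duty cycle

By this rough accounting, the train used by Tsai et al. (2009) delivers nearly four times as much total light in a shorter, higher-frequency burst, which is consistent with the possibility that the weaker train used by Adamantidis et al. (2011) fell below what is needed to support self-stimulation.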

In conclusion, the results of Adamantidis et al. (2011) suggest that optical stimulation of VTA DA neurons is sufficient to increase motivation to seek out food reward, and it appears to do so by amplifying the motivational value of cues associated with the reward experience, rather than by directly reinforcing actions. Furthermore, through conditioning, optical VTA DA stimulation becomes a signal animals are later willing to work for. While these results provide evidence supportive of recent studies suggesting that DA does not play a major role in instrumental incentive learning (Wassum et al., 2011), they do not clarify potential causal roles DA may play specifically in Pavlovian learning mechanisms. Going forward from this research, it will be important to develop experiments that (1) carefully examine DA's role in instrumental versus Pavlovian learning while isolating learning mechanisms from motivational ones and (2) investigate the contribution of specific DA neuron subpopulations to different reward processes. Optogenetics provides a powerful tool to interrogate neural systems with temporal precision and cell-type specificity to investigate the function of neurons defined by their wiring within a circuit (Yizhar et al., 2011). As such, we hope future studies incorporate sophisticated psychological approaches with these cutting-edge techniques to investigate reward.

Footnotes

  • Editor's Note: These short, critical reviews of recent papers in the Journal, written exclusively by graduate students or postdoctoral fellows, are intended to summarize the important findings of the paper and provide additional insight and commentary. For more information on the format and purpose of the Journal Club, please see http://www.jneurosci.org/misc/ifa_features.shtml.

  • This work was supported by National Research Service Award Fellowships DA030801 (B.T.S.) and MH090602 (J.M.R.). We thank Terry Robinson and Kent Berridge for helpful comments.

  • Correspondence should be addressed to Benjamin T. Saunders, Department of Psychology (Biopsychology Program), University of Michigan, East Hall, 530 Church Street, Ann Arbor, MI 48109. btsaunde@umich.edu

References

  1. Adamantidis AR, Tsai HC, Boutrel B, Zhang F, Stuber GD, Budygin EA, Touriño C, Bonci A, Deisseroth K, de Lecea L (2011) Optogenetic interrogation of dopaminergic modulation of the multiple phases of reward-seeking behavior. J Neurosci 31:10829–10835.
  2. Berridge KC, Robinson TE (2003) Parsing reward. Trends Neurosci 26:507–513.
  3. Cardinal RN, Parkinson JA, Hall J, Everitt BJ (2002) Emotion and motivation: the role of the amygdala, ventral striatum, and prefrontal cortex. Neurosci Biobehav Rev 26:321–352.
  4. Cheer JF, Aragona BJ, Heien ML, Seipel AT, Carelli RM, Wightman RM (2007) Coordinated accumbal dopamine release and neural activity drive goal-directed behavior. Neuron 54:237–244.
  5. Fields HL, Hjelmstad GO, Margolis EB, Nicola SM (2007) Ventral tegmental area neurons in learned appetitive behavior and positive reinforcement. Annu Rev Neurosci 30:289–316.
  6. Flagel SB, Clark JJ, Robinson TE, Mayo L, Czuj A, Willuhn I, Akers CA, Clinton SM, Phillips PE, Akil H (2011) A selective role for dopamine in stimulus-reward learning. Nature 469:53–57.
  7. Lammel S, Ion DI, Roeper J, Malenka RC (2011) Projection-specific modulation of dopamine neuron synapses by aversive and rewarding stimuli. Neuron 70:855–862.
  8. Tsai HC, Zhang F, Adamantidis A, Stuber GD, Bonci A, de Lecea L, Deisseroth K (2009) Phasic firing in dopaminergic neurons is sufficient for behavioral conditioning. Science 324:1080–1084.
  9. Wassum KM, Ostlund SB, Balleine BW, Maidment NT (2011) Differential dependence of Pavlovian incentive motivation and instrumental incentive learning processes on dopamine signaling. Learn Mem 18:475–483.
  10. Yizhar O, Fenno LE, Davidson TJ, Mogri M, Deisseroth K (2011) Optogenetics in neural systems. Neuron 71:9–34.