Journal of Neuroscience
Research Articles, Behavioral/Cognitive

Medial Orbitofrontal, Prefrontal, and Amygdalar Circuits Support Dissociable Component Processes of Risk/Reward Decision-Making

Nicole L. Jenni, Debra A. Bercovici and Stan B. Floresco
Journal of Neuroscience 16 April 2025, 45 (16) e2147242025; https://doi.org/10.1523/JNEUROSCI.2147-24.2025
All authors: Department of Psychology and Djavad Mowafaghian Centre for Brain Health, University of British Columbia, Vancouver, British Columbia V6T 1Z4, Canada

Abstract

The medial orbitofrontal cortex (mOFC) has been implicated in shaping decisions involving reward uncertainty, in part by using memories to infer future outcomes. This region is interconnected with other key systems that mediate these decisions, including the basolateral amygdala (BLA) and prelimbic (PL) region of the medial prefrontal cortex, yet the functional importance of these circuits remains unclear. The present study used chemogenetic silencing to examine the contribution of different input and output pathways of the mOFC to risk/reward decision-making. Male rats were well-trained on a probabilistic discounting task where they chose between a small/certain (one pellet) and a large/uncertain (four pellets) option, the odds for which changed systematically across a session. Suppressing activity of descending mOFC terminals in the BLA impaired adjustments in choice biases as reward probabilities changed, suggesting this circuit tracks changes in relative value to support flexible reward-seeking. Inhibiting bottom-up BLA → mOFC circuits had no effect on choice. With respect to corticocortical circuits, inhibiting mOFC inputs to PL led to more random choice patterns, indicating this circuit promotes advantageous choice by processing context-dependent information regarding wins and losses. In comparison, PL inputs to mOFC attenuate the allure of larger yet uncertain rewards and reduce loss sensitivity, particularly early in the choice sequence. The present findings provide novel insight into the functional contributions that mOFC–BLA and PL interactions make to distinct processes shaping decision-making in situations of reward uncertainty.

  • amygdala
  • anterior cingulate
  • chemogenetics
  • decision-making
  • orbitofrontal cortex
  • risk

Significance Statement

The medial orbitofrontal cortex supports the use of reward memories to guide efficient value-based decision-making, yet the functional circuits through which it mediates this form of cognition are unclear. The present study revealed that different medial orbitofrontal cortex interactions with the BLA and the prelimbic region facilitate dissociable component processes of decisions involving risks and rewards. These findings clarify the functions of corticocortical and corticoamygdalar pathways and may have implications for understanding how dysfunction in these circuits relates to aberrant decision-making seen in certain psychiatric disorders.

Introduction

The orbitofrontal cortex (OFC) is an integral node within cortical/subcortical networks subserving the use of reward memories to infer future outcomes and guide efficient reward-seeking. Anatomically, the OFC can be partitioned into medial (mOFC) and lateral subdivisions, with the former playing a particularly important role in guiding decisions involving uncertainty. Seminal studies by Damasio and Bechara were among the first to identify that humans with lesions encompassing the mOFC made more suboptimal, risky decisions on what is now referred to as the Iowa gambling task (Bechara et al., 1994). Various psychiatric disorders are associated with mOFC dysfunction, including depression, schizophrenia, obsessive/compulsive, and substance use disorders. Patients with these conditions all display maladaptive patterns of decision-making, and in some instances, the severity of core symptoms may correlate with abnormal patterns of mOFC activity (Bremner et al., 2002; Goldstein and Volkow, 2011; Moorman, 2018; Xie et al., 2021; Pizzagalli and Roberts, 2022).

Preclinical studies have provided additional insight into how the mOFC contributes to reward-seeking when outcomes are uncertain or when action–outcome associations must be drawn from internal reward memories (Bradfield et al., 2015). For example, mOFC inactivation impairs flexible responding on reversal learning tasks where contingencies are probabilistic, but not deterministic (Dalton et al., 2016; Hervig et al., 2020), as do mOFC lesions in humans (Tsuchida et al., 2010; Noonan et al., 2017). This region also refines risk/reward decisions between options that differ in reward magnitude and uncertainty. For example, on a probabilistic discounting task, rats choose between actions yielding smaller, certain rewards and larger, uncertain ones. The probability or “risk” of not obtaining the larger reward increases or decreases over a session, requiring animals to keep track of action/outcome history to infer changes in risk/reward contingencies and select more profitable options. mOFC inactivation uniformly increases risky choice on this task by enhancing win–stay behavior, reflecting the adoption of a strategy based on immediate reward feedback rather than the broader reward history (Stopper et al., 2014).

Formation and implementation of decision policies by mOFC are regulated by a variety of functional circuits it forms with cortical and subcortical inputs/outputs. For example, disrupting information transfer from mOFC to two of its striatal outputs revealed distinct functional roles for these circuits. mOFC neurons projecting to the nucleus accumbens mediate the use of reward history to stabilize decision biases, whereas those targeting the dorsomedial striatum aid in monitoring changes in volatility in nonrewarded actions to flexibly adjust choice biases toward more profitable options (Jenni et al., 2022). What is notable is that selective disruption of these mOFC–striatal circuits did not recapitulate the effects of bilateral mOFC inactivation. This highlights how targeting specific mOFC neuronal subpopulations can yield novel insight into the function of these systems that could not be revealed by a broader and more generalized disruption of mOFC activity.

The mOFC shares reciprocal connections with two other regions that refine risk/reward decisions, namely, the basolateral amygdala (BLA) and the prelimbic (PL) prefrontal cortex (PFC), which displays homologous connectivity with Area 32 of the primate pregenual anterior cingulate (Hoover and Vertes, 2011; Heilbronner et al., 2016). Top-down and bottom-up mOFC–BLA circuits make differential contributions to reward-related associative learning, with descending mOFC → BLA projections regulating the use of predictive cues to retrieve memories pertaining to the value of different options to guide reward pursuit. In comparison, ascending BLA → mOFC projections play a more nuanced role in mediating adaptive adjustments of cue-evoked Pavlovian responses based on the desirability of predicted rewards (Malvaez et al., 2019; Lichtenberg et al., 2021; Wassum, 2022). Yet, how these circuits mediate decisions requiring integration of internal representations of action/outcome reward history is unclear. In addition, no studies have examined the functional roles of corticocortical circuits linking the mOFC with the PL. Functional connectivity imaging studies provide correlative evidence suggesting these regions process value in a similar manner and may interact to provide information about goals required for action selection (Rolls, 2023). Yet, studies manipulating these circuits to clarify the functional consequences of their interplay have been lacking. Thus, the present study used chemogenetic targeting of these four input/output pathways connecting the BLA and PL with the mOFC to clarify their specific contributions to reward-seeking in situations involving uncertainty.

Materials and Methods

Subjects

A total of 154 adult male Long–Evans rats (Charles River Laboratories) were used in this study. These animals weighed 225–275 g at the start of the experiment and were group-housed and provided access to water and food ad libitum upon arrival. They were handled daily for 1 week and then subsequently food restricted to 85–90% of their free-feeding weight. They were then fed 15–17 g of food at the end of each experimental day. Their weights were monitored daily, and their individual food intake was adjusted to maintain a steady but modest weight gain. The colony was maintained on a 12 h light/dark cycle, with lights on at 7:00 A.M. The rats underwent behavioral testing between 8 A.M. and 12 P.M. each day. All experiments were conducted in accordance with the Canadian Council on Animal Care guidelines regarding appropriate and ethical treatment of animals and were approved by the Animal Care Centre at the University of British Columbia.

Apparatus

Behavioral testing was conducted in operant chambers (31 × 24 × 21 cm; Med Associates) enclosed in sound-attenuating boxes. The chambers were equipped with a fan that provided ventilation and masked extraneous noise. Each chamber was fitted with two retractable levers located on each side of a central food receptacle in which 45 mg sweetened food reward pellets (Bio-Serv) were delivered by a dispenser. A chamber was illuminated by a single 100 mA house light that delivered ∼25 mW (∼5 lux) illuminance in the area around the levers. Four infrared photobeams were mounted on the side of each chamber (used to measure locomotion), and another photobeam was located in the food receptacle. Locomotor activity was indexed by the number of photobeam breaks that occurred during a session. All data were recorded by a computer connected to the chambers via an interface.

Lever pressing training

The initial training protocols were identical to those described in our previous studies (Jenni et al., 2017, 2022). The day before exposure to the operant chamber, each rat was given approximately 25 sweetened reward pellets in their home cage to familiarize them with this reward. On the first day of training, two pellets were delivered into the food cup, and crushed pellets were sprinkled on an extended lever before the rat was placed into the chamber. On consecutive days, rats were trained under a fixed-ratio 1 schedule to a criterion of 50 presses in 30 min on one lever and then the other side (counterbalanced). They then progressed to a simplified version of the full task. These 90 trial sessions began with the levers retracted and the operant chamber in darkness. Every 40 s, a trial was initiated with the illumination of the house light and the insertion of one of the two levers into the chamber (randomized in pairs). Failure to respond on the inserted lever within 10 s caused its retraction and the chamber to darken, and the trial was scored as an omission. A response within 10 s caused the lever to retract and the delivery of a single pellet with a 50% probability. Rats were trained for approximately 3–4 d to a criterion of 80 or more successful trials (<10 omissions).

Probabilistic discounting task

Each daily session consisted of 90 trials separated into five blocks of 18 trials and took 50 min to complete. Rats were trained 5–7 d/week. One lever was designated the large/risky lever, the other the small/certain lever, and this designation remained consistent throughout training. Each session began in darkness with both levers retracted. Trials began every 33 s with the illumination of the house light, and then 2 s later, one or both levers were inserted into the chamber. Each of the five blocks consisted of eight forced-choice trials (where only one lever was presented, randomized in pairs), followed by ten free-choice trials (Fig. 1A). If no response was made within 10 s of lever presentation, the levers retracted, and the chamber reverted to the intertrial state (omission). Selection of a lever caused its immediate retraction. A choice of the small/certain lever delivered one pellet with 100% probability. Choice of the large/risky lever delivered a four-pellet reward in a probabilistic manner that changed systematically across blocks of trials (100, 50, 25, 12.5, 6.25%; Fig. 1A). The actual probability of receiving the large reward was drawn from a set probability distribution, so that on any given day, rats may not have experienced the exact probability assigned to that block; however, the actual probabilities averaged across training sessions more closely approximated the set value.
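The trial structure described above can be summarized in a short toy simulation. This is an illustrative sketch, not the authors' task code: the function and variable names are assumptions, forced-trial levers are alternated rather than randomized in pairs, and outcomes are drawn independently rather than from a set probability distribution.

```python
import random

# Descending task variant: large/risky reward probability halves across
# five blocks of 18 trials (8 forced-choice, then 10 free-choice).
BLOCK_PROBS = [1.0, 0.5, 0.25, 0.125, 0.0625]
SMALL_REWARD, LARGE_REWARD = 1, 4  # pellets

def run_session(choose_risky, rng=random.random):
    """Simulate one 90-trial session; returns pellets earned per block.

    choose_risky: policy mapping the block's large-reward probability to
    True (press large/risky lever) or False (press small/certain lever).
    """
    earned = []
    for p_large in BLOCK_PROBS:
        pellets = 0
        for trial in range(18):
            if trial < 8:
                # Forced-choice trials: alternate levers (simplification;
                # the actual task randomized forced trials in pairs).
                risky = trial % 2 == 0
            else:
                risky = choose_risky(p_large)
            if risky:
                pellets += LARGE_REWARD if rng() < p_large else 0
            else:
                pellets += SMALL_REWARD
        earned.append(pellets)
    return earned
```

Under this forced-trial scheme, an always-small/certain policy with no risky wins earns 14 pellets per block, while an always-risky policy that wins every gamble earns 60 per block, illustrating the magnitude difference the rat must weigh against uncertainty.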

Figure 1.

Task diagram and histology. A, The left panel shows the operant chamber configuration for the probabilistic discounting task. The right panel displays the sequence of forced- and free-choice trials within each probability block for the standard descending variant of the task, where the odds of obtaining the larger reward decreased from 100 to 6.25% across five blocks of trials. B–E, Schematics of coronal sections of the rat brain showing locations of acceptable CNO infusions in DREADD (black, white, and gray circles) and mCherry-expressing control animals (red circles) into (B) the BLA, (C) the mOFC (with DREADDs/mCherry expressed in BLA inputs), (D) the PL–PFC, and (E) the mOFC (with DREADDs/mCherry expressed in PL inputs). Numbers by each plate correspond to millimeters from bregma.

We tested the effects of our manipulations on rats trained on a variant of the task where the probability of obtaining the large reward was initially 100% at the start of the session and then decreased over blocks (100–6.25%, “descending” variant). If a manipulation was found to have an effect on choice under these conditions, we then trained and tested other groups on a variant where the odds started poor and increased over blocks (6.25–100%; “ascending” variant). This was done to delineate whether an increase in risky choice observed in the descending condition reflected either a general increase in preference for larger, uncertain rewards, as has been observed after bilateral mOFC inactivation (Stopper et al., 2014), or an impairment in adjusting decision biases, as has been reported following inactivation of the PL (St. Onge and Floresco, 2010). In the latter case, one would expect to see reduced preference for the risky options in the ascending condition as blocks progressed.

Squads of rats were trained until they displayed stable choice behavior, determined by analyzing data from three consecutive sessions with a two-way repeated-measures ANOVA with day and trial block as factors. If there was no main effect of day and no day–block interaction (at p > 0.10), then choice behavior of the group was deemed stable. Two to 3 d later, rats were subjected to surgery. Rats in the different groups required between 16 and 23 d of training before displaying stable patterns of choice.

Chemogenetic silencing

Suppression of activity within different mOFC circuits was achieved using local chemogenetic manipulation of terminals projecting to or from the mOFC. Designer receptors exclusively activated by designer drugs (DREADDs) were introduced via infusion of viruses driving their expression in neural tissue. Once expressed, these receptors enabled transient suppression of neural activity upon exposure to clozapine-N-oxide (CNO; Sigma-Aldrich). We used an inhibitory Gi-DREADD in combination with local intracranial administration of CNO to achieve chemogenetic silencing of axons within terminal regions. In these experiments, a Gi-DREADD was infused into an afferent structure (e.g., mOFC), combined with implantation of bilateral cannulae into a downstream terminal region (e.g., BLA), enabling local administration of CNO within this region to effectively silence mOFC → BLA terminals. Best estimates suggest this method produces a behaviorally relevant ~60% suppression of neural activity that can last for up to 70 min (Mahler et al., 2014; Smith et al., 2021).

Surgery

Rats were group-housed for at least 7 d prior to surgery. Rats were given a subanesthetic dose of ketamine and xylazine (50 and 4 mg/kg, respectively) and maintained at a surgical plane of anesthesia with 1–2% isoflurane. Rats received a nonsteroidal anti-inflammatory drug during surgery and for 2–3 d postoperation, in accordance with our animal care protocols, to minimize pain and discomfort.

Rats were infused with an adeno-associated virus (AAV) expressing the inhibitory human M4 muscarinic receptor (hM4Di; AAV5-CaMKIIa-hM4D(Gi)-mCherry; Addgene; 0.5 µl). Each experiment also included additional empty vector control animals infused with an AAV containing the same mCherry tag into the same brain regions as the main DREADD groups, but lacking the critical hM4Di component (AAV5-CaMKIIa-mCherry; UNC Vector Core; 0.5 µl). These rats were trained and tested concurrently within the same squads as the DREADD-expressing rats. Virus was infused at a rate of 0.1 µl/min via 30-gauge injection cannulae, so that each infusion lasted 5 min. In all cases, injectors were left in place for an additional 10 min to ensure adequate diffusion and to minimize spread along the injector tract (Bercovici et al., 2018).

To target mOFC output pathways, Gi-DREADDs were infused into the mOFC (in mm: AP, +4.5; ML, ±0.7 from bregma; DV, −4.0 from dura), and rats were implanted with two sets of bilateral 23-gauge stainless steel guide cannulae in the PL (AP, +2.8; ML, ±0.7; DV, −2.5) and the BLA (AP, −3.0; ML, ±5.2; DV, −6.3) to enable silencing of mOFC → PL or mOFC → BLA projections. To target mOFC input pathways, one set of rats received Gi-DREADD in the PL (AP, +2.7; ML, ±0.7; DV, −3.2) with cannulae implanted in the mOFC (AP, +4.5; ML, ±0.7; DV, −3.3). Another set received Gi-DREADD into the BLA (AP, −3.0; ML, ±5.3; DV, −7.0) and had a pair of guide cannulae implanted in the mOFC. All coordinates were chosen in conjunction with the neuroanatomical atlas of Paxinos and Watson (2005), targeting regions where mOFC projections are particularly dense (Hoover and Vertes, 2011). Cannulae were held in place with stainless steel screws and dental acrylic. Thirty-gauge obdurators were inserted into the guide cannulae and remained in place until infusions were performed. The animals were given a minimum of 2 weeks to recover from surgery before commencing retraining and were tested 8–10 weeks after surgery to allow robust axonal expression in terminal regions (Mahler et al., 2014; Lichtenberg et al., 2017).

Drugs, microinfusion procedure, and experimental design

Once stable choice behavior was reestablished, animals received a mock infusion to familiarize them with the procedures. Obdurators were removed, and injectors were placed inside the guide cannula for 2 min, but no infusion was administered.

One or 2 d following the mock infusion, animals received their first microinfusion test day. CNO was infused to inhibit hM4Di-expressing axons and terminals within the cannulated regions. CNO was dissolved first in DMSO and then mixed with 0.9% saline and sonicated for ∼30 min to create a 1 mM CNO solution in 0.5% DMSO (MacLaren et al., 2016; Runegaard et al., 2018; Smith et al., 2021). CNO or vehicle was infused at a volume of 0.5 µl over 85 s via 30-gauge injection cannulae protruding 0.8 mm past the end of the guide, at a rate of 0.4 µl/min by a microsyringe pump. Injection cannulae remained in place for another 60 s to allow for diffusion.

Separate groups of animals that received the different combinations of viral vector infusion and cannulae implants in the regions described above were trained on their respective tasks and then received counterbalanced vehicle or CNO infusions, 10 min prior to behavioral testing on separate days (a within-subject design). After the first infusion test, animals were retrained for 1–3 d until their choice behavior deviated by <10% from their preinfusion baseline, after which they received their second infusion.

Rats in the mOFC output pathway experiments (mOFC → PL and mOFC → BLA) had two sets of cannulae implanted (in PL and BLA). They first underwent mOFC → PL silencing, receiving counterbalanced infusions of CNO and vehicle into the PL. They were then trained for a minimum of 7 d to reestablish stable baseline performance, after which they received CNO and vehicle infusions into the BLA.

Histology

After completion of testing, rats were deeply anesthetized and transcardially perfused with 4% paraformaldehyde. Brains were fixed in 4% paraformaldehyde for 24 h and then cryoprotected in a 30% sucrose solution. Each brain was flash-frozen with dry ice and sliced into 50 µm sections. Frozen slices were immediately mounted onto slides and coverslipped using Fluoromount-G with DAPI (eBioscience). The spread of viral expression and the locations of CNO infusions were verified using an Axio Zoom microscope (Zeiss). Viral spread was assessed by measuring the extent of mCherry fluorescence throughout the targeted region. Particular care was taken when assessing virus spread in the mOFC → PL or PL → mOFC groups to ensure that the spread of virus did not overlap with the infusion of CNO. Data from rats with viral expression or infusion placements residing outside the borders of the mOFC, the PL, or the BLA were removed from the analysis. This included 10 rats where we targeted descending mOFC → BLA pathways and 2 rats where we targeted the ascending BLA → mOFC pathway. Data from another 16 rats in the mOFC → PL experiment and 7 rats where we targeted the PL → mOFC pathway were excluded due to either inaccurate cannula placements or because viral expression violated our stringent expression criteria, in that it was not exclusive to the mOFC and clearly encroached into cell bodies in the neighboring PL (or vice versa). Locations of all acceptable infusion sites are displayed in Figure 1B–E, the spread of viral expression in each region is displayed in Figure 2, and the final ns for each group are reported in the respective sections of the Results.

Experimental design and statistical analysis

The primary dependent variable of interest was the proportion of choices of the large reward option, factoring out trial omissions. This was calculated in each block by dividing the number of choices of the large/risky lever by the total number of trials in which the rats made a choice. Choice data were analyzed using a three-way between/within-subject ANOVA, with treatment and probability block as two within-subject factors and task variant (ascending or descending odds) as a between-subject factor. In these analyses, a three-way interaction indicates that the effect of treatment on probabilistic discounting was different between the descending and ascending task variants, so follow-up analyses compared these conditions separately. In comparison, a lack of an interaction with the task variant factor implies that a manipulation induced similar effects in particular probability blocks, irrespective of the manner in which reward probabilities changed over a session. When a significant statistical interaction was obtained, simple main effects analyses were conducted using one-way ANOVAs where appropriate. In these analyses, the main effect of the trial block was always significant (p < 0.001), indicating all rats adjusted risky choice across probability blocks, and will not be discussed further.
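The primary dependent measure can be expressed compactly. A minimal sketch, assuming a simple per-trial record with `block` and `choice` fields (an illustration only, not the authors' data format or analysis code):

```python
def risky_choice_proportion(trials):
    """trials: list of dicts with keys 'block' and 'choice', where 'choice'
    is 'risky', 'small', or 'omission'. Returns {block: proportion risky}."""
    by_block = {}
    for t in trials:
        if t["choice"] == "omission":
            continue  # omissions are factored out of the denominator
        n_risky, n_total = by_block.get(t["block"], (0, 0))
        by_block[t["block"]] = (n_risky + (t["choice"] == "risky"), n_total + 1)
    return {b: r / n for b, (r, n) in by_block.items()}
```

These per-block proportions would then feed the mixed ANOVA described above, with treatment and block as within-subject factors and task variant as a between-subject factor.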

If a treatment induced a significant alteration in choice, supplementary analyses were conducted to clarify whether these effects were attributable to changes in reward sensitivity (win–stay behavior) and/or negative-feedback sensitivity (lose–shift behavior). Each choice was analyzed according to the outcome of the preceding trial and expressed as a ratio. The win–stay score was calculated as a proportion of risky choices made following a receipt of the larger reward (a risky win) divided by the total number of larger rewards (wins) obtained. Lose–shift scores were calculated as the proportion of small/certain choices made following a nonrewarded risky choice (risky loss) over the total number of nonrewarded choice trials (losses). These scores were analyzed together using a two-way ANOVA, with outcome type (win–stay or lose–shift), and treatment as two within-subject factors. Changes in win–stay/lose–shift behavior indexed changes in reward and negative-feedback sensitivity, respectively. In addition, response latencies, locomotion (photobeam breaks made during a session), and the number of trial omissions were analyzed with one or two-way repeated-measures ANOVAs as appropriate.
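The win–stay and lose–shift scores defined above can be sketched as follows, assuming each free-choice trial is recorded as a (choice, rewarded) pair; this is an illustration of the stated definitions, not the authors' code:

```python
def win_stay_lose_shift(trials):
    """trials: ordered list of (choice, rewarded) pairs, choice in
    {'risky', 'small'}. Returns (win-stay ratio, lose-shift ratio),
    with None where no qualifying outcomes occurred."""
    wins = losses = win_stay = lose_shift = 0
    for (prev_choice, prev_rewarded), (cur_choice, _) in zip(trials, trials[1:]):
        if prev_choice != "risky":
            continue  # both scores are conditioned on a preceding risky choice
        if prev_rewarded:            # a risky "win"
            wins += 1
            win_stay += cur_choice == "risky"
        else:                        # a risky "loss"
            losses += 1
            lose_shift += cur_choice == "small"
    return (win_stay / wins if wins else None,
            lose_shift / losses if losses else None)
```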

Results

Top-down, but not bottom-up, mOFC–BLA interactions mediate flexible decision-making

Bilateral mOFC inactivation causes a uniform increase in risky choice during probabilistic discounting, irrespective of how reward probabilities change, whereas BLA inactivation reduces risky choice (Ghods-Sharifi et al., 2009; Stopper et al., 2014). To determine how communication within mOFC → BLA and BLA → mOFC pathways influences probabilistic choice, two separate cohorts of rats were infused with an inhibitory hM4D(Gi)-DREADD or empty vectors either in the mOFC or the BLA, with roughly half of the mOFC → BLA cohort trained on the descending (100–6.25%) variant and half on the ascending (6.25–100%) variant. Rats then received counterbalanced infusions of CNO and vehicle into the downstream structure (BLA or mOFC, respectively) to inhibit terminal activity in either mOFC → BLA or BLA → mOFC pathways.

mOFC → BLA circuits

Data from 24 rats (n = 14, descending variant) with acceptable placements and viral expression in both regions were included (Figs. 1B, 2A). Inhibition of mOFC projections to the BLA impaired adjustments in choice bias, manifesting as alterations in risky choice that depended on how reward probabilities varied across the session. A three-way ANOVA of the choice data produced a significant treatment–task interaction (F(1,23) = 35.09, p < 0.001) and a trending treatment–task–block interaction (F(4,92) = 2.42, p = 0.054). In partitioning the two-way interaction by task variant (descending vs ascending), we observed that under control conditions, rats trained on the descending variant initially displayed a strong bias for the large/risky option but gradually shifted choice away from this option as reward probabilities decreased. Suppressing mOFC inputs to the BLA led to an increased proportion of risky choices across the session (F(1,13) = 23.93, p < 0.001; Fig. 3A, left). Conversely, rats trained on the ascending variant initially preferred the small/certain option when reward odds were low and began to choose the large/risky one more often as its profitability increased over blocks. mOFC → BLA inhibition again disrupted shifts in choice bias, which in this instance presented as fewer risky choices in subsequent blocks relative to control conditions (main effect of treatment, F(1,9) = 16.05, p = 0.003; Fig. 3A, right). In both instances, inhibiting mOFC → BLA circuits hindered adjustments in choice bias away from the option preferred at the start of the session. With respect to other performance measures, there were no significant differences across treatments in locomotion (vehicle [mean ± SEM] = 1,697 ± 222; CNO = 1,562 ± 240), trial omissions (vehicle = 4.5 ± 1.7; CNO = 3.8 ± 1.1), or decision-making latencies (vehicle = 1.0 ± 0.2 s; CNO = 0.9 ± 0.1 s; all Fs < 2.26, all ps > 0.15).

Figure 2.

Viral expression maps. Representative fluorescent image of hM4Di-mCherry expression in cell body regions (top), terminals (middle), and schematic of coronal sections of the rat brain showing aggregate mCherry labeling (as a proxy of DREADD expression) combined across all rats (bottom) in groups where we targeted circuits linking (A) mOFC → BLA, (B) BLA → mOFC, (C) mOFC → PL, or (D) PL → mOFC. For the top and middle panels, blue is DAPI, and for bottom panels, numbers by each plate correspond to millimeters from bregma.

Figure 3.

Chemogenetic inhibition of mOFC → BLA signals impaired adjustments in choice biases. A, Percentage choice of the large/risky option following vehicle control conditions and CNO infusions in the BLA, to inhibit mOFC terminals. This increased risky choice in rats trained on the descending task (left) but reduced it in those trained on the ascending task (right). B, Win–stay, lose–shift data. Inhibiting mOFC inputs to BLA reduced lose–shift behavior in rats tested on the descending task variant, but not the ascending one. Win–stay behavior was unaffected. C, Small-stay analyses revealed that disrupting mOFC → BLA signals increased repetitive selection of the small reward in rats trained on the ascending, but not descending, task variant. D, Intra-BLA CNO infusions in rats expressing mCherry as a control did not affect choice. E, Inhibiting BLA inputs to mOFC did not affect choice. F, Intra-mOFC CNO infusions also did not affect choice in mCherry control animals. For this and all other figures, stars denote p < 0.05 versus vehicle averaged across all blocks (main effect of treatment), hashtags denote p < 0.05 relative to vehicle for a respective treatment, and error bars indicate SEM.

Subsequent analyses probed how these disruptions in flexible decision-making related to alterations in how risky choice outcomes influenced subsequent choices. Under control conditions, animals showed a strong bias toward following a rewarded risky choice with another one (win–stay behavior), whereas they shifted choice on 40–50% of trials when a risky selection did not pay off (lose–shift). For rats trained on the descending variant, the analysis of these data revealed a significant treatment–outcome type (win–stay/lose–shift) interaction (F(1,13) = 7.77, p = 0.02). This was driven by a reduction in lose–shift responses following mOFC → BLA inhibition (F(1,13) = 16.40, p = 0.001; Fig. 3B) with no change in win–stay behavior (F(1,13) = 0.01, p = 0.91). In contrast, for rats trained on the ascending variant, even though suppression of mOFC inputs to the BLA reduced risky choice, this was not accompanied by any reliable changes in win–stay or lose–shift tendencies (all Fs < 1.6, all ps > 0.25).

This lack of effect on win or loss sensitivity when large/risky reward probabilities were initially low and then increased was curious, as superficially, it might imply that mOFC → BLA communication does not always influence how the outcomes of recent choices shape subsequent ones. To explore this further, we conducted a supplementary "small-stay" analysis, examining how this manipulation affected the tendency to repeat selections of the small/certain reward. These ratios were computed by dividing the number of free-choice trials on which a rat followed one small/certain choice with another by the total number of small rewards obtained on these trials. Analysis of these data unveiled another asymmetry between the different task conditions. Suppressing mOFC → BLA inputs did not alter small-stay behavior in rats trained on the descending variant (F(1,13) = 0.70, p > 0.40; Fig. 3C, left bars). In contrast, in the ascending condition, the small/certain option initially had greater relative utility. Here, disrupting mOFC → BLA communication impaired shifts in bias away from the small/certain option as the value of the large/risky one increased, as reflected by an enhanced tendency to repeat small/certain choices (F(1,9) = 11.41, p < 0.01; Fig. 3C, right bars).
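These trial-history ratios can be made concrete with a short sketch. The function below is illustrative only — the trial-record format and names are assumptions, not the authors' analysis code. It computes win–stay and lose–shift ratios from consecutive risky-choice outcomes, and the small-stay ratio as the proportion of small/certain choices that were followed by another.

```python
# Hypothetical sketch of the trial-history ratios described above.
# Assumes a per-trial record of (choice, rewarded) pairs for consecutive
# free-choice trials; the small/certain option is always rewarded when chosen.

def outcome_ratios(trials):
    """Compute win-stay, lose-shift, and small-stay ratios.

    trials: list of (choice, rewarded) pairs, where choice is
    'risky' or 'small' and rewarded is True/False.
    """
    win_stay = lose_shift = small_stay = 0
    risky_wins = risky_losses = small_rewards = 0
    for (prev_choice, prev_rewarded), (next_choice, _) in zip(trials, trials[1:]):
        if prev_choice == 'risky' and prev_rewarded:
            risky_wins += 1
            win_stay += next_choice == 'risky'    # repeat risky after a win
        elif prev_choice == 'risky' and not prev_rewarded:
            risky_losses += 1
            lose_shift += next_choice == 'small'  # shift to certain after a loss
        elif prev_choice == 'small':
            small_rewards += 1
            small_stay += next_choice == 'small'  # repeat the small/certain choice
    safe = lambda n, d: n / d if d else float('nan')
    return {'win_stay': safe(win_stay, risky_wins),
            'lose_shift': safe(lose_shift, risky_losses),
            'small_stay': safe(small_stay, small_rewards)}
```

As in the text, each ratio conditions only on trials where the relevant outcome occurred (risky wins, risky losses, or small rewards, respectively).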

In comparison to the effects of intra-BLA CNO infusions in DREADD-expressing rats, similar infusions in animals expressing only mCherry (n = 12, 8 trained on the descending variant) had no effect on choice (Fig. 3D) or any other performance measures (all Fs < 1.0, all ps > 0.40). Collectively, these findings suggest that activity within the mOFC → BLA pathway facilitates modifications in decision biases by surveying changes in the relative value of rewards that differ in magnitude and probability. However, the outcome-related information used by these circuits to detect changes in value can vary, depending on which option is initially deemed more valuable.

BLA → mOFC

Data from 10 rats (all descending odds) with acceptable expression of the hM4D(Gi)-DREADD in the BLA and infusions within the mOFC were included (Figs. 1C, 2B). In contrast to the marked effects of disrupting top-down, mOFC → BLA signaling, perturbing bottom-up, BLA → mOFC signals had no effect on choice (all Fs < 1.0, all ps > 0.40; Fig. 3E). These treatments actually caused a slight reduction in choice latencies (vehicle = 0.9 s ± 0.1, CNO = 0.7 s ± 0.1), but this effect only approached statistical significance (F(1,9) = 4.12, p = 0.07). Moreover, other performance measures were unaffected by disrupting BLA inputs to the mOFC (locomotion, vehicle = 1,099 ± 125, CNO = 1,136 ± 161; trial omissions, vehicle = 3.7 ± 1.6, CNO = 3.5 ± 1.5; all Fs < 0.2, all ps > 0.60). Similarly, intra-mOFC infusions in rats expressing only mCherry (n = 8, all descending) had no effect on choice (Fig. 3F) or other performance measures (all Fs < 1.0, all ps > 0.35). Thus, BLA projections interfacing with the mOFC do not appear to play an integral role in shaping risk/reward decision biases in well-trained rats.

mOFC and PL interactions mediating risk/reward decision-making

Bilateral inactivation of the mOFC and the adjacent PL medial PFC induces distinct alterations in probabilistic discounting, with the latter perturbing flexible adjustments in choice (St. Onge and Floresco, 2010). These two regions are reciprocally connected (Vertes, 2004; Hoover and Vertes, 2011), yet there have been no studies examining how corticocortical communication in this circuit may regulate cognition or reward-seeking.

mOFC → PL

Data from 20 rats (n = 9 trained on the descending variant) were included (Figs. 1D, 2C). Disruption of mOFC signals to the PL induced suboptimal decision-making profiles, characterized by disadvantageous choice patterns across the different probability blocks and a flattening of the discounting curve (Fig. 4A, left). The most pertinent result from the choice data analysis was a treatment–block interaction (F(4,76) = 13.20, p < 0.001) in the absence of main effects of task variant (F(1,18) = 1.23, p = 0.28) or a three-way interaction (F(4,72) = 0.95, p = 0.44). This indicates that mOFC → PL inhibition induced similar disruptions in decision-making, irrespective of how reward probabilities transitioned over the session (100–6.25% or 6.25–100%; Fig. 4A, right). Partitioning the treatment–block interaction confirmed the impression made by visual inspection of the data that disrupting mOFC → PL communication reduced risky choice during the higher-probability 100 and 50% blocks but, conversely, increased risky choice during the 12.5% block (all Fs > 9.11, ps < 0.007). The reduction in risky choice in the 100% block of rats trained on the descending variant may reflect a disruption in "resetting" their choice bias away from the risky option that emerged during the 6.25% block at the end of the previous training session. Notably, this markedly suboptimal decision profile resulted in rats earning fewer reward pellets across the session (vehicle, 137 ± 2; CNO, 128 ± 3; t(19) = 2.16, p = 0.02).
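For orientation, the advantage of the risky option in each probability block can be expressed in expected-value terms. The reward magnitudes below (four pellets for the risky option vs one for the certain option) are assumed for illustration, as they are not restated in this section:

```python
# Expected-value sketch of the probabilistic discounting task.
# The 4-pellet risky vs 1-pellet certain magnitudes are assumed for
# illustration; the block probabilities follow the descending variant.
RISKY_PELLETS, CERTAIN_PELLETS = 4, 1            # assumed magnitudes
blocks = [1.0, 0.5, 0.25, 0.125, 0.0625]         # descending odds

def risky_is_advantageous(p):
    """True when the large/risky option has the higher expected value."""
    return p * RISKY_PELLETS > CERTAIN_PELLETS

advantage = {p: risky_is_advantageous(p) for p in blocks}
# Under these assumed magnitudes, risky choice pays off on average only
# in the 100% and 50% blocks; at 25% the options are equivalent, and in
# the lower-probability blocks the small/certain option is superior.
```

This is why reduced risky choice in the 100 and 50% blocks and increased risky choice in the 12.5% block both count as disadvantageous, and why the CNO group earned fewer pellets overall.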

Figure 4.

Disrupting mOFC → PL communication induced stochastic and suboptimal choice profiles during probabilistic discounting. A, Left, Percent choice of the large/risky option following vehicle and CNO infusions, collapsed across task variants. Inhibition of mOFC inputs to the PL caused a significant reduction in risky choice during the 100 and 50% blocks and increased risky choice during the 12.5% block. Right, Insets show data from rats performing the "descending" (top) and "ascending" (bottom) task variant conditions. B, Win–stay/lose–shift data. Disrupting mOFC → PL communication caused significant reductions in win–stay tendencies. However, lose–shift ratios did not differ when collapsed across the entire session. C, Partitioning lose–shift ratios by task phase revealed increased sensitivity to losses when large/risky reward odds were better (50–25%) and reduced sensitivity when odds were worse (12.5–6.25%). D, Intra-PL CNO infusions did not affect choice in mCherry control animals. Daggers denote p < 0.05 versus vehicle at a particular probability block.

We then probed how these impairments were related to alterations in how risky choice outcomes influenced subsequent action selection. Analysis of win–stay/lose–shift data produced a treatment–outcome interaction (F(1,19) = 6.53, p = 0.02), driven in part by reduced win–stay behavior following mOFC → PL inhibition (F(1,19) = 8.67, p = 0.008, Fig. 4B). However, the overall lose–shift ratios for the entire session were unaltered (F(1,19) = 0.77, p = 0.39). Thus, disrupting mOFC → PL communication negatively impacted decision-making in part by rendering rats less sensitive to the volatility of reward probabilities, suggestive of an impairment in integrating action/outcome history (particularly regarding wins) to optimally bias risky choice. This was driven primarily by reduced risky choice in the higher-probability blocks, when the risky option was advantageous, as few “wins” occurred in the lower probability ones.

On the other hand, the increase in risky choice in the lower probability blocks is seemingly at odds with the lack of effect on lose–shift tendencies, at least when averaged over the entire session. To further interrogate this effect, an exploratory analysis partitioned lose–shift ratios across different phases of the session, based on choices made when reward odds were "better" (50 and 25% blocks) or "worse" (12.5 and 6.25% blocks), as we have done previously (Bercovici et al., 2023). This evenly split the blocks in which rats could experience losses, as risky choices were always rewarded during the 100% block. These lose–shift values were analyzed with a two-way ANOVA with treatment and task phase (better vs worse odds) as factors. Analyzing these data in this manner showed that under control conditions, rats were relatively unlikely to shift choice after a loss when the odds were "better," as they had learned that despite the occasional loss, it was still more advantageous to play risky during this phase of the session. Conversely, lose–shift tendencies were much more prominent during the "worse" odds blocks. Inhibiting mOFC → PL communication abolished this differential sensitivity to nonrewarded actions, as lose–shift behavior hovered at ∼50% for both advantageous and disadvantageous phases of the task. This was confirmed statistically by a significant treatment–phase interaction (F(1,19) = 14.28, p = 0.001), wherein mOFC → PL inhibition increased lose–shift behavior when the odds were better (F(1,19) = 5.07, p = 0.036) and reduced it when they were worse (F(1,19) = 8.67, p = 0.008, Fig. 4C).
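A minimal sketch of this phase-partitioned lose–shift analysis, assuming an illustrative per-trial record format (not the authors' analysis code):

```python
# Sketch of the exploratory phase-partitioned lose-shift analysis.
# Losses can only occur in the 50-6.25% blocks, split into 'better'
# (50, 25%) and 'worse' (12.5, 6.25%) odds phases as in the text.
BETTER = {0.5, 0.25}

def lose_shift_by_phase(trials):
    """trials: list of (block_p, choice, rewarded) tuples for consecutive
    free-choice trials; returns the lose-shift ratio for each phase."""
    counts = {'better': [0, 0], 'worse': [0, 0]}   # [shifts, losses]
    for (p, choice, rewarded), (_, next_choice, _) in zip(trials, trials[1:]):
        if choice != 'risky' or rewarded:
            continue                                # only risky losses count
        phase = 'better' if p in BETTER else 'worse'
        counts[phase][1] += 1                       # a loss in this phase
        counts[phase][0] += next_choice == 'small'  # ...followed by a shift
    return {k: (s / n if n else float('nan')) for k, (s, n) in counts.items()}
```

Under control conditions this ratio would be low in the "better" phase and high in the "worse" phase; the reported effect of mOFC → PL inhibition is that both converge toward ∼50%.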

Despite the marked disruption in optimal choice behavior, mOFC → PL inhibition had no reliable effect on other performance variables, including decision latencies (vehicle = 0.7 ± 0.1, CNO = 0.7 ± 0.1), locomotion (vehicle = 2,045 ± 240, CNO = 1,965 ± 234), or omissions (vehicle = 1.1 ± 0.5, CNO = 1.1 ± 0.4; all Fs < 1.22, all ps > 0.28). These null effects suggest that the impairments in decision-making cannot easily be attributed to more generalized motivational impairments. Relatedly, these effects are unlikely to be driven by deficits in reward magnitude discrimination, as disconnection of mOFC outputs to the nucleus accumbens induced a similar choice profile on probabilistic discounting but did not affect preference for larger versus smaller rewards delivered with 100% probability (Jenni et al., 2022). Likewise, PL inactivation also does not disrupt simpler reward magnitude discriminations (St. Onge and Floresco, 2010). Furthermore, intra-PL infusion of CNO had no effect on choice (Fig. 4D) or other performance measures in rats expressing only mCherry (n = 11 rats, 7 descending; all Fs < 0.60, all ps > 0.40). Collectively, these data indicate that functional mOFC → PL circuits promote optimal probabilistic decisions by conveying information about recently rewarded actions and nuancing the impact of nonrewarded ones, interpreting the relevance of recent losses within the broader context of reward history.

PL → mOFC

Data were analyzed from 24 rats (n = 15 descending task) with acceptable placements in both regions (Fig. 1D). Inhibition of PL → mOFC circuitry caused a subtle but reliable increase in risky choice [Fig. 5A, left; main effect of treatment (F(1,22) = 8.29, p = 0.009)], irrespective of how reward probabilities changed over a session [main effect of task (F(1,22) = 0.62, p = 0.44); treatment–block interaction (F(4,88) = 0.35, p > 0.80); Fig. 5A, right insets]. Inspection of the choice profile across the descending and ascending task conditions showed that PL → mOFC inhibition increased risky choice most prominently during the early phases of the task, irrespective of whether reward probabilities were high or low during these phases (Fig. 5A, insets). Although the three-way interaction only approached statistical significance (F(4,88) = 2.14, p = 0.08), we ran separate exploratory analyses comparing differences in average risky choice during the early (first–second) vs late (third–fifth) blocks of trials. This analysis confirmed that for all rats, irrespective of task conditions, PL → mOFC inhibition increased average risky choice in the early blocks (t(14) = 3.36, p = 0.004) but not the later ones (t(14) = 0.47, p = 0.65; Fig. 5A, insets). Thus, this increase in risky choice was most apparent at the start of the session.

Figure 5.

Disrupting PL → mOFC communication increased risky choice and reduced loss sensitivity. A, Left, Percent choice of the large/risky option following vehicle and CNO infusions into the mOFC to inhibit PL terminals. Chemogenetic inhibition of PL inputs to the mOFC increased risky choice, with data collapsed across task variants. Right, Insets show data from rats performing the "descending" (top) and "ascending" (bottom) task variants. Exploratory comparisons revealed that disrupting PL → mOFC communication preferentially increased risky choice in the first two task blocks on both task variants (daggers, p < 0.05). B, Win–stay/lose–shift data. The increase in risky choice induced by disrupting PL → mOFC communication was driven by a reduction in lose–shift tendencies. C, Intra-mOFC CNO infusions did not affect choice in mCherry control animals.

The enhanced risky choice was accompanied by a marked reduction in sensitivity to nonrewarded actions. Analysis of win–stay/lose–shift data revealed a significant treatment–outcome type interaction (F(1,23) = 5.06, p = 0.03), reflecting a reduction in lose–shift responses following PL → mOFC inhibition (F(1,23) = 5.36, p = 0.03; Fig. 5B), with no change in win–stay responding (F(1,23) = 0.32, p = 0.58).

Despite these changes in choice, PL → mOFC inhibition had no effects on other performance variables. These treatments did not affect decision latencies (vehicle = 0.8 ± 0.1, CNO = 0.8 ± 0.1), locomotion (vehicle = 1,846 ± 146, CNO = 1,749 ± 143), or omissions (vehicle = 2.5 ± 1.5, CNO = 1.2 ± 0.4; all Fs < 0.76, all ps > 0.39). In addition, infusion of CNO into the mOFC of rats expressing only mCherry (n = 10, 6 descending) again had no effect on choice (all Fs < 1.6, all ps > 0.20; Fig. 5C). Taken together, it appears that PL inputs to the mOFC promote a loss sensitivity that in turn attenuates the allure of larger rewards, with this pathway playing a more prominent role in influencing choice before information about local reward history has accumulated.

Discussion

Here, we report that distinct functional circuits linking the mOFC to the BLA and PL regulate dissociable component processes of risk/reward decision-making. mOFC → BLA circuits facilitate tracking of changes in the relative values of different options to support flexible reward-seeking. mOFC → PL circuits promote advantageous choice by relaying context-dependent information regarding wins and losses, while PL inputs to the mOFC attenuate the allure of larger yet uncertain rewards, particularly early in the decision-making sequence.

mOFC–BLA circuitry and changes in relative value

Perturbing mOFC output signals to the BLA impaired shifts in choice bias away from or toward the risky option when the probability of obtaining the larger/risky reward decreased or increased over time. These opposing changes were associated with differences in how choice outcomes influenced subsequent decisions. When reward odds started high, disrupting this circuit rendered animals less likely to shift to the small reward after losses, leading to a more persistent bias for the risky option. Conversely, when probabilities were initially low, disrupting mOFC → BLA communication increased perseverative responding toward the small/certain option preferred at the start of the session, without affecting how rats reacted after recent risky wins or losses. In this instance, rats were generally slower to notice the increasing utility of the large/risky option over trials, remaining mired in their initial bias and more likely to repeat small/certain choices. Thus, under different task conditions, animals may monitor different aspects of choice–outcome history to displace biases away from options they initially prefer—focusing either on recent nonrewarded choices or accumulating information over a broader history of risky choice outcomes. Nevertheless, these findings indicate that mOFC projections interfacing with the BLA integrate choice–outcome reward memories to detect variations in the relative value of different options (Burton et al., 2014; Lopatina et al., 2016), which in turn facilitates flexible and more profitable decisions. This idea is in keeping with human imaging studies suggesting the mOFC acts as a "choice option comparator" (Klein-Flügge et al., 2022).

The notion that mOFC → BLA circuits process reward memories to modify behavior dovetails with findings that this circuit mediates retrieval of pavlovian stimulus–outcome memories to modify ongoing behavior. Chemogenetic inactivation of mOFC → BLA projections abolishes the ability of outcome-specific pavlovian stimuli to invigorate instrumental responding (Lichtenberg et al., 2021). Likewise, this circuit also enables retrieval of state-dependent value information following outcome devaluation (Malvaez et al., 2019). These data, combined with the present findings, suggest that this circuit promotes efficient reward-seeking by identifying higher value targets when behavior may be directed by external pavlovian stimuli, internal memories of motivational state, or choice–outcome reward history.

In contrast to top-down mOFC → BLA circuits, disrupting bottom-up BLA → mOFC pathways in well-trained animals did not alter risk/reward decisions. This null effect is comparable to a previous report that disconnection of ascending BLA signals to the PL also did not alter probabilistic discounting. Rather, BLA efferents to the nucleus accumbens appear to exert a greater influence over these decisions, promoting choice of larger, uncertain rewards (St. Onge et al., 2012; Bercovici et al., 2018). In a similar vein, chemogenetic perturbation of BLA → mOFC communication did not affect the expression of pavlovian-to-instrumental transfer when contingencies were acquired prior to test, nor did it disrupt retrieval of sensory-specific outcome devaluation memories that guide goal-directed choice (Lichtenberg et al., 2021). However, these manipulations did cause more generalized reductions in pavlovian approach following sensory-specific devaluation (Lichtenberg et al., 2021). Thus, BLA → mOFC communication may be more important for selective modification of pavlovian responses upon changes in the perceived value of specific expected rewards following shifts in motivational state (hunger-to-satiety). In comparison, these ascending pathways play less of a role in influencing volitional choices guided by stimulus- or action–outcome reward memories, at least once these contingencies have been learned. With this in mind, ascending BLA signals to the OFC or other PFC regions may play a more prominent role during the initial learning of reward contingencies, as the BLA is necessary for reward value learning (Parkes and Balleine, 2013) and disconnection of the BLA and OFC slowed acquisition of a rat gambling task (Zeeb and Winstanley, 2013).
Alternatively, situations involving threats may recruit ascending BLA–frontal lobe circuits more readily (Sotres-Bayon et al., 2012), given the arguably more prominent contribution of the BLA to aversively motivated behaviors (e.g., Dalton et al., 2025). Additional studies are required to better characterize the functional significance of this pathway.

It is of interest to compare the contribution of mOFC–BLA circuits to flexible risk/reward decisions with other studies examining PL–BLA circuits (St. Onge et al., 2012; Jenni et al., 2017). Disconnecting descending PL inputs to the BLA induced similar, inflexible patterns of choice during probabilistic discounting. Notably, PL → BLA disconnection more uniformly altered how recent risky choice outcomes influenced subsequent action selection. In comparison, silencing mOFC → BLA inputs impeded the detection of changes in the relative value of options that animals initially preferred in their starting state. Thus, rather than acting as redundant circuits, these parallel descending corticoamygdala pathways may facilitate flexibility by processing complementary information. The PL may provide the BLA with information about recent, one-trial-back outcomes associated with risky choices, while the mOFC may provide a broader survey of variations in relative value across different task states (Sharpe et al., 2019).

mOFC and PL circuitry

Neurophysiological studies in primates/humans have revealed that activity in the mOFC and pregenual anterior cingulate (i.e., PL) represents reward value in a similar manner, and a recent theoretical synthesis by Rolls (2023) proposed that value-related signals from the mOFC are transmitted to the PL to guide action selection. In keeping with this idea, we observed that disrupting signals from the mOFC to the PL markedly perturbed risk/reward decisions, leading to stochastic and suboptimal choice profiles. Under control conditions, animals established an initial bias toward one option that shifted toward the other as they transitioned across task probability "states." Yet, depriving the PL of value-related mOFC signals perturbed both the initial establishment of an optimal choice bias and subsequent transitions across task states. Animals were less adept at using action–outcome information to estimate reward probabilities and bias choice toward higher utility options, playing risky less often when probabilities were higher and more often when the odds were slim; in essence, animals were lost in state.

The more random choice profile induced by inhibiting mOFC → PL communication may be interpreted to reflect a generalized impairment in estimation of the relative risk of obtaining larger rewards. On this task, the primary means through which animals estimate these risks is by tracking variations in decision outcomes over time, processes which can be assessed to a certain degree by examining changes in the influence these outcomes have on subsequent choice biases. In this regard, disrupting mOFC → PL circuits also induced a variety of interesting perturbations in these processes, one of which was a reduction in win–stay behavior. Thus, when reward receipt is uncertain, one function of mOFC → PL circuitry may be to enhance the impact of rewarded actions so that they are repeated. However, we also found this circuit processes information about losses in a context-dependent manner, related to the relative profitability of risky options. Under control conditions, a risky loss shifted choice to the small/certain option on ∼30% of trials in the higher (50–25%) probability blocks, as it was more profitable to disregard this loss and play risky in this context. Conversely, when reward odds were worse (12.5–6.25%), even though animals made fewer risky choices overall, when they did, a risky loss here had a much greater impact, shifting choice on ∼80% of these trials. Suppressing mOFC → PL signals abolished this differential sensitivity to losses, increasing lose–shifts when reward probabilities were higher, and vice versa. Thus, rather than always interpreting nonrewarded actions as an indicator to direct choice toward other options, mOFC → PL circuits appear to frame losses within a broader context of reward history, differentially weighing their impact on subsequent choice (disregard a recent loss or attend to it).
Notably, optogenetic silencing of PL activity time-locked to risky losses induced similar, differential changes in lose–shift behavior, whereas silencing activity concurrent with risky wins reduced win–stay tendencies in rats performing a similar discounting task (Bercovici et al., 2023). The present data suggest that the mOFC integrates outcome-related information to estimate the profitability (or risk) of different options. In turn, transmission of these value-related signals to the PL aids in establishing and then transitioning across task states by placing rewards and their omissions in the broader context of reward history.

With respect to other systems these circuits may interface with, both the mOFC and PL are connected with the mediodorsal thalamus; however, inactivation of this nucleus does not affect choice on this task (Stopper and Floresco, 2014). On the other hand, similar disruptions in probabilistic discounting were observed following disconnection of the mOFC and nucleus accumbens, suggesting parallel output pathways from mOFC to prefrontal and striatal regions may anchor and stabilize task states, allowing for context-appropriate patterns of choice (Jenni et al., 2022).

In a separate experiment, we targeted the reverse, PL → mOFC pathway, and this yielded a subtle but significant increase in risky choice. Interestingly, of all of the mOFC circuits targeted in this study, this effect was the one that most closely resembled that of bilateral mOFC inactivation on its own (Stopper et al., 2014). However, in contrast to the effects of inactivating all mOFC neurons indiscriminately, the increase in risky choice induced by suppressing PL inputs to this region was associated with a reduced sensitivity to losses (rather than enhanced reward sensitivity, as in our previous study). This suggests that PL activity highlights recent losses, a notion supported by human imaging studies revealing increased activation of pregenual cingulate when anticipated rewards are not obtained (Rolls et al., 2008). Thus, recent loss information processed by the PL may be transmitted to the mOFC as a type of feedback signal which is integrated to adjust estimates of the relative value of larger/risky rewards, tempering the urge to pursue them. Additional analyses revealed this increased risky choice was most prominent during the early phases of the task. Thus, communication in PL → mOFC circuits appears to exert a greater influence on action selection when information about local reward history is relatively sparse, requiring access to more remote memories about which options may initially have more utility. This may be adaptive, in that when there is relatively little information about reward availability in an environment, this corticocortical circuit may increase the tendency to "play safe." Yet, once more information has accumulated, value-related computations by the mOFC may begin to rely less on information from the PL and instead may integrate signals from other regions [e.g., the dopamine system (Jenni et al., 2021)] and/or rely more on information generated by recurrent patterns of activity within mOFC networks.

Conclusion

OFC regions have been implicated in guiding risk/reward decisions since the original studies by Damasio and Bechara (Bechara et al., 1994, 1999, 2001). The present findings provide novel insight into the functional contribution that mOFC output and input pathways make to distinct component processes that shape these decisions. One subpopulation of mOFC projection neurons, surveying variations in the relative value of different options, interfaces with the BLA to facilitate flexible adjustments in decision biases and help a decision maker ascertain where they should be going as they transition across task states. Separate subsets of mOFC neurons interfacing with the PL help a decision maker identify where they are with respect to the particular task state, integrating choice–outcome reward history to compare relative values and estimate the relative risks during different phases of the action sequence. Some of these value-related computations by the mOFC are in turn facilitated by incoming signals from the PL, highlighting when actions are not rewarded. An important question that remains pertains to the specific microcircuitry of these connections, clarifying whether separate mOFC cells project to either the BLA or PL or send collateral projections to both regions. Relatedly, whether these functions of mOFC circuits observed in male rats generalize to females remains an open question. Although few studies have probed sex differences in rodent mOFC function, there is one preliminary report that chemogenetic inhibition of the lateral OFC exerts differential effects in male versus female rats on various reversal learning tasks (Aguirre et al., 2023). Likewise, increasing noradrenergic tone in the lateral OFC facilitates optimal decisions on a rat gambling task in male, but not female, rats (Chernoff et al., 2024). These studies highlight the importance of assessing sex differences in OFC subregion modulation of different aspects of risk/reward decisions.

Previous work by our group showed that bilateral inactivation of all mOFC outputs yielded a generalized increase in risky choice, leading to an initial interpretation that this region tempers the impact that large/risky rewards exert over later decisions (Stopper et al., 2014). It is of particular interest to juxtapose this observation with our more recent ones, given that this effect was not recapitulated by more selective targeting of mOFC output circuits projecting to either the BLA, PL, dorsal, or ventral striatum (present study; Jenni et al., 2022). Instead, separate subpopulations of mOFC neurons appear to promote either flexible adjustments in choice biases or the use of outcome-related information to establish and stabilize context-appropriate decision policies. This highlights that examining how distinct frontal lobe subcircuits regulate complex cognition can uncover novel functions that might not be revealed with more generalized disruption of cortical activity. It follows that rather than prescribing specific functions to a frontal region, a more comprehensive understanding of how the mOFC and other PFC areas guide behavior may be obtained by elucidating the information processed, and behavioral impact, of different subpopulations of cortical neurons, distinguished in part by their afferent connectivity.

Footnotes

  • This work was supported by a grant from the Canadian Institutes of Health Research (CIHR; PJT-162444) to S.B.F. and an NSERC Fellowship to N.L.J.

  • The authors declare no competing financial interests.

  • Correspondence should be addressed to Stan B. Floresco at floresco@psych.ubc.ca.

SfN exclusive license.

References

  1. Aguirre C, et al. (2023) Sex-dependent contributions of ventrolateral orbitofrontal cortex and basolateral amygdala to learning under uncertainty. IBRO Neurosci Rep 15:S808. https://doi.org/10.1016/j.ibneur.2023.08.1670
  2. Bechara A, Damasio AR, Damasio H, Anderson SW (1994) Insensitivity to future consequences following damage to human prefrontal cortex. Cognition 50:7–15. https://doi.org/10.1016/0010-0277(94)90018-3
  3. Bechara A, Damasio H, Damasio AR, Lee GP (1999) Different contributions of the human amygdala and ventromedial prefrontal cortex to decision-making. J Neurosci 19:5473–5481. https://doi.org/10.1523/JNEUROSCI.19-13-05473.1999
  4. Bechara A, Dolan S, Denburg N, Hindes A, Anderson SW, Nathan PE (2001) Decision-making deficits, linked to a dysfunctional ventromedial prefrontal cortex, revealed in alcohol and stimulant abusers. Neuropsychologia 39:376–389. https://doi.org/10.1016/S0028-3932(00)00136-6
  5. Bercovici DA, Princz-Lebel O, Tse MT, Moorman DE, Floresco SB (2018) Optogenetic dissection of temporal dynamics of amygdala-striatal interplay during risk/reward decision making. eNeuro 5:ENEURO.0422-18.2018. https://doi.org/10.1523/ENEURO.0422-18.2018
  6. Bercovici DA, Princz-Lebel O, Schumacher JD, Lo VM, Floresco SB (2023) Temporal dynamics underlying prelimbic prefrontal cortical regulation of action selection and outcome evaluation during risk/reward decision-making. J Neurosci 43:1238–1255. https://doi.org/10.1523/JNEUROSCI.0802-22.2022
  7. Bradfield LA, Dezfouli A, Van Holstein M, Chieng B, Balleine BW (2015) Medial orbitofrontal cortex mediates outcome retrieval in partially observable task situations. Neuron 88:1268–1280. https://doi.org/10.1016/j.neuron.2015.10.044
  8. Bremner JD, Vythilingam M, Vermetten E, Nazeer A, Adil J, Khan S, Staib LH, Charney DS (2002) Reduced volume of orbitofrontal cortex in major depression. Biol Psychiatry 51:273–279. https://doi.org/10.1016/S0006-3223(01)01336-1
  9. Burton AC, Kashtelyan V, Bryden DW, Roesch MR (2014) Increased firing to cues that predict low-value reward in the medial orbitofrontal cortex. Cereb Cortex 24:3310–3321. https://doi.org/10.1093/cercor/bht189
  10. Chernoff CS, Hynes TJ, Schumacher JD, Ramaiah S, Avramidis DK, Mortazavi L, Floresco SB, Winstanley CA (2024) Noradrenergic regulation of cue-guided decision making and impulsivity is doubly dissociable across frontal brain regions. Psychopharmacology (Berl) 241:767–783. https://doi.org/10.1007/s00213-023-06508-2
  11. Dalton GL, Wang NY, Phillips AG, Floresco SB (2016) Multifaceted contributions by different regions of the orbitofrontal and medial prefrontal cortex to probabilistic reversal learning. J Neurosci 36:1996–4002. https://doi.org/10.1523/JNEUROSCI.3366-15.2016
  12. Dalton GL, Daly ID, Floresco SB (2025) Valence-dependent contribution by the basolateral amygdala to active but not inhibitory avoidance and reward-seeking. Behav Brain Res 484:115503. https://doi.org/10.1016/j.bbr.2025.115503
  13. Ghods-Sharifi S, St Onge JR, Floresco SB (2009) Fundamental contribution by the basolateral amygdala to different forms of decision making. J Neurosci 29:5251–5259. https://doi.org/10.1523/JNEUROSCI.0315-09.2009
  14. Goldstein RZ, Volkow ND (2011) Dysfunction of the prefrontal cortex in addiction: neuroimaging findings and clinical implications. Nat Rev Neurosci 12:652–669. https://doi.org/10.1038/nrn3119
    2. Volkow ND
    (2011) Dysfunction of the prefrontal cortex in addiction: neuroimaging findings and clinical implications. Nat Rev Neurosci 12:652–669. https://doi.org/10.1038/nrn3119 pmid:22011681
    OpenUrlCrossRefPubMed
  15. ↵
    1. Heilbronner SR,
    2. Rodriguez-Romaguera J,
    3. Quirk GJ,
    4. Groenewegen HJ,
    5. Haber SN
    (2016) Circuit-based corticostriatal homologies between rat and primate. Biol Psychiatry 80:509–521. https://doi.org/10.1016/j.biopsych.2016.05.012 pmid:27450032
    OpenUrlCrossRefPubMed
  16. ↵
    1. Hervig ME,
    2. Fiddian L,
    3. Piilgaard L,
    4. Božic T,
    5. Blanco-Pozo M,
    6. Knudsen C,
    7. Olesen SF,
    8. Alsiö J,
    9. Robbins TW
    (2020) Dissociable and paradoxical roles of rat medial and lateral orbitofrontal cortex in visual serial reversal learning. Cereb cortex 30:1016–1029. https://doi.org/10.1093/cercor/bhz144 pmid:31343680
    OpenUrlCrossRefPubMed
  17. ↵
    1. Hoover WB,
    2. Vertes RP
    (2011) Projections of the medial orbital and ventral orbital cortex in the rat. J Comp Neurol 519:3766–3801. https://doi.org/10.1002/cne.22733
    OpenUrlCrossRefPubMed
  18. ↵
    1. Jenni NL,
    2. Larkin JD,
    3. Floresco SB
    (2017) Prefrontal dopamine D1 and D2 receptors regulate dissociable aspects of risk/reward decision-making via distinct ventral striatal and amygdalar circuits. J Neurosci 37:6200–6213. https://doi.org/10.1523/JNEUROSCI.0030-17.2017 pmid:28546312
    OpenUrlAbstract/FREE Full Text
  19. ↵
    1. Jenni NL,
    2. Li YT,
    3. Floresco SB
    (2021) Medial orbitofrontal cortex dopamine D1/D2 receptors differentially modulate distinct forms of probabilistic decision-making. Neuropsychopharmacology 46:1240–1251. https://doi.org/10.1038/s41386-020-00931-1 pmid:33452435
    OpenUrlCrossRefPubMed
  20. ↵
    1. Jenni NL,
    2. Rutledge G,
    3. Floresco SB
    (2022) Distinct medial orbitofrontal–striatal circuits support dissociable component processes of risk/reward decision-making. J Neurosci 42:2743–2755. https://doi.org/10.1523/JNEUROSCI.2097-21.2022 pmid:35135853
    OpenUrlAbstract/FREE Full Text
  21. ↵
    1. Klein-Flügge MC,
    2. Bongioanni A,
    3. Rushworth MFS
    (2022) Medial and orbital frontal cortex in decision-making and flexible behavior. Neuron 110:2743–2770. https://doi.org/10.1016/j.neuron.2022.05.022
    OpenUrlCrossRefPubMed
  22. ↵
    1. Lichtenberg NT,
    2. Pennington ZT,
    3. Holley SM,
    4. Greenfield VY,
    5. Cepeda C,
    6. Levine MS,
    7. Wassum KM
    (2017) Basolateral amygdala to orbitofrontal cortex projections enable cue-triggered reward expectations. J Neurosci 37:8374–8384. https://doi.org/10.1523/JNEUROSCI.0486-17.2017 pmid:28743727
    OpenUrlAbstract/FREE Full Text
  23. ↵
    1. Lichtenberg NT,
    2. Sepe-Forrest L,
    3. Pennington ZT,
    4. Lamparelli AC,
    5. Greenfield VY,
    6. Wassum KM
    (2021) The medial orbitofrontal cortex–basolateral amygdala circuit regulates the influence of reward cues on adaptive behavior and choice. J Neurosci 41:7267–7277. https://doi.org/10.1523/JNEUROSCI.0901-21.2021 pmid:34272313
    OpenUrlAbstract/FREE Full Text
  24. ↵
    1. Lopatina N,
    2. McDannald MA,
    3. Styer CV,
    4. Peterson JF,
    5. Sadacca BF,
    6. Cheer JF,
    7. Schoenbaum G
    (2016) Medial orbitofrontal neurons preferentially signal cues predicting changes in reward during unblocking. J Neurosci 36:8416–8424. https://doi.org/10.1523/JNEUROSCI.1101-16.2016 pmid:27511013
    OpenUrlAbstract/FREE Full Text
  25. ↵
    1. MacLaren D,
    2. Browne R,
    3. Shaw J,
    4. Radhakrishnan S,
    5. Khare P,
    6. Espana R,
    7. Clark S
    (2016) Clozapine N-Oxide administration produces behavioral effects in long-evans rats: implications for designing DREADD experiments. eNeuro 3:ENEURO.0219-16.2016. https://doi.org/10.1523/ENEURO.0219-16.2016 pmid:27822508
    OpenUrlAbstract/FREE Full Text
  26. ↵
    1. Mahler SV,
    2. Vazey EM,
    3. Beckley JT,
    4. Keistler CR,
    5. McGlinchey EM,
    6. Kaufling J,
    7. Wilson SP,
    8. Deisseroth K,
    9. Woodward JJ,
    10. Aston-Jones G
    (2014) Designer receptors show role for ventral pallidum input to ventral tegmental area in cocaine seeking. Nat Neurosci 17:577–585. https://doi.org/10.1038/nn.3664 pmid:24584054
    OpenUrlCrossRefPubMed
  27. ↵
    1. Malvaez M,
    2. Shieh C,
    3. Murphy MD,
    4. Greenfield VY,
    5. Wassum KM
    (2019) Distinct cortical–amygdala projections drive reward value encoding and retrieval. Nat Neurosci 22:762–769. https://doi.org/10.1038/s41593-019-0374-7 pmid:30962632
    OpenUrlCrossRefPubMed
  28. ↵
    1. Moorman DE
    (2018) The role of the orbitofrontal cortex in alcohol use, abuse, and dependence. Prog Neuropsychopharmacol Biol Psychiatry 87:85–107. https://doi.org/10.1016/j.pnpbp.2018.01.010 pmid:29355587
    OpenUrlCrossRefPubMed
  29. ↵
    1. Noonan MP,
    2. Chau BKH,
    3. Rushworth MFS,
    4. Fellows LK
    (2017) Contrasting effects of medial and lateral orbitofrontal cortex lesions on credit assignment and decision-making in humans. J Neurosci 37:7023–7035. https://doi.org/10.1523/JNEUROSCI.0692-17.2017 pmid:28630257
    OpenUrlAbstract/FREE Full Text
  30. ↵
    1. Parkes SL,
    2. Balleine BW
    (2013) Incentive memory: evidence the basolateral amygdala encodes and the insular cortex retrieves outcome values to guide choice between goal-directed actions. J Neurosci 33:8753–8763. https://doi.org/10.1523/JNEUROSCI.5071-12.2013 pmid:23678118
    OpenUrlAbstract/FREE Full Text
  31. ↵
    1. Paxinos G,
    2. Watson C
    (2005) The rat brain in stereotaxic coordinates, Ed 5. Amsterdam: Elsevier Academic Press.
  32. ↵
    1. Pizzagalli DA,
    2. Roberts AC
    (2022) Prefrontal cortex and depression. Neuropsychopharmacology 47:225–246. https://doi.org/10.1038/s41386-021-01101-7 pmid:34341498
    OpenUrlCrossRefPubMed
  33. ↵
    1. Rolls ET
    (2023) Emotion, motivation, decision-making, the orbitofrontal cortex, anterior cingulate cortex, and the amygdala. Brain Struct Funct 228:1201–1257. https://doi.org/10.1007/s00429-023-02644-9 pmid:37178232
    OpenUrlCrossRefPubMed
  34. ↵
    1. Rolls ET,
    2. McCabe C,
    3. Redoute J
    (2008) Expected value, reward outcome, and temporal difference error representations in a probabilistic decision task. Cereb Cortex 18:652–663. https://doi.org/10.1093/cercor/bhm097
    OpenUrlCrossRefPubMed
  35. ↵
    1. Runegaard AH, et al.
    (2018) Locomotor- and reward-enhancing effects of cocaine are differentially regulated by chemogenetic stimulation of Gi-signaling in dopaminergic neurons. eNeuro 5:ENEURO.0345-17.2018. https://doi.org/10.1523/ENEURO.0345-17.2018 pmid:29938215
    OpenUrlCrossRefPubMed
  36. ↵
    1. Sharpe MJ,
    2. Stalnaker T,
    3. Schuck NW,
    4. Killcross S,
    5. Schoenbaum G,
    6. Niv Y
    (2019) An integrated model of action selection: distinct modes of cortical control of striatal decision making. Annu Rev Psychol 70:53–76. https://doi.org/10.1146/annurev-psych-010418-102824 pmid:30260745
    OpenUrlCrossRefPubMed
  37. ↵
    1. Smith KS,
    2. Bucci DJ,
    3. Luikart BW,
    4. Mahler S V
    (2021) DREADDS: use and application in behavioral neuroscience. Behav Neurosci 135:89–107. https://doi.org/10.1037/bne0000433
    OpenUrlCrossRefPubMed
  38. ↵
    1. Sotres-Bayon F,
    2. Sierra-Mercado D,
    3. Pardilla-Delgado E,
    4. Quirk GJ
    (2012) Gating of fear in prelimbic cortex by hippocampal and amygdala inputs. Neuron 76:804–812. https://doi.org/10.1016/j.neuron.2012.09.028 pmid:23177964
    OpenUrlCrossRefPubMed
  39. ↵
    1. St. Onge JR,
    2. Floresco SB
    (2010) Prefrontal cortical contribution to risk-based decision making. Cereb Cortex 20:1816–1828. https://doi.org/10.1093/cercor/bhp250
    OpenUrlCrossRefPubMed
  40. ↵
    1. St. Onge JR,
    2. Stopper CM,
    3. Zahm DS,
    4. Floresco SB
    (2012) Separate prefrontal-subcortical circuits mediate different components of risk-based decision making. J Neurosci 32:2886–2899. https://doi.org/10.1523/JNEUROSCI.5625-11.2012 pmid:22357871
    OpenUrlAbstract/FREE Full Text
  41. ↵
    1. Stopper CM,
    2. Floresco SB
    (2014) What’s better for me? Fundamental role for lateral habenula in promoting subjective decision biases. Nat Neurosci 17:33–35. https://doi.org/10.1038/nn.3587 pmid:24270185
    OpenUrlCrossRefPubMed
  42. ↵
    1. Stopper CM,
    2. Green EB,
    3. Floresco SB
    (2014) Selective involvement by the medial orbitofrontal cortex in biasing risky, but not impulsive, choice. Cereb Cortex 24:154–162. https://doi.org/10.1093/cercor/bhs297
    OpenUrlCrossRefPubMed
  43. ↵
    1. Tsuchida A,
    2. Doll BB,
    3. Fellows LK
    (2010) Beyond reversal: a critical role for human orbitofrontal cortex in flexible learning from probabilistic feedback. J Neurosci 30:16868–16875. https://doi.org/10.1523/JNEUROSCI.1958-10.2010 pmid:21159958
    OpenUrlAbstract/FREE Full Text
  44. ↵
    1. Vertes RP
    (2004) Differential projections of the infralimbic and prelimbic cortex in the rat. Synapse 51:32–58. https://doi.org/10.1002/syn.10279
    OpenUrlCrossRefPubMed
  45. ↵
    1. Wassum KM
    (2022) Amygdala-cortical collaboration in reward learning and decision making. eLife 11:e80926. https://doi.org/10.7554/eLife.80926 pmid:36062909
    OpenUrlCrossRefPubMed
  46. ↵
    1. Xie C, et al.
    (2021) Reward versus nonreward sensitivity of the medial versus lateral orbitofrontal cortex relates to the severity of depressive symptoms. Biol Psychiatry Cogn Neurosci Neuroimaging 6:259–269. https://doi.org/10.1016/j.bpsc.2020.08.017
    OpenUrlPubMed
  47. ↵
    1. Zeeb FD,
    2. Winstanley CA
    (2013) Functional disconnection of the orbitofrontal cortex and basolateral amygdala impairs acquisition of a rat gambling task and disrupts animals’ ability to alter decision-making behavior after reinforcer devaluation. J Neurosci 33:6434–6443. https://doi.org/10.1523/JNEUROSCI.3971-12.2013 pmid:23575841
    OpenUrlAbstract/FREE Full Text
Medial Orbitofrontal, Prefrontal, and Amygdalar Circuits Support Dissociable Component Processes of Risk/Reward Decision-Making
Nicole L. Jenni, Debra A. Bercovici, Stan B. Floresco
Journal of Neuroscience 16 April 2025, 45 (16) e2147242025; DOI: 10.1523/JNEUROSCI.2147-24.2025

Keywords

  • amygdala
  • anterior cingulate
  • chemogenetics
  • decision-making
  • orbitofrontal cortex
  • risk

Copyright © 2025 by the Society for Neuroscience.
JNeurosci Online ISSN: 1529-2401