Abstract
Some of us fortunate enough to have published a paper in The Journal of Neuroscience in its inaugural year (1981) have been asked to write a Progressions article addressing our views on the significance of the original work and how ideas about the topic of that work have evolved over the last 40 years. These questions cannot be effectively considered without placing them in the context of the incredible growth of the overall field of neuroscience over these last four decades. For openers, in 1981, the Nobel Prize was awarded to three neuroscience superstars: Roger Sperry, David Hubel, and Torsten Wiesel. Not a bad year to launch the Journal. With this as a backdrop, I divide this Progressions article into two parts. First, I discuss our original (1981) paper describing classical conditioning in Aplysia californica, and place our results in the context of the state of the field at the time. Second, I fast forward to the present and consider some of the remarkable progress in the broad field of learning and memory that has occurred in the last 40 years. Along the way, I also reflect briefly on some of the amazing advances, both technical and conceptual, that we in neuroscience have witnessed.
Introduction
The year is 1981. Three key events come to mind. The newly invented cell phone, affectionately called a “brick” (a Vodafone VM1 weighed 5 kg), was launched. The musical chart topper in the blues world was, appropriately enough, “Start Me Up” by the Rolling Stones. And Marina Picciotto was launching her neuroscience career as a high school intern. Oh wait, one more event comes to mind: The Journal of Neuroscience was launched by the Society for Neuroscience. Happy 40th Anniversary, Journal of Neuroscience!
As an author of a paper published in the first volume of The Journal of Neuroscience (Carew et al., 1981), it is a genuine pleasure to be invited to provide a Progressions article for this anniversary year. In these articles, authors are asked to address their personal views on three interrelated issues: (1) why the original work was important, (2) the most important advances stemming from the work, and (3) how ideas about the topic have evolved since the work's publication. None of these issues can be effectively considered without placing them in the context of the incredible growth of the overall field of neuroscience. Thus, I will divide this Progressions article into two parts. In the first part, I will look back a bit and discuss our 1981 Journal of Neuroscience paper describing classical conditioning in a simple withdrawal reflex in Aplysia californica, and place those results in the context of the state of the field at the time. In the second part, I will return to the present and address some of the remarkable advances in the broad field of learning and memory that have occurred in the last 40 years. Along the way, I will take a few steps back from my own field and reflect briefly on some of the amazing advances, both technical and conceptual, that we in neuroscience have witnessed. These advances have dramatically enhanced our ability to address pressing questions of brain function, ranging from the molecular architecture of individual neurons to the interactions of complex systems of cells in higher-order processes, such as perception and mentation.
Learning and memory circa 1981
In 1981, I was a member of Eric Kandel's laboratory at Columbia University. Our research team included Terry Walters, Vince Castellucci, Bob Hawkins, and Tom Abrams. We studied a simple model system, the invertebrate marine mollusk A. californica. In prior years, we had made significant strides in understanding the cellular mechanisms underlying two ubiquitous forms of nonassociative learning, habituation and sensitization, which we found could exist across a wide range of temporal domains, lasting minutes to hours (short-term memory) to days, weeks, and even months (long-term memory) (Carew and Kandel, 1973; Pinsker et al., 1973). But our quest in the early 1980s was to examine the mechanisms underlying another form of learning commonly expressed throughout the animal kingdom: associative learning. To place this aspiration in historical context, our goal was to bring the power and utility of this experimental system to bear on achieving a mechanistic understanding of associative learning on synaptic and, ultimately, molecular levels.
A major feature of learning and memory in higher animals (including humans) is the ability to associate stimuli with either a positive or negative consequence. In our laboratory, a fundamental goal for many years was to examine whether Aplysia could exhibit perhaps the most familiar form of this type of associative learning: “classical” or “Pavlovian” conditioning. Toward that end, Terry, Eric, and I attempted to produce associative learning in a very simple reflex, the defensive siphon and gill withdrawal reflex of Aplysia initiated by tactile stimulation of the siphon. We chose this behavior because it is controlled by a numerically limited and well-analyzed neural circuit in which a significant part of the reflex pathway consists of a set of monosynaptic connections between identified sensory and motor neurons. In a nutshell, here's what we did. As the conditioned stimulus (CS), we used a light tactile stimulus to the siphon, which produces weak siphon and gill withdrawal. As the unconditioned stimulus (US), we used an electric shock to the tail, which produces a strong withdrawal reflex. Specific temporal pairing of the CS and US endowed the CS with the ability to trigger enhanced withdrawal of both the siphon and the gill. Random or unpaired presentations of the CS and US, as well as presentations of the CS or US alone, produced either no enhancement or significantly less enhancement than paired presentations. The conditioning was acquired rapidly (within 15 trials, see Fig. 1) and was retained for several days. The bottom line: we did it. We found that Aplysia were indeed capable of acquiring and retaining a simple form of classical conditioning. And where better to publish these exciting (at least to us) new results than in the new journal on the scene, The Journal of Neuroscience!
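For readers who like to see the logic of a design in code, the paired-versus-unpaired contrast at the heart of such experiments can be illustrated with a toy Rescorla-Wagner-style associative model. To be clear, this is only a hedged sketch: it is a standard textbook learning rule, not the model or analysis used in our 1981 paper, and the function name and parameters are purely illustrative.

```python
# Toy Rescorla-Wagner-style sketch of the paired vs. unpaired logic of
# classical conditioning. Purely illustrative; not a model of the Aplysia
# circuit or of the 1981 experiments themselves.

def condition(trials, paired, alpha=0.3, lam=1.0):
    """Return the associative strength V of the CS after `trials` trials.

    On a paired trial the CS is followed by the US, so V moves toward the
    asymptote `lam`; on an unpaired trial the CS occurs alone, so V is
    driven toward 0. Both cases use the same delta rule.
    """
    v = 0.0
    for _ in range(trials):
        target = lam if paired else 0.0
        v += alpha * (target - v)   # delta rule: dV = alpha * (target - V)
    return v

v_paired = condition(15, paired=True)     # CS reliably predicts the US
v_unpaired = condition(15, paired=False)  # CS never predicts the US
print(round(v_paired, 3), round(v_unpaired, 3))  # → 0.995 0.0
```

The qualitative behavior mirrors the behavioral result: only the paired schedule builds a strong CS-evoked response, while unpaired presentations leave the association flat.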
Soon thereafter, Bob, Eric, and I discovered that Aplysia were, in addition, fully capable of a rather sophisticated form of associative learning called “differential classical conditioning” (Carew et al., 1983). In this form of conditioning, two CSs are used: one (the CS+) is explicitly paired with the US, whereas the other (the CS–) is explicitly unpaired. We went on to study this phenomenon in behavioral detail; and subsequently, using microelectrodes to study the synaptic connections between identified sensory neurons and motor neurons in the reflex, Bob, Tom, Eric, and I were able to identify a potential synaptic mechanism for the differential conditioning, “activity-dependent facilitation.” This provided a powerful candidate mechanism that could account quite well for the features of behavioral conditioning exhibited by the animal (Hawkins et al., 1983). An interesting aside: Terry had recently joined Jack Byrne's laboratory in Houston as a postdoctoral fellow, and they too were very interested in cellular mechanisms of classical conditioning in Aplysia. They independently discovered an activity-dependent mechanism that was very similar to the one that Bob, Tom, Eric, and I had found. Both our laboratories decided to publish our results together in a series of three back-to-back papers (Carew et al., 1983; Hawkins et al., 1983; Walters and Byrne, 1983), providing a collegial outcome all around. And to foreshadow a point I will make later on, the cellular results from both of our laboratories provided an early example of creating a “synaptic memory” in the brain by pairing intracellular activation of a single sensory neuron with a tail-shock US.
At roughly the same time in the early 1980s, exciting progress was also being made in identifying mechanisms of learning and memory in a number of other simple invertebrate systems, including the honey bee, Hermissenda, Limax, Pleurobranchaea, and Drosophila, to name just a few (for review, see Menzel et al., 1984; Carew et al., 1984). In addition, major strides in exploring neural mechanisms of learning and memory were being achieved in a wide range of vertebrate and mammalian systems, including the pioneering work of Jim McGaugh and his colleagues at the University of California, Irvine and Dick Thompson and his research team at the University of Southern California. Space does not permit an adequate review of these and several other amazing laboratories that collectively blazed the trails for a future generation of scientists interested in brain mechanisms of learning and memory. For a comprehensive review, let me recommend Howard Eichenbaum's wonderful book, Learning and memory (Eichenbaum, 2008). Howard himself was a pioneer in this field, and he left us all too soon.
Learning and memory: fast forward 40 years
It would be impossible to address the advances in the field of learning and memory over the last four decades without first acknowledging some of the remarkable progress, both technical and conceptual, that has taken place in our overall field of neuroscience. I will mention just a few of these advances, simply to illustrate the overall progress. Also, I will mention them without attribution, as true scholarship in discussing all of these areas would result in a tome that would rival the length of The Brothers Karamazov.
Technical and conceptual advances
First, a brief personal recollection: Many years ago (20 plus), I was at an SfN meeting having a post-meeting beer with a Yale colleague and old friend, Pasko Rakic. Together, we were reflecting on the remarkable progress occurring in our respective areas in neuroscience. With his typical wry wit, Pasko opined: “Ten years ago I would not even have had the vocabulary to describe what we are seeing in our fields today.” Spot on, Pasko! And here we are now in 2021. Let's briefly consider just a few of the intellectual resources and experimental tools currently at our disposal.
• The Decade of the Brain (1990-1999): This extraordinary pronouncement by U.S. President George H. W. Bush dedicated the decade “to enhance public awareness of the benefits to be derived from brain research.” The initiative highlighted the importance of brain research in investigating questions that would directly inform our understanding of the mechanisms underlying both normal memory function and the tragic instances when those mechanisms go awry in a wide range of cognitive disorders.
• The Human Genome Project: This amazing enterprise catalyzed an international, collaborative research program dedicated to the complete mapping and understanding of all the genes of human beings.
• Brain-machine interface (BMI): BMIs provide a means of direct communication between a brain or brain region and an external device. This dynamic field was significantly propelled by critical advances in large-scale recording and stimulation techniques using electrode arrays. Used in both nonhuman primates and humans, BMIs can be directed at examining, and sometimes even repairing, cognitive or sensory-motor deficits.
• Large-scale brain imaging techniques: Beginning with critical breakthroughs in the 1990s, the use of fMRI, MEG, and other related imaging techniques has enabled extraordinary advances in our ability to study the formation and recall of human memories in real time.
• Optogenetics: Informed by the function of light-sensitive proteins in the retina, this extraordinary technique involves the use of light to control neurons that have been genetically modified to express light-sensitive ion channels.
• Transgenic innovation: Originally inspired by the power of reverse genetics in invertebrates, such as fruit flies, transgenic animals (typically mice) can have just about any gene of choice inserted into their genome.
• Single-cell RNA sequencing: The development of single-cell RNA sequencing (RNA-seq) technology has permitted exquisite resolution in the analysis of gene expression at the level of single cells.
• CRISPR: CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) provides a powerful tool for gene editing. This approach significantly facilitates the alteration of DNA sequences, thereby advancing our understanding of the function of a specific gene in a specific cell at a specific time. And a brief aside: It likely has not escaped your notice that several of the techniques described above have been deeply informed by the not-so-quiet revolution in our understanding of the molecular biology of the gene.
With apologies for leaving out your favorite innovative approach, I'll stop here. These are but a few examples of the remarkable intellectual resources and technical advances currently available to neuroscientists in the year 2021. And just imagine this list 20 years from now…. Some of these amazing techniques may well be likened to the “brick cell phones” introduced around the time that The Journal of Neuroscience was fledged in 1981. That's how fast our field is developing.
A brief reflection on the possible impact of our early work
As mentioned at the outset, a second topic suggested for a Progressions article such as this is to indicate possible important advances stemming from the original work. Upon reflection, one important theme generated by our original 1981 paper (Carew et al., 1981), coupled with the papers that followed soon thereafter both from our laboratory (Carew et al., 1983; Hawkins et al., 1983) and Jack Byrne's laboratory (Walters and Byrne, 1983), was the notion that activity-dependent mechanisms can underlie a wide range of forms of plasticity in the brain. Over the ensuing decades, activity-dependent mechanisms have been identified as playing crucial roles in a wide range of processes, including synaptic plasticity (Lisman and Spruston, 2005; Jacob et al., 2007; Bai and Suzuki, 2020), molecular processing (Steward and Worley, 2001; Wayman et al., 2008), gene expression (Flavell and Greenberg, 2008), and development (Sala et al., 2008). For a thoughtful general overview of activity-dependent plasticity from bench to bedside, see Ganguly and Poo (2013). But truth in advertising: it would be hubris to suggest that the exploration of activity-dependent processing beginning in the early 1980s “stemmed from” our early papers. What our early papers did was add another dimension to this growing area by suggesting that activity-dependent processes can provide a powerful candidate mechanism for a general form of associative learning: classical conditioning.
Let's move on to the third topic. How has the field evolved since our 1981 publication? Wow…where to start?
New horizons in learning and memory
Over the last 40 years, several exciting areas of memory research have emerged. Some arose from longstanding observations that were significantly informed by new technical approaches (such as those discussed above). Other areas simply did not previously exist as cognate domains of inquiry; they emerged only as the field matured conceptually as well as technically. I will highlight just a few of both of these types of areas to provide a sense of some of the exciting progress currently underway. I should emphasize that this is by no means a comprehensive list. I have chosen just a few areas that simply serve as exemplars of the kinds of progress currently enjoyed in the broad field of learning and memory. At the end of each section, I provide a few key references that will guide the interested reader to further relevant literature.
Sleep and memory
The area of sleep research has a long and scholarly history. For example, it has long been known that there are several stages of sleep (most notably, REM and NREM). More recently, several seminal studies have revealed important roles of REM and NREM sleep in memory storage. Typical approaches include the analysis of memory following sleep deprivation, or the analysis of the effects of specific sleep states on memory storage, retention, and/or retrieval (see Graves et al., 2001). Additional approaches use the analysis of neural activity both during training and during the various sleep states that follow training. All of these approaches have underscored the importance of sleep for memory consolidation, and several suggest that REM sleep may play a particularly important role. Model systems that have been examined in sleep and memory consolidation range from Drosophila to rodents to humans. In all cases, it is clear that sleep is not simply a passive restorative process; it also comprises an active series of states that play key roles in the consolidation of memories. For further exploration of this area, see Graves et al. (2001), Rasch and Born (2013), and Klinzing et al. (2019).
Reconsolidation
It has long been appreciated that memory takes time (Kukushkin and Carew, 2017): following a particular experience, there is a period of vulnerability during which the memory of that event can be disrupted (e.g., by distraction or some other form of interference) before it is stably stored in the brain. This period is called the “consolidation” period, after which the memory is considered stable. Furthermore, a common view of memory storage assumes that, each time a past experience is brought back to mind (a process called “retrieval”), the original memory trace is retrieved. We now know that, at least in many instances, this is incorrect. Seminal studies challenging this view show that, on retrieval, the memory is once again labile and subject to disruption (see, e.g., Nader et al., 2000). A revised view, now commonly accepted, is that the retrieved memory is “reconsolidated” (see Alberini and LeDoux, 2013). Furthermore, subsequent studies showed that, in future retrieval trials, it is this reconsolidated memory that is called into action. Finally, reconsolidation is a highly conserved process, observed in animals ranging from invertebrates and rodents to humans. An exciting current domain of inquiry in this field is the exact neural mechanisms engaged to mediate the reconsolidation process. For further information in this arena, see Nader et al. (2000), Tronson and Taylor (2007), and Alberini and LeDoux (2013).
Creating memories in the brain
Recent studies using optogenetic approaches to artificially activate specific circuits in brains (typically of rodents) show that it is now possible to induce a “memory” in the brain in the absence of experience. This is often called a “false memory” in that it is not derived from a real-world experience; rather, it is “implanted” in the brain. In one kind of study (Ramirez et al., 2013), the investigators first recorded neurons that were activated in a specific context (call it A) and labeled them with channelrhodopsin. These neurons were then later optically reactivated during fear conditioning in another context (call it B). Remarkably, when these animals were returned to the original context (A), they showed fear responses (freezing), although they never received a foot shock there. In another type of study (Vetere et al., 2019), olfactory classical conditioning was explored in transgenic mice. In this type of conditioning experiment, an odor (CS) is typically paired with a shock (US), and memory is reflected by the mice preferentially avoiding the odor that was paired with shock. But in this study, the CS and US were “created” in the brain with optogenetic stimulation of an olfactory area (CS) paired with optogenetic stimulation of a region that mediates shock (US). This process created a behaviorally expressed artificial memory that was remarkably similar to a “natural” memory produced by “real world” stimuli. This is a new and growing area in the field of learning and memory that offers significant promise for deeper insights into the circuit mechanisms engaged in memory formation. For further information and background in this fascinating area, see Loftus and Pickrell (1995), Ramirez et al. (2013), and Vetere et al. (2019).
Decision making
An exciting field that emerged in the last two decades or so is that of neuroeconomics, which builds bridges spanning the disciplines of neuroscience, economics, and psychology. Within this broad field, a theory emerged that makes important contact with several areas of learning and memory. This theory centers on the notion of a “reward prediction error” mediated by dopamine signaling in the brain (Schultz, 2016). It has greatly impacted the neuroscience of learning and decision-making. Under the broad umbrella of “reinforcement learning,” this theory makes an important distinction between “model-based” learning, which adjusts to reflect the current status of reward of an action or choice, and “model-free” forms of learning (akin to “habit” learning), in which decisions are guided mainly by previously realized outcomes (Doll et al., 2012). Several studies strongly implicate striatal dopaminergic processes in generating reward signals that deeply inform choice behavior in this system. In addition, recent studies implicate distinct brain circuitry (including orbitofrontal and striatal circuits) in mediating distinct components of the decision-making process (Groman et al., 2019). To explore this broad area further, see Glimcher (2011), Doll et al. (2012), Schultz (2016), and Groman et al. (2019).
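As a concrete anchor for the reward-prediction-error idea, the core computation can be written out as a minimal delta-rule update. This is a standard textbook formulation of model-free reinforcement learning, not code from any of the cited studies; the function name and parameter values are my own illustrative choices.

```python
# Minimal sketch of a model-free reward-prediction-error (RPE) learner.
# The value estimate of a choice is nudged by delta = reward - expected value,
# the quantity that striatal dopamine signaling is proposed to broadcast.
# Illustrative only; names and parameters are assumptions.

def update_value(v, reward, alpha=0.1):
    delta = reward - v          # reward prediction error
    return v + alpha * delta    # value moves toward the delivered reward

v = 0.0
for _ in range(100):            # repeated fully rewarded trials
    v = update_value(v, reward=1.0)
print(round(v, 2))              # → 1.0 (prediction matches reward; RPE near 0)
```

The key intuition this captures is that once the reward is fully predicted, the error term (and hence the proposed dopamine teaching signal) shrinks toward zero, so learning stops until outcomes change.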
Glia are more than glue
For years, glia have had a bad rap. Until the mid-20th century, glial cells were typically considered to be merely “glue” that held neurons together (a bit like an epoxy potting compound for our neural circuit boards). But things have dramatically changed. Recent studies show that a learning experience can induce the growth of myelin ensheathing axons in central neural circuits involved in memory formation. Specifically, these experiments reveal that neurotransmitters released from axons in circuits encoding a learning experience can give rise to the proliferation of specific kinds of glial cells (oligodendrocytes) that in turn increase myelin in those circuits (Pan et al., 2020). Other studies show that depleting a specific type of myelin in mice impairs their ability to form a spatial memory (Steadman et al., 2020). Finally, imaging studies in humans have revealed changes in the structure of specific myelinated tracts after a learning experience (Zatorre et al., 2012). All of these studies underline the importance of now considering glia in the brain as a potential form of “parallel circuitry” that collaborates with neuronal circuitry in forming and storing memories. For further exploration of this area, see Zatorre et al. (2012), Fields and Bukalo (2020), Wang et al. (2020), Pan et al. (2020), and Steadman et al. (2020).
Epigenetics
DNA plays hard to get. It is densely packed into chromatin and must be made structurally accessible to be “read.” As an example, DNA activation can require the action of histone acetyltransferases to “relax” the chromatin, making the DNA accessible for transcription, while the reverse process, histone deacetylation, brokered by histone deacetylases, can be recruited to “tighten up” the chromatin, making it less accessible for transcription. This is an example of altering the epigenetic status of a cell. Epigenetics refers to the study of phenotypic changes that do not involve alterations in the DNA sequence, but rather arise from changes that are “on top of” the genetic basis of inheritance (hence epigenetic). In the past two decades or so, a fascinating new field, that of neuro-epigenetics, has emerged (see, e.g., Campbell and Wood, 2019). An exciting dimension of this field is the exploration of how the brain encodes information to form long-lasting memories. For example, learning can result in a rapid modification of specific histones on the promoters of genes upregulated during memory formation (Zovkic et al., 2014). It is now clear that activity-dependent molecular mechanisms, such as DNA methylation, histone modification, and nucleosome remodeling, can all be engaged to coordinate the gene expression necessary for the formation of enduring memories. For further reading in this exciting arena, see Guan et al. (2002), Barrett and Wood (2008), Woldemichael et al. (2014), Zovkic et al. (2014), and Campbell and Wood (2019).
Space constraints require that I stop here. But I should hasten to add that the six emergent areas in the broad field of learning and memory that I have chosen to briefly highlight above are but a few examples of the remarkable progress in this field since the Journal's first birthday. And as luck would have it, in this year celebrating that 40-year milestone, my colleagues Anastasios Mirisis, Ashley Kopec, and I just published a paper in The Journal of Neuroscience (Mirisis et al., 2021). Never ones to procrastinate, we are already gearing up for our 2061 submission.
So let me end this Progressions article as I began. The Journal of Neuroscience, launched by the Society for Neuroscience 40 years ago, has been a major influence in the field since its inception. We are collectively indebted both to the Society and its leadership and to the terrific editors who, over the years, have kept a sure hand on the Journal's tiller since day one. So looking back and looking forward, The Journal of Neuroscience has been, and remains, a powerful engine driving innovation and advancement in our field.
And a final brief look back: cell phones are now faster and lighter, the Stones are still touring, and Marina has come a long way since high school.
Footnotes
This work was supported by National Institute of Mental Health R01 MH27 094792 to T.J.C. I thank my colleagues Nikolay Kukushkin, Paige Miranda, and Anastasios Mirisis for helpful comments on this manuscript; and two anonymous reviewers whose comments and suggestions were very helpful.
The author declares no competing financial interests.
- Correspondence should be addressed to Thomas J. Carew at tcarew{at}nyu.edu