Neural Networks

Volume 18, Issue 9, November 2005, Pages 1212-1228

2005 Special issue
Methods for reducing interference in the Complementary Learning Systems model: Oscillating inhibition and autonomous memory rehearsal

https://doi.org/10.1016/j.neunet.2005.08.010

Abstract

The stability–plasticity problem (i.e. how the brain incorporates new information into its model of the world, while at the same time preserving existing knowledge) has been at the forefront of computational memory research for several decades. In this paper, we critically evaluate how well the Complementary Learning Systems theory of hippocampo–cortical interactions addresses the stability–plasticity problem. We identify two major challenges for the model: Finding a learning algorithm for cortex and hippocampus that enacts selective strengthening of weak memories, and selective punishment of competing memories; and preventing catastrophic forgetting in the case of non-stationary environments (i.e. when items are temporarily removed from the training set). We then discuss potential solutions to these problems: First, we describe a recently developed learning algorithm that leverages neural oscillations to find weak parts of memories (so they can be strengthened) and strong competitors (so they can be punished), and we show how this algorithm outperforms other learning algorithms (CPCA Hebbian learning and Leabra) at memorizing overlapping patterns. Second, we describe how autonomous re-activation of memories (separately in cortex and hippocampus) during REM sleep, coupled with the oscillating learning algorithm, can reduce the rate of forgetting of input patterns that are no longer present in the environment. We then present a simple demonstration of how this process can prevent catastrophic interference in an AB–AC learning paradigm.

Introduction

Over the past several decades, neural theorists have converged on the idea that neocortex implements an internal, predictive model of the structure of the environment. This internal model must simultaneously maintain previously learned information and integrate new information. The problem of how to accomplish these goals simultaneously in a neural network architecture was labeled the stability–plasticity dilemma by Carpenter and Grossberg (1988), and this problem has come to occupy a central position in computational neuroscience. The problem is hard to solve because, in most neural network models, memory traces overlap with one another. As such, learning new memories will incrementally degrade pre-existing memories. Several researchers have found that, when new learning is extensive (e.g. if the system has to memorize a new pattern based on a single learning trial), neural networks can show near-complete forgetting of pre-existing knowledge (catastrophic interference; French, 1999, French, 2003, McCloskey and Cohen, 1989).

There have been several attempts to solve this problem, e.g. Adaptive Resonance Theory (Carpenter & Grossberg, 2003). In this paper, we focus on another framework for addressing stability–plasticity: The Complementary Learning Systems (CLS) model (McClelland et al., 1995, O'Reilly and Norman, 2002, O'Reilly and Rudy, 2001, Norman and O'Reilly, 2003). This model posits that cortex solves stability–plasticity with the assistance of a hippocampal system that can rapidly memorize events and play them back to cortex in an ‘off-line’ fashion. In Section 1.2, we describe the basic properties of CLS, and how it is meant to solve stability–plasticity.

We also briefly review some of the many ways in which CLS has been applied to episodic memory and animal learning data. However, while CLS has proved to be a very useful way of thinking about hippocampal and cortical learning processes, in recent years we have identified some issues with the model that we want to address:

  • The first issue involves finding a suitable algorithm for adjusting synapses in cortex and the hippocampus. Some of the learning algorithms that have been used in CLS implementations (e.g. CPCA Hebbian learning: Norman and O'Reilly, 2003, O'Reilly and Munakata, 2000) adjust synapses more than is necessary and, as such, show unacceptably high levels of interference. Other learning rules that have been used in CLS implementations (e.g. Leabra; O'Reilly & Munakata, 2000) are less prone to this problem, but have other problems of their own (e.g. both Leabra and CPCA Hebbian learning have difficulty in modeling data on how competitors are punished during retrieval).

  • The second issue involves the problem of non-stationary environments: What happens when patterns that were originally in the training set are removed from the training set? Even with the hippocampus and cortex working together, the standard form of the CLS model shows unacceptably high rates of forgetting of patterns once they are removed from the training set. This problem needs to be addressed before the CLS model can be viewed as a complete solution to the stability–plasticity problem.

In this paper, we present solutions to both of these problems:

  • In section 2, we describe a new learning algorithm developed by Norman, Newman, Detre, and Polyn (2005) that leverages regular oscillations in feedback inhibition to pinpoint weak parts of target memories (so they can be strengthened) and to pinpoint non-target memories that compete with target memories during retrieval (so they can be weakened). We show that the oscillating learning algorithm, applied to our cortical network, outperforms both CPCA Hebbian learning and Leabra on a pattern completion task. We also show that the oscillating learning algorithm's capacity for supporting familiarity discrimination greatly exceeds the capacity of the Hebbian cortical model from Norman and O'Reilly (2003).

  • In section 3, we show how the CLS model can be supplemented by a new kind of off-line learning where cortex and hippocampus separately rehearse stored memories, thereby repairing damage to these memories. We argue that this off-line learning reflects the functionality of REM sleep, and show that it can successfully prevent loss of knowledge in an AB–AC interference paradigm (where AB items are initially trained and then removed from the training set).

In summary: We will present an account of how inhibitory oscillations and off-line rehearsal of stored knowledge (during REM sleep) can both improve learning and retention. The ideas presented here apply to both hippocampus and cortex. For simplicity's sake, the simulations that we present will use the cortical model, which has a less differentiated architecture than the hippocampal model. After each simulation, we will discuss ways in which the same mechanism can be applied to the hippocampus.

The CLS framework (McClelland et al., 1995) incorporates several widely-held ideas about hippocampal and neocortical contributions to memory that have been developed over many years by many different researchers (e.g. Aggleton and Brown, 1999, Burgess and O'Keefe, 1996, Eichenbaum et al., 1994, Grossberg, 1976, Hasselmo and Wyble, 1997, Marr, 1971, McNaughton and Morris, 1987, Moll and Miikkulainen, 1997, O'Keefe and Nadel, 1978, Rolls, 1989, Scoville and Milner, 1957, Sherry and Schacter, 1987, Squire, 1992, Sutherland and Rudy, 1989, Teyler and Discenna, 1986, Treves and Rolls, 1994, Wu et al., 1996, Yonelinas, 2002). According to the CLS framework, neocortex forms the substrate of our internal model of the structure of the environment. In contrast, hippocampus is specialized for rapidly and automatically memorizing patterns of cortical activity, so they can be recalled later (based on partial cues).

The CLS framework posits that neocortex learns incrementally; each training trial results in relatively small adaptive changes in synaptic weights. These small changes allow cortex to gradually adjust its internal model of the environment in response to new information. The other key property of neocortical learning is that it assigns similar (overlapping) representations to similar stimuli. Use of overlapping representations allows cortex to represent the shared structure of events, and therefore makes it possible for cortex to generalize to novel stimuli based on their similarity to previously experienced stimuli. In contrast, hippocampus is biased to assign distinct, pattern separated representations to stimuli, regardless of their similarity. This property allows hippocampus to rapidly memorize arbitrary patterns of cortical activity without suffering catastrophic levels of interference.
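The contrast between overlapping cortical codes and pattern-separated hippocampal codes can be illustrated with a toy calculation. The sketch below is our own illustration (not one of the paper's simulations): two similar inputs are projected through a fixed random, sparse, k-winners-take-all layer, and the overlap between the resulting codes is much lower than the overlap between the inputs. All layer sizes and the sparsity level are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def kwta(x, k):
    """Crude k-winners-take-all: keep the k most active units, zero the rest."""
    out = np.zeros_like(x)
    out[np.argsort(x)[-k:]] = 1.0
    return out

def overlap(a, b):
    """Fraction of active units shared by two binary patterns."""
    return float(np.sum(a * b) / max(np.sum(a), np.sum(b)))

n_in, n_hid, k = 100, 400, 10       # illustrative layer sizes and sparsity
W = rng.normal(size=(n_in, n_hid))  # fixed random projection (a stand-in for
                                    # sparse, conjunctive hippocampal coding)

# Two inputs sharing 80% of their active features
a = np.zeros(n_in); a[:20] = 1.0
b = np.zeros(n_in); b[4:24] = 1.0

ha, hb = kwta(a @ W, k), kwta(b @ W, k)
print("input overlap:    ", overlap(a, b))    # 0.80
print("separated overlap:", overlap(ha, hb))  # typically far lower
```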

One of the key problems facing any account of stability–plasticity is how to incorporate rare (but significant) events into the cortical network. In the case of the CLS model, the incremental nature of cortical learning means that cortex can only build a memory trace strong enough to support retrieval if the stimulus is presented repeatedly.

However, infrequently-occurring events are sometimes very significant (e.g. if a pterodactyl eats your sister) and we need to be able to incorporate this information into our internal cortical model of how the world works, so we can properly generalize to new situations (e.g. future pterodactyl attacks). If the cortical network were left to its own devices, a person would have to experience several pterodactyl attacks before the cortical memory trace was strong enough to support appropriate recall and generalization. Furthermore, if the average interval between pterodactyl appearances were sufficiently long, one runs the risk that—in between appearances—interference from other memories would erode the original memory, in which case the person would be back to where they started with each new pterodactyl appearance.

The presence of the hippocampal network solves this problem. The hippocampus is specialized for rapid memorization; in a single trial, the hippocampus can latch on to the pattern of cortical activity elicited by the pterodactyl, and re-play it to cortex repeatedly until it sinks in. In this respect, hippocampus can be viewed as a ‘training trial multiplier’. Over time, hippocampally-mediated replay of pterodactyl memories is interleaved with bottom-up learning about information in the environment. As discussed by McClelland et al. (1995), this kind of interleaved training, coupled with a learning mechanism that is sensitive to prediction error, forces cortex to develop representations that reconcile the properties of rare events and more common events (because this is the only way to avoid prediction error across the entire training set).1
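The ‘training trial multiplier’ idea amounts to a training loop in which hippocampally stored patterns are interleaved with ordinary environmental trials. The schematic below is our own sketch; `cortex.train_on`, `environment.sample`, and the replay probability are illustrative assumptions rather than details of the published model.

```python
import random

def interleaved_training(cortex, hippocampal_store, environment,
                         n_trials, replay_prob=0.5):
    """Interleave bottom-up environmental trials with hippocampal replay.

    `hippocampal_store` holds patterns that were memorized in a single trial;
    replaying them lets a rare event contribute many cortical training trials,
    interleaved with ongoing experience, so the slow cortical learner can
    reconcile the rare event with its existing knowledge.
    """
    for _ in range(n_trials):
        if hippocampal_store and random.random() < replay_prob:
            pattern = random.choice(hippocampal_store)   # off-line replay
        else:
            pattern = environment.sample()               # ordinary experience
        cortex.train_on(pattern)                         # small, incremental update
```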

CLS was originally formulated as a set of high-level principles for understanding hippocampal and cortical contributions to memory. More recently, O'Reilly and Rudy (2001) and Norman and O'Reilly (2003) have developed working neural network models of hippocampus and neocortex that instantiate these principles, and these networks have been applied to modeling specific datasets.

In one application, Norman and O'Reilly (2003) implemented hippocampal and cortical networks that adhere to CLS principles, and showed how these networks (taken together) constitute a neural dual-process model of recognition memory. Learning was implemented in these simulations using a simple Hebbian rule (called instar learning by Grossberg, 1976, and CPCA Hebbian learning by O'Reilly & Munakata, 2000), whereby connections between active sending and receiving neurons are strengthened, and connections between active receiving neurons and inactive sending neurons are weakened. Norman and O'Reilly (2003) showed how the hippocampal model (using this simple Hebbian rule) can support recognition via recollection of specific studied details. The cortical model cannot support recollection of specific details from once-presented events, owing to its relatively low learning rate. However, Norman and O'Reilly (2003) showed that cortex can still support judgments of familiarity after a single study trial, based on the sharpness of representations in cortex.
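The instar/CPCA Hebbian rule described above has a standard form (O'Reilly & Munakata, 2000): each active receiving unit moves its incoming weights toward the current input pattern, which strengthens connections from active senders and weakens connections from inactive senders. A minimal sketch:

```python
import numpy as np

def cpca_hebbian_update(W, x, y, lrate=0.01):
    """CPCA Hebbian (instar) update: dW[i, j] = lrate * y[j] * (x[i] - W[i, j]).

    x : sending-unit activations, shape (n_in,)
    y : receiving-unit activations, shape (n_out,)
    W : weights from senders to receivers, shape (n_in, n_out)

    For an active receiver, weights from active senders are pulled up toward 1
    and weights from inactive senders are pulled down toward 0.
    """
    return W + lrate * y[np.newaxis, :] * (x[:, np.newaxis] - W)
```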

The cortical model's ability to support familiarity discrimination is a simple consequence of Hebbian learning and inhibitory competition. When a stimulus is presented, Hebbian learning tunes a subset of the hidden units to respond more strongly to that stimulus. As these units respond more and more strongly to the stimulus, they start to inhibit other units. Thus, the neural response to a stimulus transitions from a diffuse overall response (where no units are tuned to respond strongly to the stimulus) to a more focused response where some units are strongly active and other units are suppressed. In the Norman and O'Reilly (2003) paper, cortical familiarity was operationalized in terms of the activation of the k most active units in the hidden layer (where k is a model parameter that defines the maximum number of units that are allowed to be strongly active at once), although other methods of operationalizing familiarity are possible.
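Operationalizing familiarity as the activation of the k most active hidden units takes only a few lines. In the sketch below (our own illustration, not taken from the model), `hidden_acts` is the vector of hidden-unit activations after the network settles; following Hebbian tuning, studied items should yield a sharper, higher top-k score than novel items.

```python
import numpy as np

def familiarity_score(hidden_acts, k):
    """Mean activation of the k most active hidden units (higher = more familiar)."""
    return float(np.mean(np.sort(hidden_acts)[-k:]))

# Hypothetical settled activation patterns (illustrative values only):
studied = np.array([0.95, 0.90, 0.88, 0.10, 0.05, 0.02])  # sharp: a few strong winners
novel   = np.array([0.40, 0.38, 0.35, 0.33, 0.30, 0.28])  # diffuse: no strong winners
print(familiarity_score(studied, k=3))   # ~0.91
print(familiarity_score(novel, k=3))     # ~0.38
```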

Norman and O'Reilly (2003) showed how, taken together, the hippocampal network and cortical network could explain a wide range of recognition findings, including data on when hippocampal lesions affect recognition memory (as a function of how similar distractors are to studied items, and as a function of test format) and data from normal subjects on how interference manipulations affect recognition memory (e.g. list strength manipulations: how does repeatedly presenting some items on the study list affect memory for other items on the study list).

In another application, O'Reilly and Rudy (2001) used hippocampal and cortical networks instantiating CLS principles to explain findings from animal learning paradigms, including non-linear discrimination learning (e.g. negative patterning, transverse patterning), ‘transitive inference’ in discrimination learning, and contextual fear conditioning. The models in these simulations were largely identical to the models used in Norman and O'Reilly (2003), except the simulations used O'Reilly's Leabra learning rule instead of CPCA Hebbian learning. Leabra combines CPCA Hebbian learning with a simple form of error-driven learning (O'Reilly & Munakata, 2000). The key finding from these simulations was that cortex could solve non-linear discrimination problems on its own when the animal is given repeated exposure to the stimuli and appropriate feedback. In contrast, hippocampus is needed to show sensitivity to feature conjunctions on tasks where conjunctive learning is incidental (i.e. the animal does not have to learn the conjunction to respond correctly on the task) and the animal is given limited exposure to the conjunction. O'Reilly and Rudy (2001) discuss several findings that support the model's predictions.
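Leabra's weight update mixes the CPCA Hebbian term with an error-driven term computed from the difference between a ‘minus’ phase (the network's own prediction) and a ‘plus’ phase (the correct answer clamped on), in the style of contrastive Hebbian learning (O'Reilly & Munakata, 2000). The sketch below is a simplified schematic of that mixture, not the full Leabra implementation; the mixing proportion and the way phase activations are obtained are assumptions.

```python
import numpy as np

def leabra_update(W, x_minus, y_minus, x_plus, y_plus, lrate=0.01, k_hebb=0.01):
    """Schematic Leabra update: a mixture of error-driven and Hebbian learning.

    Minus phase: the network settles on its own prediction (x_minus, y_minus).
    Plus phase:  the correct output is clamped (x_plus, y_plus).
    """
    # Error-driven (contrastive Hebbian) term: plus-phase minus minus-phase coactivity.
    dW_err = np.outer(x_plus, y_plus) - np.outer(x_minus, y_minus)
    # CPCA Hebbian term, computed on the plus-phase activations.
    dW_hebb = y_plus[np.newaxis, :] * (x_plus[:, np.newaxis] - W)
    return W + lrate * ((1.0 - k_hebb) * dW_err + k_hebb * dW_hebb)
```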

Concrete applications of CLS (like those described in Norman and O'Reilly, 2003, O'Reilly and Rudy, 2001) have provided strong support for the validity of basic CLS principles (see also O'Reilly & Norman, 2002). However, the process of building working models that instantiate CLS principles has also highlighted some important challenges for the CLS framework.

One critical challenge is to develop a learning algorithm that is capable of storing an appropriately large database of knowledge (semantic knowledge, in the case of cortex, and episodic knowledge, in the case of the hippocampus). Norman and O'Reilly (2003) noted that the CPCA Hebbian learning rule used in that paper has a tendency to over-focus on prototypical features. When given a large set of correlated input patterns to memorize, the CPCA Hebbian algorithm is very good at learning what all of these patterns have in common, but it shows very poor memory for specific, non-prototypical features of individual items. This is less of a problem for the hippocampal model than for the cortical model, because of the hippocampal model's ability to assign relatively distinct representations to similar inputs. However, Norman and O'Reilly (2003) noted that the hippocampal model is still prone to ‘pattern separation collapse’ when given large numbers of overlapping patterns. When this occurs, the hippocampus recalls prototypical features in response to all input patterns (studied or non-studied).

From a psychological-modeling perspective, the mere fact that Hebbian learning over-focuses on prototypes is not problematic. Good memory for prototypes can be used to explain numerous categorization and memory phenomena (e.g. false recognition of non-studied items from studied categories; Koutstaal, Schacter, & Jackson, 1999). Also, as discussed by Norman and O'Reilly (2003), the model's tendency to forget individuating features of studied items can be used to explain memory interference effects on list learning paradigms.

However, the excessive degree of prototype-focusing exhibited by the model is more problematic. When the model is given a sufficiently large number of overlapping patterns, both the hippocampal and cortical networks exhibit virtually no memory for individuating features. In an important analysis, Bogacz and Brown (2003) set out to quantify the capacity of several different cortical models (including the Norman & O'Reilly, 2003 Hebbian cortical network) for supporting familiarity-based recognition: How many patterns can be stored in the network, in a manner that supports discrimination of studied vs. non-studied patterns? This analysis showed that, given overlapping input patterns, the capacity of the Hebbian cortical network from Norman and O'Reilly (2003) was very poor. Even in a brain-sized version of the network, the model's capacity is almost certainly not large enough to account for data on human recognition memory capacity (e.g. Standing (1973) showed that people can discriminate between thousands of studied vs. non-studied pictures, and this is an extremely conservative estimate).

The essence of the problem with CPCA Hebbian learning is that it is insufficiently judicious in how it adjusts synaptic strengths. In neural networks, each synaptic weight is involved in storing multiple memories. As such, adjusting weights to improve recall of one memory interferes with other memories that are encoded in those weights. Given that there is a cost (in terms of interference) as well as a benefit to adjusting synaptic weights, it makes sense that strengthening of weights should stop once the target memory is strong enough to support recall and generalization. Likewise, learning algorithms should only weaken non-target memories that are actively competing with recall of the target memory. Any further strengthening (of the target memory) or weakening (of non-target memories) will cause interference without improving recall. CPCA Hebbian learning fails on both counts: It strengthens synapses between co-active units even if the target memory is already strong enough to support recall, and it weakens synapses between active receiving units and all sending units that are inactive at the end of the trial, even if these units did not actively compete with recall of the target memory.

In addition to being inefficient (from a functional standpoint), CPCA Hebbian learning's inability to selectively weaken competing memories also impedes its ability to account for empirical data on competitor punishment. Over the past decade, several studies have found that memory weakening is modulated by how strongly memories compete at retrieval: Non-target memories that compete strongly with the target memory (but subsequently lose the competition to be retrieved) are punished. However, if steps are taken to mitigate competition (e.g. by increasing the specificity of the retrieval cue), there is no punishment (see Anderson, 2003 for a review of these findings; see also Norman, Newman, & Detre, 2004 for a computational model of these findings). This pattern of results has been observed in both semantic memory tasks (e.g. Blaxton & Neely, 1983) and episodic memory tasks (e.g. Anderson and Bell, 2001, Ciranni and Shimamura, 1999), suggesting that selective competitor punishment occurs in both cortex and hippocampus. However, contrary to these findings, CPCA Hebbian learning predicts that all memories that overlap with the target memory should be weakened, regardless of the amount of competition at retrieval.

As mentioned earlier, some implementations of CLS (e.g. O'Reilly & Rudy, 2001) have used O'Reilly's Leabra learning algorithm instead of CPCA Hebbian learning. Because of its ability to learn based on pattern completion error, Leabra does a much better job than CPCA Hebbian learning at retaining the individuating features of studied items. However, as discussed in Norman et al. (2004), Leabra lacks a mechanism for selectively punishing memories that compete at retrieval. The essence of this problem is that competitor activity is transient (i.e. the competitor ‘pops up’ briefly and then goes away), but Leabra is only equipped for learning about representations that are active in the final settled state of the network. As such, Leabra also fails to account for the competitor-punishment data discussed above.

Because of the issues with CPCA Hebbian learning and Leabra outlined above, we set out to derive a new learning algorithm that meets the following two desiderata:

  • Limits on strengthening: The network should only strengthen memories when they are too weak to support recall.

  • Targeted punishment: The network should only weaken memories when they actively compete with successful recall of the target memory.

These properties, taken together, should reduce interference in the cortical and hippocampal models. The second property should help the networks account for data on competitor punishment.
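Taken together, the two desiderata amount to gating weight changes on the state of retrieval rather than applying them unconditionally. The sketch below is our own schematic of that gating; the thresholds and the measures of ‘recall strength’ and ‘competitor activity’ are illustrative placeholders, not part of the algorithm described in the next section.

```python
def gated_learning(recall_strength, competitor_activity,
                   strengthen_target, punish_competitor,
                   recall_threshold=0.9, compete_threshold=0.3):
    """Apply the two desiderata as gates on learning.

    strengthen_target / punish_competitor are callbacks that perform the
    actual weight changes (e.g. Hebbian strengthening or anti-Hebbian
    weakening); this function only decides whether to invoke them.
    """
    if recall_strength < recall_threshold:
        strengthen_target()        # limits on strengthening: only weak memories
    if competitor_activity > compete_threshold:
        punish_competitor()        # targeted punishment: only active competitors
```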

Section snippets

The oscillating learning algorithm

To meet the desiderata outlined above, Norman et al. (2005) developed a new learning algorithm that selectively strengthens weak parts of target memories (vs. parts that are already strong), and selectively punishes strong competitors. The learning algorithm accomplishes this goal by oscillating the strength of feedback inhibition, and learning based on the resulting changes in activation. In this section, we first provide some background information on how inhibition was implemented in the …
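Although only the opening of this section is excerpted here, the principle is the one stated in the Introduction: raise and lower feedback inhibition around its normal level and learn from the resulting changes in activation. The sketch below is our own reconstruction of that logic under assumed interfaces (`net.settle`, `net.strengthen`, and `net.weaken` are hypothetical), not the published implementation: units that drop out of the target pattern when inhibition is raised are treated as weak target features and strengthened, while units that intrude when inhibition is lowered are treated as competitors and weakened.

```python
def oscillation_trial(net, input_pattern, lrate=0.01):
    """One schematic trial of learning driven by an inhibitory oscillation.

    `net.settle(pattern, inhibition=...)` is assumed to return a binary vector
    of hidden-unit activity at the given level of feedback inhibition; the
    three inhibition levels used here are illustrative.
    """
    act_normal = net.settle(input_pattern, inhibition=1.0)   # baseline retrieval
    act_high   = net.settle(input_pattern, inhibition=1.5)   # raised inhibition
    act_low    = net.settle(input_pattern, inhibition=0.5)   # lowered inhibition

    # Weak parts of the target: active at baseline but lost under high
    # inhibition -> strengthen them.
    weak_target = act_normal * (1 - act_high)
    net.strengthen(units=weak_target, pattern=input_pattern, lrate=lrate)

    # Competitors: inactive at baseline but popping up under low
    # inhibition -> weaken them.
    competitors = act_low * (1 - act_normal)
    net.weaken(units=competitors, lrate=lrate)
```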

Model of memory protection during REM

The preceding section focused on problems with the learning rules used by CLS models, and how these problems might be addressed using the oscillating learning algorithm. However, there are other, deeper issues with the CLS framework that cannot be addressed simply by changing the learning rule. In this section, we discuss the problem of non-stationary environments: How does the network maintain a representation of stimuli that temporarily drop out from the training set? We discuss how existing …
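Only the opening of this section is excerpted here, but the abstract and the Introduction describe the core idea: during simulated REM sleep the network re-activates stored memories on its own, with no external input, and applies the oscillating learning rule to whatever pattern it settles into, repairing damage caused by interference. Below is a minimal schematic of one such off-line rehearsal epoch; the function and attribute names are assumptions, and `oscillation_trial` refers to the sketch in the previous section.

```python
import numpy as np

def rem_rehearsal_epoch(net, n_dreams, lrate=0.005):
    """Schematic off-line (REM-like) rehearsal: no environmental input.

    The network is seeded with random activity, allowed to settle into one of
    its stored attractors, and the settled pattern is then treated as the
    target for an oscillating-inhibition learning trial. Repeated over many
    'dreams', this preferentially refreshes memories that are still strong
    enough to be re-activated, slowing their forgetting once the corresponding
    stimuli have dropped out of the environment.
    """
    for _ in range(n_dreams):
        seed = np.random.rand(net.n_input)             # random seed activity
        recalled = net.settle(seed, inhibition=1.0)    # autonomous re-activation
        oscillation_trial(net, recalled, lrate=lrate)  # repair the recalled memory
```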

References (87)

  • D.H. Ackley et al. (1985). A learning algorithm for Boltzmann machines. Cognitive Science.
  • J.P. Aggleton et al. (1999). Episodic memory, amnesia, and the hippocampal-anterior thalamic axis. Behavioral and Brain Sciences.
  • P. Alvarez et al. (1994). Memory consolidation and the medial temporal lobe: A simple network model. Proceedings of the National Academy of Sciences, USA.
  • M.C. Anderson (2003). Rethinking interference theory: Executive control and the mechanisms of forgetting. Journal of Memory and Language.
  • M.C. Anderson et al. (2001). Forgetting our facts: The role of inhibitory processes in the loss of propositional knowledge. Journal of Experimental Psychology: General.
  • B. Ans et al. (1997). Avoiding catastrophic forgetting by coupling two reverberating neural networks. Academie des Sciences, Sciences de la vie.
  • B. Ans et al. (2000). Neural networks with a self-refreshing memory: Knowledge transfer in sequential learning tasks without catastrophic forgetting. Connection Science.
  • T.A. Blaxton et al. (1983). Inhibition from semantically related primes: Evidence of a category-specific retrieval inhibition. Memory and Cognition.
  • R. Bogacz et al. (2003). Comparison of computational models of familiarity discrimination in the perirhinal cortex. Hippocampus.
  • N. Burgess et al. (1996). Neuronal computations underlying the firing of place cells and their role in navigation. Hippocampus.
  • G. Buzsaki (1998). Memory consolidation during sleep: A neurophysiological perspective. Journal of Sleep Research.
  • G. Buzsaki (2002). Theta oscillations in the hippocampus. Neuron.
  • J.L. Cantero et al. (2003). Sleep-dependent theta oscillations in the human hippocampus and neocortex. Journal of Neuroscience.
  • G.A. Carpenter et al. (1988). The art of adaptive pattern recognition by a self-organizing neural network. Computer.
  • G.A. Carpenter et al. A self-organizing neural network for supervised learning, recognition, and prediction.
  • G.A. Carpenter et al. Adaptive resonance theory.
  • M.A. Ciranni et al. (1999). Retrieval-induced forgetting in episodic memory. Journal of Experimental Psychology: Learning, Memory, and Cognition.
  • Y. Dudai et al. (2004). Rites of passage of the engram: Reconsolidation and the lingering consolidation hypothesis. Neuron.
  • H. Eichenbaum et al. (1994). Two functional components of the hippocampal memory system. Behavioral and Brain Sciences.
  • M. Eisenberg et al. (2003). Stability of retrieved memory: Inverse correlation with trace dominance. Science.
  • R.M. French (1997). Pseudo-recurrent connectionist networks: An approach to the ‘sensitivity-stability’ dilemma. Connection Science.
  • R.M. French (1999). Catastrophic forgetting in connectionist networks: Causes, consequences and solutions. Trends in Cognitive Sciences.
  • R.M. French. Catastrophic forgetting in connectionist networks.
  • S. Gais et al. (2004). Declarative memory consolidation: Mechanisms acting during human sleep. Learning and Memory.
  • S.J. Gotts et al. (2002). The impact of synaptic depression following brain damage: A connectionist account of ‘access/refractory’ and ‘degraded-store’ semantic impairments. Cognitive, Affective, and Behavioral Neuroscience.
  • S. Grossberg (1976). Adaptive pattern classification and universal recoding. I. Parallel development and coding of neural feature detectors. Biological Cybernetics.
  • M.E. Hasselmo (1999). Neuromodulation: Acetylcholine and memory consolidation. Trends in Cognitive Sciences.
  • M.E. Hasselmo et al. (2002). A proposed function for hippocampal theta rhythm: Separate phases of encoding and retrieval enhance reversal of prior learning. Neural Computation.
  • M.E. Hasselmo et al. (1997). Free recall and recognition in a network model of the hippocampus: Simulating effects of scopolamine on human memory function. Behavioural Brain Research.
  • G.E. Hinton (1989). Deterministic Boltzmann learning performs steepest descent in weight-space. Neural Computation.
  • G.E. Hinton et al. Learning representations by recirculation.
  • G.E. Hinton et al. Learning and relearning in Boltzmann machines.
  • C. Holscher et al. (1997). Stimulation on the positive phase of hippocampal theta rhythm induces long-term potentiation that can be depotentiated by stimulation on the negative phase in area CA1 in vivo. Journal of Neuroscience.
  • D.E. Huber et al. (2003). Persistence and accommodation in short-term priming and other perceptual paradigms: Temporal segregation through synaptic depression. Cognitive Science.
  • P.T. Huerta et al. (1996). Synaptic plasticity during the cholinergic theta-frequency oscillation in vitro. Hippocampus.
  • J.M. Hyman et al. (2003). Stimulation in hippocampal region CA1 in behaving rats yields long-term potentiation when delivered to the peak of theta and long-term depression when delivered to the trough. Journal of Neuroscience.
  • M.J. Kahana (2001). Theta returns. Current Opinion in Neurobiology.
  • S. Kali et al. (2004). Off-line replay maintains declarative memories in a model of hippocampal-neocortical interactions. Nature Neuroscience.
  • W. Koutstaal et al. (1999). Perceptually based false recognition of novel objects in amnesia: Effects of category size and similarity to category prototypes. Cognitive Neuropsychology.
  • K. Louie et al. (2001). Temporally structured replay of awake hippocampal ensemble activity during rapid eye movement sleep. Neuron.
  • D. Marr (1971). Simple memory: A theory for archicortex. Philosophical Transactions of the Royal Society (London) B.
  • J.L. McClelland et al. (1995). Why there are complementary learning systems in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory. Psychological Review.
  • M. McCloskey et al. Catastrophic interference in connectionist networks.
