Methods for reducing interference in the Complementary Learning Systems model: Oscillating inhibition and autonomous memory rehearsal

Department of Psychology, Princeton University, Green Hall, Princeton, NJ 08544, USA.
Neural Networks (Impact Factor: 2.71). 12/2005; 18(9):1212-28. DOI: 10.1016/j.neunet.2005.08.010
Source: PubMed


The stability-plasticity problem (i.e. how the brain incorporates new information into its model of the world while preserving existing knowledge) has been at the forefront of computational memory research for several decades. In this paper, we critically evaluate how well the Complementary Learning Systems theory of hippocampo-cortical interactions addresses the stability-plasticity problem. We identify two major challenges for the model: finding a learning algorithm for cortex and hippocampus that enacts selective strengthening of weak memories and selective punishment of competing memories; and preventing catastrophic forgetting in the case of non-stationary environments (i.e. when items are temporarily removed from the training set). We then discuss potential solutions to these problems: First, we describe a recently developed learning algorithm that leverages neural oscillations to find weak parts of memories (so they can be strengthened) and strong competitors (so they can be punished), and we show how this algorithm outperforms other learning algorithms (CPCA Hebbian learning and Leabra) at memorizing overlapping patterns. Second, we describe how autonomous re-activation of memories (separately in cortex and hippocampus) during REM sleep, coupled with the oscillating learning algorithm, can reduce the rate of forgetting of input patterns that are no longer present in the environment. We then present a simple demonstration of how this process can prevent catastrophic interference in an AB-AC learning paradigm.
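The core intuition of the oscillating learning algorithm can be illustrated with a toy sketch. This is a minimal illustration, not the published algorithm: the unit dynamics, the `oscillating_update` function, and all parameter values below are assumptions chosen for clarity. The idea it demonstrates is the one from the abstract: when inhibition swings above baseline, weak parts of the target memory drop out of the active pattern and are strengthened; when inhibition swings below baseline, strong competitors pop into activity and are punished.

```python
import math

def oscillating_update(target_w, competitor_w, steps=200, lr=0.01):
    """Toy sketch of theta-like oscillating inhibition (illustrative only).

    Each unit is reduced to a single scalar "strength"; a unit counts as
    active when its strength exceeds the current inhibition level.
    """
    for t in range(steps):
        # Inhibition oscillates around a baseline of 1.0 (theta-like cycle).
        inhibition = 1.0 + 0.5 * math.sin(2 * math.pi * t / 20)
        if inhibition > 1.0:
            # High-inhibition phase: a target unit that falls out of the
            # active pattern is a weak part of the memory -> strengthen it.
            if target_w < inhibition:
                target_w += lr
        else:
            # Low-inhibition phase: a non-target unit that becomes active
            # is a strong competitor -> punish (weaken) it.
            if competitor_w > inhibition:
                competitor_w -= lr
    return target_w, competitor_w

# A weak target unit and a strong competitor unit:
tw, cw = oscillating_update(0.8, 1.2)
# Qualitatively: the target's strength rises, the competitor's falls.
```

The key design point the sketch preserves is that the oscillation itself does the credit assignment: neither "weak target" nor "strong competitor" needs to be labeled in advance, because each is revealed by how the unit's activity changes as inhibition is swept above and below its baseline level.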

    • "Another account of FMT is that theta oscillations might have systematic effects on strong and weak representations in a neural network (Norman et al., 2005, 2006a, 2007). Inspired by the close link between inhibitory interneurons and theta oscillations (Buzsáki, 2002) and findings that LTP and long-term depression (LTD) operate at different phases of theta oscillations (Huerta and Lisman, 1996; Hyman et al., 2003), Norman et al. (2005, 2006a, 2007) developed a learning algorithm in which the strength of inhibition oscillates at theta rhythm such that weak target memories are strengthened and strong competitors are suppressed. According to the model, theta oscillations reflect oscillating levels of inhibition that bias the competition between representations in a network. "
    ABSTRACT: Neural oscillations in the theta band (4-8 Hz) are prominent in the human electroencephalogram (EEG), and many recent electrophysiological studies in animals and humans have implicated scalp-recorded frontal midline theta (FMT) in working memory and episodic memory encoding and retrieval processes. However, the functional significance of theta oscillations in human memory processes remains largely unknown. Here, we review studies in humans and animals examining how scalp-recorded FMT relates to memory behaviors, as well as its possible neural generators. We also discuss models of the functional relevance of theta oscillations to memory processes and suggest promising directions for future research.
    NeuroImage 08/2013; 85. DOI:10.1016/j.neuroimage.2013.08.003 · 6.36 Impact Factor
    • "In this model, the sign of synaptic plasticity changes during the different phases of the theta cycle, leading to a learning dynamic that "stress tests" representations and reinforces weak elements, while also differentiating them from nearby competitors. This work was an important precursor to the computational modeling of memory equalization during sleep stages in Norman et al. (2005) (discussed above). "
    ABSTRACT: This paper reviews the fate of the central ideas behind the complementary learning systems (CLS) framework as originally articulated in McClelland, McNaughton, and O'Reilly (1995). This framework explains why the brain requires two differentially specialized learning and memory systems, and it nicely specifies their central properties (i.e., the hippocampus as a sparse, pattern-separated system for rapidly learning episodic memories, and the neocortex as a distributed, overlapping system for gradually integrating across episodes to extract latent semantic structure). We review the application of the CLS framework to a range of important topics, including the following: the basic neural processes of hippocampal memory encoding and recall, conjunctive encoding, human recognition memory, consolidation of initial hippocampal learning in cortex, dynamic modulation of encoding versus recall, and the synergistic interactions between hippocampus and neocortex. Overall, the CLS framework remains a vital theoretical force in the field, with the empirical data over the past 15 years generally confirming its key principles.
    Cognitive Science A Multidisciplinary Journal 12/2011; 38(6). DOI:10.1111/j.1551-6709.2011.01214.x · 2.59 Impact Factor
    • "Because the Norman and O'Reilly (2003) cortical model only has feedforward connections (i.e., it lacks feedback connections from the upper layer to the lower layer, and it lacks recurrent connections within layers), the model lacks the ability to fill in missing pieces of well-learned input patterns. To remedy these deficits, we have explored a variant of the model that includes feedback and recurrent connections and uses a new learning algorithm [the Oscillating Learning Algorithm; Norman et al. (2005, 2006)]. The new version of the model has much better storage capacity than the original version, and it also has the ability to fill in missing pieces of stored patterns after extensive exposure to those patterns. "
    ABSTRACT: We describe how the Complementary Learning Systems neural network model of recognition memory (Norman and O'Reilly (2003) Psychol Rev 110:611-646) can shed light on current debates regarding hippocampal and cortical contributions to recognition memory. We review simulation results illustrating three critical differences in how (according to the model) hippocampus and cortex contribute to recognition memory, all of which derive from the hippocampus' use of pattern-separated representations. Pattern separation makes the hippocampus especially well-suited for discriminating between studied items and related lures; it makes the hippocampus especially poorly suited for computing global match; and it imbues the hippocampal ROC curve with a Y-intercept > 0. We also describe a key boundary condition on these differences: When the average level of similarity between items in an experiment is very high, hippocampal pattern separation can fail, at which point the hippocampal model will start to behave like the cortical model. We describe the implications of these simulation results for extant debates over how to describe hippocampal versus cortical contributions and how to measure these contributions.
    Hippocampus 11/2010; 20(11):1217-27. DOI:10.1002/hipo.20855 · 4.16 Impact Factor