Methods for reducing interference in the Complementary Learning Systems model: oscillating inhibition and autonomous memory rehearsal.

Department of Psychology, Princeton University, Green Hall, Princeton, NJ 08544, USA.
Neural Networks (Impact Factor: 2.08). 12/2005; 18(9):1212-28. DOI: 10.1016/j.neunet.2005.08.010
Source: PubMed

ABSTRACT: The stability-plasticity problem (i.e. how the brain incorporates new information into its model of the world while preserving existing knowledge) has been at the forefront of computational memory research for several decades. In this paper, we critically evaluate how well the Complementary Learning Systems theory of hippocampo-cortical interactions addresses the stability-plasticity problem. We identify two major challenges for the model: finding a learning algorithm for cortex and hippocampus that enacts selective strengthening of weak memories and selective punishment of competing memories; and preventing catastrophic forgetting in non-stationary environments (i.e. when items are temporarily removed from the training set). We then discuss potential solutions to these problems. First, we describe a recently developed learning algorithm that leverages neural oscillations to find weak parts of memories (so they can be strengthened) and strong competitors (so they can be punished), and we show how this algorithm outperforms other learning algorithms (CPCA Hebbian learning and Leabra) at memorizing overlapping patterns. Second, we describe how autonomous re-activation of memories (separately in cortex and hippocampus) during REM sleep, coupled with the oscillating learning algorithm, can reduce the rate of forgetting of input patterns that are no longer present in the environment. We then present a simple demonstration of how this process can prevent catastrophic interference in an AB-AC learning paradigm.
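The oscillating-inhibition idea in the abstract can be illustrated with a toy simulation. This is a hedged sketch, not the paper's actual algorithm or network: all unit counts, drive values, rates, and thresholds below are illustrative assumptions. The core mechanism is that inhibition swings around a baseline; target units that fall silent when inhibition is high are strengthened, and competitor units that become active when inhibition is low are punished.

```python
import math

# Toy sketch of oscillating-inhibition learning (illustrative parameters,
# not from the paper). Inhibition oscillates around a baseline level:
#  - high inhibition exposes weak parts of the target memory -> strengthen
#  - low inhibition exposes strong competitors -> punish
TARGET = {0, 1, 2}                      # units meant to code the current memory
# excitatory drive of each unit for the current input; unit 2 is a weak
# target part, units 3-4 are strong competitors, the rest are background
drive = [0.9, 0.8, 0.45, 0.6, 0.55] + [0.25] * 5
BASELINE = 0.5                          # normal inhibition level
LR = 0.1                                # learning rate

def oscillation_step(phase):
    inhibition = BASELINE + 0.2 * math.sin(phase)
    active = {i for i, d in enumerate(drive) if d > inhibition}
    if inhibition > BASELINE:
        # weak target units drop out under high inhibition: pull them up
        # toward just above baseline
        for i in TARGET - active:
            drive[i] += LR * (BASELINE + 0.01 - drive[i])
    elif inhibition < BASELINE:
        # competitors pop up under low inhibition: push them down
        for i in active - TARGET:
            drive[i] -= LR * drive[i]

for t in range(200):                    # 10 oscillation cycles of 20 steps
    oscillation_step(2 * math.pi * t / 20)

# after training: the weak target unit 2 has been strengthened above
# baseline, while competitors 3 and 4 have been suppressed below it
```

Strong target units and quiet background units are never visited by either branch, so only the problematic units (weak target parts and strong competitors) are adjusted, which is the selectivity the abstract describes.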

  • ABSTRACT: The prefrontal cortex exerts top-down influences on several aspects of higher-order cognition by functioning as a filtering mechanism that biases bottom-up sensory information toward a response that is optimal in context. However, research also indicates that not all aspects of complex cognition benefit from prefrontal regulation. Here we review and synthesize this research with an emphasis on the domains of learning and creative cognition, and outline how the appropriate level of cognitive control in a given situation can vary depending on the organism's goals and the characteristics of the given task. We offer a Matched Filter Hypothesis for cognitive control, which proposes that the optimal level of cognitive control is task-dependent: high levels of cognitive control are best suited to tasks that are explicit, rule-based, verbal or abstract, and that can be accomplished given the capacity limits of working memory, whereas low levels of cognitive control are best suited to tasks that are implicit, reward-based, non-verbal or intuitive, and that can be accomplished irrespective of working memory limitations. Our approach promotes a view of cognitive control as a tool adapted to a subset of common challenges, rather than an all-purpose optimization system suited to every problem the organism might encounter.
    Neuropsychologia (Impact Factor: 3.45). 11/2013.
  • ABSTRACT: Distributed connectionist networks have difficulty learning incrementally because the representations in the network overlap; it is therefore necessary to reduce the overlap of representations for incremental learning. At the same time, representational overlap gives these networks the ability to generalize. In this study, we use a modified multilayered neural network to numerically examine the trade-off between incremental learning and generalization abilities, and then propose a novel network model with structural lateral inhibition to reconcile the two. We also analyze the behavior of the proposed model using Formal Concept Analysis, which reveals that the network implements “conceptualization”: differentiation and mediation between intensional and extensional representations. This study suggests a new paradigm for the traditional question of whether representations in the brain are distributed or not.
    BioSystems (Impact Factor: 1.27). 04/2014.
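The trade-off described in the abstract above can be made concrete with a toy sketch. This is my own illustrative simplification, not the paper's model: lateral inhibition is approximated here as a hard k-winners-take-all rule, and all unit drives are made up. The point is that inhibition lets only the most strongly driven units stay active, so two similar inputs end up with non-overlapping representations.

```python
def kwta(drive, k):
    """Hard k-winners-take-all: only the k most strongly driven units fire."""
    order = sorted(range(len(drive)), key=lambda i: drive[i], reverse=True)
    return set(order[:k])

# excitatory drive of six units for two similar input patterns
a = [0.9, 0.8, 0.7, 0.6, 0.1, 0.1]
b = [0.1, 0.1, 0.7, 0.6, 0.9, 0.8]

# simple threshold activation: the two representations share units 2 and 3,
# which is the overlap that causes interference during incremental learning
on_a = {i for i, d in enumerate(a) if d > 0.5}
on_b = {i for i, d in enumerate(b) if d > 0.5}
overlap_threshold = len(on_a & on_b)

# with lateral inhibition (k = 2) the shared, more weakly driven units are
# suppressed, so the two representations no longer overlap
overlap_kwta = len(kwta(a, 2) & kwta(b, 2))
```

Eliminating the shared units removes interference but also removes the basis for generalization, which is exactly the trade-off the study examines numerically.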
  • ABSTRACT: Declarative long-term memories are not created in an instant. The gradual stabilization and temporally shifting dependence of acquired declarative memories on different brain regions, called systems consolidation, can be tracked in time by lesion experiments. The observation of temporally graded retrograde amnesia (RA) following hippocampal lesions points to a gradual transfer of memory from hippocampus to neocortical long-term memory. Spontaneous reactivations of hippocampal memories, as observed in place-cell reactivations during slow-wave sleep, are thought to drive neocortical reinstatements and facilitate this process. We propose a functional neural network implementation of these ideas and furthermore suggest an extended three-stage framework that includes the prefrontal cortex (PFC). It bridges the temporal chasm between working memory percepts on the scale of seconds and consolidated long-term memory on the scale of weeks or months. We show that our three-stage model can autonomously produce the stochastic reactivation dynamics necessary for successful episodic memory consolidation. The resulting learning system exhibits classical memory effects seen in experimental studies, such as retrograde and anterograde amnesia (AA) after simulated hippocampal lesioning; furthermore, the model reproduces notable biological findings on memory modulation, such as retrograde facilitation of memory after suppressed acquisition of new long-term memories, similar to the effects of benzodiazepines on memory.
    Frontiers in Computational Neuroscience (Impact Factor: 2.23). 07/2014; 8:64.
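The consolidation-by-reactivation dynamic described in the abstract above can be sketched in a few lines. This is a drastic, hypothetical simplification (two scalar trace strengths, made-up probabilities and rates), not the paper's three-stage network: a fast hippocampal trace stochastically replays the memory during simulated sleep, and each replay nudges a slow cortical trace, so that after enough epochs the memory survives a hippocampal lesion.

```python
import random

# Minimal consolidation sketch (illustrative parameters, not the paper's):
# stochastic hippocampal replay gradually trains a slow cortical store.
random.seed(0)

hippo = 1.0           # fast hippocampal trace, learned in one shot
cortex = 0.0          # slow cortical trace, built up by replay
P_REPLAY = 0.3        # chance per sleep epoch that the hippocampus replays
CORTICAL_LR = 0.05    # small cortical weight change per reinstatement

for epoch in range(200):              # simulated nights of sleep
    if random.random() < P_REPLAY * hippo:
        # replay reinstates the pattern in cortex, which strengthens
        # its own trace a little each time
        cortex += CORTICAL_LR * (1.0 - cortex)

hippo = 0.0                           # simulated hippocampal lesion
recall = max(hippo, cortex)           # either store can support retrieval
```

Lesioning `hippo` early in the loop would leave `cortex` near zero (retrograde amnesia for recent memories), while lesioning late leaves recall intact, which is the temporally graded pattern the abstract refers to.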

Full-text (2 sources), available from Jun 4, 2014.