Sleep promotes replay of both overlapping memory sequences during each Up state. (A) Change in synaptic weight over the entire sleep period as a function of the number of Up states in which a given synapse was preferentially replayed. Each star represents a synapse in the direction of S1 (left) or S1* (right). Dashed line indicates the threshold (66% of Up states) used to identify synapses that are replayed reliably for the analysis in (B); purple line indicates the maximum number of Up states; blue line demarcates the 50% mark of the total number of Up states. (B) Thresholded connectivity matrix indicating synaptic connections showing reliable replay for S1 (blue) or S1* (red). Grey boxes highlight between-group connections. (C) Number of replay events for inter-group synapses per Up state across all Up states (left) and a subset of Up states (right) for S1 (blue) and S1* (red).

Source publication
Article
Full-text available
Continual learning remains an unsolved problem in artificial neural networks. The brain has evolved mechanisms to prevent catastrophic forgetting of old knowledge during new training. Building upon data suggesting the importance of sleep in learning and memory, we tested a hypothesis that sleep protects old memories from being forgotten after new l...

Contexts in source publication

Context 1
... performed spike timing analysis similar to what we did for S1 alone (Figure 3), but now analyzed separately the synaptic connections in the direction of S1 and of S1*. Figure 6A plots, for each synapse in the direction of S1 (left) and S1* (right), the net change in synaptic strength across the entire sleep period vs. the total number of Up states (slow waves) in which that synapse was preferentially replayed. As before, we found a strong positive correlation. ...
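As a rough illustration of this analysis, here is a minimal numpy sketch using synthetic stand-in data (the replay matrix, weight changes, and sizes below are all hypothetical, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: a boolean matrix marking, for each synapse, the Up
# states in which it was preferentially replayed, and a net weight change
# per synapse constructed to grow with replay count.
n_synapses, n_up_states = 200, 150
replayed = rng.random((n_synapses, n_up_states)) < rng.random((n_synapses, 1))
replay_counts = replayed.sum(axis=1)
dw = 0.01 * replay_counts + rng.normal(0.0, 0.2, n_synapses)

# The quantity plotted in Figure 6A: net weight change vs. number of Up
# states with preferential replay; this toy data reproduces a strong
# positive correlation by construction.
r = np.corrcoef(replay_counts, dw)[0, 1]
print(f"Pearson r = {r:.2f}")
```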
Context 2
... before, we found a strong positive correlation. We next plotted only those synapses that replayed reliably, i.e., in more than 66% of all Up states (Figure 6B). We found that such synapses exist between all neuronal groups and for both sequences (in Figure 6B, blue indicates synapses in the direction of S1 and red in the direction of S1*). ...
Context 3
... next plotted only those synapses that replayed reliably, i.e., in more than 66% of all Up states (Figure 6B). We found that such synapses exist between all neuronal groups and for both sequences (in Figure 6B, blue indicates synapses in the direction of S1 and red in the direction of S1*). This analysis revealed two important properties. ...
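A minimal sketch of this thresholding step, assuming hypothetical synapse indices and replay counts rather than the model's actual output:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical synapse list: pre/post neuron indices and, for each synapse,
# the number of Up states in which it was preferentially replayed.
n_neurons, n_synapses, n_up_states = 50, 400, 150
pre = rng.integers(0, n_neurons, n_synapses)
post = rng.integers(0, n_neurons, n_synapses)
replay_counts = rng.integers(0, n_up_states + 1, n_synapses)

# Keep only synapses replayed reliably: in more than 66% of all Up states
# (the dashed threshold of Figure 6A).
reliable = replay_counts > 0.66 * n_up_states

# Thresholded connectivity matrix of Figure 6B: entry (i, j) is True when
# the synapse from neuron j onto neuron i replayed reliably.
C = np.zeros((n_neurons, n_neurons), dtype=bool)
C[post[reliable], pre[reliable]] = True
print(f"{reliable.sum()} of {n_synapses} synapses replay reliably")
```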
Context 4
... analysis revealed two important properties. First, after sleep, each pair of neurons preferentially supported only one sequence, S1 or S1* (note that the connectivity matrix in Figure 6B is strictly asymmetric). Second, individual neurons can be divided into two groups: those participating reliably in only one sequence replay (either S1 or S1*) and those participating in both sequence replays. ...
Context 5
... S1*) and those participating in both sequence replays (see Figure 6B, where some target neurons (X-axis) receive input from source neurons (Y-axis) in only one network 'direction', left (blue) or right (red), and others receive input from both 'directions'). ...
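The "strictly asymmetric" property described above can be stated compactly: the combined matrix of reliably replayed connections never overlaps its own transpose. A toy check, with a hypothetical 4-neuron matrix:

```python
import numpy as np

# Toy thresholded connectivity matrix combining both sequences: C[i, j] is
# True when the synapse j -> i replayed reliably (entries running in the S1
# direction correspond to blue in Figure 6B, S1* to red). Values are invented.
C = np.zeros((4, 4), dtype=bool)
C[1, 0] = True   # 0 -> 1 reliable in the S1 direction
C[2, 1] = True   # 1 -> 2 reliable in the S1 direction
C[2, 3] = True   # 3 -> 2 reliable in the S1* direction

# "Strictly asymmetric": no neuron pair is reliably replayed in both
# directions, so C never overlaps its own transpose.
assert not np.any(C & C.T)
print("each pair supports at most one sequence")
```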
Context 6
... confirm that both memories are replayed within the same Up state (i.e., some synapses replay S1 and others replay S1* during a given Up state), we counted, for each Up state, the total number of individual replay events across all synapses identified to replay S1 or S1* reliably (Figure 6C). This revealed fluctuations from one Up state to another, but the count remained high for both S1 and S1*, confirming our prediction that partial replays of both sequences occur during the same Up state, that is, any given Up state participates in replay of both memories. ...
Context 7
... revealed fluctuations from one Up state to another, but the count remained high for both S1 and S1*, confirming our prediction that partial replays of both sequences occur during the same Up state, that is, any given Up state participates in replay of both memories. Still, zooming in on the replay count diagram (Figure 6C, right) revealed an antiphase oscillation: one Up state would replay more S1 synapses, while another (commonly the next one) would replay more S1* synapses. Note that our model predicts that partial sequences (specifically spike doublets) of both memories can be replayed during the same Up state, not that both are replayed simultaneously (at the exact same time). ...
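A small synthetic sketch of this counting and the antiphase observation (the counts and constants are invented to mimic the described pattern, not taken from the model):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-Up-state replay event counts for the reliably replayed
# S1 and S1* synapses (as counted for Figure 6C): both stay high, but
# alternate in anti-phase from one Up state to the next.
n_up = 100
base = rng.poisson(30, n_up)
swing = 8 * (-1) ** np.arange(n_up)            # alternating component
s1_counts = base + swing + rng.poisson(3, n_up)
s1star_counts = base - swing + rng.poisson(3, n_up)

# Both memories are replayed in every Up state...
print("min counts:", s1_counts.min(), s1star_counts.min())

# ...but their fluctuations around the shared trend are anti-correlated,
# like the antiphase oscillation in the zoomed panel of Figure 6C.
r = np.corrcoef(s1_counts - base, s1star_counts - base)[0, 1]
print(f"fluctuation correlation r = {r:.2f}")  # strongly negative
```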
Context 8
... in Figure 6D, we plotted all the synapses identified by the analysis in Figure 6B, that is, those involved in reliable replay (in more than 66% of all Up states) during sleep: the top plot shows synapses in the S1 direction (blue) and the bottom one shows synapses in the S1* direction (red). For each neuron, we compared the number of such synapses it received from its left (S1 direction) vs. right (S1* direction) neighboring population (e.g., for a neuron in group B, whether it received more synapses demonstrating reliable replay from group A or from group C). ...
Context 10
... each neuron we compared the number of such synapses it received from its left (S1 direction) vs. right (S1* direction) neighboring population (e.g., for a neuron in group B, whether it received more synapses demonstrating reliable replay from group A or from group C). We then colored in blue (red) the neurons receiving more synapses demonstrating reliable replay from their left (right) neighbors (Figure 6D). In green we colored neurons receiving the same number of 'replayed' synapses from left and right. ...
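This coloring rule reduces to a three-way comparison per neuron; a minimal sketch with hypothetical input counts:

```python
import numpy as np

rng = np.random.default_rng(3)

# For each neuron: how many reliably replayed synapses arrive from its left
# (S1-direction) vs. right (S1*-direction) neighboring group. Counts are
# invented stand-ins for the analysis behind Figure 6D.
n_neurons = 20
from_left = rng.integers(0, 6, n_neurons)
from_right = rng.integers(0, 6, n_neurons)

color = np.where(from_left > from_right, "blue",            # S1-dominated
         np.where(from_right > from_left, "red", "green"))  # tie -> green
for i in range(5):
    print(f"neuron {i}: left={from_left[i]} right={from_right[i]} -> {color[i]}")
```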
Context 11
... the previous sections, we found that for overlapping memories sleep leads to segregation of the entire population of neurons into two subsets based on (a) asymmetric synaptic input from left/right neighboring groups (e.g., subset B_i of neurons from group B receives stronger total synaptic input from group A compared with the total input from group C; subset B_j of neurons from group B receives stronger input from C than from A) (Figure 5D,E); and (b) preference to participate reliably in only one specific sequence replay during sleep (e.g., subset B_k of neurons from group B receives more synapses demonstrating reliable replay from group A than from group C; this is reversed for subset B_l of neurons from group B) (Figure 6D). Here we tested whether these groups of neurons, identified by synaptic strength and by replay, overlap. ...
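The overlap test amounts to comparing two binary partitions of the same neurons; a sketch under the assumption that each criterion yields one boolean label per neuron (labels here are synthetic):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical labels: True marks neurons assigned to the S1-preferring
# subset by each criterion (synaptic strength, Figure 5D,E; reliable replay
# preference, Figure 6D), with a small synthetic disagreement rate.
by_weights = rng.random(100) < 0.5
by_replay = by_weights ^ (rng.random(100) < 0.1)

# 2x2 contingency table and raw agreement between the two partitions.
table = np.array([[np.sum(by_weights & by_replay),
                   np.sum(by_weights & ~by_replay)],
                  [np.sum(~by_weights & by_replay),
                   np.sum(~by_weights & ~by_replay)]])
agreement = (by_weights == by_replay).mean()
print(table)
print(f"agreement = {agreement:.0%}")
```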

Citations

... However, the brain continuously forms new memories, raising the question of how new memories are integrated without disrupting previously formed ones. This challenge, termed "catastrophic forgetting" in artificial neural networks (González et al., 2020; McNaughton, 2010), arises when new information overwrites existing memories. In contrast, biological systems exhibit remarkable resilience, enabling continual learning without significant loss of prior knowledge. ...
... In their model that simulated REM sleep, the neocortex was allowed to operate with no hippocampal influence, and replay focused on repairing the old, related information to allow for "graceful continual learning." Gonzalez and colleagues (2020) developed a biophysical model of thalamocortical architecture to examine how multiple competing memories can be reinstated during NREM to prevent catastrophic forgetting, and how the dynamics of REM sleep could be leveraged to rescue damaged memories from interference 173 . Together, these models suggest potential underlying mechanisms of REM sleep that refine memory representations and rescue weaker memories damaged by interference or age. ...
Article
Despite extensive evidence on the roles of non-rapid eye movement (NREM) and REM sleep in memory processing, a comprehensive model that integrates their complementary functions remains elusive due to a lack of mechanistic understanding of REM’s role in offline memory processing. We present the REM Refining and Rescuing (RnR) Hypothesis, which posits that the principal function of REM sleep is to increase the signal-to-noise ratio within and across memory representations. As such, REM sleep selectively enhances essential nodes within a memory representation while inhibiting the majority (Refine). Additionally, REM sleep modulates weak and strong memory representations so they fall within a similar range of recallability (Rescue). Across multiple NREM-REM cycles, tuning functions of individual memory traces get sharpened, allowing for integration of shared features across representations. We hypothesize that REM sleep’s unique cellular, neuromodulatory, and electrophysiological milieu, marked by greater inhibition and a mixed autonomic state of both sympathetic and parasympathetic activity, underpins these processes. The RnR Hypothesis offers a unified framework that explains diverse behavioral and neural outcomes associated with REM sleep, paving the way for future research and a more comprehensive model of sleep-dependent cognitive functions.
... However, as knowledge and tasks are provided incrementally over a lifetime, this term is often used interchangeably with LL, with many studies not making a strict distinction between the two 5,7,21 . Several biological processes in the brain serve as the basis for this capability. For example, at the macroscopic level, the states of sleep and wakefulness are controlled by the interactions of the cholinergic and histaminergic neuromodulatory systems, promoting dynamic memory storage for continuous learning 26,27 . At the mesoscopic level, metaplasticity processes mediated by neuromodulators and glial cells further enhance continuous learning by selectively modulating synaptic connections 4,11 . Similarly, dendritic spike-dependent plasticity also plays a critical role by selectively preserving and updating essential synaptic connections between neurons 28,29 . ...
Preprint
Full-text available
Recent progress in artificial intelligence (AI) has been driven by insights from neuroscience, particularly with the development of artificial neural networks (ANNs). This has significantly enhanced the replication of complex cognitive tasks such as vision and natural language processing. Despite these advances, ANNs struggle with continual learning, adaptable knowledge transfer, robustness, and resource efficiency - capabilities that biological systems handle seamlessly. Specifically, ANNs often overlook the functional and morphological diversity of the brain, hindering their computational capabilities. Furthermore, incorporating cell-type specific neuromodulatory effects into ANNs with neuronal heterogeneity could enable learning at two spatial scales: spiking behavior at the neuronal level, and synaptic plasticity at the circuit level, thereby potentially enhancing their learning abilities. In this article, we summarize recent bio-inspired models, learning rules and architectures and propose a biologically-informed framework for enhancing ANNs. Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors and dendritic compartments to simulate morphological and functional diversity of neuronal computations. Finally, we outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balances bioinspiration and complexity, and provides scalable solutions for pressing AI challenges, such as continual learning, adaptability, robustness, and resource-efficiency.
... Two critical components believed to underlie memory consolidation during sleep are spontaneous replay of memory traces and local unsupervised synaptic plasticity that restricts synaptic changes to relevant memories only. During sleep, replay of recently learned memories along with relevant old memories enables the network to form stable long-term memory representations (Rasch and Born 2013) and reduces competition between memories (González et al. 2020; Golden et al. 2022). The idea of replay has been explored in machine learning to enable continual learning. ...
Article
The performance of artificial neural networks (ANNs) degrades when training data are limited or imbalanced. In contrast, the human brain can learn quickly from just a few examples. Here, we investigated the role of sleep in improving the performance of ANNs trained with limited data on the MNIST and Fashion MNIST datasets. Sleep was implemented as an unsupervised phase with local Hebbian type learning rules. We found a significant boost in accuracy after the sleep phase for models trained with limited data in the range of 0.5-10% of total MNIST or Fashion MNIST datasets. When more than 10% of the total data was used, sleep alone had a slight negative impact on performance, but this was remedied by fine-tuning on the original data. This study sheds light on a potential synaptic weight dynamics strategy employed by the brain during sleep to enhance memory performance when training data are limited or imbalanced.
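A heavily simplified sketch of such a sleep phase, offline updates with a local Hebbian rule driven by noisy input; the network size, firing threshold, and learning rates below are illustrative assumptions, not the study's settings:

```python
import numpy as np

rng = np.random.default_rng(5)

# Weights of a one-layer readout after (hypothetical) supervised training.
n_in, n_out = 100, 10
W = rng.normal(0, 0.1, (n_out, n_in))

def sleep_phase(W, steps=1000, lr=1e-3, threshold=1.0):
    """Offline, unsupervised 'sleep': noisy input plus a local Hebbian rule."""
    W = W.copy()
    for _ in range(steps):
        x = (rng.random(n_in) < 0.05).astype(float)  # noisy spontaneous input
        fired = W @ x > threshold                    # units that "spike"
        # Local Hebbian update: potentiate inputs to units that fired,
        # mildly depress inputs to units that stayed silent.
        W[fired] += lr * x
        W[~fired] -= 0.1 * lr * x
    return W

W_after = sleep_phase(W)
print("mean |dW| =", np.abs(W_after - W).mean())
```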
... (These outputs represent the high-level sensory representations activated by hippocampal pattern completion, via return projections to the sensory cortex.) The noise input to the autoassociative network could potentially represent random activation during sleep [138][139][140] . Attributes such as reward salience might also influence which memories are replayed but were not modelled here 141 . ...
Article
Full-text available
Episodic memories are (re)constructed, share neural substrates with imagination, combine unique features with schema-based predictions and show schema-based distortions that increase with consolidation. Here we present a computational model in which hippocampal replay (from an autoassociative network) trains generative models (variational autoencoders) to (re)create sensory experiences from latent variable representations in entorhinal, medial prefrontal and anterolateral temporal cortices via the hippocampal formation. Simulations show effects of memory age and hippocampal lesions in agreement with previous models, but also provide mechanisms for semantic memory, imagination, episodic future thinking, relational inference and schema-based distortions including boundary extension. The model explains how unique sensory and predictable conceptual elements of memories are stored and reconstructed by efficiently combining both hippocampal and neocortical systems, optimizing the use of limited hippocampal storage for new and unusual information. Overall, we believe hippocampal replay training generative models provides a comprehensive account of memory construction, imagination and consolidation.
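The replay mechanism at the core of this model, noise driving an autoassociative network toward a stored pattern, can be sketched with a classic Hopfield-style network (all sizes and patterns below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

# Store three random +/-1 patterns in a Hopfield-style autoassociative net.
n = 64
patterns = np.sign(rng.normal(size=(3, n)))
W = (patterns.T @ patterns) / n   # Hebbian storage
np.fill_diagonal(W, 0)

def settle(state, steps=20):
    """Iterate the network until it settles onto an attractor."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# Random "sleep" input completes to a stored memory, which could then serve
# as a training sample for a downstream generative model.
replayed = settle(np.sign(rng.normal(size=n)))
overlaps = patterns @ replayed / n   # match to each stored memory
print("overlap with stored patterns:", np.round(overlaps, 2))
```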
... Much of this work focuses on consolidation via hippocampal replay. Prior work has proposed that replay (or similar mechanisms) can prolong memory lifetimes (Shaham et al., 2021; Remme et al., 2021), alleviate the problem of catastrophic forgetting of previously learned information (van de Ven et al., 2020; González et al., 2020; Shaham et al., 2021), and facilitate generalization of learned information (McClelland et al., 1995; Sun et al., 2021). One prior theoretical study (Roxin and Fusi, 2013), which uses replayed activity to consolidate synaptic changes from short- to long-term modules, explored how systems consolidation extends forgetting curves. ...
Preprint
Full-text available
In a variety of species and behavioral contexts, learning and memory formation recruits two neural systems, with initial plasticity in one system being consolidated into the other over time. Moreover, consolidation is known to be selective; that is, some experiences are more likely to be consolidated into long-term memory than others. Here, we propose and analyze a model that captures common computational principles underlying such phenomena. The key component of this model is a mechanism by which a long-term learning and memory system prioritizes the storage of synaptic changes that are consistent with prior updates to the short-term system. This mechanism, which we refer to as recall-gated consolidation, has the effect of shielding long-term memory from spurious synaptic changes, enabling it to focus on reliable signals in the environment. We describe neural circuit implementations of this model for different types of learning problems, including supervised learning, reinforcement learning, and autoassociative memory storage. These implementations involve learning rules modulated by factors such as prediction accuracy, decision confidence, or familiarity. We then develop an analytical theory of the learning and memory performance of the model, in comparison to alternatives relying only on synapse-local consolidation mechanisms. We find that recall-gated consolidation provides significant advantages, substantially amplifying the signal-to-noise ratio with which memories can be stored in noisy environments. We show that recall-gated consolidation gives rise to a number of phenomena that are present in behavioral learning paradigms, including spaced learning effects, task-dependent rates of consolidation, and differing neural representations in short- and long-term pathways.
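A minimal sketch of the recall-gated idea: consolidate a candidate update into the long-term store only when it agrees with the short-term trace. The task, gating rule, and rates below are illustrative simplifications, not the paper's exact model:

```python
import numpy as np

rng = np.random.default_rng(7)

d = 50
w_short = np.zeros(d)   # fast, labile store
w_long = np.zeros(d)    # slow, consolidated store
signal = rng.normal(size=d)   # reliable direction to be learned

for t in range(500):
    update = signal + 2.0 * rng.normal(size=d)  # noisy experience
    w_short += 0.1 * (update - w_short)         # short-term always learns
    # Gate: consolidate only if the update "recalls" (aligns with) the
    # short-term trace above a confidence threshold.
    if np.dot(update, w_short) > 0.5 * np.linalg.norm(w_short) ** 2:
        w_long += 0.01 * update

def alignment(w):
    """Cosine similarity of a store with the true signal direction."""
    return np.dot(w, signal) / (np.linalg.norm(w) * np.linalg.norm(signal))

print(f"short-term {alignment(w_short):.2f}, long-term {alignment(w_long):.2f}")
```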
... (These outputs represent the high-level sensory representations activated by hippocampal pattern completion, via return projections to sensory cortex.) The noise input to the autoassociative network could potentially represent random activation during sleep (González et al., 2020; Pezzulo et al., 2021; Stella et al., 2019). In other words, random inputs to the hippocampus result in the reactivation of memories, and this reactivation results in consolidation. ...
Preprint
Full-text available
Human episodic memories are (re)constructed, combining unique features with schema-based predictions, and share neural substrates with imagination. They also show systematic schema-based distortions that increase with consolidation. Here we present a computational model in which hippocampal replay (from an autoassociative network) trains generative models (variational autoencoders) in neocortex to (re)create sensory experiences via latent variable representations. Simulations using large image datasets reflect the effects of memory age and hippocampal lesions and the slow learning of statistical structure in agreement with previous models (Complementary Learning Systems and Multiple Trace Theory), but also explain schema-based distortions, imagination, inference, and continual representation learning in memory. Critically, the model suggests how unique and predictable elements of memories are stored and reconstructed by efficiently combining both hippocampal and neocortical systems, optimising the use of limited hippocampal storage. Finally, the model can be extended to sequential stimuli, including language, and multiple neocortical networks could be trained, including those with latent variable representations in entorhinal, medial prefrontal, and anterolateral temporal cortices. Overall, we believe hippocampal replay training neocortical generative models provides a comprehensive account of memory construction and consolidation.
... These changes are indexed by differences in behavior before versus after an offline period (Fig. 1b). For example, after learning a sequence of tasks, a period of sleep can repair memories that have been damaged by interference in a preceding wakeful period [19,20,1,4], an adaptive capacity that biologically detailed models of sleep have explored [7,22,31]. ...
Preprint
Full-text available
A remarkable capacity of the brain is its ability to autonomously reorganize memories during offline periods. Memory replay, a mechanism hypothesized to underlie biological offline learning, has inspired offline methods for reducing forgetting in artificial neural networks in continual learning settings. A memory-efficient and neurally-plausible method is generative replay, which achieves state of the art performance on continual learning benchmarks. However, unlike the brain, standard generative replay does not self-reorganize memories when trained offline on its own replay samples. We propose a novel architecture that augments generative replay with an adaptive, brain-like capacity to autonomously recover memories. We demonstrate this capacity of the architecture across several continual learning tasks and environments.
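Standard generative replay, which this architecture augments, can be sketched in a few lines; the class-conditional Gaussian "generator" below is a deliberately crude stand-in for a learned generative network, and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(8)

d, n_per_class = 20, 200

# Task A is learned first; a simple "generator" stores its per-feature
# mean and spread (real systems train a generative network instead).
X_a = rng.normal(+2.0, 1.0, (n_per_class, d))
gen_a = (X_a.mean(axis=0), X_a.std(axis=0))

# Later, task B arrives. Instead of training on B alone, replay pseudo-
# samples of A from the generator and train on the mixture.
X_b = rng.normal(-2.0, 1.0, (n_per_class, d))
replay_a = rng.normal(gen_a[0], gen_a[1], (n_per_class, d))
X_mix = np.vstack([X_b, replay_a])
y_mix = np.array([1] * n_per_class + [0] * n_per_class)

# A linear readout fit on the mixture keeps both tasks separable.
w = np.linalg.lstsq(X_mix, 2.0 * y_mix - 1.0, rcond=None)[0]
acc_a = np.mean((X_a @ w > 0) == 0)   # task A should land on the negative side
acc_b = np.mean((X_b @ w > 0) == 1)
print(f"task A acc {acc_a:.2f}, task B acc {acc_b:.2f}")
```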
... Catastrophic forgetting should not occur when there is no interference between the images, but as the number of overlapping pixels increases, new task training can lead to forgetting. Our studies using biophysical models of the thalamocortical network 27,28 revealed that catastrophic forgetting occurs because the network connectivity becomes dominated by the most recently learned task, so input patterns for old tasks are insufficient to activate their respective output neurons; sleep replay can redistribute network resources more equally between tasks to allow correct recall of both old and new memories. ...
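A toy illustration of the overlap argument: as two binary "images" share more active pixels, sequential training on the second degrades recall of the first. The patterns and learning rule below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(9)

n = 100
def make_pair(n_shared):
    """Two 30-pixel binary patterns sharing exactly n_shared active pixels."""
    a = np.zeros(n, dtype=bool)
    b = np.zeros(n, dtype=bool)
    idx = rng.permutation(n)
    a[idx[:30]] = True
    b[idx[30 - n_shared:60 - n_shared]] = True
    return a, b

for n_shared in (0, 10, 20, 30):
    a, b = make_pair(n_shared)
    w = a.astype(float)           # learn task A
    w = w - b                     # task B training overwrites shared pixels
    recall_a = (w[a] > 0).mean()  # fraction of A's pixels still recallable
    print(f"overlap {n_shared:2d}: recall of A = {recall_a:.2f}")
```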
... From a neuroscience perspective, sleep reduces interference by replaying activity of recently learned tasks and of old, relevant (interfering) tasks 20 . Using biophysical models of brain networks and testing them on a simplified task of learning overlapping memory sequences, we showed that sleep replay modifies the synaptic weights to create a unique synaptic representation for each task 28 . Such differential allocation of resources reduces representational overlap and therefore diminishes catastrophic forgetting. ...
... Some studies suggest net reductions of synaptic weights 32 , while others argue for a net increase 60 . Our work (extending the ideas of ref. 28) predicts that sleep replay leads to a complex reorganization of synaptic connectivity, including potentiation of some synapses and pruning of others, with the goal of increasing separation between memories. We found that sleep replay may increase the contrast between memory traces by enhancing lateral inhibition, such that activation of one memory inhibits other, similar memories to avoid interference 61 . ...
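The lateral-inhibition prediction can be illustrated with a toy computation: recalling one memory while subtracting inhibition driven by a rival memory shrinks the overlap of the recalled patterns (patterns and constants below are invented):

```python
import numpy as np

rng = np.random.default_rng(10)

# Two heavily overlapping binary memories.
n = 100
m1 = rng.random(n) < 0.3
m2 = m1.copy()
flip = rng.choice(n, 15, replace=False)
m2[flip] = ~m2[flip]

def recall(cue, rival, inhibition):
    # Drive from the cue minus lateral inhibition from the rival memory.
    drive = cue.astype(float) - inhibition * rival.astype(float)
    return drive > 0.5

for g in (0.0, 0.8):
    r1 = recall(m1, m2, g)
    r2 = recall(m2, m1, g)
    print(f"inhibition {g}: recalled overlap = {np.mean(r1 & r2):.2f}")
```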
Article
Full-text available
Artificial neural networks are known to suffer from catastrophic forgetting: when learning multiple tasks sequentially, they perform well on the most recent task at the expense of previously learned tasks. In the brain, sleep is known to play an important role in incremental learning by replaying recent and old conflicting memory traces. Here we tested the hypothesis that implementing a sleep-like phase in artificial neural networks can protect old memories during new training and alleviate catastrophic forgetting. Sleep was implemented as off-line training with local unsupervised Hebbian plasticity rules and noisy input. In an incremental learning framework, sleep was able to recover old tasks that were otherwise forgotten. Previously learned memories were replayed spontaneously during sleep, forming unique representations for each class of inputs. Representational sparseness and neuronal activity corresponding to the old tasks increased while new task related activity decreased. The study suggests that spontaneous replay simulating sleep-like dynamics can alleviate catastrophic forgetting in artificial neural networks.
... Artificial neural networks also suffer catastrophic forgetting, as training for a new task causes the network to forget how to perform previous tasks (Masse et al., 2018). Several biological and artificial neural network models employ offline processing, which mimics hippocampal replay events during sleep, to reconsolidate old memory traces of previous experiences (Kirkpatrick et al., 2017; González et al., 2020; van de Ven et al., 2020; Hayes et al., 2021; Wang et al., 2022). Thus, spontaneous activity replay may bridge biological and artificial neural network studies. ...
Article
Studying the underlying neural mechanisms of cognitive functions of the brain is one of the central questions in modern biology. Moreover, it has significantly impacted the development of novel technologies in artificial intelligence. Spontaneous activity is a unique feature of the brain and is currently lacking in many artificially constructed intelligent machines. Spontaneous activity may represent the brain's idling states, which are internally driven by neuronal networks and possibly participate in offline processing during awake, sleep, and resting states. Evidence is accumulating that the brain's spontaneous activity is not mere noise but part of the mechanisms to process information about previous experiences. A substantial body of literature has shown, with various methods and in various animals, how previous sensory and behavioral experiences influence subsequent patterns of brain activity. It seems, however, that the patterns of neural activity and their computational roles differ significantly from area to area and from function to function. In this article, I review the various forms of the brain's spontaneous activity, especially those observed during memory processing, and some attempts to model the generation mechanisms and computational roles of such activities.