Jean Erik Delanois’s research while affiliated with University of California, San Diego and other places


Publications (12)


Fig. 1: Accuracy on MNIST (A) and FMNIST (B), with mean (lines) and standard deviation (error bars) across 10 trials. X-axis: log of the relative amount of data used for training (e.g., 0.01 = 1% of the data). Blue: baseline (after ANN training); orange: baseline + sleep; green: baseline + sleep + fine-tuning. Note the significant gain in accuracy after the sleep phase in the low-data regime. The sleep phase reduced performance in the high-data regime, but this was largely recovered by fine-tuning.
Fig. 2: Confusion matrices before and after sleep for the MNIST dataset. A 3% subset of the overall MNIST dataset was used in training. The value in each cell indicates the fraction of images of a given true label that were classified with a given predicted label by the model. (A) before SRC; (B) after SRC.
Fig. 4: Imbalanced-class accuracy improvement due to sleep. Each row shows experiments with data reduction for one specific class (shown on the left), with the percentage of reduction shown on the horizontal axis. Each cell shows the class-wise accuracy of the underrepresented class before sleep (top value) and after sleep (bottom value). The color map is based on the change in accuracy, Δ = after sleep − before sleep. Reds indicate a positive difference (improvement), while blues indicate a negative difference (drop in accuracy). Note the many red squares showing class-wise improvement, with only a few blue squares showing class-wise performance loss.
Fig. 5: Heatmaps showing accuracy changes after each phase of sequential learning and SRC (columns) on MNIST (A) and FMNIST (B). In each subplot, the X-axis represents the amount of T2 training data and the Y-axis the amount of T1 training data. The rows show accuracy on T1, accuracy on T2, and the mean accuracy.
Fig. 8: MNIST (left) and FMNIST (right) spike rasters for the input and hidden layers (separated by red lines). While the generated input remains at a consistent level (left of the first red line), hidden-layer activity begins high and slightly decays over the course of sleep (right of the first red line).


Unsupervised Replay Strategies for Continual Learning with Limited Data
Preprint · October 2024 · 31 Reads

Anthony Bazhenov · Pahan Dewasurendra · [...] · Jean Erik Delanois

Artificial neural networks (ANNs) show limited performance with scarce or imbalanced training data and face challenges with continual learning, such as forgetting previously learned data after training on new tasks. In contrast, the human brain can learn continuously and from just a few examples. This research explores the impact of 'sleep', an unsupervised phase incorporating stochastic activation with local Hebbian learning rules, on ANNs trained incrementally with limited and imbalanced datasets, specifically MNIST and Fashion MNIST. We discovered that introducing a sleep phase significantly enhanced accuracy in models trained with limited data. When a few tasks were trained sequentially, sleep replay not only rescued previously learned information that had been catastrophically forgotten following new-task training but often enhanced performance on prior tasks, especially those trained with limited data. This study highlights the multifaceted role of sleep replay in augmenting learning efficiency and facilitating continual learning in ANNs.
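The sleep phase described here (noise-driven activity shaped by local Hebbian plasticity) can be illustrated with a short NumPy fragment. This is a minimal, hedged sketch of the general idea, not the authors' implementation; the function name, the thresholding scheme, and the learning-rate constants are illustrative assumptions.

```python
import numpy as np

def sleep_phase(weights, n_steps=1000, inc=1e-3, dec=5e-4, seed=0):
    """Sketch of a sleep-like unsupervised replay phase.

    weights: list of dense layer matrices [W1, W2, ...], each of shape
    (n_out, n_in), taken from a pre-trained feed-forward ANN.
    Updates are applied in place.
    """
    rng = np.random.default_rng(seed)
    for _ in range(n_steps):
        # Stochastic activation: noise-driven, spike-like binary input.
        pre = (rng.random(weights[0].shape[1]) < 0.1).astype(float)
        for W in weights:
            drive = W @ pre
            # Binary (spike-like) activations via a simple threshold.
            post = (drive > drive.mean()).astype(float)
            # Local Hebbian rule: potentiate co-active pairs, depress
            # synapses whose post-unit fired without presynaptic input.
            W += inc * np.outer(post, pre)
            W -= dec * np.outer(post, 1.0 - pre)
            pre = post
    return weights
```

A full implementation would tune the per-layer thresholds and input rates rather than use the placeholder constants above.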


Emergent effects of synaptic connectivity on the dynamics of global and local slow waves in a large-scale thalamocortical network model of the human brain

July 2024 · 120 Reads · 3 Citations

M. Gabriela Navas-Zuloaga · Burke Q. Rosen · [...]

Slow-wave sleep (SWS), characterized by slow oscillations (SOs, <1 Hz) of alternating active and silent states in the thalamocortical network, is a primary brain state during Non-Rapid Eye Movement (NREM) sleep. In the last two decades, the traditional view of SWS as a global and uniform whole-brain state has been challenged by a growing body of evidence indicating that SOs can be local and can coexist with wake-like activity. However, the mechanisms by which global and local SOs arise from micro-scale neuronal dynamics and network connectivity remain poorly understood. We developed a multi-scale, biophysically realistic human whole-brain thalamocortical network model capable of transitioning between the awake state and SWS, and we investigated the role of connectivity in the spatio-temporal dynamics of sleep SOs. We found that the overall strength and the relative balance between long- and short-range synaptic connections determined the network state. Importantly, for a range of synaptic strengths, the model demonstrated complex mixed SO states, in which periods of synchronized global slow-wave activity alternated with periods of asynchronous local slow waves. An increase in the overall synaptic strength led to synchronized global SOs, while a decrease in synaptic connectivity produced only local slow waves that did not propagate beyond local areas. These results were compared to human data to validate probable models of biophysically realistic SOs. The model producing mixed states provided the best match to the spatial coherence profile and the functional connectivity estimated from human subjects. These findings shed light on how the spatio-temporal properties of SOs emerge from local and global cortical connectivity and provide a framework for further exploring the mechanisms and functions of SWS in health and disease.
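The central manipulation here (changing the overall strength and the long- vs. short-range balance of connectivity) reduces, in code, to rescaling a distance-dependent weight matrix. Below is a hedged sketch; the cutoff and gain values are illustrative assumptions, not the model's parameters.

```python
import numpy as np

def scale_connectivity(weights, positions, cutoff_mm=10.0,
                       global_gain=1.0, long_range_gain=0.2):
    """Rescale a synaptic weight matrix by connection distance.

    weights   : (N, N) baseline weight matrix
    positions : (N, 3) cell coordinates in mm

    Raising global_gain pushes the network toward synchronized global
    slow oscillations; lowering long_range_gain confines slow waves to
    local areas, as in the mixed and local regimes described above.
    """
    # Pairwise Euclidean distances between all cells.
    dist = np.linalg.norm(
        positions[:, None, :] - positions[None, :, :], axis=-1)
    W = global_gain * weights          # uniform scaling of all synapses
    W[dist > cutoff_mm] *= long_range_gain  # weaken long-range coupling
    return W
```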



Sleep-Like Unsupervised Replay Improves Performance When Data Are Limited or Unbalanced (Student Abstract)

March 2024 · 9 Reads · 1 Citation

Proceedings of the AAAI Conference on Artificial Intelligence

The performance of artificial neural networks (ANNs) degrades when training data are limited or imbalanced. In contrast, the human brain can learn quickly from just a few examples. Here, we investigated the role of sleep in improving the performance of ANNs trained with limited data on the MNIST and Fashion MNIST datasets. Sleep was implemented as an unsupervised phase with local Hebbian-type learning rules. We found a significant boost in accuracy after the sleep phase for models trained with limited data in the range of 0.5-10% of the total MNIST or Fashion MNIST datasets. When more than 10% of the total data was used, sleep alone had a slight negative impact on performance, but this was remedied by fine-tuning on the original data. This study sheds light on a potential synaptic weight dynamics strategy employed by the brain during sleep to enhance memory performance when training data are limited or imbalanced.
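Read as a procedure, the experiment is: train on a small labeled subset, run the unsupervised sleep phase, then fine-tune on the same subset. The driver loop below is a hedged sketch of that pipeline; sample_subset, train_ann, sleep_phase, fine_tune, and evaluate are hypothetical stand-ins for illustration, not an API from the paper.

```python
# All helper functions here are hypothetical stand-ins.
for fraction in (0.005, 0.01, 0.03, 0.10):     # 0.5%..10% of the data
    subset = sample_subset(train_set, fraction)
    model = train_ann(subset)                   # supervised baseline
    acc_base = evaluate(model, test_set)
    sleep_phase(model)                          # unsupervised replay
    acc_sleep = evaluate(model, test_set)       # boost expected at low data
    fine_tune(model, subset)                    # recovers any high-data loss
    acc_ft = evaluate(model, test_set)
    print(f"{fraction:.1%}: {acc_base:.3f} -> {acc_sleep:.3f} -> {acc_ft:.3f}")
```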



Figure 8. Effect of cortical excitatory synaptic strength on the network dynamics. (A) Reduction of all cortical connections by 3x. (B) Reduction of only connections longer than 10 mm by 5x. All subpanels as in Figure 2. (C) Summary of changes in frequency, amplitude, speed, participation, and onset/offset distribution spread when all connections are scaled as in (A). Increasing all weights shows little effect, while decreasing all weights decreases amplitude and participation and broadens the onset/offset distributions (i.e., decreases synchrony). The frequency of slow waves also becomes more variable. Note that increasing all weights 5x ablates slow waves entirely due to a constant up state, so SO characteristics cannot be meaningfully quantified. (D) Summary of changes in SO characteristics when only long-range connections are reduced as in (B). Across all plots, 5 mm is an inflection point (or "elbow") where network activity changes.
Emergent effects of synaptic connectivity on cortical sleep slow wave amplitude, density and propagation in a large-scale thalamocortical network model of the human brain

October 2023 · 119 Reads

Slow-wave sleep (SWS), characterized by slow oscillations (SOs, <1 Hz) of alternating active and silent states in the thalamocortical network, is a primary brain state during Non-Rapid Eye Movement (NREM) sleep. In the last two decades, the traditional view of SWS as a global and uniform whole-brain state has been challenged by a growing body of evidence indicating that SOs can be local and can coexist with wake-like activity. However, how global and local SOs emerge from micro-scale neuronal dynamics and network connectivity remains unclear. We developed a multi-scale, biophysically realistic human whole-brain thalamocortical network model capable of transitioning between the awake state and slow-wave sleep, and we investigated the role of connectivity in the spatio-temporal dynamics of sleep SOs. We found that the overall strength and the relative balance between long- and short-range synaptic connections determined the network state. Importantly, for a range of synaptic strengths, the model demonstrated complex mixed SO states, in which periods of synchronized global slow-wave activity alternated with periods of asynchronous local slow waves. An increase in the overall synaptic strength led to synchronized global SOs, while a decrease in synaptic connectivity produced only local slow waves that did not propagate beyond local areas. These results were compared to human data to validate probable models of biophysically realistic SOs. The model producing mixed states provided the best match to the spatial coherence profile and the functional connectivity estimated from human subjects. These findings shed light on how the spatio-temporal properties of SOs emerge from local and global cortical connectivity and provide a framework for further exploring the mechanisms and functions of SWS in health and disease.

Author Summary: Slow Wave Sleep (SWS) is a primary brain state displayed during Non-Rapid Eye Movement (NREM) sleep. While previously thought of as homogeneous waves of activity that sweep across the entire brain, modern research has suggested a more nuanced pattern of activity that can vary between local and global slow-wave activity. However, how these states emerge from small-scale neuronal dynamics and network connectivity remains unclear. We developed a biophysically realistic model of the human brain capable of generating SWS-like behavior and investigated the role of connectivity in the spatio-temporal dynamics of these slow waves. We found that the overall strength and the relative balance between long- and short-range synaptic connections determined the network behavior: specifically, models with relatively weaker long-range connectivity produced mixed states of global and local slow waves. These results were compared to human data, and we found that models producing mixed states provided the best match to the network behavior and functional connectivity of human subject data. These findings shed light on how the spatio-temporal properties of SWS emerge from local and global cortical connectivity and provide a framework for further exploring the mechanisms and functions of SWS in health and disease.
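One validation step named in both versions of this work, matching the model's spatial coherence profile to human recordings, is straightforward to express in code: compute slow-oscillation-band coherence for every channel pair as a function of inter-channel distance. A hedged SciPy sketch follows; the sampling rate, window length, and band edge are illustrative assumptions.

```python
import numpy as np
from scipy.signal import coherence

def so_coherence_vs_distance(lfp, positions, fs=1000.0, fmax=1.0):
    """Mean coherence in the slow-oscillation band (<1 Hz) for every
    channel pair, paired with the inter-channel distance.

    lfp       : (n_channels, n_samples) simulated or recorded signals;
                n_samples should cover many SO cycles (nperseg=8192 at
                fs=1000 Hz spans ~8 s per window)
    positions : (n_channels, 3) channel coordinates in mm
    """
    n = lfp.shape[0]
    dists, cohs = [], []
    for i in range(n):
        for j in range(i + 1, n):
            f, cxy = coherence(lfp[i], lfp[j], fs=fs, nperseg=8192)
            band = f <= fmax                  # keep only the SO band
            dists.append(float(np.linalg.norm(positions[i] - positions[j])))
            cohs.append(float(cxy[band].mean()))
    return np.array(dists), np.array(cohs)    # plot cohs against dists
```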


Sleep prevents catastrophic forgetting in spiking neural networks by forming a joint synaptic weight representation

November 2022 · 177 Reads · 25 Citations

Artificial neural networks overwrite previously learned tasks when trained sequentially, a phenomenon known as catastrophic forgetting. In contrast, the brain learns continuously, and typically learns best when new training is interleaved with periods of sleep for memory consolidation. Here we used a spiking network to study the mechanisms behind catastrophic forgetting and the role of sleep in preventing it. The network could be trained to learn a complex foraging task but exhibited catastrophic forgetting when trained sequentially on different tasks. In synaptic weight space, new-task training moved the synaptic weight configuration away from the manifold representing the old task, leading to forgetting. Interleaving new-task training with periods of offline reactivation, mimicking biological sleep, mitigated catastrophic forgetting by constraining the network's synaptic weight state to the previously learned manifold, while allowing the weight configuration to converge towards the intersection of the manifolds representing the old and new tasks. The study reveals a possible strategy of synaptic weight dynamics that the brain applies during sleep to prevent forgetting and optimize learning.
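The weight-space picture used here, with each task as a manifold of weight configurations and forgetting as drift away from it, suggests a simple diagnostic: approximate the old-task manifold by a set of weight snapshots that solve the task, and track the distance from the current configuration to the nearest one. The sketch below is a hedged illustration of that idea, not the paper's analysis.

```python
import numpy as np

def manifold_distance(w_current, task_snapshots):
    """Distance from the current flattened weight vector to the nearest
    of K stored weight configurations that solve the old task.

    w_current      : shape (D,)
    task_snapshots : shape (K, D)
    """
    return float(np.min(np.linalg.norm(task_snapshots - w_current, axis=1)))

# During new-task training, a growing distance to the old-task snapshots
# signals catastrophic forgetting; interleaved sleep should keep the
# trajectory near the intersection of the old- and new-task manifolds.
```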


30th Annual Computational Neuroscience Meeting: CNS*2021

December 2021 · 2,269 Reads · 1 Citation

Journal of Computational Neuroscience

Foreword from the editors. We hosted four keynote speakers: Wolf Singer, Bill Bialek, Danielle Bassett, and Sonja Gruen. They enlightened us about computations in the cerebral cortex, the reduction of high-dimensional data, the emerging field of computational psychiatry, and the significance of spike patterns in motor cortex. From the submissions, we also selected four featured orals as particularly noteworthy. They discussed a new role for cortical oscillations as a tempering mechanism, branch-specific computations in Purkinje cells, low-frequency entrainment in processing sign language, and decreasing neural heterogeneity as a unifying sign of epilepsy. An additional 16 submissions were selected for shorter oral presentation in the plenary sessions, touching on subjects such as spike and population coding, neural computation and interaction, astrocytic and dopaminergic modulation of plasticity, several kinds of sensory processing, reward learning, respiratory and motor control, neural activity propagation and synchronization, and brain organization in epilepsy and schizophrenia. We were also very pleased by the quality of the 213 presented posters, which drew strong attendance and rich online interactions between presenters and attendees. The full breadth of computational neuroscience was represented, from theory and method development through data analysis to applications.


Can sleep protect memories from catastrophic forgetting?

August 2020 · 138 Reads · 22 Citations

Continual learning remains an unsolved problem in artificial neural networks. The brain has evolved mechanisms to prevent catastrophic forgetting of old knowledge during new training. Building upon data suggesting the importance of sleep in learning and memory, we tested the hypothesis that sleep protects old memories from being forgotten after new learning. In the thalamocortical model, training a new memory interfered with previously learned old memories, leading to degradation and forgetting of the old memory traces. Simulating sleep after new learning reversed the damage and enhanced both old and new memories. We found that when a new memory competed for previously allocated neuronal/synaptic resources, sleep replay changed the synaptic footprint of the old memory to allow overlapping neuronal populations to store multiple memories. Our study predicts that memory storage is dynamic, and sleep enables continual learning by combining consolidation of new memory traces with reconsolidation of old memory traces to minimize interference.


Can sleep protect memories from catastrophic forgetting?

August 2020 · 139 Reads · 39 Citations

eLife

Continual learning remains an unsolved problem in artificial neural networks. The brain has evolved mechanisms to prevent catastrophic forgetting of old knowledge during new training. Building upon data suggesting the importance of sleep in learning and memory, we tested the hypothesis that sleep protects old memories from forgetting. In the thalamocortical model, training a new memory interfered with previously learned old memories, leading to degradation and forgetting of the old memory traces. Simulating sleep immediately after new learning reversed the damage and enhanced all memories. We found that when a new memory competed for previously allocated neuronal/synaptic resources, sleep replay changed the synaptic footprint of the old memory to allow overlapping neuronal populations to store multiple memories. Our study predicts that memory storage is dynamic, and sleep enables continual learning by combining consolidation of new memory traces with reconsolidation of old memory traces to minimize interference.
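The "synaptic footprint" mechanism described here, where sleep reorganizes which synapses carry the old memory so that overlapping neuronal populations can hold several memories, can be quantified in a few lines: define a memory's footprint as the set of synapses changed from baseline after training, then measure the overlap between two footprints. The sketch below is a hedged illustration with an arbitrary threshold, not the paper's analysis.

```python
import numpy as np

def footprint(w_trained, w_naive, thresh=0.05):
    """Boolean mask of synapses changed from the naive network by more
    than `thresh` (illustrative): the memory's synaptic footprint."""
    return np.abs(w_trained - w_naive) > thresh

def footprint_overlap(fp_a, fp_b):
    """Jaccard overlap between two memories' synaptic footprints; sleep
    is predicted to reshape footprints so the same neuronal populations
    can carry both memories with less destructive interference."""
    inter = np.logical_and(fp_a, fp_b).sum()
    union = np.logical_or(fp_a, fp_b).sum()
    return inter / max(union, 1)
```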


Citations (6)


... However, the advances in understanding the biological brain are being aided considerably by computational simulations, such that we are coming to understand key neurophysiological systems by building computational simulations of them [17,32]. Furthermore, important advances have been made in neuromorphic emulations incorporating spiking models, achieving impressive cognitive capacities such as inferring rules during intelligence-assessment tasks like Raven's Progressive Matrices [33], or emulating the primate visual system with regard to color perception [34]. ...

Reference:

Feasibility of a Personal Neuromorphic Emulation
Emergent effects of synaptic connectivity on the dynamics of global and local slow waves in a large-scale thalamocortical network model of the human brain

... It is worth noting that these methods should be applied independently of those developed to enable continual learning, leading to complex models with many hyperparameters to tune. SRC has been proposed as an unsupervised method applicable to solving various tasks, including catastrophic forgetting [29], lack of generalization [44], and low accuracy with limited training data [45]. However, the advantages of SRC were reported in independent studies, each using a different set of hyperparameters tuned to optimize performance for a specific task. ...

Improving Robustness of Convolutional Networks Through Sleep-Like Replay
  • Citing Conference Paper
  • December 2023

... Norman and colleagues modeled the role of NREM sleep in the replay of recent experiences, but their model additionally focused on the replay of already well-learned patterns of activity during REM sleep as a means of reducing forgetting. They proposed that a learning mechanism exists during REM sleep that allows oscillating inhibition in the network to identify weak or damaged memories so that the individual [...] Several of these models have also focused on how sleep solves the problem of catastrophic forgetting, which arises when artificial neural networks overwrite previously learned information when trained on a new task [170][171][172]. Modeling work has shown that when new learning is interleaved with REM sleep episodes, previously learned information is preserved 170. ...

Sleep prevents catastrophic forgetting in spiking neural networks by forming a joint synaptic weight representation

... In their model that simulated REM sleep, the neocortex was allowed to operate with no hippocampal influence, and replay focused on repairing the old, related information to allow for "graceful continual learning." Gonzalez and colleagues (2020) developed a biophysical model of thalamocortical architecture to examine how multiple competing memories can be reinstated during NREM sleep to prevent catastrophic forgetting, and showed that the dynamics of REM sleep could be leveraged to rescue damaged memories from interference 173. Together, these models suggest potential underlying mechanisms of REM sleep that refine memory representations and rescue weaker memories damaged by interference or age. ...

Can sleep protect memories from catastrophic forgetting?

... To address this challenge and protect synaptic weights from being overwritten, regularized gradient-based methods such as Elastic Weight Consolidation (Kirkpatrick et al., 2017), Synaptic Intelligence, Memory Aware Synapses (Aljundi et al., 2018), or Sliced Cramer Preservation (Kolouri et al., 2019) have been proposed. Moreover, alternative approaches seek to mitigate catastrophic forgetting by emulating sleep-like states of the brain to preserve memories (González et al., 2020; Krishnan et al., 2019). ...

Can sleep protect memories from catastrophic forgetting?

eLife

... A few factors could explain this decrease in performance: 1) overlapped visual-motor association, 2) forgetting, or 3) lack of association between the ball trajectory and some racket positions, since in our analysis we only considered unique ball trajectories and did not control for the racket positions. The first and second factors are not mutually exclusive and are being extensively investigated by the modeling community [48]. Surprisingly, the model could never learn to hit the ball for a few ball trajectories despite many repetitions (e.g. ...

Interleaved training prevents catastrophic forgetting in spiking neural networks