Jean Erik Delanois's research while affiliated with University of California and other places

Publications (7)

Preprint
Full-text available
Slow-wave sleep (SWS), characterized by slow oscillation (SO, <1 Hz) of alternating active and silent states in the thalamocortical network, is a primary brain state during Non-Rapid Eye Movement (NREM) sleep. However, how global SO emerges from micro-scale neuron dynamics and network connectivity remains unclear. We developed a...
Article
Full-text available
Artificial neural networks overwrite previously learned tasks when trained sequentially, a phenomenon known as catastrophic forgetting. In contrast, the brain learns continuously, and typically learns best when new training is interleaved with periods of sleep for memory consolidation. Here we used a spiking network to study mechanisms behind catastr...
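The catastrophic forgetting described in this abstract can be demonstrated with a toy model. The sketch below is not the paper's spiking network; it is a minimal invented example (logistic regression on two synthetic tasks with conflicting decision boundaries) showing how sequential training on Task B erases performance on a previously learned Task A. All data, hyperparameters, and function names here are assumptions for illustration only.

```python
# Toy demonstration of catastrophic forgetting (NOT the paper's model).
# A linear classifier learns Task A, then Task B; because the two tasks
# require conflicting weights, training on B overwrites the A solution.
import numpy as np

rng = np.random.default_rng(0)

def make_task(shift, n=200):
    # Standard-normal inputs; the label boundary depends on `shift`,
    # so Task A (shift=+3) and Task B (shift=-3) conflict.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(float)
    return X, y

def train(w, X, y, epochs=200, lr=0.1):
    # Full-batch logistic-regression gradient descent.
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float(np.mean(((X @ w) > 0) == y))

XA, yA = make_task(+3.0)   # Task A
XB, yB = make_task(-3.0)   # Task B (conflicting boundary)

w = train(np.zeros(2), XA, yA)
acc_A_before = accuracy(w, XA, yA)   # high: A has just been learned

w = train(w, XB, yB)                 # sequential training on B
acc_A_after = accuracy(w, XA, yA)    # collapses: A is overwritten

print(f"Task A accuracy: after A = {acc_A_before:.2f}, "
      f"after subsequent B training = {acc_A_after:.2f}")
```

Interleaving examples from both tasks (or, as in this line of work, replaying old memories during sleep-like phases) is what prevents this collapse; the sequential schedule above is the failure case.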
Article
Full-text available
Foreword from the editors. We hosted four keynote speakers: Wolf Singer, Bill Bialek, Danielle Bassett, and Sonja Gruen. They enlightened us about computations in the cerebral cortex, the reduction of high-dimensional data, the emerging field of computational psychiatry, and the significance of spike patterns in motor cortex. From the submissions,...
Article
Full-text available
Continual learning remains an unsolved problem in artificial neural networks. The brain has evolved mechanisms to prevent catastrophic forgetting of old knowledge during new training. Building upon data suggesting the importance of sleep in learning and memory, we tested a hypothesis that sleep protects old memories from forgetting. In the thalam...
Article
Full-text available
Continual learning remains an unsolved problem in artificial neural networks. The brain has evolved mechanisms to prevent catastrophic forgetting of old knowledge during new training. Building upon data suggesting the importance of sleep in learning and memory, we tested a hypothesis that sleep protects old memories from being forgotten after new l...
Article
Full-text available
Continual learning remains an unsolved problem in artificial neural networks. The brain has evolved mechanisms to prevent catastrophic forgetting of old knowledge during new training. Building upon data suggesting the importance of sleep in learning and memory, we tested a hypothesis that sleep protects old memories from being forgotten after new l...
Preprint
Full-text available
Artificial neural networks suffer from the inability to learn new tasks sequentially without completely overwriting all memory of the previously learned tasks, a phenomenon known as catastrophic forgetting. However, biological neural networks are able to continuously learn many tasks over the course of the organism’s lifetime, and typically learn t...

Citations

... This idea is supported by DL and by neuroscience, e.g. [Golden et al., 2022], where it is stated that "Sleep helps reorganize memories and presents them in the most efficient way". ...
... (These outputs represent the high-level sensory representations activated by hippocampal pattern completion, via return projections to sensory cortex.) The noise input to the autoassociative network could potentially represent random activation during sleep (González et al., 2020; Pezzulo et al., 2021; Stella et al., 2019). In other words, random inputs to the hippocampus result in the reactivation of memories, and this reactivation results in consolidation. ...
... These results suggest that waves of neuronal activity could efficiently organize coherent cell assemblies during sleep. Other studies suggested that slow waves protect against catastrophic forgetting in a biological setting [25]. When the model learns one direction of neural activity along layers and its reverse direction using the typical asymmetric STDP time window, their effects interfere with each other. ...
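The "asymmetric STDP time window" mentioned in this snippet refers to the standard exponential spike-timing-dependent plasticity rule, in which a presynaptic spike shortly before a postsynaptic spike potentiates the synapse, while the reverse order depresses it. The sketch below implements that textbook form with invented amplitude and time-constant values; it is an illustration of the general rule, not the specific parameters of the cited model.

```python
# Textbook asymmetric STDP window (illustrative parameter values).
# dt = t_post - t_pre in milliseconds:
#   dt > 0 (pre before post) -> potentiation, decaying with |dt|
#   dt < 0 (post before pre) -> depression, decaying with |dt|
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012,
            tau_plus=20.0, tau_minus=20.0):
    """Weight change for one pre/post spike pair separated by dt_ms."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_plus)
    if dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau_minus)
    return 0.0

print(stdp_dw(10.0))   # positive: potentiation
print(stdp_dw(-10.0))  # negative: depression
```

The asymmetry (opposite signs on either side of dt = 0) is what makes the rule direction-sensitive: activity propagating forward through layers and activity propagating in reverse drive opposing weight changes, which is the interference the snippet describes.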
... A few factors that could explain this decrease in performance are: 1) Overlapped visual-motor association, 2) Forgetting, or 3) Lack of association between ball trajectory and some racket positions, since in our analysis we only considered unique ball trajectories and did not control for the racket positions. The first and second factors are not mutually exclusive and are being extensively investigated by the modeling community [48]. Surprisingly, the model could never learn to hit the ball for a few ball trajectories despite many repetitions (e.g. ...