The mean synaptic density influences the membrane potential, avalanche distribution, and mean firing rate per time step.

Source publication
Article
Full-text available
Recently, evidence has accumulated that many neural networks exhibit self-organized criticality. In this state, activity is similar across temporal scales, which is beneficial for information flow. If subcritical, activity can die out; if supercritical, epileptiform patterns may occur. Little is known about how developing networks will...
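
To make the subcritical/critical/supercritical trichotomy concrete, here is a minimal branching-process toy model in Python; the Poisson offspring rule, parameter values, and size cap are illustrative assumptions, not the model of the source article.

```python
import numpy as np

rng = np.random.default_rng(0)

def avalanche_size(sigma, max_size=10**5):
    """Total number of activations in one avalanche of a branching process
    with branching parameter sigma (offspring ~ Poisson(sigma))."""
    active, total = 1, 1
    while active > 0 and total < max_size:
        active = rng.poisson(sigma * active)  # sum of `active` iid Poisson(sigma) draws
        total += active
    return total

# sigma < 1: subcritical   (activity dies out quickly)
# sigma = 1: critical      (power-law avalanche sizes, exponent near -3/2)
# sigma > 1: supercritical (runaway, "epileptiform" cascades; capped at max_size)
for sigma in (0.9, 1.0, 1.1):
    sizes = [avalanche_size(sigma) for _ in range(5000)]
    print(f"sigma={sigma}: mean size={np.mean(sizes):.1f}, max={max(sizes)}")
```
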

Similar publications

Article
Full-text available
The spiking activity of single neurons can be well described by a nonlinear integrate-and-fire model that includes somatic adaptation. When exposed to fluctuating inputs, sparsely coupled populations of these model neurons exhibit stochastic collective dynamics that can be effectively characterized using the Fokker-Planck equation. This approach, ho...
Article
Full-text available
In specific regions of the central nervous system (CNS), gap junctions have been shown to participate in neuronal synchrony. Amongst the CNS regions identified, some populations of brainstem motoneurons are known to be coupled by gap junctions. The application of various gap junction blockers to these motoneuron populations, however, has led to mix...
Article
Full-text available
Transfer entropy (TE) is an information-theoretic measure which has received recent attention in neuroscience for its potential to identify effective connectivity between neurons. Calculating TE for large ensembles of spiking neurons is computationally intensive, which has caused most investigators to probe neural interactions at only a single time d...
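
As a rough illustration of what a single-delay TE computation involves, here is a minimal plug-in (histogram) estimator for binary spike trains with one bin of target history; the function name and the toy coupling are illustrative assumptions, and real analyses use bias-corrected estimators.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, d=1):
    """Plug-in TE estimate (bits) from binary train x to binary train y,
    with one bin of target history and source delay d."""
    x, y = np.asarray(x, int), np.asarray(y, int)
    ts = range(d - 1, len(y) - 1)                      # valid time indices
    N = len(ts)
    trip = Counter((y[t+1], y[t], x[t+1-d]) for t in ts)
    pair_src = Counter((y[t], x[t+1-d]) for t in ts)
    pair_tgt = Counter((y[t+1], y[t]) for t in ts)
    hist = Counter(y[t] for t in ts)
    te = 0.0
    for (y1, y0, x0), c in trip.items():
        p_cond_full = c / pair_src[(y0, x0)]           # p(y_{t+1} | y_t, x_{t+1-d})
        p_cond_hist = pair_tgt[(y1, y0)] / hist[y0]    # p(y_{t+1} | y_t)
        te += (c / N) * np.log2(p_cond_full / p_cond_hist)
    return te

# Toy check: y tends to spike one bin after x does, so TE(x->y) >> TE(y->x).
rng = np.random.default_rng(1)
x = (rng.random(10000) < 0.1).astype(int)
y = np.zeros(10000, int)
y[1:] = rng.random(9999) < np.where(x[:-1] == 1, 0.4, 0.05)
print(f"TE x->y: {transfer_entropy(x, y):.4f} bits")
print(f"TE y->x: {transfer_entropy(y, x):.4f} bits")
```
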

Citations

... The interaction between plasticity mechanisms is particularly important: excitatory STDP with an asymmetric time window destabilizes the network toward a bursty state, while inhibitory STDP with a symmetric time window stabilizes the network toward a critical state (Sadeh and Clopath, 2020). Structural changes, such as axonal elongation and synaptic pruning, also shape the network's critical dynamics (Tetzlaff et al., 2010; Kossio et al., 2018). Kuśmierz et al. (2020) demonstrated that networks with power-law distributed synaptic strengths exhibit a continuous transition to chaos. ...
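
The asymmetric and symmetric time windows contrasted in this excerpt can be written down directly. A minimal sketch with illustrative amplitudes and time constants (not the parameterization of Sadeh and Clopath, 2020):

```python
import numpy as np

# dt = t_post - t_pre (ms); positive dt means the presynaptic spike came first.
def excitatory_stdp(dt, a_plus=1.0, a_minus=1.0, tau=20.0):
    """Asymmetric window: potentiation for pre-before-post, depression otherwise."""
    return np.where(dt > 0, a_plus * np.exp(-dt / tau),
                            -a_minus * np.exp(dt / tau))

def inhibitory_stdp(dt, a=1.0, tau=20.0, offset=0.2):
    """Symmetric window: potentiation for near-coincident spikes of either order,
    with a constant depression offset."""
    return a * np.exp(-np.abs(dt) / tau) - offset

dt = np.linspace(-100, 100, 9)
print(np.round(excitatory_stdp(dt), 3))  # sign flips at dt = 0
print(np.round(inhibitory_stdp(dt), 3))  # symmetric around dt = 0
```
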
Preprint
Full-text available
Dissociated neuronal cultures provide a simplified yet effective model system for investigating self-organized prediction and information processing in neural networks. This review consolidates current research demonstrating that these in vitro networks display fundamental computational capabilities, including predictive coding, adaptive learning, goal-directed behavior, and deviance detection. We examine how these cultures develop critical dynamics optimized for information processing, detail the mechanisms underlying learning and memory formation, and explore the relevance of the free energy principle within these systems. Building on these insights, we discuss how findings from dissociated neuronal cultures inform the design of neuromorphic and reservoir computing architectures, with the potential to enhance energy efficiency and adaptive functionality in artificial intelligence. The reduced complexity of neuronal cultures allows for precise manipulation and systematic investigation, bridging theoretical frameworks with practical implementations in bio-inspired computing. Finally, we highlight promising future directions, emphasizing advancements in three-dimensional culture techniques, multi-compartment models, and brain organoids that deepen our understanding of hierarchical and predictive processes in both biological and artificial systems. This review aims to provide a comprehensive overview of how dissociated neuronal cultures contribute to neuroscience and artificial intelligence, ultimately paving the way for biologically inspired computing solutions.
... We further discovered that the power law in PCA concurrently emerges with neuronal avalanches, a distinct power law in synchronized firing events. While neuronal avalanches have been extensively studied as a form of self-organized criticality (SOC) in neural systems [18,19,20,21,22], their relationship to high-dimensional activity has remained largely unknown, as they have been studied independently. Our results revealed the co-occurrence of the power law in PCA and neuronal avalanches, suggesting a potential link between SOC and the emergence of high-dimensional neural activity. ...
... Spontaneous activity became detectable at approximately 2-3 days in vitro (DIV). As reported in previous studies [22,20,27,28,29], bursts became observable as the culture matured (Fig. 1B). ...
Preprint
Full-text available
A vast number of neurons exhibit high-dimensional coordination for brain computation, both in processing sensory input and in generating spontaneous activity without external stimuli. Recent advancements in large-scale recordings have revealed that this high-dimensional population activity exhibits a scale-free structure, characterized by power law and distinct spatial patterns in principal components (PCs). However, the mechanisms underlying the formation of this high-dimensional neural coordination remain poorly understood. Specifically, it is unclear whether the characteristic high-dimensional structure of population activity emerges through self-organization or is shaped by the learning of sensory stimuli in animals. To address this question and clearly differentiate between these two possibilities, we investigated large-scale neural activity in dissociated neuronal culture using high-density multi-electrode arrays. Our findings demonstrate that the high-dimensional structure of neural activity self-organizes during network development in the absence of explicit sensory stimuli provided to animals. As the cultures mature, the PC variance exhibits a power-law decay, and the spatial structures of PCs transition from global to localized patterns, driven by the temporal correlations of neural activity. Furthermore, we observed an unexpected co-occurrence between the power-law decay in PCA and neuronal avalanches, suggesting a link between self-organized criticality and high-dimensional activity. Using a recurrent neural network model, we show that both phenomena can arise from biologically plausible heavy-tailed synaptic connectivity. By highlighting a developmental origin of the high-dimensional structure of neural activity, these findings deepen our understanding of how coordinated neural computations are achieved in the brain.
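
The power-law decay of PC variances described in this preprint is easy to probe on synthetic data: build activity with a known 1/n covariance spectrum, run PCA, and fit the log-log slope. A hedged sketch (the construction and fitting range are illustrative, not the preprint's analysis pipeline):

```python
import numpy as np

rng = np.random.default_rng(2)
n_units, n_bins = 200, 5000

# Synthetic population activity whose covariance eigenvalues follow 1/n:
# orthogonal spatial modes scaled by sqrt(1/n), driven by white noise.
q, _ = np.linalg.qr(rng.standard_normal((n_units, n_units)))
scales = np.arange(1, n_units + 1) ** -0.5
activity = (q * scales) @ rng.standard_normal((n_units, n_bins))

# PC variances = eigenvalues of the sample covariance matrix, sorted descending.
eig = np.sort(np.linalg.eigvalsh(np.cov(activity)))[::-1]

# Least-squares slope in log-log coordinates over the top 100 components.
ranks = np.arange(1, 101)
slope, _ = np.polyfit(np.log(ranks), np.log(eig[:100]), 1)
print(f"fitted power-law exponent of PC variances: {slope:.2f} (built in: -1)")
```
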
... How stability and flexibility are balanced at the level of neuronal populations, and which emergent dynamics enable this over development, remain largely unknown. Theoretical and in vitro work has suggested the self-organisation of population dynamics to a phase transition across development (Tetzlaff et al., 2010; Yada et al., 2017), but this has yet to be tested in living systems. Given that maladaptive attractors give rise to impaired dynamics in neurodevelopmental disorders (Burrows et al., 2020, 2023), studying this flexibility-stability trade-off may be key to understanding the functional consequences of such developmental abnormalities. ...
Preprint
Full-text available
Neuronal networks must balance the need for stable yet flexible dynamics. This is evident during brain development, when synaptic plasticity during critical windows enables adaptability to changing environments whilst ensuring the stability of population dynamics. The emergence of population dynamics that balance stability and flexibility during development is poorly understood. Here, we investigated developmental brain dynamics in larval zebrafish, using in vivo 2-photon imaging to record single-cell activity across major brain regions from 3-8 days post-fertilisation, a highly plastic period in which hunting behaviours are established. Our findings revealed region-specific trajectories in the development of such dynamic regimes: the telencephalon exhibited increased neuronal excitability and long-range correlations, alongside the emergence of scale invariant avalanche dynamics indicative of enhanced flexibility. Conversely, while other regions showed increased state transitions over development, the telencephalon demonstrated a surprising rise in state stability, characterized by slightly longer dwell times and drastically reduced angular velocity in state space. Remarkably, such rotationally stable dynamics persisted up to 5 seconds into the future, indicating the emergence of strong attractors supporting stability over long timescales. Notably, we observed that telencephalon dynamics were maintained near to but not at a phase transition, thus allowing for robust responses while remaining adaptable to novel inputs. Our results highlight regionally-specific trajectories in the relationship between flexibility and stability, illustrating how developing neuronal populations can self-organize to balance these competing demands. Significance Statement Brain networks must balance the flexibility to adapt to new stimuli with the need for stability. This trade-off is particularly important during periods of high plasticity in brain development. Our study investigates this balance by recording single-cell activity across the entire brain of developing larval zebrafish. We discovered that brain dynamics become increasingly diverse, characterized by both short and long bursts of activity, reflecting increased flexibility. Simultaneously, we observed the emergence of stable dynamics, linked to consistent activity patterns over time. Using a modelling approach, we showed that this stability was driven by the formation of stable attractors that shape the dynamic trajectories. These findings highlight how population mechanisms can shape the dynamic interplay between flexibility and stability in regional networks in the developing brain.
... The neural networks in these studies were fixed, in the sense that they did not change their connectivity and weights. To overcome this limitation, various activity-dependent mechanisms have been introduced to automatically bring the network into a critical state, e.g., Hebbian (Bienenstock and Lehmann, 1998) and homeostatic rules (Levina et al., 2007; Tetzlaff et al., 2010), reinforcement learning (Bak and Chialvo, 2001; Chialvo and Bak, 1999; de Arcangelis and Herrmann, 2010), and activity-dependent rewiring (Bianconi and Marsili, 2004; Bornholdt and Röhl, 2003; Bornholdt and Rohlf, 2000). ...
Article
Full-text available
The nervous system, especially the human brain, is characterized by its highly complex network topology. The neurodevelopment of some of its features has been described in terms of dynamic optimization rules. We discuss the principle of adaptive rewiring, i.e., the dynamic reorganization of a network according to the intensity of internal signal communication as measured by synchronization or diffusion, and its recent generalization for applications in directed networks. These have extended the principle of adaptive rewiring from highly oversimplified networks to more neurally plausible ones. Adaptive rewiring captures all the key features of the complex brain topology: it transforms initially random or regular networks into networks with a modular small-world structure and a rich-club core. This effect is specific in the sense that it can be tailored to computational needs, robust in the sense that it does not depend on a critical regime, and flexible in the sense that parametric variation generates a range of variant network configurations. Extreme variant networks can be associated at macroscopic level with disorders such as schizophrenia, autism, and dyslexia, and suggest a relationship between dyslexia and creativity. Adaptive rewiring cooperates with network growth and interacts constructively with spatial organization principles in the formation of topographically distinct modules and structures such as ganglia and chains. At the mesoscopic level, adaptive rewiring enables the development of functional architectures, such as convergent-divergent units, and sheds light on the early development of divergence and convergence in, for example, the visual system. Finally, we discuss future prospects for the principle of adaptive rewiring.
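
The rewiring principle summarized above can be sketched compactly: repeatedly move a random node's weakest-communication edge to its strongest-communication non-neighbor, with communication scored by a graph heat kernel. The toy sketch below makes illustrative choices (graph size, diffusion time tau, clustering readout) that are not the paper's protocol:

```python
import numpy as np

rng = np.random.default_rng(3)
n, tau, steps = 60, 2.0, 500

# Start from a random undirected graph.
A = np.triu((rng.random((n, n)) < 0.1).astype(float), 1)
A = A + A.T

for _ in range(steps):
    # Communication intensity via the graph heat kernel exp(-tau * L).
    L = np.diag(A.sum(1)) - A
    w, v = np.linalg.eigh(L)
    H = v @ np.diag(np.exp(-tau * w)) @ v.T
    i = rng.integers(n)
    nbrs = np.flatnonzero(A[i])
    non = np.flatnonzero(A[i] == 0)
    non = non[non != i]
    if len(nbrs) == 0 or len(non) == 0:
        continue
    # Rewire: drop the neighbor with the weakest diffusion coupling,
    # connect to the non-neighbor with the strongest.
    drop = nbrs[np.argmin(H[i, nbrs])]
    add = non[np.argmax(H[i, non])]
    A[i, drop] = A[drop, i] = 0.0
    A[i, add] = A[add, i] = 1.0

# Global clustering coefficient as a quick small-world indicator; it typically
# ends up well above the random-graph baseline (~0.1 here).
deg = A.sum(1)
triangles = np.trace(A @ A @ A) / 6.0
triples = (deg * (deg - 1) / 2).sum()
print(f"mean degree {deg.mean():.1f}, clustering {3 * triangles / triples:.3f}")
```
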
... Such developmental events are likely to drive substantial changes in proximity to criticality, but this remains largely untested in intact animals. In vitro studies demonstrate that developmental maturation of neural circuits may converge towards critical dynamics as the network becomes competent [114][115][116], but sometimes overshoots [116]. This endpoint, albeit in vitro, is consistent with theoretical descriptions of a computational optimum: neuronal cultures that learned to play the video game Pong were better performers when closer to criticality [117]. ...
Preprint
Full-text available
Brains face selective pressure to optimize computation, broadly defined. This optimization is achieved by myriad mechanisms and processes that influence the brain's computational state. These include development, plasticity, homeostasis, and more. Despite enormous variability over time and between individuals, do these diverse mechanisms converge on the same set-point? Is there a universal computational optimum around which the healthy brain tunes itself? The criticality hypothesis posits such a unified computational set-point. Criticality is a special dynamical brain state, defined by internally-generated multi-scale, marginally-stable dynamics which maximize many features of information processing. The first experimental support for this hypothesis emerged two decades ago, and evidence has accumulated at an accelerating pace, despite a contentious history. Here, we lay out the logic of criticality as a general computational end-point and systematically review experimental evidence for the hypothesis. We perform a meta-analysis of 143 datasets from manuscripts published between 2003 and 2024. To our surprise, we find that a long-standing controversy in the field is the product of a simple methodological choice that has no bearing on underlying dynamics. Our results suggest that a new generation of research can leverage the concept of criticality, as a unifying principle of brain function, to accelerate our understanding of behavior, cognition, and disease.
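
Proximity to criticality in such datasets is commonly summarized by the branching ratio, the expected activity in the next time bin per currently active unit, with values near 1 indicating criticality. Below is a naive regression sketch on binned counts; it is illustrative only, since published analyses typically rely on bias-corrected estimators (e.g., multistep regression):

```python
import numpy as np

def branching_ratio(activity):
    """Naive branching-ratio estimate: linear-regression slope of A_{t+1} on A_t.
    Subsampled recordings bias this estimator; bias-corrected multistep-regression
    (MR) estimators are the standard remedy in the literature."""
    a = np.asarray(activity, float)
    slope, _ = np.polyfit(a[:-1], a[1:], 1)
    return slope

# Toy check on a driven branching process with known branching parameter.
rng = np.random.default_rng(5)
sigma, T = 0.98, 20000
A = np.zeros(T, int)
A[0] = 10
for t in range(1, T):
    A[t] = rng.poisson(sigma * A[t-1]) + rng.poisson(0.2)  # weak external drive
print(f"true sigma = {sigma}, estimated: {branching_ratio(A):.3f}")
```
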
... In this study, we investigated the expression and effect of pharmacological modulation of PIEZO1 channels on the mouse embryonic-derived cortical networks. The use of murine neuronal networks cultured on substrate-integrated microelectrode arrays constitutes a common platform that has been the basis of a wide range of studies spanning neural dynamics/computation [42][43][44], biosensing [45], and neuropharmacology [20,31,46]. Our work is among the first to characterize the contribution of mechanosensitive ion channel activity to the spontaneous firing of cortical networks. ...
Article
Full-text available
PIEZO1 is a mechanosensitive ion channel expressed in various organs, including but not limited to the brain, heart, lungs, kidneys, bone, and skin. PIEZO1 has been implicated in astrocyte, microglia, capillary, and oligodendrocyte signaling in the mammalian cortex. Using murine embryonic frontal cortex tissue, we examined the protein expression and functionality of PIEZO1 channels in cultured networks leveraging substrate-integrated microelectrode arrays (MEAs) with additional quantitative results from calcium imaging and whole-cell patch-clamp electrophysiology. MEA data show that the PIEZO1 agonist Yoda1 transiently enhances the mean firing rate (MFR) of single units, while the PIEZO1 antagonist GsMTx4 inhibits both spontaneous activity and Yoda1-induced increase in MFR in cortical networks. Furthermore, calcium imaging experiments revealed that Yoda1 significantly increased the frequency of calcium transients in cortical cells. Additionally, in voltage clamp experiments, Yoda1 exposure shifted the cellular reversal potential towards depolarized potentials consistent with the behavior of PIEZO1 as a non-specific cation-permeable channel. Our work demonstrates that murine frontal cortical neurons express functional PIEZO1 channels and quantifies the electrophysiological effects of channel activation in vitro. By quantifying the electrophysiological effects of PIEZO1 activation in vitro, our study establishes a foundation for future investigations into the role of PIEZO1 in neurological processes and potential therapeutic applications targeting mechanosensitive channels in various physiological contexts.
... Despite the comparative simplicity and reproducibility of the in vitro experiments, which are also qualitatively reproduced in simulations by spiking neuronal network models [19][20][21][22], a detailed mechanism underlying the occurrence of a steady spatial map of n-sites for spontaneous PSs remains elusive. Meanwhile, similar phenomena typically associated with the so-called neuronal avalanches subject to self-organized criticality (SOC) can also occur in vivo [23][24][25] (on SOC in brain slices and neuronal cultures in vitro, see [26,27] and [28][29][30]). In addition, as population spikes originating from a steady n-site mimic events of focal epilepsy, spontaneously formed n-sites might be used as a simplistic prototype of focal epilepsy foci on the cortical sheet [31,32]. ...
... Moreover, recent experiments indicate similar developmental-stage dependence even for the "classical" excitatory STDP implied in Eqs. (29) [110]. We therefore emphasize that our consideration of the STDP impact here is purely speculative. ...
... For the STDP model described by Eqs. (29), if parameters K pre and K post are small enough, the stationary n-sites are assumed to correspond to a stationary bell-shaped distribution of synaptic weights, as this distribution is typical for the STDP rule with multiplicative weight dependence [116,117]. STDP could also result in a finite lifetime for each n-site, which would blossom and then fade out, with or without further revival. ...
Preprint
Thin pancake-like neuronal networks cultured on top of a planar microelectrode array have been extensively tried out in neuroengineering, as a substrate for the mobile robot’s control unit, i.e., as a cyborg’s brain. Most of these attempts failed due to intricate self-organizing dynamics in the neuronal systems. In particular, the networks may exhibit an emergent spatial map of steady nucleation sites (“n-sites”) of spontaneous population spikes. Being unpredictable and independent of the surface electrode locations, the n-sites drastically change local ability of the network to generate spikes. Here, using a spiking neuronal network model with generative spatially-embedded connectome, we systematically show in simulations that the number, location, and relative activity of spontaneously formed n-sites (“the vitals”) crucially depend on the samplings of three distributions: 1) the network distribution of neuronal excitability, 2) the distribution of connections between neurons of the network, and 3) the distribution of maximal amplitudes of a single synaptic current pulse. Moreover, blocking the dynamics of a small fraction (about 4%) of non-pacemaker neurons having the highest excitability was enough to completely suppress the occurrence of population spikes and their n-sites. This key result is explained theoretically. Remarkably, the n-sites occur taking into account only short-term synaptic plasticity, i.e., without a Hebbian-type plasticity. As the spiking network model used in this study is strictly deterministic, all simulation results can be accurately reproduced. The model, which has already demonstrated a very high richness-to-complexity ratio, can also be directly extended into the three-dimensional case, e.g., for targeting peculiarities of spiking dynamics in cerebral (or brain) organoids. We recommend the model as an excellent illustrative tool for teaching network-level computational neuroscience, complementing a few benchmark models.
... This rule has been inspired by precursor models by Dammasch [42], van Ooyen & van Pelt [43] and van Ooyen [44]. This specific model was previously employed to show cortical reorganization after stroke [45] and lesion [46], emergent properties of developing neural networks [47] and neurogenesis in adult dentate gyrus [48,49]. However, we use a more recent implementation of this model in NEST [50] which does not include a distance-dependent kernel, previously used to demonstrate associative properties of homeostatic structural plasticity [35,41]. ...
Article
Full-text available
Repetitive transcranial magnetic stimulation (rTMS) is a non-invasive brain stimulation technique used to induce neuronal plasticity in healthy individuals and patients. Designing effective and reproducible rTMS protocols poses a major challenge in the field as the underlying biomechanisms of long-term effects remain elusive. Current clinical protocol designs are often based on studies reporting rTMS-induced long-term potentiation or depression of synaptic transmission. Herein, we employed computational modeling to explore the effects of rTMS on long-term structural plasticity and changes in network connectivity. We simulated a recurrent neuronal network with homeostatic structural plasticity among excitatory neurons, and demonstrated that this mechanism was sensitive to specific parameters of the stimulation protocol (i.e., frequency, intensity, and duration of stimulation). Particularly, the feedback-inhibition initiated by network stimulation influenced the net stimulation outcome and hindered the rTMS-induced structural reorganization, highlighting the role of inhibitory networks. These findings suggest a novel mechanism for the lasting effects of rTMS, i.e., rTMS-induced homeostatic structural plasticity, and highlight the importance of network inhibition in careful protocol design, standardization, and optimization of stimulation.
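
The homeostatic structural plasticity rule used in this family of models grows or retracts synaptic elements to steer a slow calcium trace toward a set-point, dz/dt = nu * (1 - Ca/eps). A single-neuron toy sketch with an assumed feedback from elements to firing rate (all constants are illustrative, not the NEST implementation):

```python
import numpy as np

rng = np.random.default_rng(6)
dt, tau_ca, beta = 1.0, 1000.0, 0.001   # ms; Ca decay constant; Ca jump per spike
nu, eps = 0.0005, 0.05                  # element growth rate; Ca set-point

# Toy feedback loop: more synaptic elements -> more input -> higher firing rate.
ca, elements = 0.0, 0.0
for step in range(500000):
    rate = max(0.0, 0.001 + 0.005 * elements)   # spikes/ms (assumed toy feedback)
    spike = rng.random() < min(rate * dt, 1.0)
    ca += (-ca / tau_ca) * dt + (beta if spike else 0.0)
    elements += nu * (1.0 - ca / eps) * dt      # dz/dt = nu * (1 - Ca/eps)

# The rate self-organizes toward eps / (beta * tau_ca) = 50 Hz in this toy.
print(f"final rate {1000 * rate:.1f} Hz, Ca {ca:.3f} "
      f"(set-point {eps}), elements {elements:.1f}")
```
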
... It posits that quiet wakefulness is characterized by a self-organized critical state in which the repertoire of possible neuronal activity patterns is maximized [5][6][7][8]. Hallmarks of such a nonequilibrium phase transition include scale-free cascades of causally connected neuronal activity (dubbed neuronal avalanches) [9][10][11], and the presence of long-range spatial [12] and temporal [13][14][15] correlations, all of which have been observed in vivo [9,16], in vitro [17,18], and across species [19][20][21][22][23] and spatial scales [14][23][24][25][26][27]. This suggests the existence of some universal mechanism or ...
Article
Does the brain optimize itself for storage and transmission of information, and if so, how? The critical brain hypothesis is based in statistical physics and posits that the brain self-tunes its dynamics to a critical point or regime to maximize the repertoire of neuronal responses. Yet, the robustness of this regime, especially with respect to changes in the functional connectivity, remains an unsolved fundamental challenge. Here, we show that both scale-free neuronal dynamics and self-similar features of behavioral dynamics persist following significant changes in functional connectivity. Specifically, we find that the psychedelic compound ibogaine that is associated with an altered state of consciousness fundamentally alters the functional connectivity in the retrosplenial cortex of mice. Yet, the scale-free statistics of movement and of neuronal avalanches among behaviorally related neurons remain largely unaltered. This indicates that the propagation of information within biological neural networks is robust to changes in functional organization of subpopulations of neurons, opening up a new perspective on how the adaptive nature of functional networks may lead to optimality of information transmission in the brain.
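
The neuronal avalanches analyzed here are conventionally defined on binned population activity: an avalanche is a maximal run of consecutive non-empty time bins, and its size is the total spike count within the run. A minimal sketch of that detection step (the binning choice and Poisson toy data are illustrative, not the authors' pipeline):

```python
import numpy as np

def avalanches(spike_counts):
    """Sizes and durations of avalanches: maximal runs of non-empty bins."""
    sizes, durs = [], []
    size = dur = 0
    for c in spike_counts:
        if c > 0:
            size += c
            dur += 1
        elif dur > 0:
            sizes.append(size)
            durs.append(dur)
            size = dur = 0
    if dur > 0:                      # close an avalanche running off the end
        sizes.append(size)
        durs.append(dur)
    return np.array(sizes), np.array(durs)

# Toy usage: Poisson population counts in fixed-width time bins.
rng = np.random.default_rng(7)
counts = rng.poisson(0.8, size=100000)
sizes, durs = avalanches(counts)
print(f"{len(sizes)} avalanches; mean size {sizes.mean():.2f}, "
      f"mean duration {durs.mean():.2f} bins")
```
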
... In other words, neurons at high firing rates can lead to insufficient STS given external inputs and intrinsic non-zero neuronal activities [68,69], and this makes the liquid layer reside in a subcritical or a supercritical regime. In addition, the limited scale-invariant range can make the reservoirs lie in the subcritical regime [70,71]. The relatively higher neural firing rate of neurons in the Kaiser model displayed in Fig. 3 suggests that the Kaiser model may have a greater mélange effect compared to the Maass model because the neural network of the Kaiser model responds to external inputs with shorter STS. ...
Article
Full-text available
Reservoir computing (RC) is a relatively new machine-learning framework that uses an abstract neural network model, called reservoir. The reservoir forms a complex system with high dimensionality, nonlinearity, and intrinsic memory effect due to recurrent connections among individual neurons. RC manifests a best-in-class performance in processing information generated by complex dynamical systems, yet little is known about its microscopic/macroscopic dynamics underlying the computational capability. Here, we characterize the neuronal and network dynamics of liquid state machines (LSMs) using numerical simulations and Modified National Institute of Standards and Technology (MNIST) database classification tasks. The computational performance of LSMs largely depends on a dynamic range of neuronal avalanches whereby the avalanche patterns are determined by the neuron and network models. A larger dynamic range leads to higher performance—the MNIST classification accuracy is highest when the avalanche sizes follow a slowly decaying power-law distribution with an exponent of ∼1.5, followed by the power-law statistics with a larger exponent and the mixture of power-law/log-normal distributions. Network-theoretic analysis suggests that the formation of large functional clusters and the promotion of dynamic transitions between large and small clusters may contribute to the scale-invariant nature. This study provides new insight into our understanding of the computational principles of RC concerning the actions of the individual neurons and the system-level collective behavior.
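
The recurrence-induced memory described in this abstract can be demonstrated with a few lines of rate-based reservoir code. The sketch below is an echo-state-style toy, not the spiking LSM of the article; the sizes, spectral radius, and delayed-recall task are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
n_res, T, delay = 300, 2000, 10

# Random sparse recurrent weights, scaled just below the edge of instability.
W = rng.standard_normal((n_res, n_res)) * (rng.random((n_res, n_res)) < 0.1)
W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.uniform(-0.5, 0.5, n_res)

# Drive the reservoir with white-noise input and collect its states.
u = rng.uniform(-1, 1, T)
x = np.zeros(n_res)
states = np.zeros((T, n_res))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])
    states[t] = x

# Linear readout trained to recall the input from `delay` steps ago;
# any success here is due purely to the reservoir's recurrent memory.
X, target = states[delay:], u[:-delay]
w_out = np.linalg.lstsq(X, target, rcond=None)[0]
r = np.corrcoef(X @ w_out, target)[0, 1]
print(f"recall of input {delay} steps back: r = {r:.3f}")
```
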