Article
PDF available

Abstract

Spontaneous neural activity has been increasingly recognized as a subject of key relevance in neuroscience. It exhibits nontrivial spatiotemporal structure reflecting the organization of the underlying neural network and has proved to be closely intertwined with stimulus-induced activity patterns. As an additional contribution in this regard, we report computational studies strongly suggesting that a stimulus-free feature rules the behavior of an important psychophysical measure of the sensitivity of a sensory system to a stimulus, the so-called dynamic range. Indeed, in this paper we show that the entropy of the distribution of avalanche lifetimes (the information efficiency, so named because it can be interpreted as the efficiency of the network seen as a communication channel) always accompanies the dynamic range in the benchmark model for sensory systems. Specifically, by simulating the Kinouchi-Copelli (KC) model on two broad families of model networks, we generically observed that both quantities increase or decrease together as functions of the average branching ratio (the control parameter of the KC model) and that the information efficiency typically exhibits critical optimization jointly with the dynamic range (i.e., both quantities are optimized at the same value of that control parameter, which turns out to be the critical point of a nonequilibrium phase transition). In contrast with the practice of taking power laws to identify critical points, common in studies describing measured neuronal avalanches, we rely on data collapses as more robust signatures of criticality to claim that critical optimization may happen even when the distribution of avalanche lifetimes is not a power law, as suggested by a recent experiment. Finally, we note that the entropy of the avalanche size distribution (the information capacity) does not always follow the dynamic range and the information efficiency when they are critically optimized, despite being more widely used than the latter to describe the computational capabilities of a neural network. This strongly suggests that dynamical rules allowing a proper temporal matching of the states of the interacting neurons, rather than an increase in the number of available units, are the key to achieving good performance in information processing.
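To make these quantities concrete, here is a minimal Python sketch of the KC model and of the two observables discussed above. It is an illustrative reconstruction, not the authors' code: the network representation (a probability matrix adj_p with zero diagonal), the number of states, the simulation length, and the 10%-90% convention for the dynamic range follow common usage of the model and are assumptions here.

```python
import numpy as np

def simulate_kc(adj_p, n_states=5, rate=1.0, T=2000, rng=None):
    """Mean stationary activity F(rate) of a Kinouchi-Copelli network.

    adj_p[i, j]: probability that active neuron j excites quiescent
    neuron i (zero diagonal assumed).  States: 0 = quiescent,
    1 = active (spiking), 2..n_states-1 = refractory.
    """
    rng = rng or np.random.default_rng()
    N = adj_p.shape[0]
    state = np.zeros(N, dtype=int)
    lam = 1.0 - np.exp(-rate)              # per-step external firing prob.
    activity = np.empty(T)
    for t in range(T):
        active = state == 1
        nxt = np.where(state > 0, (state + 1) % n_states, 0)
        quiescent = state == 0
        fire = quiescent & (rng.random(N) < lam)          # external drive
        if active.any():
            # prob. of receiving no transmission from any active neighbour
            p_not = np.prod(1.0 - adj_p[:, active], axis=1)
            fire |= quiescent & (rng.random(N) < 1.0 - p_not)
        nxt[fire] = 1
        state = nxt
        activity[t] = (state == 1).mean()
    return float(activity[T // 2:].mean())  # discard the transient half

def dynamic_range(adj_p, rates):
    """Delta = 10 log10(r_0.9 / r_0.1): the stimulus interval over which
    the response curve F spans 10%-90% of its range.  `rates` is assumed
    sorted ascending and wide enough to saturate F."""
    F = np.array([simulate_kc(adj_p, rate=r) for r in rates])
    F0, Fmax = F[0], F[-1]
    lo, hi = F0 + 0.1 * (Fmax - F0), F0 + 0.9 * (Fmax - F0)
    r_lo = rates[np.argmax(F >= lo)]
    r_hi = rates[np.argmax(F >= hi)]
    return 10.0 * np.log10(r_hi / r_lo)

def shannon_entropy(samples):
    """Entropy (bits) of an empirical distribution, e.g. of avalanche
    lifetimes (information efficiency) or sizes (information capacity)."""
    _, counts = np.unique(samples, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())
```

In this sketch the information efficiency would be shannon_entropy applied to avalanche lifetimes collected in the stimulus-free limit (rate -> 0), and the average branching ratio is tuned through the magnitudes of adj_p.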
... Why do we need more than avalanche distributions? It is known theoretically that PL avalanches may appear in systems that are not critical [40], and systems that are known to be critical may lack a well-defined PL shape in avalanche distributions [17,41,42]. Nevertheless, critical systems obey scaling laws. ...
... plasticity [8][9][10], rare-region effects due to spatial heterogeneity [11,12], excitation/inhibition (E/I) synaptic balance [13,14], or simply synaptic noise [15]. In parallel, both theory and experiments have also attempted to understand the consequences of brain criticality, such as optimizing information transmission [4], processing [16], and capacity [17], as well as memory [18,19] and sensitivity to external stimuli [20,21]. ...
... A few things must be noted here: (i) out of equilibrium, the fluctuation-dissipation theorem is not generally valid [7], such that now χ ≠ Var ρ in general, contrary to what is expected in equilibrium; thus, the susceptibility χ = ∂ρ/∂h and the variance each define a different critical exponent; (ii) one is certainly interested in calculating the averages in equations (17) and (18) at the critical point (ε = h = 0), but it takes a very long time to reach the steady state as the critical point is approached, since ξ → ∞ as ε → 0 and h → 0 (this phenomenon is known as critical slowing down [77]); (iii) higher moments of ρ may be calculated, yielding cumulants that help us identify the critical point [7, see section 4.1.9.2] and [89]; (iv) error estimates for the variance can be computed via bootstrap or jackknife techniques [77]; (v) it is not trivial to measure these quantities in a computer simulation of a system undergoing an absorbing-state phase transition, since the absorbing state with ρ = 0 will eventually be reached in any finite system [7,85], so similar issues might appear in experiments; thus, some simulation techniques were developed to optimize the calculations [92]. ...
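Point (iv), jackknife error bars for quantities such as Var ρ, is simple to implement in practice. The sketch below is a generic illustration of the delete-one-block jackknife for a correlated time series of the order parameter; the function name, the number of blocks, and the choice of statistic are assumptions of this sketch, not code from the quoted paper.

```python
import numpy as np

def jackknife_error(samples, statistic=np.var, n_blocks=20):
    """Delete-one-block jackknife error estimate for a statistic
    (e.g. Var(rho)) computed on a correlated time series.  Blocking
    reduces the bias from temporal correlations."""
    blocks = np.array_split(np.asarray(samples, dtype=float), n_blocks)
    # statistic with the k-th block removed, for each k
    theta = np.array([
        statistic(np.concatenate(blocks[:k] + blocks[k + 1:]))
        for k in range(n_blocks)
    ])
    n = n_blocks
    return float(np.sqrt((n - 1) / n * ((theta - theta.mean()) ** 2).sum()))
```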
Article
Full-text available
A homeostatic mechanism that keeps the brain highly susceptible to stimuli and optimizes many of its functions: although this is a compelling theoretical argument in favor of the brain criticality hypothesis, the experimental evidence accumulated during the last two decades is still not entirely convincing, causing the idea to remain seemingly unknown in the more clinically oriented Neuroscience community. In this perspective review, we briefly review the theoretical framework underlying such a bold hypothesis and point to where theory and experiments agree and disagree, highlighting potential ways to try and bridge the gap between them. Finally, we discuss how the standpoint of Statistical Physics could yield practical applications in Neuroscience and help with the interpretation of what is a healthy or unhealthy brain, regardless of whether the critical brain hypothesis can be validated.
... Less clear are the results in other cases, where the KC model was used to study the dynamics of nonrandom topologies. These include the deviations from mean-field behavior found in scale-free networks, as reported by Copelli and Campos [40] or Mosqueiro and Maia [41]. We note in passing that there are multiple instances in which the KC model was (mis)named the "Greenberg and Hastings stochastic model," which is somewhat confusing in light of the present results, as in the reports by Copelli and Campos [40], Wu et al. [42], Asis and Copelli [43], and Mosqueiro and Maia [41], to name only a few. ...
Article
This report is concerned with the relevance of the microscopic rules that implement individual neuronal activation in determining the collective dynamics under variations of the network topology. To fix ideas, we study the dynamics of two cellular automaton models commonly used, rather indistinctly, as the building blocks of large-scale neuronal networks. One model, due to Greenberg and Hastings (GH), can be described by evolution equations mimicking an integrate-and-fire process, while the other model, due to Kinouchi and Copelli (KC), represents an abstract branching process, where a single active neuron activates a given number of postsynaptic neurons according to a prescribed "activity" branching ratio. Despite the apparent similarity between the local neuronal dynamics of the two models, it is shown that they exhibit very different collective dynamics as a function of the network topology. The GH model shows qualitatively different dynamical regimes as the network topology is varied, including transients to a ground (inactive) state, and continuous and discontinuous dynamical phase transitions. In contrast, the KC model only exhibits a continuous phase transition, independently of the network topology. These results highlight the importance of paying attention to the microscopic rules chosen to model the interneuronal interactions in large-scale numerical simulations, in particular when the network topology is far from a mean-field description. One such case is the extensive work being done in the context of the Human Connectome, where a wide variety of models is being used to understand the brain's collective dynamics.
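The microscopic difference this abstract emphasizes can be stated compactly in code. The following Python sketch paraphrases the two single-neuron rules under common conventions (it is not the authors' implementation): a GH-like cell fires deterministically once the summed input from active neighbours crosses a threshold, while a KC cell is excited probabilistically and independently by each active neighbour.

```python
import numpy as np
rng = np.random.default_rng()

def gh_fires(weights, active, threshold=1.0):
    """Greenberg-Hastings flavour: deterministic threshold rule.
    A quiescent cell fires iff the summed input from currently
    active neighbours reaches the threshold (integrate-and-fire-like).
    weights : per-neighbour input weights; active : boolean mask."""
    return bool(weights[active].sum() >= threshold)

def kc_fires(p, active):
    """Kinouchi-Copelli flavour: probabilistic branching rule.
    Each active neighbour j independently excites the cell with
    probability p[j]; a single success suffices."""
    return bool((rng.random(int(active.sum())) < p[active]).any())
```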
... Less clear are the results in other cases, when the KC model was used to study the dynamics of non random topologies. These include the deviations from mean field behavior found in scale free networks, as reported by Copelli & Campos [38] or Mosqueiro & Maia [39]. We note in passing that there are multiple instances in which the KC model was (mis)named as "Greenberg & Hastings stochastic model", somewhat confusing according to the present results, as in the reports by Copelli & Campos [38], Wu et al., [40], Asis & Copelli [41], Mosqueiro & Maia [39], to name only a few. ...
... These include the deviations from mean field behavior found in scale free networks, as reported by Copelli & Campos [38] or Mosqueiro & Maia [39]. We note in passing that there are multiple instances in which the KC model was (mis)named as "Greenberg & Hastings stochastic model", somewhat confusing according to the present results, as in the reports by Copelli & Campos [38], Wu et al., [40], Asis & Copelli [41], Mosqueiro & Maia [39], to name only a few. ...
Preprint
Full-text available
This report is concerned with the relevance of the microscopic rules, that implement individual neuronal activation, in determining the collective dynamics, under variations of the network topology. To fix ideas we study the dynamics of two cellular automaton models, commonly used, rather in-distinctively, as the building blocks of large scale neuronal networks. One model, due to Greenberg \& Hastings, (GH) can be described by evolution equations mimicking an integrate-and-fire process, while the other model, due to Kinouchi \& Copelli, (KC) represents an abstract branching process, where a single active neuron activates a given number of postsynaptic neurons according to a prescribed "activity" branching ratio. Despite the apparent similarity between the local neuronal dynamics of the two models, it is shown that they exhibit very different collective dynamics as a function of the network topology. The GH model shows qualitatively different dynamical regimes as the network topology is varied, including transients to a ground (inactive) state, continuous and discontinuous dynamical phase transitions. In contrast, the KC model only exhibits a continuous phase transition, independently of the network topology. These results highlight the importance of paying attention to the microscopic rules chosen to model the inter-neuronal interactions in large scale numerical simulations, in particular when the network topology is far from a mean field description. One such case is the extensive work being done in the context of the Human Connectome, where a wide variety of types of models are being used to understand the brain collective dynamics.
... Evidence for the widespread occurrence of criticality in nature, and its corresponding computational advantages, has triggered the interest of scientists in many different fields. The list of advantages associated with criticality spans many systems and different measurable quantities (Assis and Copelli, 2008;Boedecker et al., 2012;Deco et al., 2013;Gollo et al., 2013;Haldeman and Beggs, 2005;Hidalgo et al., 2014;Kastner et al., 2015;Legenstein and Maass, 2007;Livi et al., 2016;Mosqueiro and Maia, 2013;Publio et al., 2012;Schrauwen et al., 2009;Shew and Plenz, 2013). Despite the field traditionally developing outside of neuroscience, many of the most exciting findings now focus on brain dynamics. ...
Preprint
Cognitive function requires the coordination of neural activity across many scales, from neurons and circuits to large-scale networks. As such, it is unlikely that an explanatory framework focused upon any single scale will yield a comprehensive theory of brain activity and cognitive function. Modelling and analysis methods for neuroscience should aim to accommodate multiscale phenomena. Emerging research now suggests that multi-scale processes in the brain arise from so-called critical phenomena that occur very broadly in the natural world. Criticality arises in complex systems perched between order and disorder, and is marked by fluctuations that do not have any privileged spatial or temporal scale. We review the core nature of criticality, the evidence supporting its role in neural systems and its explanatory potential in brain health and disease.
... that information processing seems to be optimized at a second-order absorbing phase transition [28][29][30][31][32][33][34][35][36][37][38][39][40][41][42]. This transition occurs between no activity (the absorbing phase) and nonzero steady-state activity (the active phase). ...
Article
Full-text available
The critical brain hypothesis states that there are information processing advantages for neuronal networks working close to the critical region of a phase transition. If this is true, we must ask how the networks achieve and maintain this critical state. Here, we review several proposed biological mechanisms that turn the critical region into an attractor of a dynamics in network parameters like synapses, neuronal gains, and firing thresholds. Since neuronal networks (both biological and model ones) are not conservative but dissipative, we expect not exact criticality but self-organized quasicriticality, where the system hovers around the critical point.
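As a concrete caricature of one such mechanism, dynamic neuronal gains, consider a rule in which each gain slowly recovers and is depressed whenever its neuron fires, pulling the network back towards the critical region from either phase. The sketch below is generic (parameter names and values are assumptions, not a specific model from the review):

```python
import numpy as np

def adapt_gains(gain, fired, tau=1000.0, u=0.1):
    """Self-organizing gain rule: slow recovery, drop on firing.
    gain  : array of neuronal gains
    fired : boolean array, True where the neuron spiked this step
    Too little activity lets gains creep up (more excitable network);
    too much activity depresses them, so the system hovers near
    the critical region rather than sitting exactly on it."""
    gain = gain + gain / tau          # slow multiplicative recovery
    gain[fired] *= (1.0 - u)          # depress gains of spiking neurons
    return gain
```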
... These firings may be organized in avalanches of action potentials that spread throughout the cortex. Critical avalanches are known to enable the propagation of fluctuations through local interactions due to long-range spatiotemporal correlations [3], generating optimized processing and functional features [4][5][6][7][8]. ...
Article
Full-text available
Recent experiments suggested that a homeostatic regulation of synaptic balance leads the visual system to recover and maintain a regime of power-law avalanches. Here we study an excitatory/inhibitory (E/I) mean-field neuronal network that has a critical point with power-law avalanches and synaptic balance. When short-term depression in inhibitory synapses and firing threshold adaptation are added, the system hovers around the critical point. This homeostatically self-organized quasicritical (SOqC) dynamics generates E/I synaptic current cancellation on fast timescales, causing fluctuation-driven asynchronous-irregular (AI) firing. We present the full phase diagram of the model without adaptation, varying external input versus synaptic coupling. This system has a rich dynamical repertoire of spiking patterns: synchronous regular (SR), asynchronous regular (AR), synchronous irregular (SI), slow oscillations (SO), and AI. It also presents dynamic balance of synaptic currents, since inhibitory currents try to compensate excitatory currents over time, resulting in both of them scaling linearly with external input. Our model thus unifies two different perspectives on cortical spontaneous activity: both critical avalanches and fluctuation-driven AI firing arise from SOqC homeostatic adaptation and are indeed two sides of the same coin.
... Critical avalanches are known to enable the propagation of fluctuations through local interactions due to long-range spatiotemporal correlations [3], generating optimized processing and functional features [4][5][6][7][8]. ...
Preprint
Full-text available
Asynchronous irregular (AI) and critical states are two competing frameworks proposed to explain spontaneous neuronal activity. Here, we propose a mean-field model with simple stochastic neurons that generalizes the integrate-and-fire network of Brunel (2000). We show that the point with balanced inhibitory/excitatory synaptic weight ratio g_c ≈ 4 corresponds to a second-order absorbing phase transition usual in self-organized critical (SOC) models. At the synaptic balance point g_c, the network exhibits power-law neuronal avalanches with the usual exponents, whereas for nonzero external field the system displays the four usual synchronicity states of balanced networks. We add homeostatic inhibition and firing-rate adaptation and obtain a self-organized quasi-critical balanced state with avalanches and AI-like activity. Our model might explain why different inhibition levels are obtained under different experimental conditions and in different regions of the brain, since at least two dynamical mechanisms are necessary to obtain a truly balanced state, without which the network may hover in different regions of the presented theoretical phase diagram.
Article
Epilepsy is a common neurological disorder characterized by recurring seizures, but its underlying mechanisms remain poorly understood. Despite extensive research, there are still gaps in our knowledge about the relationship between brain dynamics and seizures. In this study, our aim is to address these gaps by proposing a novel approach to assess the role of brain network dynamics in the onset of seizures. Specifically, we investigate the relationship between brain dynamics and seizures by tracking the distance to criticality. Our hypothesis is that this distance plays a crucial role in brain state changes and that seizures may be related to critical transitions of this distance. To test this hypothesis, we develop a method to measure the evolution of the brain network's distance to criticality (i.e., the distance to the tipping point, DTP) using dynamic network biomarker theory and random matrix theory. The results show that the DTP of the brain decreases significantly immediately after the onset of an epileptic seizure, suggesting that the brain loses its well-defined quasi-critical state during seizures. We refer to this phenomenon as the "criticality of the criticality" (COC). Furthermore, we observe that the DTP sequence changes shape before and after seizure onset. This suggests the possibility of identifying early warning signals (EWS) in the dynamic sequence of the DTP, which could be utilized for seizure prediction. Our results show that the Hurst exponent, skewness, kurtosis, autocorrelation, and variance of the DTP sequence are potential EWS features. This study advances our understanding of the relationship between brain dynamics and seizures and highlights the potential of criticality-based measures to predict and prevent seizures.
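Apart from the Hurst exponent (which requires a rescaled-range or detrended-fluctuation estimator, omitted here for brevity), the EWS features listed above are standard sliding-window statistics. The sketch below is an illustrative implementation only; the window length and function names are assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy import stats

def ews_features(dtp, window=256):
    """Sliding-window early-warning-signal features of a DTP sequence:
    variance, lag-1 autocorrelation, skewness, and kurtosis.
    Returns one feature row per window position."""
    dtp = np.asarray(dtp, dtype=float)
    feats = []
    for t in range(window, len(dtp)):
        w = dtp[t - window:t]
        ac1 = np.corrcoef(w[:-1], w[1:])[0, 1]   # lag-1 autocorrelation
        feats.append((w.var(), ac1, stats.skew(w), stats.kurtosis(w)))
    return np.array(feats)
```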
Article
Full-text available
Networks of living neurons exhibit diverse patterns of activity, including oscillations, synchrony, and waves. Recent work in physics has shown yet another mode of activity in systems composed of many nonlinear units interacting locally. For example, avalanches, earthquakes, and forest fires all propagate in systems organized into a critical state in which event sizes show no characteristic scale and are described by power laws. We hypothesized that a similar mode of activity with complex emergent properties could exist in networks of cortical neurons. We investigated this issue in mature organotypic cultures and acute slices of rat cortex by recording spontaneous local field potentials continuously using a 60-channel multielectrode array. Here, we show that propagation of spontaneous activity in cortical networks is described by equations that govern avalanches. As predicted by theory for a critical branching process, the propagation obeys a power law with an exponent of -3/2 for event sizes, with a branching parameter close to the critical value of 1. Simulations show that a branching parameter at this value optimizes information transmission in feedforward networks, while preventing runaway network excitation. Our findings suggest that "neuronal avalanches" may be a generic property of cortical networks, and represent a mode of activity that differs profoundly from oscillatory, synchronized, or wave-like network states. In the critical state, the network may satisfy the competing demands of information transmission and network stability.
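Operationally, the branching parameter reported here is estimated as the average number of descendants per ancestor between consecutive time bins of an avalanche. A minimal sketch of one common convention follows (it uses only the first two bins of each avalanche; Beggs and Plenz apply refinements for electrode saturation that are not reproduced here):

```python
import numpy as np

def branching_parameter(avalanches):
    """Estimate sigma as the mean ratio of electrodes active in the
    second time bin to those active in the first bin of each avalanche;
    sigma close to 1 signals a critical branching process.

    avalanches : list of 1-D arrays, each giving the number of active
                 electrodes in successive time bins of one avalanche.
    """
    ratios = [a[1] / a[0] for a in avalanches if len(a) > 1 and a[0] > 0]
    return float(np.mean(ratios))
```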
Article
Full-text available
We propose a novel associative memory model wherein the neural activity without an input (i.e., spontaneous activity) is modified by an input to generate a target response that is memorized for recall upon the same input. Suitable design of synaptic connections enables the model to memorize a number of input/output (I/O) mappings equal to 70% of the total number of neurons, where the evoked activity distinguishes a target pattern from the others. Spontaneous neural activity without an input shows chaotic dynamics but retains some similarity with evoked activities, as reported in recent experimental studies.
Article
Full-text available
The relation between large-scale brain structure and function is an outstanding open problem in neuroscience. We approach this problem by studying the dynamical regime under which realistic spatiotemporal patterns of brain activity emerge from the empirically derived network of human brain neuroanatomical connections. The results show that critical dynamics unfolding on the structural connectivity of the human brain allow the recovery of many key experimental findings obtained from functional magnetic resonance imaging, such as divergence of the correlation length, the anomalous scaling of correlation fluctuations, and the emergence of large-scale resting state networks.
Article
Full-text available
Cognitive neuroscience investigates how cognitive function is produced by the brain. Seen from a reverse angle, cognitive neuroscience studies how brain activity is modulated by the execution of cognitive tasks. In the former case, cognitive function is characterized in terms of neural properties associated with the execution of given cognitive tasks, while in the latter it can be thought of as a probe exposing information on brain dynamics. Brain activity displays dynamics independently of whether a particular task is carried out or not. The question is then: should cognitive neuroscience take an interest in the properties of resting brain activity? And, if so, how and to what extent can studying resting brain activity help characterize the neural correlates of cognitive processes?
Article
Full-text available
The activity of neural populations is determined not only by sensory inputs but also by internally generated patterns. During quiet wakefulness, the brain produces spontaneous firing events that can spread over large areas of cortex and have been suggested to underlie processes such as memory recall and consolidation. Here we demonstrate a different role for spontaneous activity in sensory cortex: gating of sensory inputs. We show that population activity in rat auditory cortex is composed of transient 50-100 ms packets of spiking activity that occur irregularly during silence and sustained tone stimuli, but reliably at tone onset. Population activity within these packets had broadly consistent spatiotemporal structure, but the rate and the precise relative timing of action potentials varied between stimuli. Packet frequency varied with cortical state, with desynchronized-state activity consistent with the superposition of multiple overlapping packets. We suggest that such packets reflect the sporadic opening of a "gate" that allows auditory cortex to broadcast a representation of external sounds to other brain regions.
Article
Given a sequence of nonnegative real numbers λ_0, λ_1, … which sum to 1, we consider random graphs having approximately λ_i n vertices of degree i. Essentially, we show that if Σ_i i(i − 2)λ_i > 0, then such graphs almost surely have a giant component, while if Σ_i i(i − 2)λ_i < 0, then almost surely all components in such graphs are small. We can apply these results to G(n,p), G(n,M), and other well-known models of random graphs. There are also applications related to the chromatic number of sparse random graphs.
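The criterion in this abstract is directly computable for any degree distribution: writing Q = Σ_i i(i − 2)λ_i = ⟨k²⟩ − 2⟨k⟩, a giant component exists almost surely when Q > 0. A small sketch (the function name is ours):

```python
import numpy as np

def molloy_reed_Q(degree_probs):
    """Q = sum_i i*(i-2)*lambda_i = <k^2> - 2<k>.
    Q > 0  => a giant component exists almost surely;
    Q < 0  => all components are small.
    degree_probs[i] = fraction of vertices with degree i."""
    i = np.arange(len(degree_probs))
    return float(np.sum(i * (i - 2) * np.asarray(degree_probs)))

# Example: for Poisson degrees with mean c, <k^2> = c^2 + c, so
# Q = c^2 - c = c(c - 1) > 0 exactly when c > 1, recovering the
# classical Erdos-Renyi giant-component threshold.
```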
Article
Resting-state networks (RSNs), which have become a main focus in neuroimaging research, are best simulated by large-scale cortical models in which networks teeter on the edge of instability. In this state, the functional networks are in a stable low-firing state while being continuously pulled towards multiple other configurations. Small extrinsic perturbations can shape task-related network dynamics, whereas perturbations from intrinsic noise generate excursions reflecting the range of available functional networks. This is particularly advantageous for the efficiency and speed of network mobilization. Thus, the resting state reflects the dynamical capabilities of the brain, which emphasizes the vital interplay of time and space. In this article, we propose a new theoretical framework for RSNs that can serve as a fertile ground for empirical testing.
Article
Conserved dynamical systems are generally considered to be critical. We study a class of critical routing models, equivalent to random maps, which can be solved rigorously in the thermodynamic limit. The information flow is conserved for these routing models and governed by cyclic attractors. We consider two classes of information flow: Markovian routing without memory and vertex routing involving a one-step routing memory. Investigating the respective cycle length distributions for complete graphs, we find log corrections to power-law scaling for the mean cycle length, as a function of the number of vertices, and a sub-polynomial growth for the overall number of cycles. When observing a real-world dynamical system experimentally, one normally samples its phase space stochastically. The number and the length of the attractors are then weighted by the size of their respective basins of attraction. This situation is equivalent, for theory studies, to "on the fly" generation of the dynamical transition probabilities. For the case of vertex routing models, we find power-law scaling for the weighted average length of attractors, for both conserved routing models. These results show that critical dynamical systems are generically not scale-invariant but may show power-law scaling when sampled stochastically. It is hence important to distinguish between intrinsic properties of a critical dynamical system and the behavior one would observe when randomly probing its phase space.
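The stochastic sampling of phase space described here is easy to mimic numerically: draw a random map f: V → V, iterate from a random initial vertex until some state repeats, and record the length of the cycle reached; this automatically weights each attractor by its basin of attraction. A sketch (system size and sample count are arbitrary illustrative choices):

```python
import numpy as np

def sampled_cycle_lengths(n_vertices=1000, n_samples=500, rng=None):
    """Sample attractor cycle lengths of random maps f: V -> V,
    weighting each cycle by its basin of attraction, as a stochastic
    probe of phase space would."""
    rng = rng or np.random.default_rng()
    lengths = []
    for _ in range(n_samples):
        f = rng.integers(0, n_vertices, size=n_vertices)  # a random map
        x = int(rng.integers(0, n_vertices))              # random start
        seen, step = {}, 0
        while x not in seen:              # walk until a state repeats
            seen[x] = step
            x, step = int(f[x]), step + 1
        lengths.append(step - seen[x])    # length of the cycle reached
    return np.array(lengths)
```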