Content uploaded by Leonardo Paulo Maia on Mar 10, 2014. Content may be subject to copyright.

Spontaneous neural activity has been increasingly recognized as a subject of key relevance in neuroscience. It exhibits nontrivial spatiotemporal structure reflecting the organization of the underlying neural network and has proved to be closely intertwined with stimulus-induced activity patterns. As an additional contribution in this regard, we report computational studies strongly suggesting that a stimulus-free feature governs the behavior of an important psychophysical measure of the sensitivity of a sensory system to a stimulus, the so-called dynamic range. Indeed, in this paper we show that the entropy of the distribution of avalanche lifetimes (the information efficiency, since it can be interpreted as the efficiency of the network viewed as a communication channel) always accompanies the dynamic range in the benchmark model for sensory systems. Specifically, by simulating the Kinouchi-Copelli (KC) model on two broad families of model networks, we generically observed that both quantities increase or decrease together as functions of the average branching ratio (the control parameter of the KC model) and that the information efficiency typically exhibits critical optimization jointly with the dynamic range, i.e., both quantities are optimized at the same value of that control parameter, which turns out to be the critical point of a nonequilibrium phase transition. In contrast with the practice of identifying critical points through power laws, common in studies describing measured neuronal avalanches, we rely on data collapses as more robust signatures of criticality to claim that critical optimization may happen even when the distribution of avalanche lifetimes is not a power law, as suggested by a recent experiment.
Finally, we note that the entropy of the size distribution of avalanches (the information capacity) does not always follow the dynamic range and the information efficiency when they are critically optimized, despite being more widely used than the latter to describe the computational capabilities of a neural network. This strongly suggests that dynamical rules allowing a proper temporal matching of the states of the interacting neurons, rather than an increase in the number of available units, are the key to good performance in information processing.
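
The KC dynamics summarized above is simple enough to sketch in a few lines. The following is a minimal illustrative implementation, not the authors' code: the per-link activation probability p, the cyclic refractory states, and the use of Shannon entropy over avalanche lifetimes as the information measure are all simplifying assumptions on our part.

```python
import random
from collections import Counter
from math import log

def kc_avalanche(adj, p, n_states=3, max_steps=10 ** 4, rng=random):
    """One avalanche of a Kinouchi-Copelli-style automaton: seed a single
    active neuron and let activity spread until it dies out (or a step cap
    is hit). States: 0 = quiescent, 1 = active, 2..n_states-1 = refractory.
    Each active neuron excites each quiescent neighbour with probability p.
    Returns (size, lifetime)."""
    N = len(adj)
    state = [0] * N
    state[rng.randrange(N)] = 1
    size, lifetime = 1, 0
    while lifetime < max_steps and any(s == 1 for s in state):
        lifetime += 1
        new = list(state)
        for i, s in enumerate(state):
            if s == 1:
                for j in adj[i]:
                    if state[j] == 0 and new[j] == 0 and rng.random() < p:
                        new[j] = 1
                        size += 1
                new[i] = 2 % n_states   # -> refractory (or 0 if n_states == 2)
            elif s >= 2:
                new[i] = (s + 1) % n_states   # cycle back towards quiescence
        state = new
    return size, lifetime

def entropy(samples):
    """Shannon entropy (bits) of the empirical distribution of `samples`."""
    counts = Counter(samples)
    total = sum(counts.values())
    return -sum(c / total * log(c / total, 2) for c in counts.values())
```

Sweeping the branching ratio (here, mean degree times p) and computing `entropy` over many avalanche lifetimes reproduces, in spirit, the information-efficiency curve discussed in the abstract.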



... It is known theoretically that power-law (PL) avalanches may appear in systems that are not critical [39], and systems that are known to be critical may lack a well-defined PL shape in their avalanche distributions [17,40,41]. Nevertheless, critical systems obey scaling laws. ...

... A few things must be noted here: (i) out of equilibrium, the fluctuation-dissipation theorem is not generally valid [7], so that now χ ≠ Var ρ in general, contrary to what is expected in equilibrium; thus, the susceptibility, χ = ∂ρ/∂h, and the variance each define a different critical exponent; (ii) one is certainly interested in calculating the averages in Eqs. (17) and (18) […] cumulants that help us identify the critical point [7, see Section 4.1.9.2] and [87]; (iv) error estimates for the variance can be computed via bootstrap or jackknife techniques [75]; (v) it is not trivial to measure these quantities in a computer simulation of a system undergoing an absorbing-state phase transition, since the absorbing state with ρ = 0 will eventually be reached for any finite system [7,83], so similar issues might appear in experiments; thus, some simulation techniques were developed to optimize the calculations [90]. ...
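
Point (iv) above, bootstrap error bars for quantities such as Var ρ, can be sketched generically. This is our own minimal illustration, not code from the cited tutorial, and it assumes uncorrelated samples (a block bootstrap would be needed for a correlated time series):

```python
import random
import statistics as st

def bootstrap_se(data, stat, n_boot=1000, rng=random):
    """Bootstrap standard error of statistic `stat` (e.g. the variance of the
    activity density rho): resample with replacement, recompute the statistic
    on each resample, and report the spread of the replicates."""
    n = len(data)
    reps = [stat([data[rng.randrange(n)] for _ in range(n)])
            for _ in range(n_boot)]
    return st.pstdev(reps)
```

For example, `bootstrap_se(rho_series, st.pvariance)` gives an error estimate for Var ρ measured in a simulation run.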

... ρ and Var ρ [Eqs. (17) and (18)] as a function of the synaptic coupling for different system sizes L. The scaling forms from Eqs. (37) and (38) are shown, yielding the exponents β/ν⊥* and γ = 0. A zero exponent is frequently obtained for a quantity that has a discontinuous jump at the critical point. We write ν⊥* with the asterisk because in this particular example the phase transition is of the mean-field directed-percolation universality class; this means that the scaling law ρ ∼ L^(β/ν⊥*) holds with corrections [87]. ...
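
The finite-size data collapse referred to here (and used as the criticality signature in the main abstract) can be sketched as follows; the exponent names and the synthetic scaling function in the test are illustrative assumptions, not the paper's values:

```python
def collapse(curves, beta_over_nu, one_over_nu, xc):
    """Rescale raw (x, rho) curves measured at several linear sizes L so
    that, with the right exponents and critical point xc, all sizes fall on
    a single master curve:
        X = (x - xc) * L**(1/nu_perp),  Y = rho * L**(beta/nu_perp).
    `curves` maps L -> list of (x, rho) points."""
    return {L: [((x - xc) * L ** one_over_nu, rho * L ** beta_over_nu)
                for x, rho in pts]
            for L, pts in curves.items()}
```

The quality of a collapse is then judged by how tightly the rescaled curves from different L overlap, which is more robust than fitting a power law to a single distribution.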

A homeostatic mechanism that keeps the brain highly susceptible to stimuli and optimizes many of its functions: although this is a compelling theoretical argument in favor of the brain criticality hypothesis, the experimental evidence accumulated over the last two decades is still not entirely convincing, and the idea remains seemingly unknown in the more clinically oriented Neuroscience community. In this perspective review, we briefly review the theoretical framework underlying such a bold hypothesis, point out where theory and experiments agree and disagree, and highlight potential ways to bridge the gap between them. Finally, we discuss how the standpoint of Statistical Physics could yield practical applications in Neuroscience and help with the interpretation of what constitutes a healthy or unhealthy brain, regardless of whether the critical brain hypothesis can be validated.

... Less clear are the results in other cases, when the KC model was used to study the dynamics of non-random topologies. These include the deviations from mean-field behavior found in scale-free networks, as reported by Copelli and Campos [40] or Mosqueiro and Maia [41]. We note in passing that there are multiple instances in which the KC model was (mis)named the "Greenberg and Hastings stochastic model," which is somewhat confusing in light of the present results, as in the reports by Copelli and Campos [40], Wu et al. [42], Assis and Copelli [43], and Mosqueiro and Maia [41], to name only a few. ...


This report is concerned with the relevance of the microscopic rules that implement individual neuronal activation in determining the collective dynamics under variations of the network topology. To fix ideas, we study the dynamics of two cellular automaton models commonly used, rather interchangeably, as the building blocks of large-scale neuronal networks. One model, due to Greenberg and Hastings (GH), can be described by evolution equations mimicking an integrate-and-fire process, while the other, due to Kinouchi and Copelli (KC), represents an abstract branching process in which a single active neuron activates a given number of postsynaptic neurons according to a prescribed "activity" branching ratio. Despite the apparent similarity between the local neuronal dynamics of the two models, we show that they exhibit very different collective dynamics as a function of the network topology. The GH model shows qualitatively different dynamical regimes as the network topology is varied, including transients to a ground (inactive) state and both continuous and discontinuous dynamical phase transitions. In contrast, the KC model exhibits only a continuous phase transition, independently of the network topology. These results highlight the importance of paying attention to the microscopic rules chosen to model interneuronal interactions in large-scale numerical simulations, in particular when the network topology is far from a mean-field description. One such case is the extensive work being done in the context of the Human Connectome, where a wide variety of models are being used to understand the brain's collective dynamics.
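
To make the contrast between the two microscopic rules concrete, here is one possible minimal rendering (our own simplification: states 0 = quiescent, 1 = active, 2 = refractory, a single integer threshold for GH, and a single per-link probability p for KC):

```python
import random

def gh_step(state, adj, threshold=1):
    """Greenberg-Hastings: a quiescent (0) cell fires iff the number of
    active (1) neighbours reaches `threshold` (deterministic, in the spirit
    of integrate-and-fire); active -> refractory (2) -> quiescent."""
    new = []
    for i, s in enumerate(state):
        if s == 0:
            k = sum(state[j] == 1 for j in adj[i])
            new.append(1 if k >= threshold else 0)
        else:
            new.append(2 if s == 1 else 0)
    return new

def kc_step(state, adj, p, rng=random):
    """Kinouchi-Copelli: a quiescent cell fires stochastically, with each
    active neighbour succeeding independently with probability p (an
    abstract branching process); same 0 -> 1 -> 2 -> 0 cycle."""
    new = []
    for i, s in enumerate(state):
        if s == 0:
            k = sum(state[j] == 1 for j in adj[i])
            p_fire = 1 - (1 - p) ** k   # prob. at least one neighbour succeeds
            new.append(1 if k and rng.random() < p_fire else 0)
        else:
            new.append(2 if s == 1 else 0)
    return new
```

Note that `gh_step` is deterministic given the configuration, whereas `kc_step` is stochastic; this is precisely the microscopic difference whose collective consequences the report investigates.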


... that information processing seems to be optimized at a second-order absorbing phase transition [28][29][30][31][32][33][34][35][36][37][38][39][40][41][42]. This transition occurs between no activity (the absorbing phase) and nonzero steady-state activity (the active phase). ...

The critical brain hypothesis states that there are information processing advantages for neuronal networks working close to the critical region of a phase transition. If this is true, we must ask how the networks achieve and maintain this critical state. Here, we review several proposed biological mechanisms that turn the critical region into an attractor of the dynamics of network parameters such as synapses, neuronal gains, and firing thresholds. Since neuronal networks (biological and models) are not conservative but dissipative, we expect not exact criticality but self-organized quasicriticality, where the system hovers around the critical point.


... These firings may be organized in avalanches of action potentials that spread throughout the cortex. Critical avalanches are known to enable the propagation of fluctuations through local interactions due to long-range spatiotemporal correlations [3], generating optimized processing and functional features [4][5][6][7][8]. ...

Recent experiments suggested that homeostatic regulation of synaptic balance leads the visual system to recover and maintain a regime of power-law avalanches. Here we study an excitatory/inhibitory (E/I) mean-field neuronal network that has a critical point with power-law avalanches and synaptic balance. When short-term depression in inhibitory synapses and firing threshold adaptation are added, the system hovers around the critical point. This homeostatically self-organized quasi-critical (SOqC) dynamics generates E/I synaptic current cancellation on fast time scales, causing fluctuation-driven asynchronous-irregular (AI) firing. We present the full phase diagram of the model without adaptation, varying external input versus synaptic coupling. This system has a rich dynamical repertoire of spiking patterns: synchronous regular (SR), asynchronous regular (AR), synchronous irregular (SI), slow oscillations (SO), and AI. It also presents dynamic balance of synaptic currents, since inhibitory currents tend to compensate excitatory currents over time, resulting in both of them scaling linearly with external input. Our model thus unifies two different perspectives on cortical spontaneous activity: both critical avalanches and fluctuation-driven AI firing arise from SOqC homeostatic adaptation, and are indeed two sides of the same coin.


... Critical avalanches are known to enable the propagation of fluctuations through local interactions due to long-range spatiotemporal correlations [3], generating optimized processing and functional features [4][5][6][7][8]. ...

Asynchronous irregular (AI) and critical states are two competing frameworks proposed to explain spontaneous neuronal activity. Here, we propose a mean-field model with simple stochastic neurons that generalizes the integrate-and-fire network of Brunel (2000). We show that the point with balanced inhibitory/excitatory synaptic weight ratio g_c ≈ 4 corresponds to a second-order absorbing phase transition usual in self-organized critical (SOC) models. At the synaptic balance point g_c, the network exhibits power-law neuronal avalanches with the usual exponents, whereas for nonzero external field the system displays the four usual synchronicity states of balanced networks. We add homeostatic inhibition and firing-rate adaptation and obtain a self-organized quasi-critical balanced state with avalanches and AI-like activity. Our model might explain why different inhibition levels are obtained in different experimental conditions and for different regions of the brain, since at least two dynamical mechanisms are necessary to obtain a truly balanced state, without which the network may hover in different regions of the presented theoretical phase diagram.

... We now consider an ASOC version of a probabilistic cellular automaton well studied in the literature [19,40,41,42,43]. It is a model for excitable media, and an interpretation in terms of neuronal networks is natural. ...

In the last decade, several models with adaptive mechanisms (link deletion-creation, dynamical synapses, dynamical gains) have been proposed as examples of self-organized criticality (SOC). However, all these systems present hovering stochastic oscillations around the critical region, and the origin of this behaviour is not clear. Here we make a linear stability analysis of the mean-field fixed points of three adaptive SOC systems. We find that the fixed points correspond to barely stable spirals that become indifferent at criticality, where a Neimark-Sacker bifurcation occurs. This near indifference means that, in real systems, finite-size fluctuations (as well as external noise) can excite both stochastic oscillations and avalanches. The coexistence of these two types of neuronal activity is an experimental prediction that differs from standard SOC models.

In the last decade, several models with network adaptive mechanisms (link deletion-creation, dynamic synapses, dynamic gains) have been proposed as examples of self-organized criticality (SOC) to explain neuronal avalanches. However, all these systems present stochastic oscillations hovering around the critical region that are incompatible with standard SOC. Here we make a linear stability analysis of the mean-field fixed points of two self-organized quasi-critical systems: a fully connected network of discrete-time stochastic spiking neurons with firing rate adaptation produced by dynamic neuronal gains, and an excitable cellular automaton with depressing synapses. We find that the fixed point corresponds to a stable focus that loses stability at criticality. We argue that when this focus is close to becoming indifferent, demographic noise can elicit stochastic oscillations that frequently fall into the absorbing state. This mechanism interrupts the oscillations, producing both power-law avalanches and dragon king events, which appear as bands of synchronized firings in raster plots. Our approach differs from standard SOC models in that it predicts the coexistence of these different types of neuronal activity.
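
The kind of linear stability analysis described in these two abstracts, classifying a mean-field fixed point of a two-dimensional map from its Jacobian, can be sketched generically. The Jacobian entries below are placeholders, not the models' actual equations:

```python
import cmath

def classify_fixed_point(a, b, c, d, eps=1e-12):
    """Classify a fixed point of a 2D discrete-time (mean-field) map from
    its Jacobian [[a, b], [c, d]]: a complex eigenvalue pair means a focus
    (spiralling approach); both moduli < 1 means stable. A complex pair
    whose modulus crosses 1 is the Neimark-Sacker bifurcation mentioned
    in the abstracts. Returns (is_focus, is_stable)."""
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)          # complex-safe discriminant
    lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
    is_focus = abs(lam1.imag) > eps
    is_stable = max(abs(lam1), abs(lam2)) < 1
    return is_focus, is_stable
```

A barely stable focus (complex pair with modulus just below 1) is exactly the situation in which demographic noise can sustain the stochastic oscillations these papers analyze.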

We propose a novel associative memory model wherein the neural activity without an input (i.e., spontaneous activity) is modified by an input to generate a target response that is memorized for recall upon the same input. Suitable design of synaptic connections enables the model to memorize input/output (I/O) mappings equaling 70% of the total number of neurons, where the evoked activity distinguishes a target pattern from others. Spontaneous neural activity without an input shows chaotic dynamics but keeps some similarity with evoked activities, as reported in recent experimental studies.

The relation between large-scale brain structure and function is an outstanding open problem in neuroscience. We approach this problem by studying the dynamical regime under which realistic spatiotemporal patterns of brain activity emerge from the empirically derived network of human brain neuroanatomical connections. The results show that critical dynamics unfolding on the structural connectivity of the human brain allow the recovery of many key experimental findings obtained from functional magnetic resonance imaging, such as divergence of the correlation length, the anomalous scaling of correlation fluctuations, and the emergence of large-scale resting state networks.

Cognitive neuroscience investigates how cognitive function is produced by the brain. Seen from a reverse angle, cognitive neuroscience studies how brain activity is modulated by the execution of cognitive tasks. In the former case, cognitive function is characterized in terms of neural properties associated with the execution of given cognitive tasks, while in the latter it can be thought of as a probe exposing information on brain dynamics.
Brain activity displays dynamics independently of whether a particular task is carried out or not. The question is then: should cognitive neuroscience get interested in the properties of resting brain activity? And, if so, how and to what extent can studying resting brain activity help characterize the neural correlates of cognitive processes?

The activity of neural populations is determined not only by sensory inputs but also by internally generated patterns. During quiet wakefulness, the brain produces spontaneous firing events that can spread over large areas of cortex and have been suggested to underlie processes such as memory recall and consolidation. Here we demonstrate a different role for spontaneous activity in sensory cortex: gating of sensory inputs. We show that population activity in rat auditory cortex is composed of transient 50-100 ms packets of spiking activity that occur irregularly during silence and sustained tone stimuli, but reliably at tone onset. Population activity within these packets had broadly consistent spatiotemporal structure, but the rate and also precise relative timing of action potentials varied between stimuli. Packet frequency varied with cortical state, with desynchronized state activity consistent with superposition of multiple overlapping packets. We suggest that such packets reflect the sporadic opening of a "gate" that allows auditory cortex to broadcast a representation of external sounds to other brain regions.

Networks of living neurons exhibit diverse patterns of activity, including oscillations, synchrony, and waves. Recent work in physics has shown yet another mode of activity in systems composed of many nonlinear units interacting locally. For example, avalanches, earthquakes, and forest fires all propagate in systems organized into a critical state in which event sizes show no characteristic scale and are described by power laws. We hypothesized that a similar mode of activity with complex emergent properties could exist in networks of cortical neurons. We investigated this issue in mature organotypic cultures and acute slices of rat cortex by recording spontaneous local field potentials continuously using a 60-channel multielectrode array. Here, we show that propagation of spontaneous activity in cortical networks is described by equations that govern avalanches. As predicted by theory for a critical branching process, the propagation obeys a power law with an exponent of -3/2 for event sizes, with a branching parameter close to the critical value of 1. Simulations show that a branching parameter at this value optimizes information transmission in feedforward networks, while preventing runaway network excitation. Our findings suggest that “neuronal avalanches” may be a generic property of cortical networks, and represent a mode of activity that differs profoundly from oscillatory, synchronized, or wave-like network states. In the critical state, the network may satisfy the competing demands of information transmission and network stability.
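
The critical branching process invoked here is easy to simulate. A minimal sketch follows; drawing the offspring number as Binomial(2, σ/2), so that the branching parameter is σ, is our own distributional assumption:

```python
import random

def avalanche_size(sigma, rng=random, cap=10 ** 5):
    """Total size of one Galton-Watson avalanche: start with one active
    unit; each active unit independently spawns 0, 1 or 2 descendants,
    each with probability sigma/2, so the mean offspring number (the
    branching parameter) is sigma. At sigma = 1 the process is critical
    and sizes follow a -3/2 power law; the cap guards against
    (super)critical runaways."""
    active, size = 1, 1
    while active and size < cap:
        nxt = sum((rng.random() < sigma / 2) + (rng.random() < sigma / 2)
                  for _ in range(active))
        active = nxt
        size += nxt
    return size
```

In the subcritical regime the mean avalanche size is 1/(1 - σ), e.g. about 2 at σ = 0.5, which provides a quick sanity check on the simulation.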

Given a sequence of nonnegative real numbers λ_0, λ_1, … which sum to 1, we consider random graphs having approximately λ_i n vertices of degree i. Essentially, we show that if Σ_i i(i − 2)λ_i > 0, then such graphs almost surely have a giant component, while if Σ_i i(i − 2)λ_i < 0, then almost surely all components in such graphs are small. We can apply these results to G_{n,p}, G_{n,M}, and other well-known models of random graphs. There are also applications related to the chromatic number of sparse random graphs. © 1995 Wiley Periodicals, Inc.
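
The criterion in this abstract (the Molloy-Reed condition) reduces to one line of code; note that Σ_i i(i − 2)λ_i = ⟨k²⟩ − 2⟨k⟩, its more familiar form. A small sketch:

```python
def molloy_reed_Q(lam):
    """Q = sum_i i*(i-2)*lambda_i for a degree distribution `lam` mapping
    degree -> probability. Q > 0: a giant component exists almost surely;
    Q < 0: almost surely all components are small."""
    return sum(i * (i - 2) * p for i, p in lam.items())
```

For instance, a 3-regular graph ({3: 1.0}) gives Q = 3 > 0, so a giant component exists, while a perfect matching ({1: 1.0}) gives Q = -1 < 0.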

Resting-state networks (RSNs), which have become a main focus in neuroimaging research, can be best simulated by large-scale cortical models in which networks teeter on the edge of instability. In this state, the functional networks are in a low firing stable state while they are continuously pulled towards multiple other configurations. Small extrinsic perturbations can shape task-related network dynamics, whereas perturbations from intrinsic noise generate excursions reflecting the range of available functional networks. This is particularly advantageous for the efficiency and speed of network mobilization. Thus, the resting state reflects the dynamical capabilities of the brain, which emphasizes the vital interplay of time and space. In this article, we propose a new theoretical framework for RSNs that can serve as a fertile ground for empirical testing.

Conserved dynamical systems are generally considered to be critical. We study a class of critical routing models, equivalent to random maps, which can be solved rigorously in the thermodynamic limit. The information flow is conserved for these routing models and governed by cyclic attractors. We consider two classes of information flow: Markovian routing without memory and vertex routing involving a one-step routing memory. Investigating the respective cycle length distributions for complete graphs, we find log corrections to power-law scaling for the mean cycle length, as a function of the number of vertices, and a sub-polynomial growth for the overall number of cycles. When observing a real-world dynamical system experimentally, one normally samples its phase space stochastically. The number and the length of the attractors are then weighted by the size of their respective basins of attraction. This situation is equivalent, for theory studies, to "on the fly" generation of the dynamical transition probabilities. For the case of vertex routing models, we find in this case power-law scaling for the weighted average length of attractors, for both conserved routing models. These results show that critical dynamical systems are generically not scale-invariant but may show power-law scaling when sampled stochastically. It is hence important to distinguish between intrinsic properties of a critical dynamical system and the behavior one would observe when randomly probing its phase space.
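
A random map as studied here is just a uniformly random function f: V → V, and its attractors are the cycles of the corresponding functional graph. A small sketch for enumerating cycle lengths (our own helper, not the paper's method):

```python
import random

def cycle_lengths(f):
    """Lengths of all cycles (attractors) of the functional graph of the
    map i -> f[i]. Walk forward from every vertex; a walk that revisits a
    vertex first seen on the current walk has found a new cycle, while a
    walk that hits previously explored territory has not."""
    n = len(f)
    done = [False] * n
    lengths = []
    for s in range(n):
        pos, path, v = {}, [], s
        while not done[v] and v not in pos:
            pos[v] = len(path)
            path.append(v)
            v = f[v]
        if v in pos:                      # closed a new cycle on this walk
            lengths.append(len(path) - pos[v])
        for u in path:
            done[u] = True
    return lengths

def random_map(n, rng=random):
    """Draw f uniformly at random (Markovian routing without memory)."""
    return [rng.randrange(n) for _ in range(n)]
```

Sampling many `random_map(n)` draws and histogramming `cycle_lengths` is precisely the stochastic phase-space probing whose weighting by basin size the abstract warns about.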