Preprint

BRAIN FRACTAL SLOPES DICTATE SPIKE FREQUENCIES VIA INFORMATIONAL ENTROPY

Authors: Tozzi A, Peters JF, Çankaya MN, Korbel J, Zare M, Papo D

Abstract

Our paper is currently under review. If you want to quote it, please cite: Tozzi A, Peters JF, Çankaya MN, Korbel J, Zare M, Papo D. 2016. Energetic Link Between Spike Frequencies and Brain Fractal Dimensions. viXra:1609.0105.

Oscillations in brain activity exhibit a power-law distribution, which appears as a straight line when log power is plotted against log frequency. The line's slope is given by a single constant, the power-law exponent. Because the slope may vary across functional states, brain waves are said to be multifractal, i.e., characterized by a spectrum of multiple possible exponents. A role for such non-stationary scaling properties has scarcely been taken into account. Here we show that changes in fractal slopes and oscillation frequencies, in particular electric spike frequencies, are correlated. Drawing on techniques for estimating parameter distributions, which provide a foundation for the proposed approach, we show that modifications in power-law exponents are associated with variations in the Rényi entropy, a generalization of Shannon informational entropy. Changes in Rényi entropy, in turn, can modify brain oscillation frequencies. Our results therefore indicate that multifractal systems lead to different probability outcomes of brain activity, based solely on increases or decreases of the fractal exponents. Such an approach may offer new insights into the characterization of neuroimaging diagnostic techniques and of the forces required for transcranial stimulation, where doubts still exist as to the parameters that best characterize waveforms.

SIGNIFICANCE STATEMENT: Unlike Shannon entropy, the generalized informational entropy known as Rényi entropy does not single out one most appropriate probabilistic parameter; rather, it builds diversity profiles. By offering a continuum of possible diversity measures at many spatiotemporal levels, it is very useful in evaluating the fractal scaling occurring in the brain. Rényi entropy elucidates how power-law behaviours in cortical oscillations can modify electric spike frequencies. Through its links with the scale-free behaviour of cortical fluctuations, Rényi entropy suggests that the brain changes its fractal exponents in order to control free energy and scale the entropy of different functional states.
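To make the two central quantities concrete, here is a minimal sketch (ours, not the paper's code; the function names and the normalization of the spectrum into a probability distribution are illustrative assumptions) of how the power-law exponent and the Rényi entropy could be estimated from a power spectrum:

```python
import numpy as np

def powerlaw_slope(freqs, power):
    """Exponent beta of S(f) ~ f**(-beta), from a least-squares fit
    of log power against log frequency (the 'fractal slope')."""
    slope, _ = np.polyfit(np.log(freqs), np.log(power), 1)
    return -slope

def renyi_entropy(p, alpha):
    """Rényi entropy H_alpha = log(sum_i p_i**alpha) / (1 - alpha);
    recovers Shannon entropy -sum_i p_i log p_i as alpha -> 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(alpha, 1.0):
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

# Example: synthetic 1/f**beta spectra, each treated as a probability
# distribution over frequencies to trace how slope maps onto entropy.
freqs = np.linspace(1.0, 100.0, 500)
for beta in (0.5, 1.0, 1.5):
    power = freqs ** (-beta)
    prob = power / power.sum()
    print(beta, powerlaw_slope(freqs, power), renyi_entropy(prob, alpha=2.0))
```

In this toy setting, steeper slopes concentrate probability at low frequencies and lower the Rényi entropy, which is the direction of the slope-entropy association the abstract describes.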

References
Article
A remarkable yet mysterious property of black holes is that their entropy is proportional to the horizon area. This area law inspired the holographic principle, which was later realized concretely in gauge-gravity duality. In this context, entanglement entropy is given by the area of a minimal surface in a dual spacetime. However, discussions of area laws have been constrained to entanglement entropy, whereas a full understanding of a quantum state requires Rényi entropies. Here we show that all Rényi entropies satisfy a similar area law in holographic theories and are given by the areas of dual cosmic branes. This geometric prescription is a one-parameter generalization of the minimal surface prescription for entanglement entropy. Applying this we provide the first holographic calculation of mutual Rényi information between two disks of arbitrary dimension. Our results provide a framework for efficiently studying Rényi entropies and understanding entanglement structures in strongly coupled systems and quantum gravity.
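For orientation, the Rényi entropies in question are the standard one-parameter family built from the reduced density matrix ρ, with the n → 1 limit recovering the entanglement (von Neumann) entropy:

```latex
S_n \;=\; \frac{1}{1-n}\,\log \operatorname{Tr}\rho^{\,n},
\qquad
\lim_{n \to 1} S_n \;=\; -\operatorname{Tr}\,\rho \log \rho .
```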
Article
Spontaneous brain activity has received increasing attention, as demonstrated by the exponential rise in the number of articles published on this topic over the last 30 years. Such “intrinsic” brain activity, generated in the absence of an explicit task, is frequently associated with resting-state or default-mode networks (DMNs). The focus on characterizing spontaneous brain activity promises to shed new light on questions concerning the structural and functional architecture of the brain and how they are related to “mind”. However, many critical questions have yet to be addressed. In this review, we focus on a scarcely explored area, specifically the energetic requirements and constraints of spontaneous activity, taking into account both thermodynamic and informational perspectives. We argue that the “classical” definitions of spontaneous activity do not take into account an important feature, namely the critical thermodynamic energetic differences between spontaneous and evoked brain activity. Spontaneous brain activity is associated with slower oscillations than evoked, task-related activity, and hence exhibits lower levels of enthalpy and “free energy” (i.e., the energy that can be converted to do work), supporting noteworthy thermodynamic energetic differences between spontaneous and evoked brain activity. Increased spike frequency during evoked activity has a significant metabolic cost; consequently, brain functions traditionally associated with spontaneous activity, such as mind wandering, require less energy than other nervous activities. We also review recent empirical observations in neuroscience, in order to capture how spontaneous brain dynamics and mental function can be embedded in a non-linear dynamical framework, which considers nervous activity in terms of phase spaces, particle trajectories, random walks, attractors and/or paths at the edge of chaos. This takes us from the thermodynamic free energy to the realm of “variational free energy”, a theoretical construct from probability and information theory that allows explanation of unexplored features of spontaneous brain activity.
Book
This book introduces computational proximity. Basically, computational proximity (CP) is an algorithmic approach to finding nonempty sets of points that are either close to each other or far apart. The basic notion of computational proximity draws its inspiration from the Preface written in 2009 by S.A. Naimpally in [1, pp. 23–28] and the Foreword in [2]. In CP, two types of near sets are considered, namely, spatially near sets and descriptively near sets. Spatially near sets contain points identified by their location and have at least one point in common. Descriptively near sets contain non-abstract points that have both locations and measurable features such as colour and gradient orientation. Connectedness, boundedness, mesh nerves, convexity, shapes and shape theory are principal topics in the study of nearness and separation of physical as well as abstract sets. CP has a hefty visual component. Applications of CP include computer vision, multimedia, brain activity, biology, social networks and cosmology.
Article
Symmetries are widespread invariances underlying countless systems, including the brain. A symmetry break occurs when the symmetry is present at one level of observation but “hidden” at another level. In such a general framework, a concept from algebraic topology, namely the Borsuk-Ulam theorem (BUT), comes into play and sheds new light on the general mechanisms of nervous symmetries. BUT tells us that we can find, on an n-dimensional sphere, a pair of opposite points that have the same encoding on an (n-1)-dimensional sphere. This mapping makes it possible to describe both antipodal points with a single real-valued vector on a lower-dimensional sphere. Here we argue that this topological approach is useful in the evaluation of hidden nervous symmetries. This means that symmetries can be found when evaluating the brain in a proper dimension, while they disappear (are hidden or broken) when we evaluate the same brain in just one dimension lower. In conclusion, we provide a topological methodology for the evaluation of the most general features of brain activity, i.e., the symmetries, cast in a physical/biological fashion that has the potential to be operationalized.
Article
Current advances in neuroscience deal with the functional architecture of the central nervous system, paving the way for general theories that improve our understanding of brain activity. From topology, a strong concept comes into play in understanding brain functions, namely, the 4D space of a “hypersphere's torus”, undetectable by observers living in a 3D world. The torus may be compared with a video game with biplanes in aerial combat: when a biplane flies off one edge of the gaming display, it does not crash but rather comes back from the opposite edge of the screen. Our thoughts exhibit similar behaviour, i.e. the unique ability to connect past, present and future events in a single, coherent picture, as if we were allowed to watch the three screens of past-present-future “glued” together in a mental kaleidoscope. Here we hypothesize that brain functions are embedded in an imperceptible fourth spatial dimension and propose a method to empirically assess its presence. Neuroimaging fMRI series can be evaluated, looking for the topological hallmark of the presence of a fourth dimension. Indeed, there is a typical feature which reveals the existence of a functional hypersphere: the simultaneous activation of areas opposite each other on the 3D cortical surface. Our suggestion, substantiated by recent findings, that brain activity takes place on a closed, donut-like trajectory helps to solve long-standing mysteries concerning our psychological activities, such as mind-wandering, memory retrieval, consciousness and the dreaming state.
Article
A bimodal extension of the generalized gamma distribution is proposed by using a mixing approach. Some distributional properties of the new distribution are investigated. The maximum likelihood (ML) estimators for the parameters of the new distribution are obtained. Real data examples are given to show the strength of the new distribution for modeling data.
Article
Space series analysis is understood in a broad sense to refer to logical sequences of changes or transformations applied to vegetational entities in both topographic and conceptual spaces. This term is suggested as a replacement for “spatial processes”, which has multiple meanings. A literature survey illustrates that space series analysis includes commonly used pattern recognition methods associated with the real, topographic space, but also supports generalization to data, resemblance, ordination and classification spaces, and to derived variables. (from Author)
Article
Cognitive Neurodynamics, 2015. Acknowledgements: I would like to thank Karl Friston for commenting upon an earlier version of this manuscript. How does the central nervous system process information? Current theories are based on two tenets: (a) information is transmitted by action potentials, the language by which neurons communicate with each other; and (b) homogeneous neuronal assemblies of cortical circuits operate on these neuronal messages, where the operations are characterized by the intrinsic connectivity among neuronal populations. In this view, the size and time course of any spike is stereotypic and the information is restricted to the temporal sequence of the spikes; namely, the “neural code”. However, an increasing amount of novel data points towards an alternative hypothesis: (a) the role of the neural code in information processing is overemphasized. Instead of simply passing messages, action potentials play a role in dynamic coordination at multiple spatial and temporal scales, establishing network interactions across several levels of a hierarchical modular architecture, modulating and regulating the propagation of neuronal messages. (b) Information is processed at all levels of neuronal infrastructure, from macromolecules to population dynamics. For example, intra-neuronal factors (changes in protein conformation, concentration and synthesis) and extra-neuronal factors (extracellular proteolysis, substrate patterning, myelin plasticity, microbes, metabolic status) can have a profound effect on neuronal computations. This means molecular message passing may have cognitive connotations. This essay introduces the concept of “supramolecular chemistry”, involving the storage of information at the molecular level and its retrieval, transfer and processing at the supramolecular level, through transitory non-covalent molecular processes that are self-organized, self-assembled and dynamic. Finally, we note that the cortex comprises extremely heterogeneous cells, with distinct regional variations, macromolecular assembly, receptor repertoire and intrinsic microcircuitry. This suggests that every neuron (or group of neurons) embodies different molecular information that has an operational effect on neuronal computation.
Article
We offer a formal treatment of choice behaviour based on the premise that agents minimise the expected free energy of future outcomes. Crucially, the negative free energy or quality of a policy can be decomposed into extrinsic and epistemic (or intrinsic) value. Minimising expected free energy is therefore equivalent to maximising extrinsic value or expected utility (defined in terms of prior preferences or goals), while maximising information gain or intrinsic value (reducing uncertainty about the causes of valuable outcomes). The resulting scheme resolves the exploration-exploitation dilemma: epistemic value is maximised until there is no further information gain, after which exploitation is assured through maximisation of extrinsic value. This is formally consistent with the Infomax principle, generalising formulations of active vision based upon salience (Bayesian surprise) and optimal decisions based on expected utility and risk-sensitive (Kullback-Leibler) control. Furthermore, as with previous active inference formulations of discrete (Markovian) problems, ad hoc softmax parameters become the expected (Bayes-optimal) precision of beliefs about, or confidence in, policies. This article focuses on the basic theory, illustrating the ideas with simulations. A key aspect of these simulations is the similarity between precision updates and dopaminergic discharges observed in conditioning paradigms.
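In the notation usual in this literature (a sketch of the decomposition described above, not necessarily the paper's exact expressions), the negative expected free energy of a policy π splits as:

```latex
-G(\pi) \;=\;
\underbrace{\mathbb{E}_{Q(o,s\mid\pi)}\big[\ln Q(s\mid o,\pi) - \ln Q(s\mid\pi)\big]}_{\text{epistemic (intrinsic) value}}
\;+\;
\underbrace{\mathbb{E}_{Q(o\mid\pi)}\big[\ln P(o)\big]}_{\text{extrinsic value (expected utility)}} .
```

The first term is an expected information gain about hidden states s, which is why exploration dominates until it is exhausted; the second scores outcomes o against prior preferences.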
Article
The dynamics of many natural and artificial systems are well described as random walks on a network: the stochastic behaviour of molecules, traffic patterns on the internet, fluctuations in stock prices and so on. The vast literature on random walks provides many tools for computing properties such as steady-state probabilities or expected hitting times. Previously, however, there has been no general theory describing the distribution of possible paths followed by a random walk. Here, we show that for any random walk on a finite network, there are precisely three mutually exclusive possibilities for the form of the path distribution: finite, stretched exponential and power law. The form of the distribution depends only on the structure of the network, while the stepping probabilities control the parameters of the distribution. We use our theory to explain path distributions in domains such as sports, music, nonlinear dynamics and stochastic chemical kinetics.
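As a small illustration of the standard random-walk toolkit mentioned above (illustrative code, separate from the paper's path-distribution theory), the steady-state probabilities of a walk on a finite network are the left eigenvector of its transition matrix with eigenvalue 1:

```python
import numpy as np

# Row-stochastic transition matrix of a random walk on a 4-node network:
# P[i, j] is the probability of stepping from node i to node j.
P = np.array([
    [0.0, 0.5, 0.5, 0.0],
    [0.5, 0.0, 0.5, 0.0],
    [1/3, 1/3, 0.0, 1/3],
    [0.0, 0.0, 1.0, 0.0],
])

# The stationary distribution pi solves pi @ P = pi: take the left
# eigenvector for the eigenvalue closest to 1 and normalize it.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi /= pi.sum()
print(pi)  # for this undirected walk, pi is proportional to node degree
```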
Article
Individual differences in the structure of parietal and prefrontal cortex predict the stability of bistable visual perception. However, the mechanisms linking such individual differences in brain structure to behaviour remain elusive. Here we demonstrate a systematic relationship between the dynamics of brain activity, cortical structure and behaviour underpinning bistable perception. Using fMRI in humans, we find that the activity dynamics during bistable perception are well described as fluctuating between three spatially distributed energy minima: visual-area-dominant, frontal-area-dominant and intermediate states. Transitions between these energy minima predicted behaviour, with participants whose brain activity tends to reflect the visual-area-dominant state exhibiting more stable perception and those whose activity transits to frontal-area-dominant states reporting more frequent perceptual switches. Critically, these brain activity dynamics are correlated with individual differences in the grey matter volume of the corresponding brain areas. Thus, individual differences in the large-scale dynamics of brain activity link focal brain structure with bistable perception.
Article
The relationship between synaptic excitation and inhibition (E/I ratio), two opposing forces in the mammalian cerebral cortex, affects many cortical functions such as feature selectivity and gain. Individual pyramidal cells show stable E/I ratios in time despite fluctuating cortical activity levels. This is because when excitation increases, inhibition increases proportionally through the increased recruitment of inhibitory neurons, a phenomenon referred to as excitation-inhibition balance. However, little is known about the distribution of E/I ratios across pyramidal cells. Through their highly divergent axons, inhibitory neurons indiscriminately contact most neighbouring pyramidal cells. Is inhibition homogeneously distributed or is it individually matched to the different amounts of excitation received by distinct pyramidal cells? Here we discover that pyramidal cells in layer 2/3 of mouse primary visual cortex each receive inhibition in a similar proportion to their excitation. As a consequence, E/I ratios are equalized across pyramidal cells. This matched inhibition is mediated by parvalbumin-expressing but not somatostatin-expressing inhibitory cells and results from the independent adjustment of synapses originating from individual parvalbumin-expressing cells targeting different pyramidal cells. Furthermore, this match is activity-dependent as it is disrupted by perturbing pyramidal cell activity. Thus, the equalization of E/I ratios across pyramidal cells reveals an unexpected degree of order in the spatial distribution of synaptic strengths and indicates that the relationship between the cortex's two opposing forces is stabilized not only in time but also in space.
Article
Cognitive functions are stored in the connectome, the wiring diagram of the brain, which exhibits non-random features, so-called motifs. In this work, we focus on bidirectional, symmetric motifs, i.e. two neurons that project to each other via connections of equal strength, and unidirectional, non-symmetric motifs, i.e. pairs of neurons in which only one neuron projects to the other. We hypothesise that such motifs have been shaped via activity-dependent synaptic plasticity processes. As a consequence, learning moves the distribution of the synaptic connections away from randomness. Our aim is to provide a global, macroscopic, single-parameter characterisation of the statistical occurrence of bidirectional and unidirectional motifs. To this end we define a symmetry measure that does not require any a priori thresholding of the weights or knowledge of their maximal value. We calculate its mean and variance for random uniform or Gaussian distributions, which allows us to introduce a confidence measure of how significantly symmetric or asymmetric a specific configuration is, i.e. how likely it is that the configuration is the result of chance. We demonstrate the discriminatory power of our symmetry measure by inspecting the eigenvalues of different types of connectivity matrices. We show that a Gaussian weight distribution biases the connectivity motifs to more symmetric configurations than a uniform distribution, and that introducing random synaptic pruning, mimicking developmental regulation in synaptogenesis, biases the connectivity motifs to more asymmetric configurations, regardless of the distribution. We expect that our work will benefit the computational modelling community by providing a systematic way to characterise symmetry and asymmetry in network structures. Further, our symmetry measure will be of use to electrophysiologists who investigate the symmetry of network connectivity.
Article
Seizures can occur spontaneously and in a recurrent manner, which defines epilepsy; or they can be induced in a normal brain under a variety of conditions in most neuronal networks and species from flies to humans. Such universality raises the possibility that invariant properties exist that characterize seizures under different physiological and pathological conditions. Here, we analysed seizure dynamics mathematically and established a taxonomy of seizures based on first principles. For the predominant seizure class we developed a generic model called Epileptor. As an experimental model system, we used ictal-like discharges induced in vitro in mouse hippocampi. We show that only five state variables linked by integral-differential equations are sufficient to describe the onset, time course and offset of ictal-like discharges as well as their recurrence. Two state variables are responsible for generating rapid discharges (fast time scale), two for spike and wave events (intermediate time scale) and one for the control of time course, including the alternation between 'normal' and ictal periods (slow time scale). We propose that normal and ictal activities coexist: a separatrix acts as a barrier (or seizure threshold) between these states. Seizure onset is reached upon the collision of normal brain trajectories with the separatrix. We show theoretically and experimentally how a system can be pushed toward seizure under a wide variety of conditions. Within our experimental model, the onset and offset of ictal-like discharges are well-defined mathematical events: a saddle-node and homoclinic bifurcation, respectively. These bifurcations necessitate a baseline shift at onset and a logarithmic scaling of interspike intervals at offset. These predictions were not only confirmed in our in vitro experiments, but also for focal seizures recorded in different syndromes, brain regions and species (humans and zebrafish). Finally, we identified several possible biophysical parameters contributing to the five state variables in our model system. We show that these parameters apply to specific experimental conditions and propose that there exists a wide array of possible biophysical mechanisms for seizure genesis, while preserving central invariant properties. Epileptor and the seizure taxonomy will guide future modeling and translational research by identifying universal rules governing the initiation and termination of seizures and predicting the conditions necessary for those transitions.
Article
Behavioural studies have shown that human cognition is characterized by properties such as temporal scale invariance, heavy-tailed non-Gaussian distributions, and long-range correlations at long time scales, suggesting models of how (non-observable) components of cognition interact. On the other hand, results from functional neuroimaging studies show that complex scaling and intermittency may be generic spatio-temporal properties of the brain at rest. Somewhat surprisingly, though, the neural correlates of cognition have hardly ever been studied at time scales comparable to those at which cognition shows scaling properties. Here, we analyze the meaning of scaling properties and the significance of their task-related modulations for cognitive neuroscience. It is proposed that cognitive processes can be framed in terms of complex generic properties of brain activity at rest and, ultimately, of the functional equations, limiting distributions, symmetries, and possibly universality classes characterizing them.
Article
As it has several features that optimize information processing, it has been proposed that criticality governs the dynamics of nervous system activity. Indications of such dynamics have been reported for a variety of in vitro and in vivo recordings, ranging from in vitro slice electrophysiology to human functional magnetic resonance imaging. However, there still remains considerable debate as to whether the brain actually operates close to criticality or in another governing state, such as stochastic or oscillatory dynamics. A tool used to investigate the criticality of nervous system data is the inspection of power-law distributions. Although the findings are controversial, such power-law scaling has been found in different types of recordings. Here, we studied whether there is power-law scaling in the distribution of phase synchronization derived from magnetoencephalographic recordings during executive function tasks performed by children with and without autism. Characterizing the brain dynamics that differ between autistic and non-autistic individuals is important in order to find differences that could either aid diagnosis or provide insights into possible therapeutic interventions in autism. We report in this study that power-law scaling in the distributions of a phase synchrony index is not very common, and its frequency of occurrence is similar in the control and the autism group. In addition, power-law scaling tends to diminish with increased cognitive load (difficulty of, or engagement in, the task). There were indications of changes in the probability distribution functions for the phase synchrony that were associated with a transition from power-law scaling to lack of power law (or vice versa), which suggests the presence of phenomenological bifurcations in brain dynamics associated with cognitive load. Hence, brain dynamics may fluctuate between criticality and other regimes depending upon context and behaviour.
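For definiteness (the study's exact synchrony index may differ), a widely used phase synchrony index between two signals with instantaneous phases φ₁(t) and φ₂(t) is the phase-locking value:

```latex
\mathrm{PLV} \;=\; \left|\frac{1}{N}\sum_{t=1}^{N} e^{\,i[\varphi_1(t)-\varphi_2(t)]}\right| \;\in\; [0,1] .
```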
Article
Recent research suggests that genetic interactions involving more than two loci may influence a number of complex traits. How these 'higher-order' interactions arise at the genetic and molecular levels remains an open question. To provide insights into this problem, we dissected a colony morphology phenotype that segregates in a yeast cross and results from synthetic higher-order interactions. Using backcrossing and selective sequencing of progeny, we found five loci that collectively produce the trait. We fine-mapped these loci to 22 genes in total and identified a single gene at each locus that caused loss of the phenotype when deleted. Complementation tests or allele replacements provided support for functional variation in these genes, and revealed that pre-existing genetic variants and a spontaneous mutation interact to cause the trait. The causal genes have diverse functions in endocytosis (END3), oxidative stress response (TRR1), RAS-cAMP signalling (IRA2), and transcriptional regulation of multicellular growth (FLO8 and MSS11), and for the most part have not previously been shown to exhibit functional relationships. Further efforts uncovered two additional loci that together can complement the non-causal allele of END3, suggesting that multiple genotypes in the cross can specify the same phenotype. Our work sheds light on the complex genetic and molecular architecture of higher-order interactions, and raises questions about the broader contribution of such interactions to heritable trait variation.
Article
Maintaining the ability of the nervous system to perceive, remember, process, and react to the outside world requires a continuous energy supply. Yet the overall power consumption is remarkably low, which has inspired engineers to mimic nervous systems in designing artificial cochlea, retinal implants, and brain–computer interfaces (BCIs) to improve the quality of life in patients. Such neuromorphic devices are both energy efficient and increasingly able to emulate many functions of the human nervous system. We examine the energy constraints of neuronal signaling within biology, review the quantitative tradeoff between energy use and information processing, and ask whether the biophysics and design of nerve cells minimizes energy consumption.
Article
Data assimilation is a fundamental issue that arises across many scales in neuroscience, ranging from the study of single neurons using single-electrode recordings to the interaction of thousands of neurons using fMRI. Data assimilation involves inverting a generative model that can not only explain observed data but also generate predictions. Typically, the model is inverted or fitted using conventional tools of (convex) optimization that invariably extremise some functional: norms, minimum descriptive length, variational free energy, etc. Generally, optimisation rests on evaluating the local gradients of the functional to be optimised. In this paper, we compare three different gradient estimation techniques that could be used for extremising any functional in time: (i) finite differences, (ii) forward sensitivities and (iii) a method based on the adjoint of the dynamical system. We demonstrate that the first-order gradients of a dynamical system, linear or non-linear, can be computed most efficiently using the adjoint method. This is particularly true for systems where the number of parameters is greater than the number of states. For such systems, integrating several sensitivity equations, as required with forward sensitivities, proves to be most expensive, while finite-difference approximations have an intermediate efficiency. In the context of neuroimaging, adjoint-based inversion of dynamic causal models (DCMs) can, in principle, enable the study of models with large numbers of nodes and parameters.
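In outline (the standard continuous-time form, in our notation, not necessarily the paper's): for a state equation ẋ = f(x, θ) and an objective J = ∫₀ᵀ g(x, θ) dt, a single backward pass of the adjoint equation yields the gradient with respect to all parameters at once, which is why the adjoint method wins when parameters outnumber states:

```latex
\dot{\lambda} \;=\; -\left(\frac{\partial f}{\partial x}\right)^{\!\top}\!\lambda
\;-\; \left(\frac{\partial g}{\partial x}\right)^{\!\top},
\quad \lambda(T) = 0,
\qquad
\frac{dJ}{d\theta} \;=\; \int_0^T \left(\frac{\partial g}{\partial \theta}
\;+\; \lambda^{\top}\frac{\partial f}{\partial \theta}\right) dt .
```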
Article
This paper combines recent formulations of self-organization and neuronal processing to provide an account of cognitive dynamics from basic principles. We start by showing that inference (and autopoiesis) are emergent features of any (weakly mixing) ergodic random dynamical system. We then apply the emergent dynamics to action and perception in a way that casts action as the fulfillment of (Bayesian) beliefs about the causes of sensations. More formally, we formulate ergodic flows on global random attractors as a generalized descent on a free energy functional of the internal states of a system. This formulation rests on a partition of states based on a Markov blanket that separates internal states from hidden states in the external milieu. This separation means that the internal states effectively represent external states probabilistically. The generalized descent is then related to classical Bayesian (e.g., Kalman–Bucy) filtering and predictive coding—of the sort that might be implemented in the brain. Finally, we present two simulations. The first simulates a primordial soup to illustrate the emergence of a Markov blanket and (active) inference about hidden states. The second uses the same emergent dynamics to simulate action and action observation.
Article
During rest, the human brain performs essential functions such as memory maintenance, which are associated with resting-state brain networks (RSNs), including the default-mode network (DMN) and frontoparietal network (FPN). Previous studies based on spiking-neuron network models and their reduced models, as well as those based on imaging data, suggest that resting-state network activity can be captured as attractor dynamics, i.e., dynamics of the brain state toward an attractive state and transitions between different attractors. Here, we analyze the energy landscapes of the RSNs by applying the maximum entropy model, or equivalently the Ising spin model, to human RSN data. We use the previously estimated parameter values to define the energy landscape, and the disconnectivity graph method to estimate the number of local energy minima (equivalent to attractors in attractor dynamics), the basin size, and hierarchical relationships among the different local minima. In both the DMN and the FPN, low-energy local minima tended to have large basins. A majority of the network states belonged to a basin of one of a few local minima. Therefore, a small number of local minima constituted the backbone of each RSN. In the DMN, the energy landscape consisted of two groups of low-energy local minima that are separated by a relatively high energy barrier. Within each group, the activity patterns of the local minima were similar, and different minima were connected by relatively low energy barriers. In the FPN, all dominant local minima were separated by relatively low energy barriers, such that they formed a single coarse-grained global minimum. Our results indicate that multistable attractor dynamics may underlie the DMN, but not the FPN, and assist memory maintenance with different memory states.
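The maximum entropy (Ising) model referred to here assigns each binarized activity pattern σ = (σ₁, …, σ_N) an energy whose local minima define the landscape (the ±1 spin convention below is one common choice; the paper's exact parametrization may differ):

```latex
E(\boldsymbol{\sigma}) \;=\; -\sum_{i} h_i \sigma_i \;-\; \sum_{i<j} J_{ij}\,\sigma_i\sigma_j,
\qquad
P(\boldsymbol{\sigma}) \;=\; \frac{e^{-E(\boldsymbol{\sigma})}}{\sum_{\boldsymbol{\sigma}'} e^{-E(\boldsymbol{\sigma}')}} .
```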
Chapter
This indispensable sourcebook covers conceptual and practical issues in research design in the field of social and personality psychology. Key experts address specific methods and areas of research, contributing to a comprehensive overview of contemporary practice. This updated and expanded second edition offers current commentary on social and personality psychology, reflecting the rapid development of this dynamic area of research over the past decade. With the help of this up-to-date text, both seasoned and beginning social psychologists will be able to explore the various tools and methods available to them in their research as they craft experiments and imagine new methodological possibilities.
Article
Rhythmic neuronal activity is ubiquitous in the human brain. These rhythms originate from a variety of different network mechanisms, which give rise to a wide-ranging spectrum of oscillation frequencies. In the last few years an increasing number of clinical research studies have explored transcranial alternating current stimulation (tACS) with weak current as a tool for affecting brain function. The premise of these interventions is that tACS will interact with ongoing brain oscillations. However, the exact mechanisms by which weak currents could affect neuronal oscillations at different frequency bands are not well known and this, in turn, limits the rational optimization of human experiments. Here we review the available in vitro and in vivo animal studies that attempt to provide mechanistic explanations. The findings can be summarized into a few generic principles, such as periodic modulation of excitability, shifts in spike timing, modulation of firing rate, and shifts in the balance of excitation and inhibition. These effects result from weak but simultaneous polarization of a large number of neurons. Whether this can lead to an entrainment or a modulation of brain oscillations, or whether AC currents have no effect at all, depends entirely on the specific dynamic that gives rise to the different brain rhythms, as discussed here for slow wave oscillations (∼1 Hz) and gamma oscillations (∼30 Hz). We conclude with suggestions for further experiments to investigate the role of AC stimulation for other physiologically relevant brain rhythms.
Article
To further advance our understanding of the brain, new concepts and theories are needed. In particular, the ability of the brain to create information flows must be reconciled with its propensity for synchronization and mass action. The framework of Coordination Dynamics and the theory of metastability are presented as a starting point to study the interplay of integrative and segregative tendencies that are expressed in space and time during the normal course of brain function. Some recent shifts in perspective are emphasized, that may ultimately lead to a better understanding of brain complexity.
Article
A breaking of symmetry involves an abrupt change in the set of microstates a system can explore. This change has unavoidable thermodynamic implications: a shrinkage of the microstate set results in an entropy decrease, which eventually needs to be compensated by heat dissipation and hence requires work. On the other hand, in a spontaneous symmetry breaking, the available phase-space volume changes without the need for work, yielding an apparent entropy decrease. Here we show that this entropy decrease is a key ingredient of a Szilard engine and Landauer's principle, and perform a direct measurement of the entropy change along symmetry-breaking transitions for a Brownian particle subject to a bistable potential realized through two optical traps. The experiment confirms theoretical results based on fluctuation theorems, enables the construction of a Szilard engine extracting energy from a single thermal bath, and shows that a signature of a symmetry breaking in a system's energetics is observable.
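The bookkeeping behind this account, for the two-well (bistable) case: selecting one of two equally likely branches halves the accessible microstate set, and erasing or resetting that choice must, by Landauer's principle, dissipate at least the corresponding work:

```latex
\Delta S \;=\; -k_B \ln 2, \qquad W_{\mathrm{diss}} \;\geq\; k_B T \ln 2 .
```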
Article
A balance between excitatory and inhibitory synaptic currents is thought to be important for several aspects of information processing in cortical neurons in vivo, including gain control, bandwidth and receptive field structure. These factors will affect the firing rate of cortical neurons and their reliability, with consequences for their information coding and energy consumption. Yet how balanced synaptic currents contribute to the coding efficiency and energy efficiency of cortical neurons remains unclear. We used single compartment computational models with stochastic voltage-gated ion channels to determine whether synaptic regimes that produce balanced excitatory and inhibitory currents have specific advantages over other input regimes. Specifically, we compared models with only excitatory synaptic inputs to those with equal excitatory and inhibitory conductances, and stronger inhibitory than excitatory conductances (i.e. approximately balanced synaptic currents). Using these models, we show that balanced synaptic currents evoke fewer spikes per second than excitatory inputs alone or equal excitatory and inhibitory conductances. However, spikes evoked by balanced synaptic inputs are more informative (bits/spike), so that spike trains evoked by all three regimes have similar information rates (bits/s). Consequently, because spikes dominate the energy consumption of our computational models, approximately balanced synaptic currents are also more energy efficient than other synaptic regimes. Thus, by producing fewer, more informative spikes approximately balanced synaptic currents in cortical neurons can promote both coding efficiency and energy efficiency.
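A much simpler model than the authors' stochastic-channel one already shows the direction of the firing-rate effect; the sketch below (a current-based leaky integrate-and-fire neuron with Poisson inputs; all parameter values are illustrative assumptions, not the paper's) compares excitation alone with approximately balanced excitation and inhibition:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T = 1e-4, 10.0            # time step and duration (s)
tau_m = 0.020                 # membrane time constant (s)
v_th, w = 0.020, 0.0005       # threshold and synaptic kick (V above rest)

def lif_rate(rate_exc, rate_inh):
    """Output firing rate (Hz) of a leaky integrate-and-fire neuron
    driven by independent Poisson excitatory/inhibitory spike trains."""
    v, n_spikes = 0.0, 0
    for _ in range(int(T / dt)):
        v -= dt * v / tau_m                     # leak toward rest (v = 0)
        v += w * rng.poisson(rate_exc * dt)     # excitatory kicks
        v -= w * rng.poisson(rate_inh * dt)     # inhibitory kicks
        if v >= v_th:                           # spike and reset
            v, n_spikes = 0.0, n_spikes + 1
    return n_spikes / T

print("excitation only :", lif_rate(3000.0, 0.0), "Hz")
print("balanced E and I:", lif_rate(3000.0, 3000.0), "Hz")
```

With the mean drive cancelled, the balanced neuron fires only on input fluctuations, so its rate drops sharply, consistent with the "fewer, more informative spikes" finding described above.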
Article
In systems biology, questions concerning the molecular and cellular makeup of an organism are of utmost importance, especially when trying to understand how unreliable components (genetic circuits, biochemical cascades, and ion channels, among others) enable reliable and adaptive behaviour. The repertoire and speed of biological computations are limited by thermodynamic or metabolic constraints: an example can be found in neurons, where fluctuations in biophysical states limit the information they can encode, with some 20-60% of the total energy allocated to the brain used for signalling purposes, either via action potentials or by synaptic transmission. Here, we consider the imperatives for neurons to optimise computational and metabolic efficiency, wherein benefits and costs trade off against each other in the context of self-organised and adaptive behaviour. In particular, we try to link information-theoretic (variational) and thermodynamic (Helmholtz) free-energy formulations of neuronal processing and show how they are related in a fundamental way through a complexity minimisation lemma.
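The variational free energy invoked here, for observations o, hidden states s, recognition density q and generative model p, has the standard form, whose non-negative KL term shows why minimizing F makes q approximate the posterior:

```latex
F \;=\; \mathbb{E}_{q(s)}\!\big[\ln q(s) - \ln p(o, s)\big]
\;=\; \underbrace{D_{\mathrm{KL}}\!\big[q(s)\,\|\,p(s \mid o)\big]}_{\geq\,0} \;-\; \ln p(o) .
```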
Book
From the reviews: "This book provides a very readable introduction to Riemannian geometry and geometric analysis. The author focuses on using analytic methods in the study of some fundamental theorems in Riemannian geometry, e.g., the Hodge theorem, the Rauch comparison theorem, the Lyusternik and Fet theorem and the existence of harmonic mappings. With the vast development of the mathematical subject of geometric analysis, the present textbook is most welcome. It is a good introduction to Riemannian geometry. The book is made more interesting by the perspectives in various sections, where the author mentions the history and development of the material and provides the reader with references." Math. Reviews. The second edition contains a new chapter on variational problems from quantum field theory, in particular the Seiberg-Witten and Ginzburg-Landau functionals. These topics are carefully and systematically developed, and the new edition contains a thorough treatment of the relevant background material, namely spin geometry and Dirac operators. The new material is based on a course "Geometry and Physics" at the University of Leipzig that was attended by graduate students, postdocs and researchers from other areas of mathematics. Much of the material is included here for the first time in a textbook, and the book will lead the reader to some of the hottest topics of contemporary mathematical research.
Article
The neural basis and cognitive functions of various spontaneous thought processes, particularly mind-wandering, are increasingly being investigated. Although strong links have been drawn between the occurrence of spontaneous thought processes and activation in brain regions comprising the default mode network (DMN), spontaneous thought also appears to recruit other, non-DMN regions just as consistently. Here we present the first quantitative meta-analysis of neuroimaging studies of spontaneous thought and mind-wandering in order to address the question of their neural correlates. Examining 24 functional neuroimaging studies of spontaneous thought processes, we conducted a meta-analysis using activation likelihood estimation (ALE). A number of key DMN areas showed consistent recruitment across studies, including medial prefrontal cortex, posterior cingulate cortex, medial temporal lobe, and bilateral inferior parietal lobule. Numerous non-DMN regions, however, were also consistently recruited, including rostrolateral prefrontal cortex, dorsal anterior cingulate cortex, insula, temporopolar cortex, secondary somatosensory cortex, and lingual gyrus. These meta-analytic results indicate that DMN activation alone is insufficient to adequately capture the neural basis of spontaneous thought; frontoparietal control network areas, and other non-DMN regions, appear to be equally central. We conclude that further progress in the cognitive and clinical neuroscience of spontaneous thought will therefore require a re-balancing of our view of the contributions of various regions and networks throughout the brain, and beyond the DMN.
Book
This monograph presents a short course in computational geometry and topology. In the first part the book covers Voronoi diagrams and Delaunay triangulations, then it presents the theory of alpha complexes, which play a crucial role in biology. The central part of the book is homology theory and its computation, including the theory of persistence, which is indispensable for applications, e.g. shape reconstruction. The target audience comprises researchers and practitioners in mathematics, biology, neuroscience and computer science, but the book may also be beneficial to graduate students of these fields.
Article
We formulate the basic principles of multi-agent complex system dynamics following lessons from experimental neuro- and cognitive science: 1) cognitive dynamics in a changing environment is transient and can be considered as a temporal sequence of metastable states; 2) the available resources for information processing are limited; 3) the transient dynamics is robust against noise and at the same time sensitive to informational signals. We suggest basic dynamical models that describe the evolution of cooperative modes. We focus on two limiting cases: a) the unstable manifold of metastable states has one leading direction and many others characterized by small positive eigenvalues (a system on the edge of instability), and b) the unstable manifold is characterized by a small number of positive eigenvalues of the same range (integration of different flows, i.e. binding).
Article
Background: Multifractal analysis quantifies the time-scale-invariant properties in data by describing the structure of variability over time. By applying this analysis to hippocampal interspike interval sequences recorded during performance of a working memory task, a measure of long-range temporal correlations and multifractal dynamics can reveal single neuron correlates of information processing.
New method: Wavelet leaders-based multifractal analysis (WLMA) was applied to hippocampal interspike intervals recorded during a working memory task. WLMA can be used to identify neurons likely to exhibit information processing relevant to operation of brain-computer interfaces and nonlinear neuronal models.
Results: Neurons involved in memory processing ("Functional Cell Types" or FCTs) showed a greater degree of multifractal firing properties than neurons without task-relevant firing characteristics. In addition, previously unidentified FCTs were revealed because multifractal analysis suggested further functional classification. The cannabinoid type-1 receptor (CB1R) partial agonist, tetrahydrocannabinol (THC), selectively reduced multifractal dynamics in FCT neurons compared to non-FCT neurons.
Comparison with existing methods: WLMA is an objective tool for quantifying the memory-correlated complexity represented by FCTs that reveals additional information compared to classification of FCTs using traditional z-scores to identify neuronal correlates of behavioral events.
Conclusion: z-score-based FCT classification provides limited information about the dynamical range of neuronal activity characterized by WLMA. Increased complexity, as measured with multifractal analysis, may be a marker of functional involvement in memory processing. The level of multifractal attributes can be used to differentially emphasize neural signals to improve computational models and algorithms underlying brain-computer interfaces.
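In outline, the wavelet-leaders formalism proceeds as follows (the standard formulation, given here for orientation rather than as the paper's exact pipeline): structure functions of the wavelet leaders L_λ at scale 2^j yield scaling exponents ζ(q), and a Legendre transform gives the multifractal spectrum D(h):

```latex
S(q, j) \;=\; \frac{1}{n_j}\sum_{\lambda \in \Lambda_j} L_{\lambda}^{\,q} \;\sim\; 2^{\,j\,\zeta(q)},
\qquad
D(h) \;=\; \inf_{q}\big[\,1 + qh - \zeta(q)\,\big] .
```

A strictly concave ζ(q), i.e. a spread of Hölder exponents h, is the signature of multifractal (rather than monofractal) firing dynamics.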
Article
Uncertainty relations based on information theory for both discrete and continuous distribution functions are briefly reviewed. We extend these results to account for (differential) Rényi entropy and its related entropy power. This allows us to find a new class of information-theoretic uncertainty relations (ITURs). The potency of such uncertainty relations in quantum mechanics is illustrated with a simple two-energy-level model, where they outperform both the usual Robertson-Schrödinger uncertainty relation and the Kraus-Maassen Shannon-entropy-based uncertainty relation. In the continuous case the ensuing entropy power uncertainty relations are discussed in the context of heavy-tailed wave functions and Schrödinger cat states. Again, improvement over both the Robertson-Schrödinger uncertainty principle and the Shannon ITUR is demonstrated in these cases. Further salient issues, such as the proof of a generalized entropy power inequality and a geometric picture of information-theoretic uncertainty relations, are also discussed.
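The differential Rényi entropy underlying these relations is the continuous analogue of the discrete definition and recovers the Shannon differential entropy as α → 1:

```latex
H_{\alpha}(X) \;=\; \frac{1}{1-\alpha}\,\ln \int p(x)^{\alpha}\,dx,
\qquad
\lim_{\alpha \to 1} H_{\alpha}(X) \;=\; -\int p(x)\,\ln p(x)\,dx .
```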
Article
DOI: https://doi.org/10.1103/PhysRevLett.13.508
Article
Shannon's and Simpson's indices have been the most widely accepted measures of ecological diversity for the past fifty years, even though neither statistic accounts for species abundances across geographic locales (“patches”). An abundant species that is endemic to a single patch can be as much of a conservation concern as a rare cosmopolitan species. I extend Shannon's and Simpson's indices to simultaneously account for species richness and relative abundances – i.e. extend them to multispecies metacommunities – by making the inputs to each index a matrix, rather than a vector. The Shannon's index analogue of diversity is mutual entropy of species and patches divided by marginal entropy of the individual geographic patches. The Simpson's index analogue of diversity is a modification of mutual entropy, with the logarithm moved to the outside of the summation, divided by Simpson's index of the patches. Both indices are normalized for number of patches, with the result being inversely proportional to biodiversity. These methods can be extended to account for time-series of such matrices and average age-classes of each species within each patch, as well as provide a measure of spatial coherence of communities.
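Reading these verbal definitions into symbols (our notation, offered as a sketch of the construction; let p(s, q) be the relative abundance of species s in patch q, with marginals p(s) and p(q)): the Shannon-type index divides the species-patch mutual entropy by the patch entropy, while the Simpson-type index moves the logarithm outside the sum and divides by Simpson's index of the patches:

```latex
D_{\mathrm{Shannon}} \;=\; \frac{\displaystyle\sum_{s,q} p(s,q)\,\ln\frac{p(s,q)}{p(s)\,p(q)}}{\displaystyle -\sum_{q} p(q)\,\ln p(q)},
\qquad
D_{\mathrm{Simpson}} \;=\; \frac{\displaystyle \ln \sum_{s,q} \frac{p(s,q)^{2}}{p(s)\,p(q)}}{\displaystyle \sum_{q} p(q)^{2}} .
```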