Entropy

Published by MDPI
Online ISSN: 1099-4300
Publications
Plot of the redundancy rate versus $D_2$.
Inclusion of additional sequences breaks down the segregation observed by Gatlin.  
The logo of a number of sequences at the beginning of a gene. The start codon ATG is immediately apparent. The logo was constructed using the software at http://weblogo.threeplusone.com/.
A block diagram depicting the basic steps involved with a grammar-based compression scheme.
Article
Data compression at its base is concerned with how information is organized in data. Understanding this organization can lead to efficient ways of representing the information and hence data compression. In this paper we review the ways in which ideas and approaches fundamental to the theory and practice of data compression have been used in the area of bioinformatics. We look at how basic theoretical ideas from data compression, such as the notions of entropy, mutual information, and complexity have been used for analyzing biological sequences in order to discover hidden patterns, infer phylogenetic relationships between organisms and study viral populations. Finally, we look at how inferred grammars for biological sequences have been used to uncover structure in biological sequences.
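As a minimal, self-contained illustration of the kind of entropy estimate such analyses build on (the sequences and the helper function here are invented for the example, not taken from the paper), the empirical block entropy of a symbol sequence follows directly from k-mer counts:

```python
import math
from collections import Counter

def shannon_entropy(seq, k=1):
    """Empirical Shannon entropy (bits per k-mer) of a symbol sequence."""
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    counts = Counter(kmers)
    n = len(kmers)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# A periodic (highly compressible) sequence has low block entropy,
# while an irregular one approaches the maximum of 2k bits per k-mer.
print(shannon_entropy("ACGT" * 25, k=2))                 # ~2.0 bits: 4 distinct pairs
print(shannon_entropy("ACGGTCATTGCAACGTTGACCGTA", k=2))  # higher: many distinct pairs
```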
 
This figure provides a schematic overview of the message passing scheme implied by Equation 21. In this scheme, neurons are divided into prediction (black) and prediction error (red) units that pass messages to each other, within and between hierarchical levels (macrocolumns). Superficial pyramidal cells (red) send forward prediction errors to deep pyramidal cells (black), which reciprocate with predictions that are conveyed by (polysynaptic) backward connections. This process continues until the amplitude of prediction error has been minimized and the predictions are optimized in a Bayesian sense. The prediction errors are the (precision-weighted) difference between conditional expectations encoded at any level and top-down or lateral predictions. The Roman numerals designate the cortical layers in which neurons are situated.  
Article
This paper describes a free energy principle that tries to explain the ability of biological systems to resist a natural tendency to disorder. It appeals to circular causality of the sort found in synergetic formulations of self-organization (e.g., the slaving principle) and models of coupled dynamical systems, using nonlinear Fokker-Planck equations. Here, circular causality is induced by separating the states of a random dynamical system into external and internal states, where external states are subject to random fluctuations and internal states are not. This reduces the problem to finding some (deterministic) dynamics of the internal states that ensure the system visits a limited number of external states; in other words, that the measure of its (random) attracting set, or equivalently the Shannon entropy of the external states, is small. We motivate a solution using a principle of least action based on variational free energy (from statistical physics) and establish the conditions under which it is formally equivalent to the information bottleneck method. This approach has proved useful in understanding the functional architecture of the brain. The generality of variational free energy minimisation and corresponding information theoretic formulations may speak to interesting applications beyond the neurosciences; e.g., in molecular or evolutionary biology.
 
Article
Biosemiotics and cybernetics are closely related, yet they are separated by the boundary between life and non-life: biosemiotics is focused on living organisms, whereas cybernetics is applied mostly to non-living artificial devices. However, both classes of systems are agents that perform functions necessary for reaching their goals. I propose to shift the focus of biosemiotics from living organisms to agents in general, which all belong to a pragmasphere or functional universe. Agents should be considered in the context of their hierarchy and origin because their semiosis can be inherited or induced by higher-level agents. To preserve and disseminate their functions, agents use functional information - a set of signs that encode and control their functions. It includes stable memory signs, transient messengers, and natural signs. The origin and evolution of functional information is discussed in terms of transitions between vegetative, animal, and social levels of semiosis, defined by Kull. Vegetative semiosis differs substantially from higher levels of semiosis, because signs are recognized and interpreted via direct code-based matching and are not associated with ideal representations of objects. Thus, I consider a separate classification of signs at the vegetative level that includes proto-icons, proto-indexes, and proto-symbols. Animal and social semiosis are based on classification, and modeling of objects, which represent the knowledge of agents about their body (Innenwelt) and environment (Umwelt).
 
Article
A mobile loop changes its conformation from "open" (free enzyme) to "closed" upon ligand binding. The difference in the Helmholtz free energy, ΔF(loop), between these states sheds light on the mechanism of binding. With our "hypothetical scanning molecular dynamics" (HSMD-TI) method, ΔF(loop) = F(free) - F(bound), where F(free) and F(bound) are calculated from two MD samples of the free and bound loop states; the contribution of water is obtained by a thermodynamic integration (TI) procedure. In previous work the free and bound loop structures were both attached to the same "template", which was "cut" from the crystal structure of the free protein. Our results for loop 287-290 of acetylcholinesterase agree with experiment, ΔF(loop) ≈ -4 kcal/mol, if the density of the TIP3P water molecules capping the loop is close to that of bulk water, i.e., N(water) = 140 - 180 waters in a sphere of an 18 Å radius. Here we calculate ΔF(loop) for the more realistic case, where two templates are "cut" from the crystal structures, 2dfp.pdb (bound) and 2ace.pdb (free), with N(water) = 40 - 160; this requires adding a computationally more demanding (second) TI procedure. While the results for N(water) ≤ 140 are computationally sound, ΔF(loop) is always positive (18 ± 2 kcal/mol for N(water) = 140). These (disagreeing) results are attributed to the large average B-factor of 2dfp, 41.6 Å$^2$ (23.4 Å$^2$ for 2ace). While this conformational uncertainty is an inherent difficulty, the (unstable) results for N(water) = 160 suggest that it might be alleviated by applying different (initial) structural optimizations to each template.
 
Schematic representations of a large loop of $N$ bonds deforming with fluctuating cross-linking hydrogen bonds. The number of cross-links can range from 0 to ≈ $N/4$. Note that because all cross-links are independent for these flexible structures, the DCM prediction for the entropy reduction depends only on the number of cross-links. The exact formula for entropy reduction completely accounts for the location of the cross-links and all accessible atomic geometries consistent with the fixed topology.
Article
We present a novel analytical method to calculate the conformational entropy of ideal cross-linking polymers from the configuration integral by employing a Mayer series expansion. Mayer functions describing chemical bonds within the chain and for cross-links are sharply peaked over the temperature range of interest and are well approximated as statistically weighted Dirac delta functions that enforce distance constraints. All geometrical deformations consistent with a set of distance constraints are integrated over. Exact results for a contiguous series of connected loops are employed to substantiate the validity of a previous phenomenological distance constraint model that successfully describes protein thermodynamics based on network rigidity.
 
Article
The differential Shannon entropy of information theory can change under a change of variables (coordinates), but the thermodynamic entropy of a physical system must be invariant under such a change. This difference is puzzling, because the Shannon and Gibbs entropies have the same functional form. We show that a canonical change of variables can, indeed, alter the spatial component of the thermodynamic entropy just as it alters the differential Shannon entropy. However, there is also a momentum part of the entropy, which turns out to undergo an equal and opposite change when the coordinates are transformed, so that the total thermodynamic entropy remains invariant. We furthermore show how one may correctly write the change in total entropy for an isothermal physical process in any set of spatial coordinates.
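The identity underlying this argument is the change-of-variables rule for differential entropy (a standard result, stated here for orientation): for an invertible map $y = f(x)$ with Jacobian $J$,

$$ h(Y) = h(X) + \int p(x)\,\ln\bigl|\det J(x)\bigr|\,dx. $$

For a canonical transformation of the full phase space, Liouville's theorem gives $|\det J| = 1$, so any change in the spatial (configurational) term is compensated by an equal and opposite change in the momentum term, which is exactly the invariance described above.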
 
Example results for test case 3. This re-plots one of the results from Figure 3, shown there in magenta, for the 1048576 random samples. Here, the accuracy can be seen more clearly on a semi-log scale.
Article
The maximum entropy method is a theoretically sound approach to construct an analytical form for the probability density function (pdf) given a sample of random events. In practice, the numerical methods employed to determine the appropriate Lagrange multipliers associated with a set of moments are generally unstable in the presence of noise due to limited sampling. A robust method is presented that always returns the best pdf, in which the tradeoff involved in smoothing a highly varying function in the presence of noise can be controlled. An unconventional adaptive simulated annealing technique, called funnel diffusion, determines the expansion coefficients for Chebyshev polynomials in the exponential function.
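For orientation, the conventional route the paper improves upon can be sketched in a few lines: the maximum entropy pdf with moment constraints has the form $p(x) \propto \exp(-\sum_k \lambda_k x^k)$, and the multipliers minimize a convex dual function. The sketch below (plain convex minimization on a grid; data, names and settings are invented for the example, and this is not the paper's funnel-diffusion algorithm) shows the moment-matching step that becomes unstable when the sample moments are noisy:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
sample = rng.normal(size=4096)                        # data whose pdf we reconstruct
orders = (1, 2, 3, 4)
mu = np.array([np.mean(sample**k) for k in orders])   # sample moments

x = np.linspace(-6.0, 6.0, 2001)                      # integration grid
dx = x[1] - x[0]
phi = np.array([x**k for k in orders])                # moment functions on the grid

def dual(lam):
    """Convex dual log Z(lam) + lam . mu; its minimizer matches the moments."""
    logq = -lam @ phi
    m = logq.max()                                    # numerical stabilization
    return m + np.log(np.sum(np.exp(logq - m)) * dx) + lam @ mu

lam = minimize(dual, np.zeros(len(orders)), method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-9, "fatol": 1e-12}).x
p = np.exp(-lam @ phi)
p /= np.sum(p) * dx                                   # normalized maxent pdf
# For Gaussian data, lam approaches (0, 0.5, 0, 0), i.e. p(x) ~ exp(-x^2/2).
```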
 
Stack plots of energy consumption (left panel) and population (right panel) in the USA, China, India, and the rest of the world from 1980-2010.
(Left panel) Parametric plots of the empirical $y_{emp}(x)$ versus exponential $y_{exp}(x)$ cumulative fractions of global CO$_2$ emissions, using the population fraction $x$ as a parameter. (Right panel) Parametric plots of the empirical $x_{emp}(y)$ versus exponential $x_{exp}(y)$ cumulative population fractions, using the CO$_2$ emissions fraction $y$ as a parameter.
Historical evolution of the global Gini coefficients for natural gas, coal, and petroleum consumption per capita. 
Article
We study the global probability distribution of energy consumption per capita around the world using data from the U.S. Energy Information Administration (EIA) for 1980-2010. We find that the Lorenz curves have moved up during this time period, and the Gini coefficient G has decreased from 0.66 in 1980 to 0.55 in 2010, indicating a decrease in inequality. The global probability distribution of energy consumption per capita in 2010 is close to the exponential distribution with G=0.5. We attribute this result to the globalization of the world economy, which mixes the world and brings it closer to the state of maximal entropy. We argue that global energy production is a limited resource that is partitioned among the world population. The most probable partition is the one that maximizes entropy, thus resulting in the exponential distribution function. A consequence of the latter is the law of 1/3: the top 1/3 of the world population consumes 2/3 of produced energy. We also find similar results for the global probability distribution of CO2 emissions per capita.
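The "law of 1/3" follows directly from the exponential form. For the exponential distribution, the Lorenz curve is

$$ L(f) = f + (1 - f)\ln(1 - f), $$

where $f$ is the cumulative population fraction, so the Gini coefficient is $G = 1 - 2\int_0^1 L(f)\,df = 1/2$, and the share of the top third of the population is

$$ 1 - L(2/3) = \tfrac{1}{3}\left(1 + \ln 3\right) \approx 0.70 \approx \tfrac{2}{3}. $$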
 
Article
We develop a general reconstruction scheme for two-scalar models. A quintom-like theory that may describe (different) non-singular Little Rip or de Sitter cosmologies is reconstructed. A number of scalar phantom dark energy models (with Little Rip cosmology or asymptotically de Sitter evolution) are presented. The stability of such dark energy cosmologies, as well as the flow to fixed points, is studied. The stability of the Little Rip universe, which leads to the dissolution of bound objects sometime in the future, indicates that no classical transition to de Sitter space occurs. The possibility of unifying inflation with Little Rip dark energy in a two-scalar theory is briefly mentioned.
 
Article
We review a selection of methods for performing enhanced sampling in molecular dynamics simulations. We consider methods based on collective variable biasing and on tempering, and offer both historical and contemporary perspectives. In collective-variable biasing, we first discuss methods stemming from thermodynamic integration that use mean force biasing, including the adaptive biasing force algorithm and temperature acceleration. We then turn to methods that use bias potentials, including umbrella sampling and metadynamics. We next consider parallel tempering and replica-exchange methods. We conclude with a brief presentation of some combination methods.
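As a concrete example of the bias-potential family reviewed here, a minimal plain-metadynamics loop deposits Gaussian hills along the trajectory until the accumulated bias roughly cancels the underlying free-energy barriers. This is an illustrative sketch under invented settings (toy double-well potential, overdamped Langevin dynamics, made-up hill parameters), not code from the review:

```python
import numpy as np

rng = np.random.default_rng(1)

def grad_V(x):
    """Gradient of a double-well potential V(x) = (x^2 - 1)^2."""
    return 4.0 * x * (x**2 - 1.0)

centers, w, sigma = [], 0.05, 0.2      # deposited hills: heights w, widths sigma

def grad_bias(x):
    """Gradient of the sum of deposited Gaussian hills."""
    if not centers:
        return 0.0
    c = np.asarray(centers)
    return float(np.sum(-w * (x - c) / sigma**2
                        * np.exp(-(x - c)**2 / (2.0 * sigma**2))))

x, dt, kT = -1.0, 1e-3, 0.1
for step in range(100_000):
    noise = np.sqrt(2.0 * kT * dt) * rng.normal()
    x += -(grad_V(x) + grad_bias(x)) * dt + noise   # biased overdamped Langevin
    if step % 500 == 0:
        centers.append(x)              # metadynamics: drop a hill where we are

# With the bias on, the walker crosses the barrier at x = 0 many times;
# the deposited hills approximate -V(x) (up to a constant) where sampled.
```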
 
Article
This first article of a series formulates the thermodynamics of ideal gases in a constant gravitational field in terms of an action principle that is closely integrated with thermodynamics. The theory, in its simplest form, does not deviate from standard practice, but it lays the foundations for a more systematic approach to the various extensions, such as the incorporation of radiation, the consideration of mixtures and the integration with General Relativity. We study the interaction between an ideal gas and the photon gas, and propose a new approach to this problem. We study the propagation of sound in a vertical, isothermal column and are led to suggest that the theory is incomplete, and to ask whether the true equilibrium state of an ideal gas may turn out to be adiabatic, in which case the role of solar radiation is merely to compensate for the loss of energy by radiation into the cosmos. An experiment with a centrifuge is proposed to determine the influence of gravitation on the equilibrium distribution with a very high degree of precision.
 
Calculation of translational and rotational action (@). (A) Mean translational action $@_t$ is estimated, as explained in the text, from the average separation $a = 2r$, by allocating each molecule a space of $a^3 = V/N$, where $V$ is the total volume and $N$ is the total number of diatomic molecules such as dinitrogen (N$_2$). Relative angular motion $d\theta/dt = \omega$ is estimated for molecules exhibiting the root-mean-square velocity, taking $3kT = mv^2 = mr^2\omega^2$. The translational action is then equal to $(3kTI_t)^{1/2}$. (B) The rotational action $@_r$ for linear molecules such as N$_2$, O$_2$ and CO$_2$ is estimated similarly, equated to $(2kTI_r)^{1/2}$.
Flow diagram for computing absolute entropy and Gibbs energy. A fully annotated description of the relevant algorithms and subroutines to compute entropy and free energy is available on request to the corresponding author. 
Article
A convenient model for estimating the total entropy ($\Sigma S_i$) of atmospheric gases based on physical action is proposed. This realistic approach is fully consistent with statistical mechanics, but uses the properties of translational, rotational and vibrational action to partition the entropy. When all sources of action are computed as appropriate non-linear functions, the total input of thermal energy ($\Sigma S_i T$) required to sustain a chemical system at specific temperatures (T) and pressures (p) can be estimated, yielding results in close agreement with published experimental third law values. Thermodynamic properties of gases including enthalpy, Gibbs energy and Helmholtz energy can be easily calculated from simple molecular and physical properties. We propose that these values for entropy are employed both chemically for reactions and physically for computing atmospheric profiles, the latter based on steady state heat flow equilibrating thermodynamics with gravity. We also predict that this application of action thermodynamics may soon provide superior understanding of reaction rate theory, morphogenesis and emergent or self-organising properties of many natural or evolving systems.
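The paper's action-based partition is not reproduced here, but the third-law benchmark it compares against is easy to state with conventional statistical mechanics. As a reference sketch (textbook Sackur-Tetrode equation plus a linear rigid rotor; the constants and the rotational temperature of N$_2$ are standard values), the standard-state entropy of N$_2$ comes out within about 0.1% of the experimental value:

```python
import numpy as np

k  = 1.380649e-23      # Boltzmann constant, J/K
h  = 6.62607015e-34    # Planck constant, J s
NA = 6.02214076e23     # Avogadro constant, 1/mol
R  = k * NA

T, p = 298.15, 1.0e5                  # standard temperature (K) and pressure (Pa)
m = 28.0134e-3 / NA                   # mass of one N2 molecule, kg
theta_rot, sigma = 2.88, 2            # rotational temperature (K), symmetry number

lam = h / np.sqrt(2.0 * np.pi * m * k * T)            # thermal de Broglie wavelength
S_trans = R * (np.log(k * T / (p * lam**3)) + 2.5)    # Sackur-Tetrode
S_rot   = R * (np.log(T / (sigma * theta_rot)) + 1.0) # linear rigid rotor

print(S_trans + S_rot)   # ~191.5 J/(mol K); the third-law value is ~191.6
```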
 
Article
Searching for the dynamical foundations of the Havrda-Charvát/Daróczy/Cressie-Read/Tsallis non-additive entropy, we come across a covariant quantity called, alternatively, a generalized Ricci curvature, an $N$-Ricci curvature or a Bakry-Émery-Ricci curvature in the configuration/phase space of a system. We explore some of the implications of this tensor and its associated curvature and present a connection with the non-additive entropy under investigation. We present an isoperimetric interpretation of the non-extensive parameter and comment on further features of the system that can be probed through this tensor.
 
Article
Since Euclidean global AdS_2 space represented as a strip has two boundaries, the state/operator correspondence in the dual CFT_1 reduces to the standard map from the operators acting on a single copy of the Hilbert space to states in the tensor product of two copies of the Hilbert space. Using this picture we argue that the corresponding states in the dual string theory living on AdS_2 x K are described by twisted versions of the Hartle-Hawking states, the twists being generated by a large unitary group of symmetries that this string theory must possess. This formalism makes natural the dual interpretation of the black hole entropy: as the logarithm of the degeneracy of ground states of the quantum mechanics describing the low energy dynamics of the black hole, and also as an entanglement entropy between the two copies of the same quantum theory living on the two boundaries of global AdS_2 separated by the event horizon.
 
Entropies parametrized in the $(c, d)$-plane, with their associated distribution functions. BG entropy corresponds to $(1, 1)$, Tsallis entropy to $(c, 0)$, and entropies for stretched exponentials to $(1, d > 0)$. Entropies leading to distribution functions with compact support belong to equivalence class $(1, 0)$. Figure from [3].
Example of an auto-correlated random walk that persistently walks in the same direction for $\propto n^{1-\alpha}$ steps ($\alpha = 0.5$).
Article
Complex systems are often inherently non-ergodic and non-Markovian for which Shannon entropy loses its applicability. In particular accelerating, path-dependent, and aging random walks offer an intuitive picture for these non-ergodic and non-Markovian systems. It was shown that the entropy of non-ergodic systems can still be derived from three of the Shannon-Khinchin axioms, and by violating the fourth -- the so-called composition axiom. The corresponding entropy is of the form $S_{c,d} \sim \sum_i \Gamma(1+d,1-c\ln p_i)$ and depends on two system-specific scaling exponents, $c$ and $d$. This entropy contains many recently proposed entropy functionals as special cases, including Shannon and Tsallis entropy. It was shown that this entropy is relevant for a special class of non-Markovian random walks. In this work we generalize these walks to a much wider class of stochastic systems that can be characterized as `aging' systems. These are systems whose transition rates between states are path- and time-dependent. We show that for particular aging walks $S_{c,d}$ is again the correct extensive entropy. Before the central part of the paper we review the concept of $(c,d)$-entropy in a self-contained way.
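A direct numerical sketch of $S_{c,d}$ (constants and normalization conventions omitted; the helper below is invented for the illustration) shows how the Shannon case is recovered: at $(c,d) = (1,1)$, $\Gamma(2, 1-\ln p) = p\,(2-\ln p)/e$, so $S_{1,1}$ is an affine function of the Shannon entropy in nats:

```python
import numpy as np
from scipy.special import gamma, gammaincc

def S_cd(p, c, d):
    """(c,d)-entropy ~ sum_i Gamma(1+d, 1 - c ln p_i), constants omitted."""
    p = np.asarray(p, dtype=float)
    x = 1.0 - c * np.log(p)
    # gammaincc is the *regularized* upper incomplete gamma function,
    # so Gamma(a, x) = gammaincc(a, x) * Gamma(a).
    return float(np.sum(gammaincc(1 + d, x) * gamma(1 + d)))

p = np.array([0.5, 0.25, 0.125, 0.125])
H = -np.sum(p * np.log(p))                # Shannon entropy in nats
print(S_cd(p, 1, 1), (2 + H) / np.e)      # the two numbers agree
```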
 
Structure of the TDMA scheme, with $n_{t,1} = 2$, $n_{t,2} = 1$, $L = 4$ and $T = 2$.
Article
We study the information rates of non-coherent, stationary, Gaussian, multiple-input multiple-output (MIMO) flat-fading channels that are achievable with nearest neighbor decoding and pilot-aided channel estimation. In particular, we investigate the behavior of these achievable rates in the limit as the signal-to-noise ratio (SNR) tends to infinity by analyzing the capacity pre-log, which is defined as the limiting ratio of the capacity to the logarithm of the SNR as the SNR tends to infinity. We demonstrate that a scheme estimating the channel using pilot symbols and detecting the message using nearest neighbor decoding (while assuming that the channel estimation is perfect) essentially achieves the capacity pre-log of non-coherent multiple-input single-output flat-fading channels, and it essentially achieves the best so far known lower bound on the capacity pre-log of non-coherent MIMO flat-fading channels. We then extend our analysis to the multiple-access channel.
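Writing the pre-log as $\Pi$ (the symbol is ours; the definition is the one stated above),

$$ \Pi = \lim_{\mathrm{SNR} \to \infty} \frac{C(\mathrm{SNR})}{\log \mathrm{SNR}}, $$

so a pre-log of $\Pi$ means the capacity grows like $\Pi \log \mathrm{SNR}$ at high SNR.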
 
Article
The dynamics of dissipative fluids in Eulerian variables may be derived from an algebra of Leibniz brackets of observables, the metriplectic algebra, that extends the Poisson algebra of the frictionless limit of the system via a symmetric semidefinite component, encoding dissipative forces. The metriplectic algebra includes the conserved total Hamiltonian H, generating the non-dissipative part of dynamics, and the entropy S of those microscopic degrees of freedom draining energy irreversibly, which generates dissipation. This S is a Casimir invariant of the Poisson algebra to which the metriplectic algebra reduces in the frictionless limit. The role of S is as paramount as that of H, but this fact may be underestimated in the Eulerian formulation because S is not the only Casimir of the symplectic non-canonical part of the algebra. Instead, when the dynamics of the non-ideal fluid is written through the parcel variables of the Lagrangian formulation, the fact that entropy is symplectically invariant clearly appears to be related to its dependence on the microscopic degrees of freedom of the fluid, that are themselves in involution with the position and momentum of the parcel.
 
Article
Starting from a very general trace-form entropy, we introduce a pair of algebraic structures endowed with a generalized sum and a generalized product. These algebras form two Abelian fields in the realm of the complex numbers, isomorphic to each other. We specify our results to several entropic forms related to distributions recurrently observed in social, economic, biological and physical systems, including the stretched exponential, the power-law and the interpolating Bosons-Fermions distributions. Some potential applications in the study of complex systems are advanced.
 
Authorship attribution. Comparison with other compression-based methods.
Article
Information content and compression are tightly related concepts that can be addressed through both classical and algorithmic information theories, on the basis of Shannon entropy and Kolmogorov complexity, respectively. The definition of several entities in Kolmogorov’s framework relies upon ideas from classical information theory, and these two approaches share many common traits. In this work, we expand the relations between these two frameworks by introducing algorithmic cross-complexity and relative complexity, counterparts of the cross-entropy and relative entropy (or Kullback-Leibler divergence) found in Shannon’s framework. We define the cross-complexity of an object x with respect to another object y as the amount of computational resources needed to specify x in terms of y, and the complexity of x related to y as the compression power which is lost when adopting such a description for x, compared to the shortest representation of x. Properties of analogous quantities in classical information theory hold for these new concepts. As these notions are incomputable, a suitable approximation based upon data compression is derived to enable the application to real data, yielding a divergence measure applicable to any pair of strings. Example applications are outlined, involving authorship attribution and satellite image classification, as well as a comparison to similar established techniques.
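The compression-based approximation can be illustrated with the closely related normalized compression distance, an established measure in the same family; the snippet below is a generic sketch with invented test strings, not the paper's exact cross-complexity estimator:

```python
import zlib

def C(b: bytes) -> int:
    """Approximate Kolmogorov complexity by compressed length (zlib, level 9)."""
    return len(zlib.compress(b, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: a computable stand-in for the
    (incomputable) information distance between two strings."""
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog " * 20
b_ = b"the quick brown fox leaps over the lazy cat " * 20
c_ = bytes(range(256)) * 4
print(ncd(a, b_))   # small: the two texts share most of their structure
print(ncd(a, c_))   # larger: little shared structure
```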
 
Article
E.T. Jaynes, originator of the maximum entropy interpretation of statistical mechanics, emphasized that there is an inevitable trade-off between the conflicting requirements of robustness and accuracy for any inferencing algorithm. This is because robustness requires the discarding of information in order to reduce the sensitivity to outliers. The principle of nonlinear statistical coupling, which is an interpretation of the Tsallis entropy generalization, can be used to quantify this trade-off. The coupled-surprisal, $-\ln_k(p) = -(p^k-1)/k$, is a generalization of the Shannon surprisal, or the logarithmic scoring rule, given a forecast p of a true event by an inferencing algorithm. The coupling parameter $k = 1-q$, where q is the Tsallis entropy index, is the degree of nonlinear coupling between statistical states. Positive (negative) values of nonlinear coupling decrease (increase) the surprisal information metric and thereby bias the risk in favor of decisive (robust) algorithms relative to the Shannon surprisal ($k = 0$). We show that translating the average coupled-surprisal to an effective probability is equivalent to using the generalized mean of the true event probabilities as a scoring rule. The metric is used to assess the robustness, accuracy, and decisiveness of a fusion algorithm. We use a two-parameter fusion algorithm to combine input probabilities from N sources. The generalized mean parameter 'alpha' varies the degree of smoothing, and raising to a power $N^\beta$ with $\beta$ between 0 and 1 provides a model of correlation.
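A few lines make the behavior of the coupled-surprisal concrete (an illustrative sketch; the probability value is arbitrary):

```python
import numpy as np

def coupled_surprisal(p, k):
    """Coupled-surprisal -ln_k(p) = -(p**k - 1)/k; reduces to -ln(p) as k -> 0."""
    if k == 0:
        return -np.log(p)
    return -(p**k - 1.0) / k

p = 0.3
print(coupled_surprisal(p, 0.0))    # Shannon surprisal, ~1.204
print(coupled_surprisal(p, 1e-8))   # continuous in k: approaches the k = 0 value
print(coupled_surprisal(p, 0.5))    # k > 0: smaller penalty, favors decisiveness
print(coupled_surprisal(p, -0.5))   # k < 0: larger penalty, favors robustness
```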
 
Article
In this paper, we review and extend a family of log-det divergences for symmetric positive definite (SPD) matrices and discuss their fundamental properties. We show how to generate from the parameterized Alpha-Beta (AB) and Gamma log-det divergences many well known divergences, for example, Stein's loss, the S-divergence (also called the Jensen-Bregman LogDet (JBLD) divergence), the Logdet Zero (Bhattacharyya) divergence, and the Affine Invariant Riemannian Metric (AIRM), as well as some new divergences. Moreover, we establish links and correspondences among many log-det divergences and display them on the alpha-beta plane for various sets of parameters. Furthermore, this paper bridges these divergences and also shows their links to divergences of multivariate and multiway Gaussian distributions. Closed form formulas are derived for gamma divergences of two multivariate Gaussian densities, including as special cases the Kullback-Leibler, Bhattacharyya, Rényi and Cauchy-Schwarz divergences. Symmetrized versions of the log-det divergences are also discussed and reviewed. A class of divergences is extended to multiway divergences for separable covariance (precision) matrices.
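As one concrete member of this family, the closed-form Kullback-Leibler divergence between two multivariate Gaussians (a standard formula, given here as an illustrative sketch with made-up inputs) reads:

```python
import numpy as np

def kl_gauss(mu0, S0, mu1, S1):
    """KL divergence D(N(mu0, S0) || N(mu1, S1)) between multivariate Gaussians."""
    k = len(mu0)
    S1_inv = np.linalg.inv(S1)
    d = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0) + d @ S1_inv @ d - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

mu0, S0 = np.zeros(2), np.eye(2)
mu1, S1 = np.array([1.0, 0.0]), np.array([[2.0, 0.3], [0.3, 1.0]])
print(kl_gauss(mu0, S0, mu1, S1))   # asymmetric: differs from the reversed order
```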
 
Seasonal effects in hosting. Shown here, for the discrete sampling points, is the probability that an observed pair is a hosting pair. A lull in late June separates two more active periods; the May period is overall the most active, followed by the July and August season. 
Article
Reciprocity is a vital feature of social networks, but little is known about its structure in real world behavior, or the mechanisms underlying its persistence. In pursuit of these two questions, we study the stationary and dynamical signals of reciprocity in a network of manioc beer drinking events in a Tsimane' village in lowland Bolivia. At the stationary level, our analysis reveals that social exchange within the community is heterogeneous, even when controlling for kinship. A positive relationship between the frequencies at which two families host each other provides evidence for stationary reciprocity. Our analysis of the dynamical structure of this network presents a novel method for the study of conditional, or non-stationary, reciprocity effects. We find that levels of cooperation seen over the sixteen-week study period can be accounted for on the stationary hypothesis alone. This suggests that `tit for tat' effects do not play an observable role in maintaining the high levels of stationary reciprocation seen in this system.
 
Article
Since Renes et al. [J. Math. Phys. 45, 2171 (2004)], there has been much effort in the quantum information community to prove (or disprove) the existence of symmetric informationally complete (SIC) sets of quantum states in arbitrary finite dimension. This paper strengthens the urgency of this question by showing that if SIC-sets exist: 1) by a natural measure of orthonormality, they are as close to being an orthonormal basis for the space of density operators as possible, and 2) in prime dimensions, the standard construction for complete sets of mutually unbiased bases and Weyl-Heisenberg covariant SIC-sets are intimately related: The latter represent minimum uncertainty states for the former in the sense of Wootters and Sussman. Finally, we contribute to the question of existence by conjecturing a quadratic redundancy in the equations for Weyl-Heisenberg SIC-sets.
 
Article
A stochastic nonlinear dynamical system generates information, as measured by its entropy rate. Some---the ephemeral information---is dissipated and some---the bound information---is actively stored and so affects future behavior. We derive analytic expressions for the ephemeral and bound informations in the limit of small-time discretization for two classical systems that exhibit dynamical equilibria: first-order Langevin equations (i) where the drift is the gradient of a potential function and the diffusion matrix is invertible and (ii) with a linear drift term (Ornstein-Uhlenbeck) but a noninvertible diffusion matrix. In both cases, the bound information is sensitive only to the drift, while the ephemeral information is sensitive only to the diffusion matrix and not to the drift. Notably, this information anatomy changes discontinuously as any of the diffusion coefficients vanishes, indicating that it is very sensitive to the noise structure. We then calculate the information anatomy of the stochastic cusp catastrophe and of particles diffusing in a heat bath in the overdamped limit, both examples of stochastic gradient descent on a potential landscape. Finally, we use our methods to calculate and compare approximations for the so-called time-local predictive information for adaptive agents.
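In the notation common to this information-anatomy literature (stated here for orientation; the paper gives the precise small-time-discretization definitions), with past $X_{:0}$, present $X_0$ and future $X_{1:}$, the entropy rate splits as

$$ h_\mu = H[X_0 \mid X_{:0}] = r_\mu + b_\mu, \qquad r_\mu = H[X_0 \mid X_{:0}, X_{1:}], \qquad b_\mu = I[X_0 ; X_{1:} \mid X_{:0}], $$

so the ephemeral part $r_\mu$ is the information in the present shared with neither past nor future, while the bound part $b_\mu$ is what the present newly communicates to the future.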
 
Article
A directed acyclic graph (DAG) partially represents the conditional independence structure among observations of a system if the local Markov condition holds, that is, if every variable is independent of its non-descendants given its parents. In general, there is a whole class of DAGs that represents a given set of conditional independence relations. We are interested in properties of this class that can be derived from observations of a subsystem only. To this end, we prove an information theoretic inequality that allows for the inference of common ancestors of observed parts in any DAG representing some unknown larger system. More explicitly, we show that a large amount of dependence in terms of mutual information among the observations implies the existence of a common ancestor that distributes this information. Within the causal interpretation of DAGs our result can be seen as a quantitative extension of Reichenbach's Principle of Common Cause to more than two variables. Our conclusions are valid also for non-probabilistic observations such as binary strings, since we state the proof for an axiomatized notion of mutual information that includes the stochastic as well as the algorithmic version.
 
Article
In this paper, new techniques that use conditional entropy to estimate the combinatorics of symbols are applied to animal communication studies that rely on information theory. By using conditional entropy estimates at multiple orders, the paper estimates the total repertoire sizes for animal communication across bottlenose dolphins, humpback whales, and several species of birds for N-grams of one, two, and three combined units. How this can influence our estimates and ideas about the complexity of animal communication is also discussed.
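A minimal version of the order-$k$ conditional entropy estimate (an illustrative sketch; the toy string stands in for a sequence of song-unit labels):

```python
import math
from collections import Counter

def conditional_entropy(seq, order):
    """H(X_n | previous `order` symbols) in bits, from n-gram counts."""
    ctx  = Counter(tuple(seq[i:i + order]) for i in range(len(seq) - order))
    full = Counter(tuple(seq[i:i + order + 1]) for i in range(len(seq) - order))
    n = sum(full.values())
    H = 0.0
    for gram, c in full.items():
        H -= (c / n) * math.log2(c / ctx[gram[:-1]])
    return H

song = list("abcabcabdabcabcabd")      # toy sequence of unit labels
for k in (0, 1, 2):
    print(k, conditional_entropy(song, k))
# The estimate drops with order: most units are predictable from context,
# which is what constrains the effective repertoire combinatorics.
```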
 
Article
Previous studies modelled the origin of life and the emergence of photosynthesis on the early Earth, i.e., the origin of plants, in terms of biological heat engines that worked on thermal cycling caused by suspension in convecting water. In this new series of studies, heat engines using a more complex mechanism for thermal cycling are invoked to explain the origin of animals as well. Biological exploitation of the thermal gradient above a submarine hydrothermal vent is hypothesized, where a relaxation oscillation in the length of a protein 'thermotether' would have yielded the thermal cycling required for thermosynthesis. Such a thermal transition driven movement is not impeded by the low Reynolds number of a small scale. In the model the thermotether together with the protein export apparatus evolved into a 'flagellar proton pump' that turned into today's bacterial flagellar motor after the acquisition of the proton-pumping respiratory chain. The flagellar pump resembles Feynman's ratchet, and the 'flagellar computer' that implements chemotaxis a Turing machine: the stator would have functioned as Turing's paper tape and the stator's proton-transferring subunits with their variable conformation as the symbols on the tape. The existence of a cellular control centre in the cilium of the eukaryotic cell is proposed that would have evolved from the prokaryotic flagellar computer.
 
Example of positive ternary interaction with Q = 0 [2].
Forty-eight title words in rotated vector space [29].
Triads and higher-order coauthorship patterns in Social Studies of Science (2004-2008).
Map based on bibliographic coupling of 395 references in the 102 articles from Social Networks; cosine ≥ 0.5; [30]. For the sake of readability a selection of 136 nodes (for the partitions 4 ≤ k ≤ 10) is indicated with legends.
Interaction information ($I_{ABC \to AB:AC:BC}$) and remaining redundancy ($-\mu^*$ or $Q$) among the three main components in different dimensions and combinations of dimensions, on the basis of Social Studies of Science (2004-2008).
Article
Mutual information among three or more dimensions ($\mu^* = -Q$) has been considered as interaction information. However, Krippendorff (2009a, 2009b) has shown that this measure cannot be interpreted as a unique property of the interactions and has proposed an alternative measure of interaction information based on iterative approximation of maximum entropies. Q can then be considered as a measure of the difference between interaction information and redundancy generated in a model entertained by an observer. I argue that this provides us with a measure of the imprint of a second-order observing system -- a model entertained by the system itself -- on the underlying information processing. The second-order system communicates meaning hyper-incursively; an observation instantiates this meaning-processing within the information processing. The net results may add to or reduce the prevailing uncertainty. The model is tested empirically for the case where textual organization can be expected to contain intellectual organization in terms of distributions of title words, author names, and cited references.
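For three variables, the quantity at issue is the co-information

$$ \mu^* = H(A) + H(B) + H(C) - H(A,B) - H(A,C) - H(B,C) + H(A,B,C), $$

with the caveat, central to the argument above, that sign conventions differ across the literature (hence $\mu^* = -Q$) and that Krippendorff's critique concerns precisely whether this quantity can be read as interaction information at all.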
 
CP and T transformations of thermodynamic systems. The fill levels correspond to the intrinsic temperatures. Common-sense interpretations "cold" and "hot" are indicated when applicable.
Thermodynamic interactions of a system (right) and antisystem (left), where a limited amount of thermal energy $\delta Q < \delta Q_{max}$ is allowed through the time window.
Increase of the phase-space volume in irreversible evolutions, and causality.
Article
Conventional thermodynamics, which is formulated for our world populated by radiation and matter, can be extended to describe physical properties of antimatter in two mutually exclusive ways: CP-invariant or CPT-invariant. Here we refer to invariance of physical laws under charge (C), parity (P) and time reversal (T) transformations. While in quantum field theory CPT invariance is a theorem confirmed by experiments, the symmetry principles applied to macroscopic phenomena or to the whole of the Universe represent only hypotheses. Since both versions of thermodynamics are different only in their treatment of antimatter, but are the same in describing our world dominated by matter, making a clear experimentally justified choice between CP invariance and CPT invariance in context of thermodynamics is not possible at present. This work investigates the comparative properties of the CP- and CPT-invariant extensions of thermodynamics (focusing on the latter, which is less conventional than the former) and examines conditions under which these extensions can be experimentally tested.
 
Article
The Stirling approximations of the factorials and multinomial coefficients are generalized based on the one-parameter ($\kappa$) deformed functions introduced by Kaniadakis [Phys. Rev. E 66 (2002) 056125]. We obtain the relation between the $\kappa$-generalized multinomial coefficients and the $\kappa$-entropy by introducing a new $\kappa$-product operation.
 
Quantum circuit for implementing remote state preparation (RSP) of arbitrary two-qubit entangled states. $|M^{ij}\rangle_{14}$ denotes a two-qubit projective measurement on Qubits 1 and 4 under a set of complete orthogonal basis vectors $\{|M^{ij}\rangle_{14}\}$; $\hat{U}^{ij}_{25}$ denotes Alice's appropriate collective unitary transformation on the bipartite (2,5); $\hat{U}_{36A}$ denotes Bob's collective three-qubit unitary transformation on his Qubits 3, 6 and A; and $\hat{U}^{ijrs}_{36}$ denotes Bob's appropriate ...
Quantum circuit for implementing RSP of arbitrary three-qubit entangled states. $|M^{ijk}\rangle_{147}$ denotes a three-qubit projective measurement on Qubits 1, 4 and 7 under a set of complete orthogonal basis vectors $\{|M^{ijk}\rangle_{147}\}$; $\hat{U}^{ijk}_{258}$ denotes Alice's appropriate triplet collective unitary transformation on the triplet (2,5,8); $\hat{U}_{369A}$ denotes Bob's collective four-qubit unitary transformation on his Qubits 3, 6, 9 and A; and $\hat{U}^{ijkrst}_{369}$ denotes Bob's appropriate ...
Article
Herein, we present a feasible, general protocol for quantum communication within a network via generalized remote preparation of an arbitrary $m$-qubit entangled state designed with genuine tripartite Greenberger-Horne-Zeilinger-type entangled resources. During the implementations, we construct novel collective unitary operations; these operations are tasked with performing the necessary phase transfers during remote state preparations. We have distilled our implementation methods into a five-step procedure, which can be used to faithfully recover the desired state during transfer. Compared to previous existing schemes, our methodology features a greatly increased success probability. After the consumption of auxiliary qubits and the performance of collective unitary operations, the probability of successful state transfer is increased four-fold and eight-fold for arbitrary two- and three-qubit entanglements, respectively, when compared to other methods within the literature. We conclude this paper with a discussion of the presented scheme for state preparation, including: success probabilities, reducibility and generalizability.
 
HMM for the parametrized SNS (inset) and example interevent distributions $F(n)$ from Eq. (15) for three parameter settings.
(Top) Information anatomy of the SNS as a function of $p$, with parameters $p = q$. The single-measurement entropy $H[X_0]$ is the solid red line, the entropy rate $h_\mu$ the solid green line, and the bound information $b_\mu$ the solid blue line. Thus, the blue area corresponds to $b_\mu$, the green area to the ephemeral information $r_\mu = h_\mu - b_\mu$, and the red area to the single-symbol redundancy $\rho_\mu = H[X_0] - h_\mu$. (Bottom) The components of the predictable information, the excess entropy $E = \sigma_\mu + b_\mu + q_\mu$ in bits, also as a function of $p$ with $p = q$. The blue line is $q_\mu$; the green line is $q_\mu + b_\mu$, so that the green area denotes $b_\mu$'s contribution to $E$. The red line is $E$, so that the red area denotes the elusive information $\sigma_\mu$ in $E$. Note that for a range of $p$ the co-information $q_\mu$ is (slightly) negative.
Article
Renewal processes are broadly used to model stochastic behavior consisting of isolated events separated by periods of quiescence, whose durations are specified by a given probability law. Here, we identify the minimal sufficient statistic for their prediction (the set of causal states), calculate the historical memory capacity required to store those states (statistical complexity), delineate what information is predictable (excess entropy), and decompose the entropy of a single measurement into that shared with the past, future, or both. The causal state equivalence relation defines a new subclass of renewal processes with a finite number of causal states despite having an unbounded interevent count distribution. We apply our new formulae for information measures to analyze the output of the parametrized simple nonunifilar source, a simple two-state machine with an infinite-state epsilon-machine presentation. All in all, the results lay the groundwork for analyzing processes with divergent statistical complexity and divergent excess entropy.
 
Autonomous vs. nonautonomous dynamics. Top: Autonomous evolution of a gas from a non-equilibrium state to an equilibrium state (Minus-First Law). Bottom: Nonautonomous evolution of a thermally isolated gas between two equilibrium states. The piston moves according to a pre-determined protocol specifying its position $\lambda_t$ in time. The entropy ...
Article
The recent development of the theory of fluctuation relations has led to new insights into the everlasting question of how irreversible behavior emerges from time-reversal symmetric microscopic dynamics. We provide an introduction to fluctuation relations, examine their relation to dissipation and discuss their impact on the arrow of time question.
 
Four random two-time boundary solutions (forming a narrow bundle in the diagram) are compared with two special ones, selected by trial and error for their slightly lower entropy values at $t_0 = 200$ or $t_0 = -200$. Values for $t < 0$ are identical with those at $t_f - t = 200.000 - t$, although the final condition is actually irrelevant in the range shown. Entropy scattering around $t = 1300$ is accidental. (See the Appendix of [7] for details of the model and an elementary Mathematica program.)
Article
I argue that opposite arrows of time, while being logically possible, cannot realistically be assumed to exist during one and the same epoch of this universe.
 
Transfer entropy from the first electronic circuit towards the second. The upper figure shows time-varying TE versus the lag introduced in the temporal activation of the first circuit. Clearly, there is a directional flow of information time-locked at lag $\tau = 20$ samples, which is significant for all time instants ($p < 0.01$). On the other hand, the flow of information in the opposite direction was much smaller ($T_{1 \leftarrow 2} < 0.0795$ nats $\forall (t, \tau)$) and only reached significance ($p < 0.01$) for about 1% of the tuples $(n, \tau)$. The lower fig...
Article
Finding interdependency relations between (possibly multivariate) time series provides valuable knowledge about the processes that generate the signals. Information theory sets a natural framework for non-parametric measures of several classes of statistical dependencies. However, reliable estimation from information-theoretic functionals is hampered when the dependency to be assessed is brief or evolves in time. Here, we show that these limitations can be overcome when we have access to an ensemble of independent repetitions of the time series. In particular, we gear a data-efficient estimator of probability densities to make use of the full structure of trial-based measures. By doing so, we can obtain time-resolved estimates for a family of entropy combinations (including mutual information, transfer entropy, and their conditional counterparts) which are more accurate than the simple average of individual estimates over trials. We show with simulated and real data that the proposed approach allows one to recover the time-resolved dynamics of the coupling between different subsystems.
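The ensemble idea can be sketched with a plain binned estimator (illustrative only; the paper uses a more data-efficient density estimator): at each time point, the joint distribution is estimated across trials rather than across time, so brief dependencies are not washed out:

```python
import numpy as np

def mi_over_trials(X, Y, bins=8):
    """Time-resolved mutual information (bits); X, Y have shape (trials, time).
    At each time point the distribution is estimated across the trial ensemble."""
    n_trials, n_time = X.shape
    mi = np.empty(n_time)
    for t in range(n_time):
        pxy, _, _ = np.histogram2d(X[:, t], Y[:, t], bins=bins)
        pxy /= pxy.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        mi[t] = np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))
    return mi

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 100))
Y = rng.normal(size=(500, 100))
Y[:, 45:55] += X[:, 45:55]                 # coupling only in a brief window
mi = mi_over_trials(X, Y)
print(mi[:40].mean(), mi[45:55].mean())    # small (estimator bias) vs. clearly positive
```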
 
Article
We study a system represented by a Bose-Einstein condensate interacting with a cavity field in the presence of a strong off-resonant pumping laser. This system can be described by a three-mode Gaussian state, where two are the atomic modes corresponding to atoms populating the upper and lower momentum sidebands and the third mode describes the scattered cavity field light. We show that, as a consequence of the collective atomic recoil instability, these modes possess a genuine tripartite entanglement that increases unboundedly with the evolution time and is larger than the bipartite entanglement in any reduced two-mode bipartition. We further show that the state of the system exhibits genuine tripartite nonlocality, which can be revealed by a robust violation of the Svetlichny inequality when performing displaced parity measurements. Our exact results are obtained by exploiting the powerful machinery of phase-space informational measures for Gaussian states, which we briefly review in the opening sections of the paper.
 
Search area discretisation: the complete grid, with the length of each link equal to 1. The centre of the search area is at (0, 0); its radius is $R_0 = 9$.
A model of the search area with obstacles: the missing links of the complete graph of Fig. 1 represent blocked passages (due to walls, closed doors, etc.) for moving particles. This incomplete grid is obtained by removing a fraction $p = 0.35$ of the links from the complete graph.
Mean concentration of tracer particles for the search area modelled by the incomplete graph of Fig. 2, with the source placed at $(X, Y) = (0, 7)$ with $A_0 = 12$ (darker cells indicate higher concentration).
The dynamic Bayesian network representing the dependency between the random variables which feature in the described inference problem.
Article
The paper presents an approach to olfactory search for a diffusive emitting source of tracer (e.g. aerosol, gas) in an environment with an unknown map of randomly placed and shaped obstacles. The measurements of tracer concentration are sporadic, noisy and without directional information. The search domain is discretised and modelled by a finite two-dimensional lattice. The links in the lattice represent the traversable paths for emitted particles and for the searcher. A missing link in the lattice indicates a blocked path, due to walls or obstacles. The searcher must simultaneously estimate the source parameters, the map of the search domain and its own location within the map. The solution is formulated in the sequential Bayesian framework and implemented as a Rao-Blackwellised particle filter with information-driven motion control. The numerical results demonstrate the concept and its performance.
 
Article
Over the last decade it has been found that nonlinear laws of composition of momenta are predicted by some alternative approaches to "real" 4D quantum gravity, and by all formulations of dimensionally-reduced (3D) quantum gravity coupled to matter. The possible relevance for rather different quantum-gravity models has motivated several studies, but this interest is being tempered by concerns that a nonlinear law of addition of momenta might inevitably produce a pathological description of the total momentum of a macroscopic body. I here show that such concerns are unjustified, finding that they are rooted in a failure to appreciate the differences between two roles for laws of composition of momenta in physics. Previous results relied exclusively on the role of a law of momentum composition in the description of spacetime locality. However, the notion of total momentum of a multi-particle system is not a manifestation of locality, but rather reflects translational invariance. By working within an illustrative example of quantum spacetime I show explicitly that spacetime locality is indeed reflected in a nonlinear law of composition of momenta, but translational invariance still results in an undeformed linear law of addition of momenta building up the total momentum of a multi-particle system.
 
A (3 × 3)-grid.
Article
We consider questions posed in a recent paper of Mandayam, Bandyopadhyay, Grassl and Wootters [10] on the nature of "unextendible mutually unbiased bases." We describe a conceptual framework to study these questions, using a connection proved by the author in [19] between the set of nonidentity generalized Pauli operators on the Hilbert space of $N$ $d$-level quantum systems, $d$ a prime, and the geometry of non-degenerate alternating bilinear forms of rank $N$ over finite fields $\mathbb{F}_d$. We then supply alternative and short proofs of results obtained in [10], as well as new general bounds for the problems considered in loc. cit. In this setting, we also solve Conjecture 1 of [10], and speculate on variations of this conjecture.
 
Article
Real-world social and economic networks typically display a number of particular topological properties, such as a giant connected component, a broad degree distribution, the small-world property and the presence of communities of densely interconnected nodes. Several models, including ensembles of networks also known in social science as Exponential Random Graphs, have been proposed with the aim of reproducing each of these properties in isolation. Here we define a generalized ensemble of graphs by introducing the concept of graph temperature, controlling the degree of topological optimization of a network. We consider the temperature-dependent version of both existing and novel models and show that all the aforementioned topological properties can be simultaneously understood as the natural outcomes of an optimized, low-temperature topology. We also show that seemingly different graph models, as well as techniques used to extract information from real networks, are all found to be particular low-temperature cases of the same generalized formalism. One such technique allows us to extend our approach to real weighted networks. Our results suggest that a low graph temperature might be an ubiquitous property of real socio-economic networks, placing conditions on the diffusion of information across these systems.
 
Article
Hawking radiation and Bekenstein--Hawking entropy are the two robust predictions of a yet unknown quantum theory of gravity. Any theory which fails to reproduce these predictions is certainly incorrect. While several approaches lead to Bekenstein--Hawking entropy, they all lead to different sub-leading corrections. In this article, we ask a question that is relevant for any approach: Using simple techniques, can we know whether an approach contains quantum or semi-classical degrees of freedom? Using naive dimensional analysis, we show that the semi-classical black-hole entropy has the same dimensional dependence as the gravity action. Among others, this provides a plausible explanation for the connection between Einstein's equations and thermodynamic equation of state, and that the quantum corrections should have a different scaling behavior.
 
Article
We challenge claims that the principle of maximum entropy production produces physical phenomenological relations between conjugate currents and forces, even beyond the linear regime, and that currents in networks arrange themselves to maximize entropy production as the system approaches the steady state. In particular: (1) we show that Ziegler's principle of thermodynamic orthogonality leads to stringent reciprocal relations for higher order response coefficients, and in the framework of stochastic thermodynamics, we exhibit a simple explicit model that does not satisfy them; (2) on a network, enforcing Kirchhoff's current law, we show that maximization of the entropy production prescribes reciprocal relations between coarse-grained observables, but is not responsible for the onset of the steady state, which is rather due to the minimum entropy production principle.
 
Article
In thermodynamics one considers thermal systems and the maximization of entropy subject to the conservation of energy. A consequence is Landauer's erasure principle, which states that the erasure of 1 bit of information requires a minimum energy cost equal to $kT\ln(2)$ where $T$ is the temperature of a thermal reservoir used in the process and $k$ is Boltzmann's constant. Jaynes, however, argued that the maximum entropy principle could be applied to any number of conserved quantities which would suggest that information erasure may have alternative costs. Indeed we showed recently that by using a reservoir comprising energy degenerate spins and subject to conservation of angular momentum, the cost of information erasure is in terms of angular momentum rather than energy. Here we extend this analysis and derive the minimum cost of information erasure for systems where different conservation laws operate. We find that, for each conserved quantity, the minimum resource needed to erase 1 bit of memory is $\lambda^{-1}\ln(2)$ where $\lambda$ is related to the average value of the conserved quantity. The costs of erasure depend, fundamentally, on both the nature of the physical memory element and the reservoir with which it is coupled.
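The scaling quoted here can be read off from the equilibrium form of the reservoir distribution: if the reservoir exchanges a conserved quantity $q$ and its equilibrium state is $p(q) \propto e^{-\lambda q}$, then absorbing the bit's entropy of $\ln 2$ requires the reservoir to take up at least

$$ \Delta Q \ge \lambda^{-1} \ln 2 $$

of that quantity; the thermal case $\lambda = 1/kT$ recovers Landauer's $kT\ln 2$. (This is a compressed sketch of the argument; the paper derives the bound in full.)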
 
Article
Nanothermodynamics extends standard thermodynamics to facilitate finite-size effects on the scale of nanometers. A key ingredient is Hill's subdivision potential that accommodates the non-extensive energy of independent small systems, similar to how Gibbs' chemical potential accommodates distinct particles. Nanothermodynamics is essential for characterizing the thermal equilibrium distribution of independently relaxing regions inside bulk samples, as is found for the primary response of most materials using various experimental techniques. The subdivision potential ensures strict adherence to the laws of thermodynamics: total energy is conserved by including an instantaneous contribution from the entropy of local configurations, and total entropy remains maximized by coupling to a thermal bath. A unique feature of nanothermodynamics is the completely-open nanocanonical ensemble. Another feature is that particles within each region become statistically indistinguishable, which avoids non-extensive entropy, and mimics quantum-mechanical behavior. Applied to mean-field theory, nanothermodynamics gives a heterogeneous distribution of regions that yields stretched-exponential relaxation and super-Arrhenius activation. Applied to Monte Carlo simulations, there is a nonlinear correction to Boltzmann's factor that improves agreement between the Ising model and measured non-classical critical scaling in magnetic materials. Nanothermodynamics also provides a fundamental mechanism for the 1/f noise found in many materials.
 
Plot of bounds in a "$P_e$ vs. $H(T|Y)$" diagram.
Plot of bounds in a "$P_E$ vs. $H(T|Y)$" diagram.
Article
The existing upper and lower bounds between entropy and error are mostly derived by means of inequalities, without linking them to joint distributions. In fact, from either a theoretical or an application viewpoint, there is a need for a complete set of interpretations of the bounds in relation to joint distributions. For this reason, in this work we propose a new approach for deriving the bounds between entropy and error from a joint distribution. A specific case study is given for binary classifications, which justifies the need for the proposed approach. Two basic types of classification errors are investigated, namely the Bayesian and non-Bayesian errors. For both errors, we derive closed-form expressions of the upper and lower bounds in relation to joint distributions. The solutions show that Fano's lower bound is an exact bound for any type of error in a relation diagram of "error probability vs. conditional entropy". A new upper bound for the Bayesian error is derived with respect to the minimum prior probability, which is generally tighter than Kovalevskij's upper bound.
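For reference, Fano's inequality, which the solutions identify as the exact lower-bound curve, reads

$$ H(T \mid Y) \le h_b(P_e) + P_e \log\bigl(|\mathcal{T}| - 1\bigr), \qquad h_b(u) = -u\log u - (1-u)\log(1-u), $$

where the second term vanishes in the binary case, so the bound reduces to $H(T \mid Y) \le h_b(P_e)$.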
 
Article
Maximum entropy models are increasingly being used to describe the collective activity of neural populations with measured mean neural activities and pairwise correlations, but the full space of probability distributions consistent with these constraints has not been explored. We provide upper and lower bounds on the entropy for the minimum entropy distribution over arbitrarily large collections of binary units with any fixed set of mean values and pairwise correlations. We also construct specific low-entropy distributions for several relevant cases. Surprisingly, the minimum entropy solution has entropy scaling logarithmically with system size for any set of first- and second-order statistics consistent with arbitrarily large systems. We further demonstrate that some sets of these low-order statistics can only be realized by small systems. Our results show how only small amounts of randomness are needed to mimic low-order statistical properties of highly entropic distributions, and we discuss some applications for engineered and biological information transmission systems.
 
Article
Free energy and entropy are examined in detail from the standpoint of classical thermodynamics. The approach rests on the fact that thermodynamic work is mediated by thermal energy, through the tendency of nonthermal energy to convert spontaneously into thermal energy and of thermal energy to distribute itself spontaneously and uniformly within the accessible space. Emphasis is placed on the fact that free energy is a Second-Law, expendable energy that makes it possible for thermodynamic work to be done at finite rates. Entropy, as originally defined, is the capacity factor for thermal energy that is hidden with respect to temperature; it serves to evaluate the practical quality of thermal energy and to account for changes in the amounts of latent thermal energy in systems maintained at constant temperature. A major objective was to clarify how free energy is transferred and conserved in sequences of biological reactions coupled by freely diffusible intermediates. Achieving this objective required distinguishing between a 'characteristic free energy', possessed by all First-Law energies in amounts equivalent to the energies themselves, and a 'free energy of concentration', which is intrinsically mechanical and relatively elusive in that it can appear to be free of First-Law energy. These findings clarify that chemical potential energy is transferred from one repository to another along such reaction sequences through transfer of the First-Law energy as thermal energy and transfer of the Second-Law energy as free energy of concentration.
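The 'free energy of concentration' discussed here corresponds, in textbook notation (not necessarily the author's), to the concentration-dependent part of the chemical potential,

    μ = μ° + RT ln(c/c°),

so that moving one mole of a freely diffusible intermediate from concentration c_1 to c_2 at constant temperature carries ΔG = RT ln(c_2/c_1).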
 
A bandpass-filtered (n_sm1 = 5, n_sm2 = 7) version of the original image rendered in Fig. 1 is shown (greyscale), with automated loop tracings overlaid (red curves). Cumulative size distributions N(>L) of loop lengths are also shown (bottom right panel), comparing the automated tracing (red distribution) with visually/manually traced loops (black distribution). The maximum lengths L_m (in pixels) are listed for the longest loops detected with each method.
High-resolution image of the solar Active Region 10380, recorded on 2003 June 16 with the Swedish 1-m Solar Telescope (SST) on La Palma, Spain (top panel), and automated tracing of curvi-linear structures with a lowpass filter of n_sm1 = 3 pixels, a highpass filter of n_sm2 = 5 pixels, and a minimum curvature radius of r_min = 30 pixels, tracing out 1757 curvi-linear segments (bottom panel).
Geometry of curvature radius centers (x_rm, y_rm) located on a line at angle β (dash-dotted line), perpendicular to the tangent at angle α (solid line) that intersects a curvi-linear feature (thick solid curve) at position (x_0, y_0). The angle γ indicates the half angular range of the curved guiding segment (thick solid line).
Example of loop tracing in the pixel area [525:680, 453:607] of the image shown in Fig. 2. Loop #19 is traced (blue crosses) over a length of 115 pixels (orange numbers), crossing another structure at a small angle. The curves at position 115 indicate the three curved segments used in tracing the last loop point. The black contours indicate the bandpass-filtered difference image, and the red contours indicate the previously traced and erased structures in the residual difference image.
Article
We developed an automated pattern recognition code that is particularly well suited to extracting one-dimensional curvi-linear features from two-dimensional digital images. A former version of this Oriented Coronal CUrved Loop Tracing (OCCULT) code was applied to spacecraft images of magnetic loops in the solar corona, recorded with the NASA Transition Region And Coronal Explorer (TRACE) spacecraft in extreme-ultraviolet wavelengths. Here we apply an advanced version of this code (OCCULT-2) also to similar images from the Solar Dynamics Observatory (SDO), to chromospheric H-α images obtained with the Swedish Solar Telescope (SST), and to microscopy images of microtubule filaments in live cells in biophysics. We provide a full analytical description of the code, optimize the control parameters, and compare the automated tracing with visual/manual methods. The traced structures differ by up to 16 orders of magnitude in size, which demonstrates the universality of the tracing algorithm.
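One ingredient that the figure captions above make explicit is the bandpass filter; the following sketch shows how such a filter can be built from two boxcar smoothings (our reading of the n_sm1/n_sm2 parameters, not the released OCCULT-2 code):

    # Boxcar bandpass filter: keeps structures whose width lies between the
    # lowpass scale n_sm1 and the highpass scale n_sm2 (n_sm2 > n_sm1).
    import numpy as np
    from scipy.ndimage import uniform_filter

    def bandpass(image, n_sm1=5, n_sm2=7):
        low = uniform_filter(image.astype(float), size=n_sm1)   # suppresses pixel noise
        high = uniform_filter(image.astype(float), size=n_sm2)  # estimates the background
        return low - high  # curvi-linear features appear as positive ridges

    img = np.random.rand(256, 256)  # stand-in for a TRACE/SDO/SST image
    filtered = bandpass(img, n_sm1=3, n_sm2=5)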
 
Geodesics of the Poincaré half-plane
One step of the GIGO update
Median number of function calls needed to reach 10^-8 fitness over 24 runs for the sphere, cigar-tablet, and Rosenbrock functions. Initial position θ_0 = N(x_0, I), with x_0 uniformly distributed on the circle of center 0 and radius 10. We recall that the "CMA-ES" algorithm here uses the so-called pure rank-µ CMA update.
Trajectories of GIGO, CMA and xNES optimizing x → x^2 in dimension 1 with δt = 0.5, sample size 5000, weights w_i = 4·1_{i ≤ 1250}, and learning rates η_µ = 1, η_Σ = 1.8. One dot every two steps. The differences are more pronounced here. Notice that after one step the lowest mean is still GIGO's (∼8.5, whereas xNES is around 8.75), but from the second step on GIGO has the highest mean, because of its lower variance.
Trajectories of GIGO, CMA and xNES optimizing x → x^2 in dimension 1 with δt = 1, sample size 5000, weights w_i = 4·1_{i ≤ 1250}, and learning rates η_µ = 1, η_Σ = 1.8. One dot per step. The CMA-ES algorithm fails here because at the fourth step the covariance matrix is no longer positive definite (it is easy to see that the CMA-ES update is always defined if δt·η_Σ < 1, which is not the case here). Also notice (see also Proposition 6.2) that at the first step GIGO decreases the variance, whereas the σ-component of the IGO speed is positive.
Article
Information geometric optimization (IGO) is a general framework for stochastic optimization problems, aiming at limiting the influence of arbitrary parametrization choices. The initial problem is transformed into the optimization of a smooth function on a Riemannian manifold, defining a parametrization-invariant first-order differential equation. However, in practice it is necessary to discretize time, and then parametrization invariance holds only at first order in the step size. We define the Geodesic IGO update (GIGO), which uses the Riemannian manifold structure to obtain an update entirely independent of the parametrization of the manifold. We test it with classical objective functions. Thanks to Noether's theorem from classical mechanics, we find a reasonable way to compute the geodesics of the statistical manifold of Gaussian distributions, and thus the corresponding GIGO update. We then compare GIGO, pure rank-µ CMA-ES and xNES (two previous algorithms that can be recovered within the IGO framework), and show that while the GIGO and xNES updates coincide when the mean is fixed, they differ in general, contrary to previous intuition. We then define a new algorithm (Blockwise GIGO) that recovers the xNES update from abstract principles.
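To make the distinction concrete, here is a minimal one-dimensional sketch of the plain IGO natural-gradient step that GIGO refines. This is our own illustration under simplified assumptions (quartile selection weights, Gaussian family parametrized by (mu, sigma)), not the geodesic GIGO update, which follows the exact Fisher geodesic rather than a straight line in parameter space:

    # One-dimensional IGO-style natural-gradient step for a Gaussian N(mu, sigma^2).
    import numpy as np

    rng = np.random.default_rng(0)

    def igo_step(mu, sigma, f, n=100, dt=0.1):
        x = rng.normal(mu, sigma, size=n)
        order = np.argsort(f(x))                   # minimization: best samples first
        w = np.zeros(n)
        w[order[: n // 4]] = 4.0                   # quartile selection weights, mean 1
        # Vanilla gradients of the weighted log-likelihood at (mu, sigma):
        g_mu = np.mean(w * (x - mu) / sigma**2)
        g_sigma = np.mean(w * ((x - mu) ** 2 - sigma**2) / sigma**3)
        # Fisher metric of N(mu, sigma^2) is diag(1/sigma^2, 2/sigma^2), so the
        # natural gradient rescales each component by the inverse metric:
        return mu + dt * sigma**2 * g_mu, sigma + dt * 0.5 * sigma**2 * g_sigma

    mu, sigma = 10.0, 1.0
    for _ in range(200):
        mu, sigma = igo_step(mu, sigma, lambda x: x**2)
    print(mu, sigma)  # mu approaches 0 on the sphere function f(x) = x^2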
 
Article
From the perspective of statistical fluctuation theory, we explore the thermodynamic geometries and vacuum (in)stability properties of topological Einstein-Yang-Mills black holes. From the perspectives of the state-space surface and the chemical Weinhold surface, we provide criteria for the local and global statistical stability of an ensemble of topological Einstein-Yang-Mills black holes in arbitrary spacetime dimensions D ≥ 5. Finally, following the formulation of thermodynamic geometry, we offer a parametric account of the statistical consequences in both the local and global fluctuation regimes of these black holes.
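For orientation, the Weinhold geometry invoked here is, in its standard form (our paraphrase of the general definition, not this paper's specific parametrization), the Hessian of the black-hole mass with respect to the extensive variables X^i, such as the entropy and charges:

    g^W_{ij} = ∂²M / (∂X^i ∂X^j).

Local statistical stability requires this Hessian to be positive definite, while the curvature of the associated state-space geometry carries information about global fluctuations.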
 
Top-cited authors
Yu-Dong Zhang
  • University of Leicester
Oleg Senkov
  • MRL Materials Resources LLC
Ronnie Kosloff
  • Hebrew University of Jerusalem
Shun-ichi Amari
Shuihua Wang
  • University of Leicester