Article

Energy and Information

... Instead, different schools of thought have formed that differ widely both from a philosophical-conceptual and (what will be our focus here) a quantitative-numerical point of view. In addition, the topic appears unnecessarily mystified, as evidenced, for instance, by the often quoted statement "no one knows what entropy really is", allegedly made by von Neumann while discussing with Shannon a name for his "entropy" concept [2], even though it is unclear whether von Neumann truly said it [3]. The quote also seems particularly unfortunate because historical evidence indicates that von Neumann had a clear opinion on which microscopic entropy definition to use in statistical physics [4][5][6], which, to add to the confusion, is not what is nowadays known as the von Neumann entropy. ...
... Consequently, a number of interesting case studies emerged that focused on the nonequilibrium dynamics of thermodynamic entropy from first principles [32][33][34][35][36][37][38][39][40][41][42][43][44][45][46][47][48][49][50]. However, somewhat echoing the criticism above, we are not aware of a study that critically compares a variety of entropy definitions on the same footing. ...
... We call a system normal if it has a concave and non-decreasing Boltzmann entropy as a function of energy. In the literature one often requires only concavity, which ensures a positive heat capacity and implies equivalence of ensembles [7][8][9], but it is more convenient here to have a separate category for systems that can show negative (Boltzmann) temperatures. (The literature is even wider if one includes equilibrium systems or effective dynamics such as master equations.) ...
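To make the excerpt's definition concrete, here is a minimal formalization (the notation is mine, not necessarily the authors'): a normal system has a Boltzmann entropy S_B(E) that is non-decreasing and concave in energy, which keeps the Boltzmann temperature and the heat capacity positive wherever the inequalities are strict:

```latex
% "Normal" system, assuming the standard definitions of T_B and C:
\frac{\partial S_B}{\partial E} \ge 0
\;\Longrightarrow\;
T_B \equiv \left(\frac{\partial S_B}{\partial E}\right)^{-1} \ge 0,
\qquad
\frac{\partial^2 S_B}{\partial E^2} \le 0
\;\Longrightarrow\;
C \equiv \frac{\partial E}{\partial T_B} \ge 0 .
```

Dropping the monotonicity condition admits negative Boltzmann temperatures (class B in the abstract below); dropping concavity admits negative heat capacities (class C).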
Article
Full-text available
We study the time evolution of eleven microscopic entropy definitions (of Boltzmann-surface, Gibbs-volume, canonical, coarse-grained-observational, entanglement and diagonal type) and three microscopic temperature definitions (based on Boltzmann, Gibbs or canonical entropy). This is done for the archetypal nonequilibrium setup of two systems exchanging energy, modeled here with random matrix theory, based on numerical integration of the Schrödinger equation. We consider three types of pure initial states (local energy eigenstates, decorrelated and entangled microcanonical states) and three classes of systems: (A) two normal systems, (B) a normal and a negative temperature system and (C) a normal and a negative heat capacity system. We find: (1) All types of initial states give rise to the same macroscopic dynamics. (2) Entanglement and diagonal entropy sensitively depend on the microstate, in contrast to all other entropies. (3) For class B and C, Gibbs-volume entropies can violate the second law and the associated temperature becomes meaningless. (4) For class C, Boltzmann-surface entropies can violate the second law and the associated temperature becomes meaningless. (5) Canonical entropy has a tendency to remain almost constant. (6) For a Haar random initial state, entanglement or diagonal entropy behave similar or identical to coarse-grained-observational entropy.
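For orientation, the "Boltzmann-surface" and "Gibbs-volume" entropies contrasted in this abstract have the following standard textbook forms (the paper's precise conventions may differ):

```latex
S_{\mathrm{B}}(E) = k_{\mathrm{B}} \ln\!\big[\epsilon\,\omega(E)\big],
\qquad
S_{\mathrm{G}}(E) = k_{\mathrm{B}} \ln \Omega(E),
\qquad
\omega(E) = \frac{d\Omega}{dE},
```

where Ω(E) counts the microstates with energy at most E (a "volume" in state space), ω(E) is the density of states on the energy "surface", and ε is a fixed energy width that makes the argument of the logarithm dimensionless.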
... There is a comment by von Neumann to Shannon in relation to a question raised by the latter about an appropriate term for a formula he had derived; the following answer by von Neumann was communicated to Myron Tribus by Shannon [4]: ...
... The entropy definition given in Equation (3) is valid for non-equilibrium situations as long as the H functional is well defined. We do not attempt to review the origin of the concept and its development; the interested reader will find critical insights in references [1,3,4,9–13], among others. ...
... where the subscript 0 denotes the values of the quantities evaluated at the upstream flow. Equations (4) and (5), written in terms of the reduced variables, are given as ...
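Equation (3) itself is elided in the first excerpt; in the kinetic-theory setting it refers to, the Boltzmann entropy density is built from the H functional in the standard way (a sketch, assuming the usual conventions):

```latex
s(\mathbf{r},t) = -k_{\mathrm{B}} \int f(\mathbf{r},\mathbf{v},t)\,
\ln f(\mathbf{r},\mathbf{v},t)\;\mathrm{d}^3 v ,
```

which is well defined out of equilibrium whenever the distribution function f, and hence the H functional H = ∫ f ln f, exists.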
Article
Full-text available
Entropy density behavior poses many problems when we study non-equilibrium situations. In particular, the local equilibrium hypothesis (LEH) has played a very important role and is taken for granted in non-equilibrium problems, no matter how extreme they are. In this paper we would like to calculate the Boltzmann entropy balance equation for a plane shock wave and show its performance for Grad’s 13-moment approximation and the Navier–Stokes–Fourier equations. In fact, we calculate the correction for the LEH in Grad’s case and discuss its properties.
... Nitrogen oxides, generally known as NOxs, are some of the most important hazardous air pollutants from industry [1,2]. In China, the NOx emission in the waste gas of industrial sources was about 8.957 million tons in 2022 [3]. ...
... By contrast, with the fast development of industrial digitalization [20,21], the massive data generated in power plants every day [2] may contain a lot of information not yet known to us. Fully mining and making good use of these data might be immensely helpful in regulating SCR control from a different perspective. ...
Article
Full-text available
Nitrogen oxides (NOxs) are some of the most important hazardous air pollutants from industry. In China, the annual NOx emission in the waste gas of industrial sources is about 8.957 million tons; power plants remain the largest anthropogenic source of NOx emissions, and the precise control of NOx in power plants is crucial. However, due to inherent issues with measurement and pipelines in coal-fired power plants, there is typically a delay of about three minutes in NOx measurements, creating a mismatch between control and measurement. Delays in NOx measurement can lead to excessive ammonia injection or failure to meet environmental standards for NOx emissions. To address the issue of NOx measurement delays, this study introduced a hybrid boosting model suitable for on-site implementation. The model can serve as a feedforward signal in SCR control, compensating for NOx measurement delays and enabling precise ammonia injection for accurate denitrification in power plants. The model combines generation-mechanism and data-driven approaches, enhancing its prediction accuracy through the categorization of time-series data into linear, nonlinear, and exogenous regression components. In this study, a time-based method was proposed for analyzing the correlations between variables in denitration systems and NOx concentrations. This study also introduced a new evaluation indicator, part of R² (PR2), which focuses on the prediction effect at turning points. Finally, the proposed model was applied to actual data from a 330 MW power plant, showing excellent predictive accuracy, particularly for one-minute forecasts. For 3-minute prediction, compared to predictions made by ARIMA, the R-squared (R²) and PR2 were increased by 3.6% and 30.6%, respectively, and the mean absolute error (MAE) and mean absolute percentage error (MAPE) were decreased by 9.4% and 9.1%, respectively. These results confirmed the accuracy and applicability of the integrated model for on-site implementation as a 3-minute-ahead prediction soft sensor in power plants.
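The evaluation indicators quoted above are standard except for PR2, which the abstract defines only loosely ("prediction effect at turning points"). A sketch in Python; the turning-point selection in `pr2` is my assumption about that definition, not the paper's formula:

```python
import numpy as np

def r2(y, yhat):
    """Coefficient of determination R^2."""
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

def mae(y, yhat):
    """Mean absolute error."""
    return np.mean(np.abs(y - yhat))

def mape(y, yhat):
    """Mean absolute percentage error (%)."""
    return 100.0 * np.mean(np.abs((y - yhat) / y))

def pr2(y, yhat):
    """Hypothetical reading of the paper's PR2: R^2 restricted to
    turning points, i.e., samples where the series changes direction."""
    d = np.diff(y)
    turning = np.where(d[:-1] * d[1:] < 0)[0] + 1  # local maxima/minima
    if turning.size < 2:
        return float("nan")
    return r2(y[turning], yhat[turning])
```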
... According to M. Tribus and E.C. McIrvine [58], we can conclude that information predominates over all other energy conversion incentives in control systems. The quality of information used for decision making affects system efficiency. ...
... One of the main ideas of the works [58][59][60][61][62][63] is the need to consider information in close connection with the concept of entropy. This concept has a variety of meanings [64][65][66]. ...
Article
Full-text available
Transport systems are complex systems present in modern cities. The sustainability of all other urban systems depends on the sustainable functioning of urban transport. Various processes occur within transport systems; road traffic is one of them. At the same time, road traffic is a rather complex process to manage, which is explained by the influence of many different internal and external environmental factors. The unpredictable and chaotic behavior of each vehicle in a traffic flow complicates predicting the transport situation and managing traffic. This has given rise to several unsolved problems, including traffic congestion and road accident rates. The solution to these problems is connected with sustainably managing transport systems in terms of road traffic. However, numerous regularities between elements within the system must be understood in order to implement the management process. Unfortunately, the results of many previous studies often reflect only partial regularities and have limited functionality. Therefore, a new approach to urban traffic management is needed. As opposed to the existing solutions, the authors of this paper propose to implement management based on the regularities of changes in the chaos of the transport system. In this regard, the purpose of this research is to establish the relationship between road traffic chaos and road accident rates. The general methodological basis of this research is the system approach and its methods: analysis and synthesis. The theoretical studies were mostly based on the theories of chaos, dynamic systems, and traffic flows. The experimental studies were based on the theories of experimental design, probability, and mathematical statistics. To achieve this goal, a profound analysis covered studies on the sustainability of transport and dynamic systems, sociodynamics, and traffic. The authors propose considering the relative entropy of lane occupancy at signal-controlled intersections as a measure for assessing traffic flow chaos and sustainability. Notably, as the main conclusions, the authors established regularities for the influence of entropy on the kinetic energy of traffic flows and injury risk. The initial data for the experiment were collected via real-time processing of video images using neural network technologies, which will further allow for the implementation of traffic management and real-time forecasting of various events. Ultimately, the authors identified changes in injury risk depending on the level of road chaos, which could have significant implications for urban traffic management strategies. According to the authors, the obtained results can be used to improve the sustainability of urban transport systems.
... Theoretically, the maximum entropy of any given vertex is log₂ W_D. Therefore, the negentropy of vertex i can be calculated as its shortfall from this maximum (Corning & Kline, 1998); negentropy is the equivalent of Gibbs free energy in thermodynamics (Tribus & McIrvine, 1971) and is inherently a measure of order or integration in the system (Corning & Kline, 1998). Furthermore, in principle, information can be converted to energy (Toyabe et al., 2010). ...
... Accordingly, it is possible to treat negentropy as a form of energy in a semantic system. This conceptualization is supported by the literature; as stated by Tribus and McIrvine (1971), "it takes energy to obtain knowledge and it takes information to harness energy." At the graph level, following the same logic as above, the formalization can be expanded (assuming degree centrality as the function of interest) (Ai, 2017): ...
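The formulas themselves are elided in both excerpts. A natural reconstruction consistent with "the maximum entropy of any given vertex is log₂ W_D" would be (an assumption on my part, not verified against the cited source):

```latex
N_i = \log_2 W_D - H_i,
\qquad
H_i = -\sum_j p_{ij} \log_2 p_{ij},
```

i.e., the negentropy N_i of vertex i is the shortfall of its Shannon entropy H_i from the theoretical maximum.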
Article
This article discusses the history and status quo of traditional tourism motivation theories as well as their shortcomings. By adopting a collective approach to motivation, this study proposes a framework that examines tourism motivations from a complex adaptive system's perspective. To conduct this study, the destination-motivation semantic system was designed as a bipartite scale-free network that takes in inputs such as values, costs, benefits, experiences, reasons to avoid/approach, attitudes, and expectations, and delivers the outputs of motivational force (valence) and destination utility. Next, by employing expectancy and utility theories and applying the principles of information theory, statistical mechanics, and thermodynamics, several appraisals were developed to determine the system's state, structure, and functionality. Finally, a toy model that presents the empirical proof of the proposed framework is depicted.
... That is known by common men and ancient civilizations as the trade-off 'no gain without pain' (Figure 12). To add an associative thought: thermodynamics did not invent, nor does it own, the tendency towards disorder (if all is left to itself), but thermodynamics describes the physical processes very well, together with its cousin 'information science' or 'cybernetics' doing its own part [89]. All schools of thermodynamics do indeed agree on the overall validity of the first and second laws (a strange and deep-down case of 'universal truth' that keeps one wondering). ...
Article
Full-text available
Terms such as system crash, collapse of chaos and complexity can help one understand change, also in biological, socio-economic and technical systems. These terms need, however, explanation for fruitful dialogue on the design of sustainable systems. We therefore start this paper on Grass Based (GB) systems by dwelling on these terms and notions, as a review for insiders and to help interested 'outsiders'. We also stress the need for additional and/or new paradigms to understand the nature of nature. However, we show that many such 'new' paradigms were long known around the globe among philosophers and common men, giving reason to include quotes and examples from other cultures and eras. In the past few centuries, those paradigms have become hidden, perhaps, under the impressive but short-term successes of more linear paradigms. Therefore, we list hang-ups on the paradigms of those past few centuries. We then outline what is meant by 'GB systems', which exist in multiple forms/'scapes'. Coping with such variation is perhaps the most central aspect of complexity. To help cope with this variation, the different (GB) systems can be arranged on spatial, temporal, and other scales in such a way that the arrangements form logical sequences (evolutions) of stable states and transitions of Complex Adaptive Systems (CAS). Together with other ways to handle complexity, we give examples of such arrangements to illustrate how one can (re-)imagine, (re-)cognize and manage initial chaotic behaviors and the eventual 'collapse' of chaos into the design and/or emergence of new systems. Then, we list known system behaviors, such as predator–prey cycles, adaptive cycles, lock-in, specialization and even the tendency to higher (or lower) entropy. All this is needed to understand changes in the management of GB systems evolving into multi-scapes. Integration of disciplines and paradigms indicates that a win-win is likely to be the exception rather than the rule. With the rules given in this paper, one can reset teaching, research, rural development, and policy agendas in GB systems and other areas of life.
... The metabolic heat of a resting human body could represent the lower bound of the energy loss (~100 W of power × lifespan [129]). If individuals are considered as workers in a society, their loss could be defined in terms of lack of work related to muscle power or to their power as amplified by machines [169]. Their role may not be actual physical work but intellectual work, with a connection to be made between information (as entropy) and energy [169,170]. Note that an information-based metric could also be used to describe energy loss in a cyber-attack (Table 1). ...
Article
Full-text available
The literature on probabilistic hazard and risk assessment shows a rich and wide variety of modeling strategies tailored to specific perils. On one hand, catastrophe (CAT) modeling, a recent professional and scientific discipline, provides a general structure for the quantification of natural (e.g., geological, hydrological, meteorological) and man-made (e.g., terrorist, cyber) catastrophes. On the other hand, peril characteristics and related processes have yet to be categorized and harmonized to enable adequate comparison, limit silo effects, and simplify the implementation of emerging risks. We reviewed the literature for more than 20 perils from the natural, technological, and socio-economic systems to categorize them by following the CAT modeling hazard pipeline: (1) event source → (2) size distribution → (3) intensity footprint. We defined the following categorizations, which are applicable to any type of peril, specifically: (1) point/line/area/track/diffuse source, (2) discrete event/continuous flow, and (3) spatial diffusion (static)/threshold (passive)/sustained propagation (dynamic). We then harmonized the various hazard processes using energy as the common metric, noting that the hazard pipeline’s underlying physical process consists of some energy being transferred from an energy stock (the source), via an event, to the environment (the footprint).
... The meaning of order/disorder in thermodynamics was seriously studied in information theory, from which the notion of missing information emerged [40,79–86]. If we look at the number 3.14159… ...
Article
Full-text available
In glass physics, order parameters have long been used in the thermodynamic description of glasses, but their nature is not yet clear. The difficulty is how to find order in disordered systems. This study provides a coherent understanding of the nature of order parameters for glasses and crystals, starting from the fundament of the definition of state variables in thermodynamics. The state variable is defined as the time-averaged value of a dynamical variable under the constraints, when equilibrium is established. It gives the same value at any time it is measured as long as the equilibrium is maintained. From this definition, it is deduced that the state variables of a solid are the time-averaged positions of all atoms constituting the solid, and the order parameters are essentially the same as state variables. Therefore, the order parameters of a glass are equilibrium atom positions.
... Remarkably, Claude Shannon himself used the term 'uncertainty' as an intuitive paraphrase for the quantity today known as 'entropy' [29]. Historically, the decision to call the Shannon entropy an 'entropy' goes back to a suggestion John von Neumann gave to Shannon when the latter was visiting Weyl in 1940 (at least three versions of this anecdote are known [31]; the most popular is [32]). ...
Preprint
Full-text available
We consider the uncertainty between two pairs of local projective measurements performed on a multipartite system. We show that the optimal bound in any linear uncertainty relation, formulated in terms of the Shannon entropy, is additive. This directly implies, against naive intuition, that the minimal entropic uncertainty can always be realized by fully separable states. Hence, in contradiction to proposals by other authors, no entanglement witness can be constructed solely by comparing the attainable uncertainties of entangled and separable states. However, our result gives rise to a huge simplification for computing global uncertainty bounds as they now can be deduced from local ones. Furthermore, we provide the natural generalization of the Maassen and Uffink inequality for linear uncertainty relations with arbitrary positive coefficients.
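For reference, the Maassen–Uffink inequality that this preprint generalizes reads, in its standard form:

```latex
H(A) + H(B) \;\ge\; -2\log_2 c,
\qquad
c = \max_{j,k}\,\big|\langle a_j \vert b_k \rangle\big| ,
```

where H(A) and H(B) are the Shannon entropies of the outcome distributions of observables A and B with eigenbases {|a_j⟩} and {|b_k⟩}.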
... Entropy is a concept that ranges from thermodynamics to the complexity of a physical dynamic system. Higher entropy indicates a more complex system whose dynamics show more irregularity [1] or uncertainty [2]. Unlike in a closed system (e.g., the universe), where entropy increases monotonically over time, entropy remains relatively low in a biological system, which continually exchanges energy with its environment to maintain its own orderliness. ...
Article
Full-text available
Background: Entropy trajectories remain unclear for the aging process of the human brain system due to the lack of longitudinal neuroimaging resources. Results: We used open data from an accelerated longitudinal cohort (PREVENT-AD), which included 24 healthy aging participants followed for 4 years with 5 visits per participant, to establish cortical entropy aging curves and to distinguish the effects of age and cohort. This reveals that global cortical entropy decreased with aging, while a significant cohort effect was detectable: people who were born earlier showed higher cortical entropy. Such entropy reductions were also evident for large-scale cortical networks, although with different rates of reduction for different networks. Specifically, the primary and intermediate networks reduce their entropy faster than the higher-order association networks. Conclusions: Our study confirmed that cortical entropy decreases continually in the aging process, both globally and regionally, and we identify two specific characteristics of the entropy of the human cortex with aging: a shift of the complexity hierarchy and a strengthening of the diversity of complexity.
... Therefore, this section explores the relationship between laser power input (W) and the IMC thickness and microstructural behavior of the solder. The relationship between energy (E), power (P), and time (t) is described by the equation E = P × t [71]. ...
... These include attention, recognition, and memory [61]. Another form of energy is information [62]. ...
Article
Full-text available
Human operator-induced assembly errors affect the quality of car manufacturing. Understanding the factors influencing assembly errors is critical for quality improvement. The sequence of assembly operations, a factor markedly affecting cognitive load, remains largely understudied. We aimed to assess the effect of changing the sequence of assembly operations on error rates through four field experiments conducted in a car manufacturing plant. The parts (and errors) under study were child lock labels (missing labels), sunroofs (missing bolts), windshield wiper arms (loose bolts), and armrests (wrong selection). Parts were chosen based on data from quality records, and they represent different scenarios regarding the sequencing of assembly operations. Minitab was used to conduct the statistical test for two proportions at a significance level of 0.05. The experiments ran for periods varying from 9 to 25 weeks (22,292 to 138,456 cars). All four experiments exhibited significant differences in the proportion of errors. MODAPTS cycle-time calculations revealed no negative effect of assembly sequence variations on productivity. The study findings show that changing the assembly operation sequence can reduce error rates, possibly due to the mediating effect of reduced operator cognitive load. Overall, realizing quality improvement requires optimizing the assembly operation sequence in terms of time and productivity while considering its possible impact on error rates.
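The abstract's statistical test for two proportions at a 0.05 significance level can be reproduced with standard tooling; the counts below are illustrative, since the actual error counts are not given in the abstract:

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts: errors and cars assembled before/after resequencing.
errors = np.array([120, 85])
cars = np.array([60000, 60000])

stat, p_value = proportions_ztest(count=errors, nobs=cars)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:  # significance level used in the study
    print("The two error proportions differ significantly.")
```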
... One of the more striking features of this theory is that the definition of uncertainty in Equation (2) is formally identical to the entropy introduced by Boltzmann in statistical mechanics, precisely by an equation identical to Equation (4) [12]. It is interesting that it was John von Neumann who drew Shannon's attention to this point and who recommended that he use the term 'entropy' [13]. It is perhaps necessary to stress that the entropy measure introduced by Equation (2) is ascribed to the whole probability distribution and not to any individual outcome. ...
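Equations (2) and (4) are elided in the excerpt; the formal identity it describes is between the standard Shannon and Gibbs–Boltzmann expressions:

```latex
H = -\sum_i p_i \log_2 p_i
\qquad\text{vs.}\qquad
S = -k_{\mathrm{B}} \sum_i p_i \ln p_i ,
```

identical up to the multiplicative constant k_B and the base of the logarithm.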
... By linking patient interactions with underlying microstates, we hinted at an equivalency between Shannon entropy and mathematical analysis entropy. The work of [23] and [16,17] is a good starting point for a more in-depth investigation of Shannon entropy. We think health informatics experts should promote information theory knowledge. ...
Chapter
Full-text available
Cryptography, molecular biology, natural language processing, and statistical inference are a few fields where information theory is used. It is also utilized in medical science. In this article, we show how principles from information theory have been applied to improve medical decision-making in various ways. We begin with an overview of information theory and the notions of available data and entropy. In this study, we show how 'useful' relative entropy may be utilized to determine which diagnostic seems to be the most useful at a given stage of analysis. Shannon information may be utilized to determine the range of values of medical reports across which a binary test offers relevant information about the patient's condition. Of course, this is not the only approach available, but it can create a visually appealing representation. Next, the article introduces the more complex ideas of 'useful' conditional entropy and 'useful' mutual information, demonstrating how they may be used to prioritize clinical testing and uncover redundancies. Finally, we evaluate our findings thus far and suggest that providing a well-informed framework for the extensive use of information theory in clinical managerial problems is worthwhile.
Keywords: 'useful' conditional entropy; Shannon information; 'useful' Kullback–Leibler divergence; 'useful' mutual information; medical diagnosis
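As a concrete sketch of the chapter's central quantities: relative entropy measures the information a test result carries about a patient's condition, and a utility-weighted ('useful') variant lets costlier disease states count for more. The weighting below follows the classical Belis–Guiasu construction, which I assume is what the chapter means by 'useful'; all numbers are illustrative:

```python
import numpy as np

def kl_divergence(p, q):
    """Relative entropy D(p || q) in bits."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

def useful_kl(p, q, u):
    """Utility-weighted ('useful') relative entropy, Belis-Guiasu style
    (an assumption about the chapter's definition, not a quotation)."""
    p, q, u = np.asarray(p, float), np.asarray(q, float), np.asarray(u, float)
    mask = p > 0
    return float(np.sum(u[mask] * p[mask] * np.log2(p[mask] / q[mask])))

# Toy example: disease probabilities before and after a positive test result.
prior = [0.9, 0.1]       # P(healthy), P(disease) before testing
posterior = [0.6, 0.4]   # after a positive result (illustrative numbers)
utility = [1.0, 5.0]     # missing the disease is costlier (hypothetical)

print(kl_divergence(posterior, prior))   # information gained by the test
print(useful_kl(posterior, prior, utility))
```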
... Power factor, energy consumption, peak voltage, as well as power consumption are all factors that influence energy consumption. Meters' functionality enables them to be used for a variety of monitoring purposes [31]. ...
Article
Full-text available
Saudi Arabia has initiated its much-anticipated Vision 2030 campaign, a long-term economic roadmap aimed at reducing the country's reliance on oil. The vision underlines the compliance, fiscal, and strategy adjustments that will significantly affect all the important features of Saudi economic growth. Technology will be a critical facilitator, as well as a controller, of the initiative's significant transformation. Cloud computing, together with the Internet of Things (IoT), could make significant contributions to Saudi Vision 2030's efficient governance strategy. IoT applications cover every part of everyday life, and choosing the best IoT applications for specific customers is a difficult task. This paper concentrates on the Kingdom's advancement towards a fresh and enhanced method of advancing the development phases of digital transformation, through implementing and adopting modern communications infrastructure and ICT technology. In addition, this study proposes a recommendation system that relies on a multi-criteria decision-making investigation focusing on the fuzzy TOPSIS method for selecting highly efficient IoT applications. The prototype, as well as the hierarchy, was created to assess and correlate critical criteria based on specialist preferences and recommendations. The T5 IoT application alternative was shown to be the most effective and reliable choice according to the findings of both fuzzy TOPSIS and TOPSIS.
... Shannon first proposed information entropy in 1948 to represent the average amount of information after excluding redundancy in the information (Tribus & McIrvine, 1971). The entropy weight method uses entropy to judge the degree of dispersion of a certain category. ...
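The entropy weight method mentioned in the excerpt has a standard formulation: criteria whose values are spread more evenly across the alternatives (higher entropy) carry less weight. A minimal sketch:

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: X is an (n_samples, n_criteria) decision
    matrix with non-negative entries (standard formulation)."""
    n = X.shape[0]
    P = X / X.sum(axis=0)                      # normalize each criterion column
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)
    e = -(P * logP).sum(axis=0) / np.log(n)    # entropy of each criterion
    d = 1.0 - e                                # degree of diversification
    return d / d.sum()                         # weights sum to 1

# Example: 4 targets scored on 3 indicators (illustrative numbers)
X = np.array([[0.8, 0.3, 0.6],
              [0.5, 0.9, 0.4],
              [0.7, 0.2, 0.9],
              [0.6, 0.6, 0.5]])
print(entropy_weights(X))
```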
Article
Full-text available
Multitarget threat evaluation of warship air attacks is one of the most urgent problems in warship defense operations. To evaluate the target threat quickly and accurately, an air attack multitarget threat evaluation method based on improved TOPSIS gray relational analysis is proposed. This method establishes a threat assessment system of five attributes: target type, anti-jamming ability, heading angle, altitude, and speed. The weight coefficient of each index of the warship is obtained by combining the entropy weight method with the analytic hierarchy process. TOPSIS can make full use of the information in the original data, and its results can accurately reflect the gap between the evaluation schemes. The weighted Mahalanobis distance and comprehensive gray correlation between the attribute to be evaluated and the positive and negative ideal states are calculated by the improved TOPSIS gray correlation method. The target threat degree to be evaluated is obtained by combining the two methods. Finally, an example is given to prove the effectiveness of the evaluation model.
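For orientation, a sketch of classical TOPSIS ranking; the paper's improved variant replaces the Euclidean distance with a weighted Mahalanobis distance and combines the result with gray relational grades, which is not reproduced here:

```python
import numpy as np

def topsis(X, w, benefit):
    """Classical TOPSIS. X: (n_alternatives, n_criteria); w: weights
    summing to 1; benefit: True where larger-is-better per criterion."""
    R = X / np.linalg.norm(X, axis=0)          # vector normalization
    V = R * w                                  # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)             # closeness: higher = better

w = np.array([0.3, 0.4, 0.3])
benefit = np.array([True, True, False])        # e.g., lower altitude = threat
X = np.array([[250., 0.8, 3000.],              # illustrative target attributes
              [300., 0.6, 1500.],
              [200., 0.9, 5000.]])
print(topsis(X, w, benefit))                   # closeness ranks threat degree
```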
... The second subroutine procures the energy needed to speed up cerebral information processing, for without energy, no information can be obtained (Tribus and McIrvine, 1971; Toyabe et al., 2010; Bérut et al., 2012). Indeed, experimentally induced stress arousals (i.e., states with high expected variational free energy) have been shown to be highly costly in thermodynamic energy; even a mild mental laboratory stressor results in a 12% increase in the global cerebral metabolic rate of glucose (Madsen et al., 1995). ...
Article
Full-text available
According to the free energy principle, all sentient beings strive to minimize surprise or, in other words, an information-theoretical quantity called variational free energy. Consequently, psychosocial “stress” can be redefined as a state of “heightened expected free energy,” that is, a state of “expected surprise” or “uncertainty.” Individuals experiencing stress primarily attempt to reduce uncertainty, or expected free energy, with the help of what is called an uncertainty resolution program (URP). The URP consists of three subroutines: First, an arousal state is induced that increases cerebral information transmission and processing to reduce uncertainty as quickly as possible. Second, these additional computations cost the brain additional energy, which it demands from the body. Third, the program controls which stress reduction measures are learned for future use and which are not. We refer to an episode as “good” stress, when the URP has successfully reduced uncertainty. Failure of the URP to adequately reduce uncertainty results in either stress habituation or prolonged toxic stress. Stress habituation reduces uncertainty by flattening/broadening individual goal beliefs so that outcomes previously considered as untenable become acceptable. Habituated individuals experience so-called “tolerable” stress. Referring to the Selfish Brain theory and the experimental evidence supporting it, we show that habituated people, who lack stress arousals and therefore have decreased average brain energy consumption, tend to develop an obese type 2 diabetes mellitus phenotype. People, for whom habituation is not the free-energy-optimal solution, do not reduce their uncertainty by changing their goal preferences, and are left with nothing but “toxic” stress. Toxic stress leads to recurrent or persistent arousal states and thus increased average brain energy consumption, which in turn promotes the development of a lean type 2 diabetes mellitus phenotype. In conclusion, we anchor the psychosomatic concept of stress in the information-theoretical concept of uncertainty as defined by the free energy principle. In addition, we detail the neurobiological mechanisms underlying uncertainty reduction and illustrate how uncertainty can lead to psychosomatic illness.
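The information-theoretical quantity at the center of this account is the variational free energy, whose standard definition is:

```latex
F = \mathbb{E}_{q(s)}\!\big[\ln q(s) - \ln p(o, s)\big]
  = D_{\mathrm{KL}}\!\big[q(s)\,\big\|\,p(s \mid o)\big] - \ln p(o),
```

where o are observations, s hidden states, and q the organism's approximate posterior; minimizing F simultaneously reduces surprise, −ln p(o), and the divergence between belief and reality.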
Article
Energy generation, which promotes a nation's economic stability and advancement, is one of the most significant facets of modern society. Recent years have seen significant advancements in energy conversion and storage technology, particularly in mobile devices and electric vehicles. Lithium-ion batteries are utilized in energy storage and electric vehicles because of their low self-discharge rates, long cycle life, and high energy density. Therefore, precise evaluations of battery conditions are necessary for safe operation. This study proposes a hybrid model based on the Bidirectional Gated Recurrent Unit (Bi-GRU) with the Giant Trevally Optimizer (GTO) for state-of-health (SOH) prediction, which helps improve predictive accuracy. For the estimation of SOH, key features of charge-discharge cycle characteristics are used, based on the NASA lithium-ion battery dataset. The proposed GTO-Bi-GRU model outperforms Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) models by incorporating the bidirectional learning abilities of Bi-GRU, which captures the complex trends in battery degradation more effectively. Meanwhile, GTO performs the hyperparameter tuning optimally, outperforming classical optimization techniques such as particle swarm optimization (PSO), the genetic algorithm (GA), and the cuckoo search algorithm (CS). This comparative study demonstrates that GTO-Bi-GRU achieves the highest prediction accuracy of all models, with coefficient-of-determination values of 0.9969, 0.9917, 0.9948, and 0.9882 on the B5, B6, B7, and B18 battery cells. These results show that GTO-Bi-GRU outperforms PSO-Bi-GRU, GA-Bi-GRU, and CS-Bi-GRU by a wide margin, establishing it as a very effective model for SOH estimation that is robust and scalable enough for battery health monitoring applications in electric vehicles.
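A minimal sketch of the Bi-GRU backbone described in the abstract, in Keras; the layer sizes are illustrative, and the GTO hyperparameter search is not implemented here:

```python
import tensorflow as tf

def build_bigru(window: int, n_features: int) -> tf.keras.Model:
    """Bi-GRU regressor mapping a window of cycle features to SOH.
    Layer widths are hypothetical; the paper tunes them with GTO."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(window, n_features)),
        tf.keras.layers.Bidirectional(tf.keras.layers.GRU(64)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1),              # predicted state of health
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

model = build_bigru(window=20, n_features=4)   # e.g., 4 charge-cycle features
model.summary()
```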
Article
Full-text available
In this paper I discuss how to conceptually engineer 'entropy' and 'information' as they are used in information theory and statistical mechanics. Initially, I evaluate the extent to which the all-pervasive entangled use of entropy and information notions can be somehow defective in these domains, such as being meaningless or generating confusion. Then, I assess the main ameliorative strategies to improve this defective conceptual practice. The first strategy is to substitute non-loaded terms for 'entropy' and 'information', as first argued by Bar-Hillel in the 1950s. A second strategy is to prescribe how these terms should be correctly used to be meaningful, as pioneered by Carnap in Two Essays on Entropy (University of California Press, 1977). However, the actual implementation of these two ameliorative strategies has historically been unsuccessful due to the low credentials that philosophers as conceptual prescribers have among scientists. Finally, to try to overcome these obstacles, I propose a third strategy based on leveraging evidence from the contribution of philosophy as a complementary science, the so-called 'Philosophy in Science' (à la Pradeu et al., Brit J Philos Sci 75(2):375–416, 2024), to integrate conceptual prescriptions and analyses of entropy and information into the scientific practices in which these notions are used.
Article
Infodynamics is the study of how information behaves and changes within a system during its development. This study investigates the insights that informational analysis can provide regarding the ramifications predicted by constructal design. First, the infodynamic neologisms informature, defined as a measure of the amount of information in indeterminate physical systems, and infotropy, a contextualized informature representing the degree of transformation of indeterminate physical systems, are introduced. Flow architectures can be designed using either symmetric or asymmetric branching. The infodynamic analysis of symmetric branching revealed diminishing returns in information content, demonstrating that informature serves as a measure of diversity. These findings align with the principle of "few large and many small, but not too many," which is consistent with higher thermofluid performance. The Performance Scaled Svelteness expresses the ability of the flow architecture to promote thermofluid performance. By contextualizing the informature with it, a performance infotropy is obtained that quantifies the degree of transformation associated with the link between thermofluid performance and diversity in the ramified flow structure. A predicted growth-and-decay effect with increasing branching levels leads to a local maximum, highlighting that the evolutionary direction of the ramifications is inversely proportional to the scale of the environment in which the flow structure develops. Assuming an evolutionary trend toward maximum infodynamic complexity, a pattern of asymmetric ramifications emerges, similar to the sap distribution in leaves or the branching of trees.
Chapter
This book is about the applications of Shannon's measure of information (SMI). The SMI was originally developed within the theory of communication. Soon after its publication, it became quite a useful concept in many branches of science. However, the SMI has unfortunately been confused with two other concepts: entropy and information. We begin this chapter with a brief definition of the SMI and also discuss some of its meanings and various interpretations. We will demonstrate that although the SMI and the entropy have some mathematical similarities, they are entirely different and distinct from each other, and while the SMI is a specific measure of a specific kind of information, it is not the same as the general concept of information. In subsequent sections, we shall further discuss the meaning of the bit as a binary digit and as a unit of information, the concept of "self-information," and the relationship between entropy and probability. We will also discuss the Monty Hall problem and its solution.
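The chapter's starting definition is standard: for a probability distribution p_1, ..., p_n,

```latex
\mathrm{SMI}(p_1,\dots,p_n) = -\sum_{i=1}^{n} p_i \log_2 p_i \quad [\text{bits}],
```

the point being that thermodynamic entropy coincides with the SMI (up to constants) only for the particular equilibrium distribution of a thermodynamic system, not for arbitrary distributions.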
Article
Full-text available
The proximal Galerkin finite element method is a high-order, low iteration complexity, nonlinear numerical method that preserves the geometric and algebraic structure of pointwise bound constraints in infinite-dimensional function spaces. This paper introduces the proximal Galerkin method and applies it to solve free boundary problems, enforce discrete maximum principles, and develop a scalable, mesh-independent algorithm for optimal design with pointwise bound constraints. This paper also introduces the latent variable proximal point (LVPP) algorithm, from which the proximal Galerkin method derives. When analyzing the classical obstacle problem, we discover that the underlying variational inequality can be replaced by a sequence of second-order partial differential equations (PDEs) that are readily discretized and solved with, e.g., the proximal Galerkin method. Throughout this work, we arrive at several contributions that may be of independent interest. These include (1) a semilinear PDE we refer to as the entropic Poisson equation; (2) an algebraic/geometric connection between high-order positivity-preserving discretizations and certain infinite-dimensional Lie groups; and (3) a gradient-based, bound-preserving algorithm for two-field, density-based topology optimization. The complete proximal Galerkin methodology combines ideas from nonlinear programming, functional analysis, tropical algebra, and differential geometry and can potentially lead to new synergies among these areas as well as within variational and numerical analysis. Open-source implementations of our methods accompany this work to facilitate reproduction and broader adoption.
Book
This book analyses the physics of complex systems to elaborate on the problems encountered in teaching and research. Inspired by Kurt Gödel (including his incompleteness theorems), it considers the concept of time, the idea of models, and the concept of complexity before trying to assess the state of physics in general. Using both general and practical examples, it discusses the idea of information, emphasizing its physical interpretation, and debates ideas in depth, using examples and evidence to provide detailed considerations of the topics. Based on the authors' own research on these topics, this book puts forward the idea that applying information measures can provide new results in studying complex systems. Helpful for those already familiar with the concepts who wish to deepen their critical understanding, Physics of Complex Systems will be extremely valuable both for people who are already involved in complex systems and for readers beginning their journey into the subject. This work will encourage readers to follow and continue these ideas, enabling them to investigate the various topics further.
Chapter
This chapter focuses on self-organization, which explains the formation of structure without external control. Self-organization is found in physical, chemical, and biological systems. Examples from these disciplines are used to show that self-organization of agents does not require cognition or purposeful behavior. Self-organization is also found in social systems, but with the complication that it interacts with human-designed organization. Several examples of small-scale self-organization in social systems are presented before discussing large-scale self-organization. Hayek’s concept of spontaneous order is introduced and compared with the complexity concept of self-organization. The chapter concludes with a detailed discussion of the concept of dissipative systems and dissipative structure, which originated in chemistry but is highly relevant to economics. In terms of methods, the chapter introduces cellular automata and agent-based models as highly relevant tools for complexity economics.
Chapter
Probability theory is a core discipline in many fields of science and engineering, including LCA. Yet, its foundations and interpretation are the subject of continuous controversy. In this chapter, we will briefly address the Bayesian approach, fuzzy numbers, and some other alternatives.
Article
Full-text available
An interesting dialogue is developed between Newman et al. (2023) and Riera et al. (2023), in which proposals related to the development of equations of state in ecosystem ecology are discussed in depth. This debate is more important than it first appears, since the persistent gap between theoretical and empirical ecology is due, in part, to the absence of a comprehensive paradigm in this field. As it is exemplified in the first section of this article, a sequence of models derived from a reliable equation of state would help to bridge the aforementioned gap. Although this manuscript is analytically monolithic, five main thematic strands can be identified: (i) Examination of the objections of Newman et al. (2023), juxtaposing them with key concepts from ecology, information theory, physics and the MaxEnt algorithm. (ii) Validation of the criteria in (i) through theoretical and data-based examples. (iii) Interdisciplinary linkages between (i) and (ii). (iv) Epistemological generalizations from the previous strands to obtain a strategic roadmap for interdisciplinary modeling in ecology. (v) Conclusions referred to the general meaning of points (i) and (ii). On a general level, our objective is that this manuscript will go beyond a simple academic debate, being useful for colleagues interested in interdisciplinary modeling.
Article
Full-text available
The spectrum of species diversity (SDi) can be broken down into αSDi (taxocene level), βSDi (community level), and γSDi (metacommunity level). Species richness (S) and Shannon's index (H) are well-known SDi measures. The use of S as a surrogate for SDi often neglects evenness (J). Additionally, there is a wide variety of indicators of SDi. However, there are no reliable theoretical criteria for selecting the most appropriate SDi index despite the undeniable empirical usefulness of this parameter. This situation is probably due to the analytical gap still existing between SDi and trophodynamics. This article contributes to closing that gap by analyzing why S as a single surrogate for SDi is inconsistent from the trophodynamic point of view, so that an index combining S and J, such as H or H_B (Brillouin's index), is the most appropriate choice in the context of a new theoretical framework (organic biophysics of ecosystems, OBEC) based on the well-known classical links between ecosystem ecology and thermodynamics. Exploration of data from reef fish surveys under stationary and non-stationary conditions corroborated the existence of the ecological equivalent of Boltzmann's constant (k_eτ(e)) at the worldwide scale. This result substantiates the usefulness of the ecological equivalent of the compressibility factor as an indicator of environmental impact. k_eτ(e) establishes an analytical linkage between ecology, information theory, and statistical mechanics that allowed us to propose a new measure of total negative entropy (a.k.a. syntropy) per survey (S_eτT) that is easy to calculate and displayed a highly significant correlation with total standing biomass per survey (m_eTs). According to the slope of the regression equation between S_eτT and m_eTs, a large portion of S_eτT leaks into the environment and/or is captured by numerous ecological degrees of freedom independent of standing biomass. According to the changing value of the exponent of k_eτ(e), even among coexisting taxocenes, it would have been impossible to obtain the results discussed in this article if the analysis had been carried out at the βSDi or γSDi level. This establishes αSDi as the most appropriate level of analysis for obtaining empirically useful results about the key functional connections on which trophodynamic stability depends in dynamic multispaces. The results summarized here are based on the careful selection and intertwining of a few key variables, which indicates the importance of developing models as simple as possible in order to achieve the reliability necessary for successful biological conservation.
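The two diversity indices the abstract recommends have these standard forms, for S species with abundances n_i, N = Σ n_i, and p_i = n_i/N:

```latex
H = -\sum_{i=1}^{S} p_i \ln p_i
\quad\text{(Shannon)},
\qquad
H_B = \frac{1}{N}\Big(\ln N! - \sum_{i=1}^{S} \ln n_i!\Big)
\quad\text{(Brillouin)};
```

both combine richness S and evenness J, which is the abstract's argument against using S alone.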
Article
Full-text available
Surface runoff over time shapes the morphology of the landscape. The resulting forms and patterns have been shown to follow distinct rules, which hold throughout almost all terrestrial catchments. Given the complexity and variety of the Earth's runoff processes, those findings have inspired researchers for over a century, and they resulted in many principles and sometimes proclaimed laws to explain the physics that govern the evolution of landforms and river networks. Most of those point to the first and second law of thermodynamics, which describe conservation and dissipation of free energy through fluxes depleting their driving gradients. Here we start with both laws but expand the related principles to explain the co-evolution of surface runoff and hillslope morphology by using measurable hydraulic and hydrological variables. We argue that a release of the frequent assumption of steady states is key, as the maximum work that surface runoff can perform on the sediments relates not only to the surface structure but also to “refueling” of the system with potential energy by rainfall events. To account for both factors, we introduce the concept of relative dissipation, relating frictional energy dissipation to the energy influx, which essentially characterizes energy efficiency of the hillslope when treated as an open, dissipative power engine. Generally, we find that such a hillslope engine is energetically rather inefficient, although the well-known Carnot limit does not apply here, as surface runoff is not driven by temperature differences. Given the transient and intermittent behavior of rainfall runoff, we explore the transient free energy balance with respect to energy efficiency, comparing typical hillslope forms that represent a sequence of morphological stages and dominant erosion processes. In a first part, we simulate three rainfall runoff scenarios by numerically solving the shallow water equations, and we analyze those in terms of relative dissipation. The results suggest that older hillslope forms, where advective soil wash erosion dominates, are less efficient than younger forms which relate to diffusive erosion regimes. In the second part of this study, we use the concept of relative dissipation to analyze two observed rainfall runoff extremes in the small rural Weiherbach catchment. Both flood events are extreme, with estimated return periods of 10 000 years, and produced considerable erosion. Using a previously calibrated, distributed physics-based model, we analyze the free energy balance of surface runoff simulated for the 169 model hillslopes and determine the work that was performed on the eroded sediments. This reveals that relative dissipation is largest on hillslope forms which relate to diffusive soil creep erosion and lowest for hillslope profiles relating to advective soil wash erosion. We also find that power in surface runoff and power in the complementary infiltration flux are during both events almost identical. Moreover, there is a clear hierarchy of work, which surface runoff expended on the sediments and relative dissipation between characteristic hillslope clusters. For hillslope forms that are more energy efficient in producing surface runoff, on average, a larger share of the free energy of surface runoff performs work on the sediments (detachment and transport) and vice versa. We thus conclude that the energy efficiency of overland flow during events does indeed constrain erosional work and the degree of freedom for morphological changes. 
We conjecture that hillslope forms and overland dynamics co-evolve, triggered by an overshoot in power during intermittent rainfall runoff events, towards a decreasing energy efficiency in overland flow. This implies a faster depletion of energy gradients during events and a stepwise downregulation of the available power to trigger further morphological development.
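A schematic reading of "relative dissipation" as described in the abstract, with notation assumed rather than taken from the paper: frictional energy dissipation normalized by the energy influx from rainfall,

```latex
D_{\mathrm{rel}}(t) = \frac{\int_0^t \dot{E}_{\mathrm{diss}}\,\mathrm{d}t'}
                           {\int_0^t \dot{E}_{\mathrm{in}}\,\mathrm{d}t'} ,
```

so that D_rel → 1 means the hillslope "engine" dissipates essentially all incoming free energy, leaving little to perform work on the sediments.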
Article
Purpose: This research fills a gap in process science by defining and explaining entropy and the increase of entropy in processes.
Design/methodology/approach: This is a theoretical treatment that begins with a conceptual understanding of entropy in thermodynamics and information theory and extends it to the study of degradation and improvement in a transformation process.
Findings: A transformation process with three inputs (demand volume, throughput, and product design) utilizes a system composed of processors, stores, configuration, human actors, stored data, and controllers to provide a product. Elements of the system are aligned with the inputs and with each other for the purpose of raising the standard of living. Lack of alignment is entropy. Primary causes of increased entropy are changes in inputs and disordering of the system components. Secondary causes result from changes made to cope with the primary causes. Improvement and innovation reduce entropy by providing better alignments and new ways of aligning resources.
Originality/value: This is the first detailed theoretical treatment of entropy in a process science context.
Article
Our purpose is to address the biological problem of finding the foundations of the organization of collective activity among cell networks in the nervous system, at the meso/macroscale, giving rise to cognition and consciousness. But in doing so, we encounter another problem, related to the interpretation of methods used to assess neural interactions and the organization of neurodynamics, because thermodynamic notions, which have precise meaning only under specific conditions, have been widely employed in these studies. The consequence is that apparently contradictory results appear in the literature, but these contradictions diminish upon consideration of the specific circumstances of each experiment. After clarifying some of these controversial points and surveying some experimental results, we propose that a necessary condition for cognition/consciousness to emerge is to have enough energy, or cellular activity, available; and a sufficient condition is the multiplicity of configurations in which cell networks can communicate, resulting in non-uniform energy distribution and the generation and dissipation of energy gradients due to the constant activity. The diversity of sensorimotor processing in higher animals needs a flexible, fluctuating web of neuronal connections, and we review results supporting such a multiplicity of configurations among brain regions associated with conscious awareness and healthy brain states. These ideas may reveal possible fundamental principles of brain organization that could be extended to other natural phenomena, and how healthy activity may degenerate into pathological states.
Article
Full-text available
An important step toward incorporating information in the second law of thermodynamics was taken by Landauer, who showed that the erasure of information implies an increase in heat. Most attempts to justify Landauer's erasure principle are based on thermodynamic argumentation. Here, using just the time-reversibility of classical microscopic laws, we identify three types of Landauer's erasure principle depending on the relation between the two final environments: the one linked to logical input 1 and the other to logical input 0. The strong type (which is the original Landauer formulation) requires the final environments to be in thermal equilibrium. The intermediate type, giving an entropy change of $k_{\mathrm{B}} \ln 2$, occurs when the two final environments are identical macroscopic states. Finally, the weak Landauer principle provides information erasure with no entropy change when the two final environments are macroscopically different. Even though the above results are formally valid for classical erasure gates, a discussion of their natural extension to quantum scenarios is presented. This paper strongly suggests that the original Landauer principle (based on the assumption of thermalized environments) is fully reasonable for microelectronics, but it becomes less reasonable for future few-atom devices working at THz frequencies. Thus, the weak and intermediate Landauer principles, where the erasure of information is not necessarily linked to heat dissipation, are worth investigating.
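For reference, the strong (original) form of Landauer's bound for erasing one bit into a thermalized environment at temperature T:

```latex
\Delta S_{\mathrm{env}} \ge k_{\mathrm{B}} \ln 2,
\qquad
Q \ge k_{\mathrm{B}} T \ln 2 \approx 2.9 \times 10^{-21}\ \mathrm{J}
\quad\text{at } T = 300\ \mathrm{K};
```

the abstract's intermediate and weak types relax precisely the thermal-equilibrium assumption behind this bound.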
Article
Full-text available
Argument – In this comparative historical analysis, we will analyze the intellectual tendency that emerged between 1946 and 1956 to take advantage of the popularity of communication theory to develop a kind of informational epistemology of statistical mechanics. We will argue that this tendency results from a historical confluence in the early 1950s of certain theoretical claims of the so-called English School of Information Theory, championed by authors such as Gabor (1956) and MacKay (1969), and of the attempt to extend the profound success of Shannon's ([1948] 1993) technical theory of sign transmission to the field of statistical thermal physics. As a paradigmatic example of this tendency, we will evaluate the intellectual work of Léon Brillouin (1956), who, in the mid-fifties, developed an information-theoretical approach to statistical mechanics based on a concept of information linked to the knowledge of the observer.
Preprint
This book will be published by Taylor & Francis in 2023.

Contents:
Preface
1. Prolegomenon
  1.1. The generality of physics
  1.2. Physics: A crisis that has been lasting for a century! Is that really so?
  1.3. Complex systems in physics
  1.4. Physics and mathematics walk together along a narrow path
2. Gödel's incompleteness theorems and physics
  2.1. Gödel's biography and historical background of incompleteness theorems
  2.2. An informal proof of Gödel's incompleteness theorems of formal arithmetic
  2.3. Gödel's incompleteness theorems as a metaphor. Real possibilities and misrepresentation in their applications
  2.4. Gödel's work in physical problems and computer science
3. Time in physics
  3.1. Time in philosophy and physics. Beyond Gödel's time
  3.2. Does the quantum of time exist?
  3.3. Continuous and discrete time
  3.4. Time in complex systems
4. Are model and theory synonymous in physics? Between epistemology and practice
  4.1. Some background concepts and epistemology
  4.2. Choice in model building
  4.3. The discrete versus continuous dichotomy: Time and space in model building
  4.4. The predictability of complex systems. Lyapunov and Kolmogorov time
  4.5. Chaos in environmental interfaces in climate models
5. How to assimilate hitherto inaccessible information?
  5.1. The physicality, abstractness, and concept of information
  5.2. The metaphysics of chance (probability)
  5.3. Shannon information. The triangle of the relationships between energy, matter, and information
  5.4. Rare events in complex systems: What information can be derived from them?
  5.5. Information in complex systems
6. Kolmogorov and change complexity and their applications to physical complex systems
  6.1. Kolmogorov complexity: An incomputable measure and the Lempel-Ziv algorithm
  6.2. Change complexity: A measure that detects change
  6.3. Kolmogorov complexity in the analysis of the LIGO signals and Bell's experiments
  6.4. Change complexity in the search for patterns in river flows
7. The separation of scales in complex systems. "Breaking" point at the time scale
  7.1. The generalization of scaling in Gödel's world. Scaling in phase transitions and critical phenomena
  7.2. The separation of scales and capabilities of the renormalization group
  7.3. A phase transition model example: The longevity of the Heisenberg model
  7.4. Complexity and time scale. The "breaking" point with an experimental example
8. The representation of the randomness and complexity of turbulent flows
  8.1. The randomness of turbulence in fluids
  8.2. The representation of the randomness and complexity of turbulent flows with Kolmogorov complexity
  8.3. The complexity of coherent structures in the turbulent mixing layer
  8.4. Information measures describing the river flow as a complex natural fluid system
9. The physics of complex systems and art
  9.1. An attempt to grasp the complexity of the human brain
  9.2. The dualism between science and art
  9.3. Perception: Change complexity in psychology
  9.4. Entropy, change complexity, and Kolmogorov complexity in observing differences in painting
10. The modeling of complex biophysical systems
  10.1. The role of physics in the modeling of the human body's complex systems
  10.2. The stability of the synchronization of intercellular communication in the tissue with the closed contour arrangement of cells
  10.3. The instability of the synchronization of intercellular communication in the tissue with a closed contour arrangement of cells: a potential trigger for autoimmune disorders
  10.4. The search for information in brain disorders
Appendix A
Appendix B

Short abstract: Ch1 is a discursive introduction to the book's view of the current state of physics with respect to complex systems (CSs), seen through the relationship between physics and mathematics. Ch2 covers Kurt Gödel's background, gives an informal proof of his incompleteness theorems (ITs), and discusses misconceptions about applying them in physics. Ch3 deals with philosophical issues regarding time; we briefly outline the understanding of time in physics, since in CSs it operates concurrently at different scales. Ch4 is devoted to models in physics, considering model choice, continuous versus discrete time in model building, model predictability (Lyapunov time), and chaos in climate models. Ch5 discusses information and its relation to physics, addressing its physicality, abstractness, and concept, the metaphysics of chance, and information in CSs. Ch6 elaborates two complexity measures used to analyze Bell's and the LIGO signals as well as environmental fluid flows. Ch7 (i) presents a view of the separation of scales in CSs as a reflection of Gödel's ITs, (ii) points out the limits of the renormalization group related to the separation of scales, and (iii) emphasizes the need for new mathematics for scaling in CSs, introducing the "breaking" point on a time scale. Ch8 discusses randomness in turbulent flows, its quantification via complexity, and information measures suitable for its description. Ch9 elaborates on the dualism between physics and art, emphasizing the place of the physics of CSs in forming an impression of a picture, with perception analyzed via change complexity and order and disorder recognized via entropy. Ch10 presents the contributions of the physics of CSs to medical science (intercellular communication, autoimmune diseases, and brain disorders).
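Since Kolmogorov complexity is incomputable (Ch6.1), practice substitutes an upper-bound proxy: the size of the input after compression by a Lempel-Ziv-family algorithm. A minimal sketch of that idea, assuming nothing from the book beyond the Kolmogorov/Lempel-Ziv pairing it names (zlib's DEFLATE is an LZ77 variant; the book may use the original LZ76 parsing instead):

```python
import os
import zlib

def lz_complexity_proxy(data: bytes) -> float:
    """Upper-bound proxy for Kolmogorov complexity: compressed size
    in bits per input byte under zlib (DEFLATE, an LZ77 variant).
    Low values indicate regular, compressible data."""
    if not data:
        return 0.0
    compressed = zlib.compress(data, level=9)
    return 8.0 * len(compressed) / len(data)

# A periodic signal compresses far better than an incompressible one.
periodic = b"abcd" * 2500          # highly regular, 10 kB
random_ = os.urandom(10_000)       # effectively incompressible, 10 kB
print(f"periodic: {lz_complexity_proxy(periodic):.2f} bits/byte")  # near 0
print(f"random  : {lz_complexity_proxy(random_):.2f} bits/byte")   # near 8
```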
Preprint
Full-text available
Surface runoff over time shapes the morphology of the landscape. The resulting forms and patterns have been shown to follow distinct rules, which hold throughout almost all terrestrial catchments. Given the complexity and variety of the Earth's runoff processes, these findings have inspired researchers for over a century and have resulted in many principles, and sometimes proclaimed laws, to explain the physics that governs the evolution of landforms and river networks. Most of these point to the first and second laws of thermodynamics, which describe the conservation and dissipation of free energy through fluxes depleting their driving gradients. Here we start with both laws but expand the related principles to explain the coevolution of surface runoff and hillslope morphology using measurable hydraulic and hydrological variables. We argue that relaxing the frequent assumption of steady states is key, as the maximum work that surface runoff can perform on the sediments relates not only to the surface structure but also to the "refueling" of the system with potential energy by rainfall events. To account for both factors, we introduce the concept of relative dissipation, relating frictional energy dissipation to the energy influx, which essentially characterises the energy efficiency of the hillslope when treated as an open, dissipative power engine. Generally, we find that such a hillslope engine is energetically rather inefficient, although the well-known Carnot limit does not apply here, as surface runoff is not driven by temperature differences. Given the transient and intermittent behaviour of rainfall runoff, we explore the transient free energy balance with respect to energy efficiency, comparing typical hillslope forms that represent a sequence of morphological stages and dominant erosion processes. In the first part, we simulate three rainfall-runoff scenarios by numerically solving the shallow water equations and analyse them in terms of relative dissipation. The results suggest that older hillslope forms, where advective soil wash erosion dominates, are less efficient than younger forms related to diffusive erosion regimes. In the second part of this study, we use the concept of relative dissipation to analyse two observed rainfall-runoff extremes in the small rural Weiherbach catchment. Both flood events were extreme, with estimated return periods of 10,000 years, and produced considerable erosion. Using a previously calibrated, distributed physics-based model, we analyse the free energy balance of surface runoff simulated for the 169 model hillslopes and determine the work that was performed on the eroded sediments. This reveals that relative dissipation is largest on hillslope forms related to diffusive soil creep erosion and lowest for hillslope profiles related to advective soil wash erosion. We also find that the power in surface runoff and the power in the complementary infiltration flux are almost identical during both events. Moreover, across the characteristic hillslope clusters there is a clear hierarchy in the work that surface runoff expended on the sediments and in relative dissipation. For hillslope forms that are more energy efficient in producing surface runoff, on average a larger share of the free energy of surface runoff performs work on the sediments (detachment and transport), and vice versa. We thus conclude that the energy efficiency of overland flow during events does indeed constrain erosional work and the degrees of freedom for morphological changes.
We conjecture that hillslope forms and overland flow dynamics coevolve, triggered by an overshoot in power during intermittent rainfall-runoff events, towards a decreasing energy efficiency in overland flow. This implies a faster depletion of energy gradients during events and a stepwise downregulation of the available power to trigger further morphological development.
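The central quantity above, relative dissipation, is the ratio of frictional energy dissipation to the energy influx over an event. A minimal sketch of that bookkeeping, where the toy forcing and all variable names are our own assumptions rather than the authors' model:

```python
import numpy as np

def relative_dissipation(dissipation_w, influx_w, dt_s):
    """Event-scale relative dissipation: frictional energy dissipated
    divided by the potential-energy influx, both integrated over the
    event with a simple rectangle rule (inputs in watts, step in s)."""
    e_dissipated = float(np.sum(dissipation_w)) * dt_s  # J
    e_influx = float(np.sum(influx_w)) * dt_s           # J
    return e_dissipated / e_influx

# Toy event: triangular influx pulse; lagged, damped dissipation response.
dt = 10.0                                                      # time step, s
t = np.arange(0.0, 3600.0, dt)                                 # 1 h event
influx = np.interp(t, [0.0, 900.0, 3600.0], [0.0, 50.0, 0.0])  # W
dissipation = 0.8 * np.interp(t - 300.0, t, influx)            # W, lagged
print(f"relative dissipation: {relative_dissipation(dissipation, influx, dt):.2f}")
```

A value near 1 would mean the influx is dissipated almost entirely by friction, leaving little free energy to perform erosional work on the sediments.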
Article
Full-text available
Navigation is one of the most fundamental skills of animals. During spatial navigation, grid cells in the medial entorhinal cortex process the speed and direction of the animal to map the environment. Hippocampal place cells, in turn, encode place using sensory signals and reduce the accumulated error of grid cells for path integration. Although both cell types are part of the path integration system, the dynamic relationship between place and grid cells and the error reduction mechanism are yet to be understood. We implemented a realistic model of grid cells based on a continuous attractor model. The grid cell model was coupled to a place cell model to address their dynamic relationship during a simulated animal's exploration of a square arena. The grid cell model processed the animal's velocity and place field information from place cells. Place cells incorporated salient visual features and proximity information with input from grid cells to define their place fields. Grid cells had similar spatial phases but a diversity of spacings and orientations. To determine the role of place cells in error reduction for path integration, the animal's position estimates were decoded from grid cell activities with and without the place field input. We found that the accumulated error was reduced as place fields emerged during exploration. Place fields closer to the animal's current location contributed more to the error reduction than remote place fields. Place fields encoding space could thus function as spatial anchoring signals for precise path integration by grid cells.
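The decoding comparison in this abstract, accumulated error with and without place-field input, can be caricatured in a few lines. This is only a toy integrator, not the authors' continuous attractor network; the noise level, correction interval, and 50% correction gain are all our assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def final_drift(steps: int, noise_std: float, correct_every: int = 0) -> float:
    """Toy path integration: the estimate accumulates noisy velocity
    inputs (grid-cell-like integration); optionally, a periodic
    'place-field' correction pulls the estimate halfway back toward
    the true position (place-cell-like anchoring)."""
    true_pos = np.zeros(2)
    est_pos = np.zeros(2)
    for step in range(1, steps + 1):
        v = rng.normal(0.0, 1.0, size=2)                    # true velocity
        true_pos += v
        est_pos += v + rng.normal(0.0, noise_std, size=2)   # noisy copy
        if correct_every and step % correct_every == 0:
            est_pos += 0.5 * (true_pos - est_pos)           # re-anchoring
    return float(np.linalg.norm(true_pos - est_pos))

print("drift, no anchoring  :", final_drift(10_000, 0.05))
print("drift, with anchoring:", final_drift(10_000, 0.05, correct_every=100))
```

Without anchoring, the error grows roughly as the square root of the number of steps; periodic corrections keep it bounded, which is the qualitative effect the paper attributes to place fields.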
Article
Does the brain actively draw energy from the body when needed? There are different schools of thought regarding energy metabolism. In this study, the various theoretical models are classified into one of two categories: (1) conceptualizations of the brain as purely passively supplied, which we call 'P-models,' and (2) models in which the brain not only passively receives energy but also actively procures it on demand, which we call 'A-models.' One prominent example of a theory based on an A-model is the selfish-brain theory. The ability to make predictions was compared between the A- and P-models. A-models were able to predict and coherently explain all data examined, which included stress, sleep, caloric restriction, stroke, type 1 diabetes mellitus, obesity, and type 2 diabetes, whereas the predictions of P-models failed in most cases. The strength of the evidence supporting A-models rests on the coherence of accurate predictions across a spectrum of metabolic states. The theory test conducted here points to a brain that pulls its energy from the body on demand.
Article
Full-text available
The rural residential sector is known to employ outdated and inefficient appliances, which lead to significant energy depletion and undermine sustainability. Energy, exergy and sustainability analyses can be employed to point out the link between energy use and sustainability. This paper aims to highlight the effect of exergy loss on the sustainability of Cameroon's residential sector. Hence, an exergy-based sustainability assessment of this sector is performed based on statistical data from 2000 to 2018. Measures to enhance the sustainability of this sector in terms of sustainability indicators are also addressed. The energy and exergy efficiencies of this sector are found to vary from 26.32 to 28.55% and from 5.95 to 6.58%, respectively. The sustainability analysis shows that the depletion number and the sustainability index were almost constant at 0.93 and 1.06, respectively. The waste exergy ratios for biofuel and wood energy are higher than those of kerosene, electricity and liquefied petroleum gas. The environmental destruction index of this sector was found to be high, reaching a maximum of 31.60, while the environmentally benign index was low, with a highest value of 0.035. For biofuel and wood energy, the highest relative irreversibility was found to be 0.59 and 0.48, respectively, while the highest lack of productivity was found to be 9.33 and 6.98, respectively. Replacing obsolete, inefficient devices with modern, efficient ones can enhance the sustainability of this sector.
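As a quick consistency check on the reported indicators: in exergy-based sustainability analysis, the sustainability index is commonly taken as the reciprocal of the depletion number (the ratio of waste exergy to input exergy). Assuming that common definition (the paper's exact formula is not quoted here):

```python
# Depletion number Dp ~ waste exergy / input exergy (reported: ~0.93).
depletion_number = 0.93
sustainability_index = 1.0 / depletion_number  # common definition: SI = 1/Dp
print(f"SI = 1/Dp = {sustainability_index:.2f}")  # ~1.08, broadly in line with
# the nearly constant value of about 1.06 reported for 2000-2018.
```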