Article

Thermodynamics and An Introduction to Thermostatistics

... The connection between the Cobb-Douglas production function and Bernoulli's utility is that they both consider total productivity as being proportional to utility. In physics, we see a very similar equation as the fundamental equation of state describing an ideal gas [19] (§3-4). This connection is beyond mere coincidence. ...
... Since entropy is utility [1], we will consider our model of the economy as following Bernoulli's logarithmic utility and taking a form similar to the Cobb-Douglas production function and to that of an ideal (canonically distributed, like income) gas [19] (Eq. 3.38), ...
... We can use Equation (10) to derive two important equations of state (see [19] (§3-4)): the ideal money equation, Pm = RT; ...
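For reference, the fundamental equation that these excerpts cite (Callen, §3-4, the simple ideal gas; c = 3/2 in the monatomic case) and the ideal gas law recovered from it are

\[ S = N s_0 + N R \ln\!\left[\left(\frac{U}{U_0}\right)^{c}\left(\frac{V}{V_0}\right)\left(\frac{N}{N_0}\right)^{-(c+1)}\right], \qquad \frac{P}{T} = \left(\frac{\partial S}{\partial V}\right)_{U,N} = \frac{NR}{V} \;\Rightarrow\; PV = NRT. \]

The "ideal money equation" Pm = RT quoted above is the cited paper's economic analogue of this last relation, with money m playing the role of molar volume.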
Article
Full-text available
The axiomatic framework of quantum game theory gives us a new platform for exploring economics by resolving the foundational problems that have long plagued the expected utility hypothesis. This platform gives us a previously unrecognized tool in economics, the statistical ensemble, which we apply across three distinct economic spheres. We examine choice under uncertainty and find that the Allais paradox disappears. For over seventy years, this paradox has acted as a barrier to investigating human choice by masking actual choice heuristics. We discover a powerful connection between the canonical ensemble and neoclassical economics and demonstrate this connection’s predictive capability by examining income distributions in the United States over 24 years. This model is an astonishingly accurate predictor of economic behavior, using just the income distribution and the total exergy input into the economy. Finally, we examine the ideas of equality of outcome versus equality of opportunity. We show how to formally consider equality of outcome as a Bose–Einstein condensate and how its achievement leads to a corresponding collapse in economic activity. We call this new platform ‘statistical economics’ due to its reliance on statistical ensembles.
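The canonical-ensemble income model described in this abstract can be illustrated with a minimal sketch (synthetic data and a plain exponential Boltzmann-Gibbs distribution; the authors' actual dataset and fitting procedure are not reproduced here):

```python
import numpy as np

# Toy canonical-ensemble income model: p(x) = (1/T) exp(-x/T).
# Synthetic incomes stand in for the real income records used in the paper.
rng = np.random.default_rng(0)
incomes = rng.exponential(scale=45_000.0, size=100_000)  # hypothetical data

# For an exponential distribution the maximum-likelihood estimate of the
# "economic temperature" T is simply the sample mean.
T_hat = incomes.mean()
print(f"estimated temperature T = {T_hat:,.0f}")

# Compare empirical and model survival functions at a few income levels.
for x in (20_000, 50_000, 100_000, 200_000):
    empirical = (incomes > x).mean()
    model = np.exp(-x / T_hat)
    print(f"P(income > {x:>7,}): empirical {empirical:.4f}  model {model:.4f}")
```

On real data, the quality of such a fit, and its breakdown in the high-income tail, is what separates a Boltzmann-Gibbs bulk from a power-law tail.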
... Although there are still many open conceptual questions, black hole thermodynamics is currently a field of active research in part because it is widely believed that it represents a tool for exploring problems related to quantum gravity. In this work, we will assume that black holes can be investigated as thermodynamic systems, in which the fundamental thermodynamic equation [26] is determined by the Bekenstein-Hawking entropy relationship. ...
... and constitutes the fundamental thermodynamic equation from which all the properties of the system can be derived [26]. In the case of black holes, fundamental equations are no longer homogeneous functions of their variables. ...
... To this end, the equilibrium states are represented as points of an abstract space called the equilibrium space, which is then equipped with Riemannian metrics. The choice of these metrics is important, and in GTD this is done by demanding that they be invariant with respect to Legendre transformations, i.e., that they do not depend on the choice of thermodynamic potential used for the description [26]. From the geometric point of view, it is appropriate to represent Legendre transformations as coordinate transformations that leave the geometric structure of a differential manifold invariant. ...
Article
Full-text available
We study the thermodynamic properties of a black hole that takes into account the effects of non-commutative geometry. To emphasize the role of new effects, we have chosen a specific modified Schwarzschild black hole inspired by non-commutative geometry. We show that, in order to apply the laws of quasi-homogeneous thermodynamics using the formalism of geometrothermodynamics, it is necessary to consider the non-commutative parameter as an independent thermodynamic variable. As a result, the properties of the black hole change drastically, leading to phase transitions that are directly related to the value of the non-commutative parameter. We also prove that an unstable commutative black hole can become stable in non-commutative geometry for particular values of the non-commutative parameter.
... For example, we allow d_iW_k to have a fixed sign in any process, but this sign can be either positive or negative, so it may not satisfy SL. This allows us to understand the significance of internal constraints, discussed extensively by Callen [13], whose implication for SL, such as in the Szilard engine [40,41], was not properly understood until clarified recently [42]. There are also interesting attempts to understand dissipated heat using information theory; we cite a few [43,44]. ...
... is not understood theoretically. The traditional approach in classical thermodynamics is to postulate SL in the form of the entropy maximum principle ([13], see for example Chapter 4), then prove thermodynamic stability [15] and convergence to M_seq by following the M_s-evolution (M_s → M_seq as t → ∞) along the blue arrows in the state space of Σ; see Figure 3. However, M_u-evolution is never considered in classical thermodynamics, but we do consider it, for the following reason. ...
... Uniformity of the system, or its lack, plays an important role in our approach. An EQ macrostate M_eq is uniform, while a NEQ macrostate M is nonuniform [13,15]. All arrows in Figure 3 point towards increasing time t. ...
Article
Full-text available
Dissipation and irreversibility are two central concepts of classical thermodynamics that are often treated as synonymous. Dissipation D is lost or dissipated work W_diss ≥ 0, but is commonly quantified by the entropy generation Δ_iS in an isothermal irreversible macroscopic process, which is often expressed as the Kullback-Leibler distance D_KL in the modern literature. We argue that D_KL is nonthermodynamic, and is erroneously justified for quantification by mistakenly equating the exchange microwork Δ_eW_k with the system-intrinsic microwork ΔW_k = Δ_eW_k + Δ_iW_k, a very common error permeating stochastic thermodynamics, as was first pointed out several years ago; see text. Recently, it was discovered that dissipation D is properly identified by Δ_iW ≥ 0 for all spontaneously irreversible processes and all temperatures T, positive and negative, in an isolated system. As T plays an important role in the quantification, dissipation allows for Δ_iS ≥ 0 for T > 0, and Δ_iS < 0 for T < 0, a surprising result. The connection of D with W_diss and its extension to interacting systems have not been explored, and this is attempted here. It is found that D is not always proportional to Δ_iS. The determination of D requires d_ip_k, but we show that Fokker-Planck and master equations are not general enough to determine it, contrary to common belief. We modify the Fokker-Planck equation to fix the issue. We find that detailed balance also allows for all microstates to remain disconnected, without any transition among them, in an equilibrium macrostate, another surprising result. We argue that Liouville's theorem should not apply to irreversible processes, contrary to claims otherwise. We suggest using nonequilibrium statistical mechanics in extended space, where the p_k are uniquely determined, to evaluate D.
... Abraham follows the theory given by Callen [12] and gives significant postulates to describe the thermodynamics of a one-component system. From his work, the "internal energy" of a system can be defined as ...
... Besides the internal energy, another function is desired in which the intensive parameters are considered independent variables. This can be achieved by following the Legendre transformations [12], which result in the Helmholtz free energy. The newly derived parameters, the Helmholtz free energy, the enthalpy, and the Gibbs free energy, are called thermodynamic potentials. However, the Gibbs free energy may be the most interesting one for studying the phase transition because its equilibrium condition corresponds to that of the phase transition [76]. ...
... Before a further simplification to Eq. (4.1-12), the definition of the change in chemical potential in a pure phase needs to be introduced. It describes the difference between molecular chemical potentials in two fluid states of a certain phase. ...
Thesis
Full-text available
Condensation is a significant topic in engineering areas. For example, during the expansion process within a steam turbine, condensing steam may form droplets which can reduce the machine's efficiency or even damage the blades. Hence, understanding this phenomenon is a hard requirement to minimise such damage or, in turn, to utilise the phenomenon under certain conditions. Nucleation or droplet formation, normally regarded as the first step of condensation, may determine whether condensation occurs. Thus, numerous studies have been conducted to build the first understanding of nucleation, namely the nucleation theory. Among these, the so-called classical nucleation theory (CNT) is popular; it describes the homogeneous nucleation process and is widely used in engineering calculations. The CNT partly considers the ideal gas law and is normally applied for steam at low pressures exhibiting semi-ideal states. However, it was found later that the CNT might not reflect the nucleation process of real gases correctly, because real gases do not follow the ideal gas law. To predict the nucleation process of real gases, modifications to the classical nucleation model have been made. However, to the author's knowledge, none of them has been widely proven. The reason could be that the modifications did not include a review of the CNT from the perspective of real gases. This may also prevent an individual discussion of nucleation models, because they normally have to be applied in conjunction with a droplet growth model. Hence, the presented work intends to check the validity of the CNT from the perspective of real gases and to develop a nucleation model for real gases, by following the classical derivation process from the thermodynamic-kinetic aspect. To achieve this goal, the assumptions made in the CNT regarding the ideal gas law are identified and appropriately modified by considering the real gas equation of state. Firstly, models of the elevation in Gibbs free energy based on various approaches to the equation of state are concretely derived and compared, to analyse the impact of real gases on a simple vapour-droplet system. Secondly, an inconsistency is identified in the classical equilibrium droplet distribution within a supersaturated vapour against one of its significant assumptions as the fluid state exhibits a low compressibility factor. To eliminate this inconsistency, a method is presented by which the equilibrium droplet distribution of a real gas is calculated. It shows plausible results at relatively low reduced temperatures and has limitations at relatively high reduced temperatures due to uncertainty in solving the real gas equation of state. Furthermore, the presented work assumes an additional nucleation of small droplets beyond the CNT, increasing the evaluated nucleation rate in principle. Finally, different nucleation models are applied to calculate the condensation process with constant expansion rates. To focus on the nucleation process, the supercooling at the Wilson point is considered the key parameter. The comparison between calculation results exhibits a quasi-linear correlation between the supercooling at the Wilson point and the logarithm of the expansion rate. An extension of the "peak" of nucleation rate can be detected at Wilson points very close to the critical point, leading to an evident deviation between the Wilson supercooling and the maximal supercooling. Furthermore, the calculation results are compared with experiments regarding CO2, R12, R22, and water in Laval nozzles. It is found in general that the classical model overpredicts the nucleation rate. In contrast, the presented models with the additional nucleation agree with the test results regarding the supercooling at Wilson points.
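For orientation, the classical nucleation theory that the thesis re-derives for real gases is built around the following standard expressions for the critical radius, the formation barrier, and the nucleation rate (σ is the surface tension, v_l the liquid-phase molecular volume, S the supersaturation ratio, and K a kinetic prefactor):

\[ r^{*} = \frac{2\sigma v_l}{k_B T \ln S}, \qquad \Delta G^{*} = \frac{16\pi \sigma^{3} v_l^{2}}{3\,(k_B T \ln S)^{2}}, \qquad J = K \exp\!\left(-\frac{\Delta G^{*}}{k_B T}\right). \]

The ideal gas law enters through the ln S term (via the vapour chemical potential), which is precisely the assumption the thesis replaces with a real-gas equation of state.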
... A quasi-static process is defined as a succession of equilibrium states [52]. Therefore, an isothermal quasi-static process is represented by an equilibrium curve of the form given by Equation (13), in which the temperature θ is fixed. ...
... which is the maximum achievable thermal efficiency, as stated by Carnot's theorem, which is derived as a consequence of the second law of thermodynamics [52]. The projection of the quasi-static Stirling cycle onto the (κ, y) plane in phase space is illustrated in the left panel of Figure 2 for the particular choice of parameters ν = χ = 0.5. ...
... Therefore, we have solved the optimisation problem presented in Equation (52), and the optimal irreversible Stirling cycle is fully characterised for any given operation points and extremal bath temperatures (θ_min, θ_max). The corresponding protocols for the trap stiffness κ(τ) in the isothermal branches and the bath temperature θ(τ) in the isochoric connections are described in detail in Section 3. ...
Article
Full-text available
Heat engines transform thermal energy into useful work, operating in a cyclic manner. For centuries, they have played a key role in industrial and technological development. Historically, only gases and liquids have been used as working substances, but the technical advances achieved in recent decades allow for expanding the experimental possibilities and designing engines operating with a single particle. In this case, the system of interest cannot be addressed at a macroscopic level and their study is framed in the field of stochastic thermodynamics. In the present work, we study mesoscopic heat engines built with a Brownian particle submitted to harmonic confinement and immersed in a fluid acting as a thermal bath. We design a Stirling-like heat engine, composed of two isothermal and two isochoric branches, by controlling both the stiffness of the harmonic trap and the temperature of the bath. Specifically, we focus on the irreversible, non-quasi-static case—whose finite duration enables the engine to deliver a non-zero output power. This is a crucial aspect, which enables the optimisation of the thermodynamic cycle by maximising the delivered power—thereby addressing a key goal at the practical level. The optimal driving protocols are obtained by using both variational calculus and optimal control theory tools. Furthermore, we numerically explore the dependence of the maximum output power and the corresponding efficiency on the system parameters.
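A minimal simulation of the kind of mesoscopic Stirling cycle studied here is sketched below, assuming linear (not the paper's optimal) protocols and unit friction; work on the particle is accumulated as dW = ½x²dκ, the standard stochastic-thermodynamics convention for stiffness control:

```python
import numpy as np

# Overdamped Brownian particle in a harmonic trap U(x,t) = 0.5*kappa(t)*x^2,
# integrated with Euler-Maruyama.  Linear protocols are an illustrative
# assumption; the paper derives the optimal ones.
rng = np.random.default_rng(1)
gamma, kB = 1.0, 1.0                      # friction coefficient, Boltzmann constant
dt, n_steps, n_traj = 1e-4, 20_000, 2_000

def isothermal(x, T, k_i, k_f):
    """Linear stiffness ramp at fixed bath temperature; returns x and work."""
    dk = (k_f - k_i) / n_steps
    work = np.zeros_like(x)
    for k in np.linspace(k_i, k_f, n_steps):
        work += 0.5 * x**2 * dk           # dW = 0.5 * x^2 * dkappa
        x += -(k / gamma) * x * dt + rng.normal(size=x.shape) * np.sqrt(2 * kB * T * dt / gamma)
    return x, work

def isochore(x, T, k, steps=5_000):
    """Relaxation at fixed stiffness after a temperature jump (no work done)."""
    for _ in range(steps):
        x += -(k / gamma) * x * dt + rng.normal(size=x.shape) * np.sqrt(2 * kB * T * dt / gamma)
    return x

T_hot, T_cold, k_max, k_min = 2.0, 1.0, 2.0, 1.0
x = rng.normal(scale=np.sqrt(kB * T_hot / k_max), size=n_traj)
x, W_exp = isothermal(x, T_hot, k_max, k_min)   # hot isothermal expansion
x = isochore(x, T_cold, k_min)                  # cooling isochore
x, W_com = isothermal(x, T_cold, k_min, k_max)  # cold isothermal compression
print(f"mean work per cycle: {np.mean(W_exp + W_com):+.4f} (negative = net output)")
```

In the quasi-static limit the net work tends to −½ k_B (T_hot − T_cold) ln(κ_max/κ_min); shortening the branches trades output per cycle against power, which is the optimisation the paper solves.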
... Equation (8) generalizes a known result in thermodynamics [10,14-16]. The more general result, Equation (6), is that the entropy is the weighted sum of the values of the observables that define the state. ...
... Given that the state of the system is of maximal entropy, the set of m values of the constraints and the set of m Lagrange multipliers that is conjugate are each an equally correct and useful characterization of the state. As discussed by Callen [14] (the first edition is much better in this particular respect), there is a Legendre transform relating the use of the two sets of variables. As clearly discussed by Callen, one can also usefully introduce intermediate characterizations, using some observables and the other variables being the rest of the Lagrange multipliers. ...
... This is the most general equation of change for the values of the information provided by the basis observables E_kl. The one cardinal assumption is that the dynamics are unitary as used in Equation (14). It is a strong assumption because unitary implies reversible, U(−t) = U†(t) = U^(−1)(t). ...
Article
Full-text available
A quantitative expression for the value of information within the framework of information theory and of the maximal entropy formulation is discussed. We examine both a local, differential measure and an integral, global measure for the value of the change in information when additional input is provided. The differential measure is a potential and as such carries a physical dimension. The integral value has the dimension of information. The differential measure can be used, for example, to discuss how the value of information changes with time or with other parameters of the problem.
... Conventional thermodynamics can only fully describe infinitely slow, equilibrium processes. For all real, finite-time processes, the second law of thermodynamics asserts only that some amount of entropy is dissipated into the environment, which can be expressed with the average, irreversible entropy production as Σ ≥ 0 [9]. The (detailed) fluctuation theorem makes this statement more precise by expressing that negative fluctuations of the entropy production are exponentially unlikely [5-8,10], ...
... One key advantage of the effective field theory (16) over Eq. (15) is that the classical field χ_cl can be thought of as a work reservoir [9,84]. This reservoir sources all interactions, and the χ_cl field carries this energy into or out of the system. ...
... However, the quantum Jarzynski equality made no assumptions of perturbativity and only required the mild assumption of unital dynamics in Eqs. (8) and (9). Thus, while not verified analytically, these fluctuation theorems should hold to any order perturbatively and may even hold non-perturbatively. ...
Preprint
The fluctuation theorems, and in particular the Jarzynski equality, are the most important pillars of modern non-equilibrium statistical mechanics. We extend the quantum Jarzynski equality together with the Two-Time Measurement Formalism to their ultimate range of validity: quantum field theories. To this end, we focus on a time-dependent version of scalar phi-four. We find closed-form expressions for the resulting work distribution function, and we find that they are proper physical observables of the quantum field theory. We also show explicitly that the Jarzynski equality and Crooks fluctuation theorems hold at one-loop order, independent of the renormalization scale. As a numerical case study, we compute the work distributions for an infinitely smooth protocol in the ultra-relativistic regime. In this case, it is found that work done through processes with pair creation is the dominant contribution.
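The equality at the heart of this preprint can be checked in a few lines on a classical toy model (a sudden stiffness quench of a harmonic oscillator); this is a stand-in illustration of ⟨e^(−βW)⟩ = e^(−βΔF), not the paper's field-theoretic computation:

```python
import numpy as np

# Jarzynski check for a sudden quench k0 -> k1 of a classical harmonic
# oscillator: sample x from equilibrium at k0; the quench work is
# W = 0.5*(k1 - k0)*x^2, and dF = (1/(2*beta)) * ln(k1/k0).
rng = np.random.default_rng(2)
beta, k0, k1, n = 1.0, 1.0, 4.0, 1_000_000

x = rng.normal(scale=1.0 / np.sqrt(beta * k0), size=n)   # equilibrium at k0
W = 0.5 * (k1 - k0) * x**2                               # instantaneous quench

lhs = np.exp(-beta * W).mean()
dF = 0.5 * np.log(k1 / k0) / beta
print(f"<exp(-beta W)> = {lhs:.4f}   exp(-beta dF) = {np.exp(-beta * dF):.4f}")
# both approach sqrt(k0/k1) = 0.5, no matter how violent the quench is
```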
... In thermodynamics, second-order phase transitions can be classified into universality classes [3]. At the critical point, thermodynamic response functions, such as the magnetic susceptibility, diverge, χ ∼ |T − T_c|^(−γ), where T is the temperature and γ is called the critical exponent. ...
... Maximum available work theorem: The only processes that can be fully described by means of conventional thermodynamics are infinitely slow, equilibrium, aka quasistatic, processes [3]. Nonequilibrium processes are characterized by the maximum available work theorem [36]. ...
... where ΔS is the change of thermodynamic entropy of the system, ΔS_B is the change of entropy in B, and where we used that the entropy of the work reservoir is negligible [3,37]. Since the heat reservoir is so large that it is always in equilibrium at inverse temperature β, we immediately can write βΔE_B = ΔS_B, and hence we always have ...
Preprint
If a system is driven at finite rate through a phase transition by varying an intensive parameter, the order parameter shatters into finite domains. The Kibble-Zurek mechanism predicts the typical size of these domains, which is governed only by the rate of driving and the spatial and dynamical critical exponents. We show that the irreversible entropy production also fulfills a universal behavior, which, however, is determined by an additional critical exponent corresponding to the intensive control parameter. Our universal prediction is numerically tested in two systems exhibiting noise-induced phase transitions.
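For context, the standard Kibble-Zurek prediction that this work extends gives the typical domain size ξ̂ in terms of the quench time τ_Q and the static and dynamical critical exponents ν and z:

\[ \hat{\xi} \sim \xi_0 \left(\frac{\tau_Q}{\tau_0}\right)^{\frac{\nu}{1+\nu z}}, \]

with ξ_0 and τ_0 system-specific microscopic scales. The preprint's claim is that the irreversible entropy production obeys an analogous power law whose exponent also involves the exponent associated with the intensive control parameter.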
... C_v − 1, where V_{c/h} and T_{c/h} are the volume and temperature of the working medium at the end of the hot and cold isochores, respectively. C_p and C_v are the heat capacities under constant pressure and constant volume [33]. As expected, the Otto efficiency is always smaller than the efficiency of the Carnot cycle, η_O ≤ η_C = 1 − T_c/T_h. ...
... where the convention for the sign of the work of a working engine is negative, in correspondence with Callen [33], and we use the convention of Figure 3 to mark the population and energy at the corners of the cycle. ...
... Under squeezing, the equation of motion of the hot isochore thermalisation (Equation (33)) is modified to: ...
Preprint
The quantum Otto cycle serves as a bridge between the macroscopic world of heat engines and the quantum regime of thermal devices composed of a single element. We compile recent studies of the quantum Otto cycle with a harmonic oscillator as a working medium. This model has the advantage that it is analytically tractable. In addition, an experimental realization has been achieved employing a single ion in a harmonic trap. The review is embedded in the field of quantum thermodynamics and quantum open systems. The basic principles of the theory are explained by a specific example illuminating the basic definitions of work and heat. The relation between quantum observables and the state of the system is emphasized. The dynamical description of the cycle is based on a completely positive map formulated as a propagator for each stroke of the engine. Explicit solutions for these propagators are described on a vector space of quantum thermodynamical observables. These solutions, which employ different assumptions and techniques, are compared. The tradeoff between power and efficiency is the focal point of finite-time thermodynamics. The dynamical model enables the study of finite-time cycles, limiting the times allotted to the adiabats and to thermalization. Explicit finite-time solutions are found which are frictionless, meaning that no coherence is generated; these are also known as shortcuts to adiabaticity. The transition from frictionless to sudden adiabats is characterized by a non-Hermitian degeneracy in the propagator. In addition, the influence of noise on the control is illustrated. These results are used to close the cycles either as engines or as refrigerators.
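For a reader skimming this review, the key closed-form benchmark is worth recalling: for the harmonic working medium with frequencies ω_h and ω_c on the hot and cold isochores, the quantum Otto efficiency is

\[ \eta_O = 1 - \frac{\omega_c}{\omega_h} \leq \eta_C = 1 - \frac{T_c}{T_h}, \]

a standard result for this model; the finite-time analysis in the review concerns how closely, and at what power, this bound can be approached.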
... The statistical mechanics perspective, as discussed in [20], [28], [8], [7], [29], [4], [13], [22], [5], [6], [15], [37], [1], and [3], provides additional insights into the behavior of complex systems that can be modeled using renewal processes. In particular, the techniques developed in these works for analyzing phase transitions and critical phenomena have analogues in the study of the asymptotic behavior of renewal processes. ...
... The theory of Wiener chaos expansions for renewal processes has intriguing connections to statistical mechanics [4], [5], [6], [7], [8], [20], [13], [22], [15], [37], [1], [3]. In particular, the total variation distance between the distribution of Wiener chaos expansions and their limiting Gaussian distributions is analogous to the distance between the microstate distribution of a physical system and its equilibrium distribution [32]. ...
Preprint
This article studies the total variation distance between the distribution of Wiener chaos expansions associated with renewal processes and their limiting Gaussian distributions. Renewal processes, which model sequences of random events separated by independent and identically distributed intervals, are fundamental in probability theory and have applications in queueing theory, reliability analysis, and risk management. By leveraging tools from Malliavin calculus and Stein's method, we derive explicit bounds on the total variation distance for functionals of renewal processes expressed in terms of Wiener chaos expansions. We provide detailed mathematical proofs and numerical simulations to illustrate the convergence rates and the applicability of our results. This work bridges the gap between classical renewal theory and modern stochastic analysis, offering new insights into the asymptotic behavior of renewal processes.
... The classical central limit theorem for renewal processes states that, under appropriate conditions, the normalized and centered renewal counting process converges in distribution to a standard normal random variable as t approaches infinity. This result, inspired by principles discussed in Callen's work on thermodynamics [4], establishes a bridge between discrete counting processes and continuous normal approximations. Specifically, if 0 < μ < ∞ and 0 < σ² < ∞, then: ...
... 3. Spatial Extensions: Extending the theory to spatial renewal processes and random fields, building on Feng's work on random analytic functions [27], [28]. 4. Applications in Machine Learning: Leveraging the theory for analyzing the convergence properties of stochastic algorithms in machine learning, particularly in the analysis of neural network training dynamics. ...
Preprint
This article investigates the convergence properties of the central limit theorem (CLT) for renewal processes and its generalization to Wiener chaos expansions. Renewal processes, which model sequences of random events separated by independent and identically distributed intervals, are fundamental in probability theory and have wide-ranging applications in queueing theory, reliability analysis, and risk management. We establish precise convergence rates for the CLT in the context of renewal processes, leveraging tools such as Stein's method and Malliavin calculus. Furthermore, we extend these results to Wiener chaos expansions, providing a framework for analyzing the asymptotic behavior of functionals of renewal processes in higher-order chaos. By connecting renewal theory with the Wiener chaos decomposition, we offer new insights into the interplay between classical probability theory and modern stochastic analysis. The results are supported by rigorous mathematical proofs and numerical simulations, highlighting the applicability of these techniques to both theoretical and applied problems.
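The renewal CLT invoked in both abstracts above is easy to probe numerically; the following sketch (gamma interarrivals chosen arbitrarily) standardises the renewal counts N(t) by the classical mean t/μ and variance σ²t/μ³:

```python
import numpy as np

# Monte Carlo illustration of the renewal CLT:
# (N(t) - t/mu) / sqrt(sigma^2 t / mu^3)  ->  N(0, 1)  as t -> infinity.
rng = np.random.default_rng(3)
t, n_rep = 500.0, 20_000
shape, scale = 2.0, 1.0                  # gamma interarrival law (illustrative)
mu = shape * scale                       # mean interarrival time
sigma2 = shape * scale**2                # interarrival variance

batch = int(2 * t / mu) + 50             # comfortably more arrivals than needed
arrivals = np.cumsum(rng.gamma(shape, scale, size=(n_rep, batch)), axis=1)
N_t = (arrivals <= t).sum(axis=1)        # renewal counts N(t)

z = (N_t - t / mu) / np.sqrt(sigma2 * t / mu**3)
print(f"mean {z.mean():+.3f} (target 0)   variance {z.var():.3f} (target 1)")
```

Quantifying the rate of this convergence, in total variation rather than merely in distribution, is what the Stein/Malliavin machinery of these preprints is for.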
... ECC offers superior security and efficiency compared to traditional cryptographic systems based on integer factorization or discrete logarithms. The theoretical foundations for ECC, developed by [6] and [14], establish connections between the arithmetic of elliptic curves and the hardness of the elliptic curve discrete logarithm problem (ECDLP). ...
... The statistical interpretation of isogeny graphs, particularly in the presence of complex sampling schemes or dependent data, requires further theoretical development. While asymptotic results have been established by [6] and [14], finite-sample guarantees and non-parametric methods for cryptographic inference represent important open problems. ...
Preprint
Topological Data Analysis (TDA) utilizes tools from algebraic topology to study the shape and structure of data. When applied to Gaussian Random Fields (GRFs) defined over manifolds, TDA facilitates the exploration of complex data structures by examining their topological features. This paper provides an overview of GRFs on manifolds and discusses how TDA methods, such as Betti numbers, the Euler characteristic, and persistent homology, can be employed to analyze these fields.
... ECC offers superior security and efficiency compared to traditional cryptographic systems based on integer factorization or discrete logarithms. The theoretical foundations for ECC, developed by [6] and [14], establish connections between the arithmetic of elliptic curves and the hardness of the elliptic curve discrete logarithm problem (ECDLP). ...
... The statistical interpretation of isogeny graphs, particularly in the presence of complex sampling schemes or dependent data, requires further theoretical development. While asymptotic results have been established by [6] and [14], finite-sample guarantees and non-parametric methods for cryptographic inference represent important open problems. ...
Preprint
The embedding of data into higher-dimensional spaces is a pivotal technique in modern data analysis, enabling the capture of complex structures and relationships within the data. This paper explores various methods of embedding data into higher-dimensional spaces and examines the topological properties of these embeddings. We discuss the theoretical foundations, present mathematical formulations, and highlight applications in data analysis.
... Callen posed this question, in his textbook, when the definition of equilibrium is discussed (Ref. [41], Section 1.5). He noticed that, for many solids, the current properties are affected by the past history. ...
... There has been a long debate about the nature of information when the meaning of entropy is investigated [83,90]. It is said that there are two approaches to statistical thermodynamics: objective and subjective information (in Ref. [41], p. 385). The issue is, who is the subject of missing information [90]? ...
Article
Full-text available
In glass physics, order parameters have long been used in the thermodynamic description of glasses, but their nature is not yet clear. The difficulty is how to find order in disordered systems. This study provides a coherent understanding of the nature of order parameters for glasses and crystals, starting from the foundation of the definition of state variables in thermodynamics. The state variable is defined as the time-averaged value of a dynamical variable under the constraints, when equilibrium is established. It gives the same value at any time it is measured, as long as the equilibrium is maintained. From this definition, it is deduced that the state variables of a solid are the time-averaged positions of all atoms constituting the solid, and the order parameters are essentially the same as state variables. Therefore, the order parameters of a glass are the equilibrium atom positions.
... An isolated system neither exchanges matter nor energy with its surroundings (Callen, 1985). If Heaven represents an isolated system, it would be completely detached from Earth, contradicting the idea of divine intervention. ...
... For example, a pot of boiling water (Callen, 1985). Open, closed, and isolated systems are used in a variety of disciplines. Table 1 illustrates areas of convergence (open systems, for example, allow for interaction in all fields) and divergence (isolated systems, for example, are theoretical in physics but philosophically real in religious thought). ...
Article
Full-text available
The openness or closedness of the universe has long been a topic of discussion in both scientific and religious circles. Scientific theories in cosmology and quantum mechanics suggest an evolving and possibly infinite universe, while religious perspectives offer varied interpretations, often emphasizing divine purpose, interconnectedness, and ultimate destiny. Understanding this debate is crucial in addressing global challenges and fostering a more unified perspective on existence. Purpose: This study aims to explore the implications of an open or closed universe through scientific and religious perspectives, identifying areas of convergence and divergence. It examines how these views shape human consciousness, governance, and ethical responsibility toward planetary and interstellar sustainability. A qualitative comparative analysis was conducted using literature reviews of scientific theories, theological texts, and historical perspectives. Key scientific frameworks included thermodynamics, quantum mechanics, and astrophysics, while religious interpretations were drawn from Christianity, Islam, and other spiritual traditions. Findings: The analysis revealed that an open-system perspective aligns with both scientific discoveries, such as cosmic expansion and interstellar material exchange, and religious teachings that emphasize universal interconnectedness. A closed-system perspective, while useful in deterministic models, may limit broader explorations of human potential, intergalactic cooperation, and ethical governance. Conclusions: By fusing scientific discoveries with spiritual consciousness, an open-system paradigm promotes a transition from conflict-driven governance to collaborative global and interplanetary stewardship. Recommendations: Policymakers, educators, and religious leaders should foster interdisciplinary dialogue, promote ethical space exploration, and develop frameworks for sustainable planetary and cosmic engagement.
... It is an Eulerian degree-1 homogeneous function of ν: Φ(λν) = λΦ(ν). This fits naturally with the fundamental thermodynamic postulate formulated by H. B. Callen [23]. The LFT of Φ as a function of the normalized ν then yields [13,14,24]: ...
... With a set of observed values x′ = (x_1, ..., x_J) in hand, where J < n, the maximum entropy principle (MEP) from classical thermodynamics [23] and the contraction principle from the mathematical theory of probability [8] assert that the most probable ν* that is consistent with the set x′ corresponds to minimum neg-entropy: ...
Article
Full-text available
Statistical counting ad infinitum is the holographic observable to a statistical dynamics with finite states under independent and identically distributed N sampling. Entropy provides the infinitesimal probability for an observed empirical frequency ν̂ with respect to a probability prior p, when ν̂ ≠ p as N → ∞. Following Callen's postulate and through the Legendre-Fenchel transform, without help from mechanics, we show that an internal energy u emerges; it provides a linear representation of real-valued observables with full or partial information. Gibbs' fundamental thermodynamic relation and theory of ensembles follow mathematically. u is to ν̂ what the chemical potential μ is to particle number N in Gibbs' chemical thermodynamics, what β = T⁻¹ is to internal energy U in classical thermodynamics, and what ω is to t in Fourier analysis.
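The Legendre-Fenchel construction in this abstract has a concrete computational face: given a prior p and a partially observed mean value, the minimum-relative-entropy (maximum-entropy) posterior is an exponential tilting of p, with the conjugate parameter playing the role of the paper's internal energy u. A small sketch with hypothetical numbers:

```python
import numpy as np
from scipy.optimize import brentq

# Given a prior p over finite states with observable values x and an observed
# mean xbar, the minimum neg-entropy frequency is the exponential tilting
# nu*_k = p_k exp(beta x_k) / Z(beta), with beta fixed by the constraint.
p = np.array([0.5, 0.3, 0.2])      # prior (hypothetical)
x = np.array([0.0, 1.0, 2.0])      # observable values per state
xbar = 1.2                         # observed empirical mean

def constraint_gap(beta):
    w = p * np.exp(beta * x)
    return (w * x).sum() / w.sum() - xbar

beta = brentq(constraint_gap, -50.0, 50.0)     # conjugate (Lagrange) parameter
nu = p * np.exp(beta * x); nu /= nu.sum()
D = (nu * np.log(nu / p)).sum()                # relative entropy D(nu* || p)
print(f"beta = {beta:.4f}   nu* = {np.round(nu, 4)}   D(nu*||p) = {D:.4f}")
```

By Sanov's theorem, D(ν*‖p) is exactly the large-deviation rate at which the observed frequency becomes improbable under i.i.d. sampling from p, which is the "infinitesimal probability" role entropy plays in the abstract.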
... Verifying Carleman's condition for U-statistics presents a significant challenge. The inherent complexity in deriving explicit expressions for all moments of a general U-statistic, particularly for higher orders or intricate kernel functions, often renders this condition practically impossible to check [1]. The combinatorial nature of the U-statistic's definition, involving summations over subsets of the sample, combined with the potential for complex dependencies introduced by the kernel, contributes to the intractability of analyzing the growth rate of these moments. ...
Preprint
U-statistics are fundamental tools in statistical estimation, providing unbiased estimators for a wide range of population parameters based on symmetric kernel functions applied to subsets of independent and identically distributed (IID) random variables. A significant challenge arises when proving the convergence of U-statistics to elements of Wiener chaos, particularly due to the difficulty in verifying Carleman's condition, a criterion ensuring that a distribution is uniquely determined by its moments. This condition is often intractable for U-statistics due to the combinatorial complexity of their moment structures. This work explores alternative methodologies to establish the convergence of U-statistics to Wiener chaos without relying on Carleman's condition. We highlight the use of advanced probabilistic tools, including Malliavin calculus, Stein's method, and convergence in total variation, to bypass the need for moment-based uniqueness criteria. Specifically, we discuss the Wiener-Itô chaos expansion as a framework for representing U-statistics in the limit, the application of Stein's method for normal approximation on Wiener chaos, and the phenomenon of "superconvergence" in total variation for sequences within finite Wiener chaoses. Through case studies and examples, we demonstrate how these methods can be applied to specific classes of U-statistics, such as those with product kernels or degenerate structures, to prove convergence to Gaussian, chi-squared, or Hermite polynomial limits. Our findings underscore the power of these alternative approaches in overcoming the limitations of traditional moment-based techniques, offering a robust pathway for analyzing the asymptotic behavior of U-statistics within the Wiener chaos framework.
... Let ρ ≥ 0 denote the mass density of a certain diffusive species. Thermodynamics postulates the existence of a Gibbs free energy density per unit volume G(ρ, T), where T is the absolute temperature; G is minimized at equilibrium [10]. The chemical potential μ is conjugate to the concentration and defined as μ = ∂G/∂ρ (ρ, T) (Equation (1)). ...
Preprint
Full-text available
We formulate a finite-particle method of mass transport that accounts for general mixed boundary conditions. The particle method couples a geometrically-exact treatment of advection; Wasserstein gradient-flow dynamics; and a Kullback-Leibler representation of the entropy. General boundary conditions are enforced by introducing an adsorption/depletion layer at the boundary wherein particles are added or removed as dictated by the boundary conditions. We demonstrate the range and scope of the method through a number of examples of application, including absorption of particles into a sphere and flow through pipes of square and circular cross section, with and without occlusions. In all cases, the solution is observed to converge weakly, or in the sense of local averages.
... AI-based forecasting methods mainly use machine-learning approaches to model the input-output relationship of the system by learning various rules, and use this model for prediction. Early methods mainly include artificial neural networks (ANN) [7] and support vector machines (SVM) [8], etc. In recent years, AI-based forecasting methods have been more widely applied, giving rise to methods such as recurrent neural networks (RNN) [9], long short-term memory networks (LSTM) [10], the k-means clustering algorithm [11], the random forest algorithm (Random Forest, RF) [12], etc. ...
Article
With the continuous growth of renewable energy in the power system, accurate prediction of wind power becomes an important issue in the auxiliary decision-making system of power trading, but the uncertainty of wind power generation brings great challenges to the stable operation of the power system. Accurate wind power prediction is helpful for the efficient development of wind power energy. Aiming at the problem of insufficient data mining in the face of complex wind power data, an ultra-short-term wind power prediction algorithm based on the Gaussian mixture model (GMM), the ridge regression algorithm, and the Light Gradient Boosting Machine (LightGBM) is proposed. The proposed algorithm is specifically designed for application in the auxiliary decision-making system for power trading. The method uses GMM to cluster the features, and then uses the ridge regression algorithm to predict each cluster after clustering. Considering the limitations of a single prediction algorithm in the face of highly volatile wind power data, the prediction values from the ridge regression algorithm and the data after feature-importance screening together constitute the final data set, which is passed to the LightGBM algorithm to obtain the final prediction result. Compared with widely used prediction algorithms such as CNN, SVM, and LightGBM, this method has higher stability and prediction accuracy. By applying this methodology in the auxiliary decision-making system for power trading, the proposed algorithm enables effective short-term wind power generation prediction and provides reliable decision-making support for the electricity market. Its application contributes to the improvement of operational efficiency in power systems and facilitates the sustainable utilization of renewable energy resources.
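A minimal sketch of the pipeline this abstract describes, on synthetic data, is given below; the cluster count, the ridge penalty, and the omitted feature-importance screening and train/test protocol are all placeholders rather than the paper's settings (and the `lightgbm` package is assumed to be installed):

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import Ridge
from lightgbm import LGBMRegressor

# GMM clusters the features; one ridge model is fit per cluster; the ridge
# predictions are appended to the features and passed to LightGBM.
rng = np.random.default_rng(4)
X = rng.normal(size=(5_000, 8))                    # stand-in wind-farm features
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=5_000)

gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
labels = gmm.predict(X)

ridge_pred = np.empty_like(y)
for c in range(gmm.n_components):                  # per-cluster ridge regression
    mask = labels == c
    ridge_pred[mask] = Ridge(alpha=1.0).fit(X[mask], y[mask]).predict(X[mask])

X_aug = np.column_stack([X, ridge_pred])           # ridge output as extra feature
model = LGBMRegressor(n_estimators=200).fit(X_aug, y)
print("in-sample R^2:", round(model.score(X_aug, y), 4))
```

In a real deployment the ridge predictions fed to LightGBM should come from out-of-fold fits; otherwise the stacked feature leaks training information.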
... In classical thermodynamics, Eq. (28) is the statement of the second law for a system in contact with one thermal reservoir [42]. Equation (28) remains valid even under strong system-reservoir coupling [44,45], and even if S does not end in equilibrium [57]. ...
Preprint
Full-text available
How should one define thermodynamic quantities (internal energy, work, heat, etc.) for quantum systems coupled to their environments strongly? We examine three (classically equivalent) definitions of a quantum system's internal energy under strong-coupling conditions. Each internal-energy definition implies a definition of work and a definition of heat. Our study focuses on quenches, common processes in which the Hamiltonian changes abruptly. In these processes, the first law of thermodynamics holds for each set of definitions by construction. However, we prove that only two sets obey the second law. We illustrate our findings using a simple spin model. Our results guide studies of thermodynamic quantities in strongly coupled quantum systems.
... The same holds for other thermodynamic potentials. Using the standard definition of the thermodynamic temperature, see for example Callen (1985), ...
Preprint
Full-text available
We propose a thermodynamically based approach for constructing effective rate-type constitutive relations describing finite deformations of metamaterials. The effective constitutive relations are formulated as second-order in time rate-type Eulerian constitutive relations between only the Cauchy stress tensor, the Hencky strain tensor and objective time derivatives thereof. In particular, there is no need to introduce additional quantities or concepts such as "micro-level deformation", "micromorphic continua", or elastic solids with frequency dependent material properties. Moreover, the linearisation of the proposed fully nonlinear (finite deformations) constitutive relations leads, in Fourier/frequency space, to the same constitutive relations as those commonly used in theories based on the concepts of frequency dependent density and/or stiffness. From this perspective the proposed constitutive relations reproduce the behaviour predicted by the frequency dependent density and/or stiffness models, but yet they work with constant (that is, motion-independent) material properties. This is clearly more convenient from the physical point of view. Furthermore, the linearised version of the proposed constitutive relations leads to governing partial differential equations that are particularly simple both in Fourier space as well as in physical space. Finally, we argue that the proposed fully nonlinear (finite deformations) second-order in time rate-type constitutive relations do not fall into traditional classes of models for elastic solids (hyperelastic solids/Green elastic solids, first-order in time hypoelastic solids), and that the proposed constitutive relations embody a new class of constitutive relations characterising elastic solids.
... where s is the specific entropy and T is the absolute temperature [24]. For the linear thermoelastic material, thermoelastic constitutive equations are used: ...
Article
Full-text available
The selection of refractory bricks significantly impacts the operational performance of brick structures in high-temperature environments. In this study, a coupled thermal stress model of a refractory brick structure was established and validated by means of thermal expansion experiments. This paper innovatively combined the brick number, brick thickness, and brick material to investigate their influence on brick structural performance. The results indicated that the influence of the brick number on the temperature was less significant than that of brick thickness. However, the brick number had a greater effect on vertical displacement and principal compressive stress than brick thickness, with the maximum differences being 342.3% and 28.9%. Compared to brick thickness, brick material had a more significant effect on vertical displacement and principal compressive stress, with the maximum differences being 77.1% and 67.4%. Additionally, the influence of brick material properties on vertical displacement and principal compressive stress was greater than that of the brick number, with the maximum differences being 77.6% and 65%. Therefore, when selecting refractory bricks, it is advisable to consider the brick material first, the brick number second, and the brick thickness last. This study offers theoretical guidance for refractory brick structure design and material selection in high-temperature applications.
... The seminal paper by Curzon and Ahlborn [1], published in 1975, is a landmark work in finite-time thermodynamics [2]. By analyzing an endoreversible Carnot engine with finite temperature differences between the working substance and the reservoirs, the authors optimized the power output of the engine and obtained the efficiency at maximum power (EMP), which is a more practical efficiency bound than the Carnot efficiency [3]. The EMP formula they derived, ...
Article
Full-text available
Curzon and Ahlborn's 1975 paper, a pioneering work that inspired the birth of the field of finite-time thermodynamics, unveiled the efficiency at maximum power (EMP) of the endoreversible Carnot heat engine, now commonly referred to as the Curzon-Ahlborn (CA) engine. Historically, despite the significance of the CA engine, similar findings had emerged at an earlier time, such as the Yvon engine proposed by J. Yvon in 1955, which shares the exact same EMP, that is, the CA efficiency η_CA. However, the special setup of the Yvon engine has circumscribed its broader influence. This paper extends the Yvon engine model to achieve a level of generality comparable to that of the CA engine. With the power expression of the extended Yvon engine, we directly explain the universality that η_CA is independent of the heat transfer coefficients between the working substance and the heat reservoirs. A rigorous comparison reveals that the extended Yvon engine and the CA engine represent the steady-state and cyclic forms of the endoreversible Carnot heat engine, respectively, and are equivalent.
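The shared EMP at the centre of this comparison is the Curzon-Ahlborn (equivalently, Yvon) efficiency, which depends only on the two reservoir temperatures:

\[ \eta_{CA} = 1 - \sqrt{\frac{T_c}{T_h}}, \]

and, as the abstract stresses, not on the heat transfer coefficients between the working substance and the reservoirs.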
... Other textbooks make similar claims about the temporal nature of equilibrium. Buchdahl (1966) defines equilibrium via staticity, i.e., lack of change over relevant timescales. Landau and Lifshitz (1980) note that equilibrium states are necessarily arrived at after some relaxation time. ...
Article
Full-text available
Preparing general relativity for quantization in the Hamiltonian approach leads to the ‘problem of time’, rendering the world fundamentally timeless. One proposed solution is the ‘thermal time hypothesis’, which defines time in terms of states representing systems in thermal equilibrium. On this view, time is supposed to emerge thermodynamically even in a fundamentally timeless context. Here, I develop the worry that the thermal time hypothesis requires dynamics—and hence time—to get off the ground, thereby running into worries of circularity.
... The evaluation of thermodynamic potentials such as the entropy or free energy is key to understanding the equilibrium properties of physical systems [1]. In real-sized classical problems, computer simulations based on Molecular Dynamics or Monte Carlo methods cannot generically access them, mainly because of the size of the space of states to sample, which grows exponentially with the number of particles. ...
Article
Full-text available
Probabilistic models in physics often require the evaluation of normalized Boltzmann factors, which in turn implies the computation of the partition function Z. Obtaining the exact value of Z, though, becomes a forbiddingly expensive task as the system size increases. A possible way to tackle this problem is to use the Annealed Importance Sampling (AIS) algorithm, which provides a tool to stochastically estimate the partition function of the system. The nature of AIS allows for an efficient and parallel implementation in Restricted Boltzmann Machines (RBMs). In this work, we evaluate the partition function of magnetic spin and spin-like systems mapped into RBMs using AIS. So far, the standard application of the AIS algorithm starts from the uniform probability distribution and uses a large number of Monte Carlo steps to obtain reliable estimations of Z following an annealing process. We show that both the quality of the estimation and the cost of the computation can be significantly improved by using a properly selected mean-field starting probability distribution. We perform a systematic analysis of AIS in both small- and large-sized problems, and compare the results to exact values in problems where these are known. As a result, we propose two successful strategies that work well in all the problems analyzed. We conclude that these are good starting points to estimate the partition function with AIS with a relatively low computational cost. The procedures presented are not linked to any learning process, and therefore do not require a priori knowledge of a training dataset.
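The logic of AIS is compact enough to show in full on a system whose partition function is exactly known; the sketch below anneals from the uniform (β = 0) distribution, i.e., the standard starting point the paper improves upon, for a small open 1D Ising chain with Z = 2(2 cosh βJ)^(N−1):

```python
import numpy as np

# Annealed Importance Sampling for log Z of an open 1D Ising chain,
# annealing from beta = 0 (uniform, Z_0 = 2^n) to the target beta.
rng = np.random.default_rng(5)
n, J, beta, K, n_runs = 10, 1.0, 1.0, 200, 500

def energy(s):
    """Open-chain Ising energy for each configuration row of s."""
    return -J * np.sum(s[:, :-1] * s[:, 1:], axis=1)

betas = np.linspace(0.0, beta, K + 1)
s = rng.choice([-1, 1], size=(n_runs, n))          # exact samples at beta = 0
log_w = np.zeros(n_runs)

for k in range(1, K + 1):
    log_w += -(betas[k] - betas[k - 1]) * energy(s)    # AIS weight update
    for i in range(n):                                 # one Metropolis sweep
        left = s[:, i - 1] if i > 0 else 0.0
        right = s[:, i + 1] if i < n - 1 else 0.0
        dE = 2.0 * J * s[:, i] * (left + right)        # cost of flipping spin i
        flip = rng.random(n_runs) < np.exp(-betas[k] * dE)
        s[:, i] = np.where(flip, -s[:, i], s[:, i])

m = log_w.max()                                        # stable log-mean-exp
log_Z = n * np.log(2.0) + m + np.log(np.mean(np.exp(log_w - m)))
exact = np.log(2.0) + (n - 1) * np.log(2.0 * np.cosh(beta * J))
print(f"AIS log Z = {log_Z:.4f}   exact log Z = {exact:.4f}")
```

Replacing the uniform starting distribution with a mean-field product distribution, as the paper proposes, amounts to changing the β = 0 end of the annealing path and the analytic Z_0 accordingly.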
... We express the efficiencies in terms of the characteristic thermodynamic variables of the engines that are kept fixed along the thermodynamic cycles. For the ideal gas our expressions for the efficiencies are consistent with the literature, e.g., [4,44-47]. We plotted the PV diagrams for all heat engines in Figures 3 (for a holographic CFT) and 4 (for a monatomic ideal gas) and the TS diagrams in Figures 5 (for a holographic CFT) and 6 (for a monatomic ideal gas). ...
Preprint
Full-text available
According to holography, a black hole is dual to a thermal state in a strongly coupled quantum system. The principal example of holography is the Anti-de Sitter/Conformal Field Theory (AdS/CFT) correspondence. We construct reversible heat engines where the working substance consists of a static thermal equilibrium state of a CFT. For thermal states dual to an asymptotically AdS black hole, this yields a novel realization of Johnson's holographic heat engines. We compute the efficiency for a number of idealized heat engines, such as the Carnot, Brayton, Otto, Diesel, and Stirling cycles. The efficiency of most heat engines can be derived from the CFT equation of state, which follows from scale invariance, and we compare them to the efficiencies for an ideal gas. However, the Stirling efficiency for a generic CFT is uniquely determined in terms of its characteristic temperature and volume only in the high-temperature or large-volume regime. We derive an exact expression for the Stirling efficiency for CFT states dual to AdS-Schwarzschild black holes, and compare the subleading corrections in the high-temperature regime with those in a generic CFT.
... In thermodynamics, entropy generation is directly related to the irreversibility of a process, such as plastic deformation or fracture. The total entropy change ΔS in a system is given by the relationship (Callen, 1985): ...
Article
Grain threshing is aimed at separating the grain from the inedible chaff. However, mechanical forces often damage grains, impacting their quality, market value, and germination ability. This comprehensive review examines theories and models developed to study and predict grain damage during threshing. These include contact theory, fracture mechanics models, discrete element modeling, and finite element analysis. This review delves into how these theories elucidate the influence of grain characteristics, such as moisture content and kernel size, on susceptibility to damage. It assesses how different machine parameters, like threshing speed, drum design, and concave settings, contribute to damage such as breakage, fissures, and internal cracks. We delve deeply into utilizing contact theory to estimate stress distribution when metal and grains collide, employing fracture mechanics to understand crack initiation and propagation, and utilizing DEM and FEA to simulate how grains move within the thresher. By synthesizing knowledge from these modeling approaches, this review offers an understanding of the multifaceted nature of grain damage during threshing. It emphasizes the significance of tuning settings and implementing suitable pre- and post-threshing techniques to reduce waste and maintain top-notch grain quality for eating and seeding. This in-depth evaluation offers insights for scientists, engineers, and farming experts dedicated to enhancing the productivity and eco-friendliness of grain cultivation methods.
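The contact-theory estimate mentioned here is typically Hertzian: for an elastic sphere of effective radius R pressed against a flat surface with force F, the contact radius and peak pressure are

\[ a = \left(\frac{3FR}{4E^{*}}\right)^{1/3}, \qquad p_0 = \frac{3F}{2\pi a^{2}}, \qquad \frac{1}{E^{*}} = \frac{1-\nu_1^{2}}{E_1} + \frac{1-\nu_2^{2}}{E_2}, \]

with E_i and ν_i the Young's moduli and Poisson ratios of grain and machine surface. This is the standard Hertz result, offered here for orientation; the review discusses how such stresses map onto fissures and breakage.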
... With reference to the subject matter of this discussion, the only discriminating function of interest performed by the total-entropy criterion is the one distinguishing between the reversible and irreversible versions of spontaneous chemical reactions. Thus, while an irreversible spontaneous chemical reaction evolves along a total-entropy-increasing path that culminates in the total-entropy maximum recognized as its state of equilibrium, its reversible version evolves along a constant total-entropy path, a condition which, according to the previous quote of Pitzer & Brewer, makes this path, in the words of Callen [13], "... a dense succession of equilibrium states," and in those of Schmidt [14], "... a series of states of equilibrium which follow one another ..." ...
Article
Full-text available
This paper, an addendum to "Dialectical Thermodynamics' solution to the conceptual imbroglio that is the reversible path", this journal, 10, 775-799, was written in response to the requests of several readers to provide further evidence of the said "imbroglio". The evidence here presented relates to the incompatibility existing between the total-entropy and the Gibbs energy prescriptions for the reversible path. The previously published proof of the negentropic nature of the transformation of heat into work is here included to validate our conclusions about the Gibbs energy perspective.
... In S_H and T_H, the subscript H indicates thermodynamic quantities on the horizon, as examined below. The Helmholtz free energy F is defined as F = E − TS [128] and, therefore, dF is given by ...
Article
Full-text available
Horizon thermodynamics and cosmological equations in standard cosmology provide a holographic-like connection between thermodynamic quantities on a cosmological horizon and in the bulk. It is expected that this connection can be modified as a holographic-like thermodynamic relation for dissipative and non-dissipative universes whose Hubble volume V varies with time t. To clarify such a modified thermodynamic relation, the present study applies a general formulation for cosmological equations in a flat Friedmann-Lemaître-Robertson-Walker (FLRW) universe to the first law of thermodynamics, using the Bekenstein-Hawking entropy S_BH and a dynamical Kodama-Hayward temperature T_KH. For the general formulation, both an effective pressure p_e of cosmological fluids for dissipative universes (e.g., bulk viscous cosmology) and an extra driving term f_Λ(t) for non-dissipative universes (e.g., time-varying Λ(t) cosmology) are phenomenologically assumed. A modified thermodynamic relation is derived by applying the general formulation to the first law, which includes both p_e and an additional time-derivative term ḟ_Λ(t), related to a non-zero term of the general continuity equation. When f_Λ(t) is constant, the modified thermodynamic relation is equivalent to the formulation of the first law in standard cosmology. One side of this modified relation describes thermodynamic quantities in the bulk and can be divided into two time-derivative terms, namely ρ̇ and V̇ terms, where ρ is the mass density of cosmological fluids. Using the Gibbons-Hawking temperature T_GH, the other side of this relation, T_KH Ṡ_BH, can be formulated as the sum of T_GH Ṡ_BH and [(T_KH/T_GH) − 1] T_GH Ṡ_BH, which are equivalent to the ρ̇ and V̇ terms, respectively, with the magnitude of the V̇ term being proportional to the square of the ρ̇ term. In addition, the modified thermodynamic relation for constant f_Λ(t) is examined by applying the equipartition law of energy on the horizon. This modified thermodynamic relation reduces to a kind of extended holographic-like connection when a constant T_KH universe (whose Hubble volume varies with time) is considered. The evolution of thermodynamic quantities is also discussed using a constant T_KH model, extending a previous analysis (Komatsu in Phys Rev D 108:083515, 2023).
... Physical systems undergoing irreversible processes naturally produce entropy and dissipate energy [1]. In statistical physics, there have been significant efforts to establish theoretical relationships between the entropy production and the properties of atomistic molecular dynamics. ...
Preprint
Full-text available
Some microscopic dynamics are also macroscopically irreversible, dissipating energy and producing entropy. For many-particle systems interacting with deterministic thermostats, the rate of thermodynamic entropy dissipated to the environment is the average rate at which phase space contracts. Here, we use this identity and the properties of a classical density matrix to derive upper and lower bounds on the entropy flow rate in terms of the spectral properties of the local stability matrix. These bounds are an extension of more fundamental bounds on the Lyapunov exponents and phase space contraction rate of continuous-time dynamical systems. They are maximal and minimal rates of entropy production, heat transfer, and transport coefficients set by the underlying dynamics of the system and deterministic thermostat. Because these limits on the macroscopic dissipation derive from the density matrix and the local stability matrix, they are numerically computable from the molecular dynamics. As an illustration, we show how these bounds apply to the electrical conductivity of a system of charged particles subject to an electric field.
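As a hedged illustration of the identity the abstract invokes (entropy flow as average phase-space contraction, read off the trace of the local stability matrix), here is a minimal Python sketch for a Nosé–Hoover thermostatted harmonic oscillator; the system, parameters, and step size are illustrative choices, not the paper's:

```python
import numpy as np

# Nose-Hoover thermostatted harmonic oscillator (illustrative test system):
#   x' = p,  p' = -x - zeta*p,  zeta' = (p**2 - T)/Q
# The local stability matrix J_ij = d(flow_i)/d(state_j) has trace -zeta,
# which is the instantaneous phase-space contraction rate.
T, Q = 1.0, 1.0

def flow(s):
    x, p, z = s
    return np.array([p, -x - z * p, (p ** 2 - T) / Q])

def contraction_rate(s):
    # tr J = 0 + (-zeta) + 0 = -zeta for this flow
    return -s[2]

def rk4_step(s, dt):
    k1 = flow(s)
    k2 = flow(s + 0.5 * dt * k1)
    k3 = flow(s + 0.5 * dt * k2)
    k4 = flow(s + dt * k3)
    return s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

dt, n = 5e-3, 200_000
s = np.array([1.0, 0.0, 0.1])
mean_contraction = 0.0
for _ in range(n):
    s = rk4_step(s, dt)
    mean_contraction += contraction_rate(s) / n

# Entropy flow rate to the thermostat (k_B = 1) is minus the mean
# contraction rate; it averages to ~0 here because this system is at
# equilibrium, and becomes positive for driven (field-coupled) systems.
print("mean contraction rate:", mean_contraction)
print("entropy flow rate:    ", -mean_contraction)
```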
... In thermodynamics, entropy generation is directly related to the irreversibility of a process, such as plastic deformation or fracture. The total entropy change ΔS in a system is given by the relationship (Callen, 1985): ...
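The elided relationship is presumably the standard decomposition of the entropy change into an exchange term and a non-negative production term,

$$\Delta S = \Delta_e S + \Delta_i S, \qquad \Delta_i S \ge 0,$$

with $\Delta_i S$ capturing irreversible contributions such as plastic deformation or fracture.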
Article
Full-text available
Grain threshing is aimed at separating the grain from the inedible chaff. However, mechanical forces often damage grains, impacting their quality, market value, and germination ability. This comprehensive review examines theories and models developed to study and predict grain damage during threshing. These include contact theory, fracture mechanics models, discrete element modeling (DEM), and finite element analysis (FEA). This review delves into how these theories elucidate the influence of grain characteristics, such as moisture content and kernel size, on susceptibility to damage. It assesses how machine parameters such as threshing speed, drum design, and concave settings contribute to damage such as breakage, fissures, and internal cracks. We examine in depth the use of contact theory to estimate stress distributions when grains collide with machine surfaces, fracture mechanics to understand crack initiation and propagation, and DEM and FEA to simulate how grains move within the thresher. By synthesizing knowledge from these modeling approaches, this review offers an understanding of the multifaceted nature of grain damage during threshing and emphasizes the significance of tuning machine settings and implementing suitable pre- and post-threshing techniques to reduce waste and maintain high grain quality for consumption and seeding. This in-depth evaluation offers insights for scientists, engineers, and farming experts dedicated to enhancing the productivity and sustainability of grain cultivation methods.
... Considering the analogy between the negativity of the government's gain and the negativity of the work done by a system in an isothermal environment, F(·) is the economic counterpart of the Helmholtz free energy. The Helmholtz free energy is also a convex function that determines the direction of change in physical systems (Callen, 1985). Recently, the behavior of stochastic systems has been described by information thermodynamics (Ito and Sagawa, 2013). ...
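The analogy rests on the maximum-work theorem: for a system coupled to an isothermal environment, the extractable work is bounded by the decrease in Helmholtz free energy,

$$W \le -\Delta F, \qquad F = E - TS,$$

so a gain that is always negative mirrors work done against this bound.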
Article
Full-text available
This study examines an optimal agent producing a consumption good and depreciating capital, and trading capital without depreciation. Assuming that the prices of the depreciating capital are fixed and that the future prices of undepreciated capital are announced, this study demonstrates that the agent never loses profit on trading undepreciated capital if the agent's state converges to the initial state. As a corollary to this result, we find a scalar potential that predicts the direction of change of agents trading the undepreciated capital exclusively among themselves. The similarity between the scalar potential and the Helmholtz free energy suggests that stochastic economic models could be characterized by a framework similar to information thermodynamics.
... where A represents the set of fluctuations of residue A, p(A) is the probability, and H(A) quantifies the degree of uncertainty associated with those fluctuations. For the correspondence between the thermodynamic entropy and the Shannon entropy, please see Chapter 17 of the book by Callen (Callen, 1985). ...
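The elided definition is presumably the Shannon entropy of the fluctuation distribution; its formal correspondence with thermodynamic entropy (the subject of the cited Callen chapter) reads

$$H(A) = -\sum_{a \in A} p(a)\,\log p(a), \qquad S = -k_B \sum_i p_i \ln p_i.$$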
Preprint
Full-text available
This study explores the relationship between residue fluctuations and molecular communication in proteins, emphasizing the role of these dynamics in allosteric regulation. We employ computational tools, including the Gaussian Network Model, mutual information, and interaction information, to analyze how stochastic interactions among residues contribute to functional interactions while also introducing noise. Our approach is based on the postulate that residues experience continuous stochastic bombardment from impulses generated by their neighbors, forming a complex network characterized by small-world scaling topology. By mapping these interactions through the Kirchhoff matrix framework, we demonstrate how conserved correlations enhance signaling pathways and provide stability against noise-like fluctuations. Notably, we highlight the importance of selecting relevant eigenvalues to optimize the signal-to-noise ratio in our analyses, a topic that has yet to be thoroughly investigated in the context of residue fluctuations. This work underscores the significance of viewing proteins as adaptive information processing systems, and emphasizes the fundamental mechanisms of biological information processing. The basic idea of this paper is the following: given two interacting residues on an allosteric path, what are the contributions of the remaining residues to this interaction? This naturally leads to the concepts of synergy, redundancy, and noise in proteins, which we analyze in detail for three proteins: CheY, Tyrosine Phosphatase, and beta-Lactoglobulin.
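As a hedged sketch of the pipeline described above (Gaussian Network Model fluctuations feeding a mutual-information estimate via the Kirchhoff matrix), the following Python uses random stand-in coordinates and a typical 7 Å cutoff, both hypothetical choices rather than the authors' data:

```python
import numpy as np

# Hypothetical stand-in coordinates for n coarse-grained residues.
rng = np.random.default_rng(0)
n = 50
coords = 3.0 * rng.normal(size=(n, 3))                # stand-in C-alpha positions
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
contact = (dist < 7.0) & ~np.eye(n, dtype=bool)       # typical GNM cutoff

# Kirchhoff (connectivity) matrix: -1 per off-diagonal contact,
# diagonal = residue degree.
kirchhoff = -contact.astype(float)
np.fill_diagonal(kirchhoff, contact.sum(axis=1))

# Residue-fluctuation covariance is proportional to the pseudo-inverse
# of the Kirchhoff matrix; normalize to correlations.
cov = np.linalg.pinv(kirchhoff)
sd = np.sqrt(np.diag(cov))
corr = cov / np.outer(sd, sd)

# For jointly Gaussian fluctuations, mutual information between residues
# i and j is I = -(1/2) ln(1 - rho_ij**2), in nats.
i, j = 3, 17
mi = -0.5 * np.log(1.0 - corr[i, j] ** 2)
print(f"corr[{i},{j}] = {corr[i, j]:+.3f} -> MI = {mi:.4f} nats")
```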
... In this sense, the present approach brings together the results from different models under one general principle consistent with LIT. Now, according to LIT [31,32], the rate of entropy generation is $\dot{S} = \sum_{\alpha} q_{\alpha} F_{\alpha}$, the sum of products of each flux $q_{\alpha}$ and its associated thermodynamic force, or affinity $F_{\alpha}$. In the simple case of a heat flux $q$ between two heat reservoirs, the corresponding thermodynamic force may be defined as: ...
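In the standard LIT convention, the elided force for heat exchange between reservoirs at temperatures $T_h > T_c$ is

$$F = \frac{1}{T_c} - \frac{1}{T_h}, \qquad \dot{S} = q\left(\frac{1}{T_c} - \frac{1}{T_h}\right) \ge 0.$$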
Preprint
There is an intense effort to understand the universal properties of finite-time models of thermal machines at optimal performance, such as efficiency at maximum power, coefficient of performance at maximum cooling power, and other such criteria. In this letter, a global principle consistent with linear irreversible thermodynamics is proposed for the whole cycle, without considering details of irreversibilities in the individual steps of the cycle. This helps to express the total duration of the cycle as $\tau \propto \bar{Q}^2/\Delta_{\rm tot} S$, where $\bar{Q}$ models the effective heat transferred through the machine during the cycle, and $\Delta_{\rm tot} S$ is the total entropy generated. By taking $\bar{Q}$ in the form of simple algebraic means (such as arithmetic and geometric means) over the heats exchanged by the reservoirs, the present approach is able to predict various standard expressions for figures of merit at optimal performance, as well as the bounds respected by them. It simplifies the optimization procedure to a one-parameter optimization, and provides a fresh perspective on the issue of universality at optimal performance for small differences in reservoir temperatures. As an illustration, we compare the performance of a partially optimized four-step endoreversible cycle with the present approach.
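Only as a schematic reading aid, and not a result stated in the abstract: with $\tau \propto \bar{Q}^2/\Delta_{\rm tot} S$, the cycle's power output is

$$P = \frac{Q_h - Q_c}{\tau} \propto \frac{(Q_h - Q_c)\,\Delta_{\rm tot} S}{\bar{Q}^2},$$

and fixing $\bar{Q}$ to a chosen mean of $Q_h$ and $Q_c$ leaves a single parameter over which to maximize.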
... As is shown in Fig. 8, it is always negative. For a thermodynamic system to be in stable equilibrium, it is required that $C_P \ge C_V \ge 0$ [45]. Therefore, we conclude that the uncharged BHQ is thermodynamically unstable. ...
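The quoted criterion traces back to the standard stability identity (as in Callen's treatment), in which the two heat capacities differ by a manifestly non-negative term,

$$C_P - C_V = \frac{T V \alpha^2}{\kappa_T} \ge 0,$$

where $\alpha$ is the thermal expansion coefficient and $\kappa_T > 0$ the isothermal compressibility of a stable system.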
Preprint
We study the thermodynamic stabilities of uncharged and charged black holes surrounded by quintessence (BHQ) by means of effective thermodynamic quantities. When the state parameter of quintessence $\omega_q$ is appropriately chosen, the structures of BHQ resemble those of black holes in de Sitter space. Constructing the effective first law of thermodynamics in two different ways, we can derive the effective thermodynamic quantities of BHQ. Notably, these effective thermodynamic quantities also satisfy Smarr-like formulae. It is found that the uncharged BHQ is always thermodynamically unstable due to negative heat capacity, while for the charged BHQ there are phase transitions of the second order. We also show that the thermodynamic properties and critical behaviors of BHQ differ considerably between the two approaches we employed.
... The Dieterici equation of state is given by (Callen, 1985): ...
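The elided equation, in its standard form (with $a$ and $b$ material constants and $v$ the molar volume), is

$$P = \frac{RT}{v - b}\,\exp\!\left(-\frac{a}{R T v}\right).$$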
Preprint
The nature of dark matter (DM) and dark energy (DE) which is supposed to constitute about 95% of the energy density of the universe is still a mystery. There is no shortage of ideas regarding the nature of both. While some candidates for DM are clearly ruled out, there is still a plethora of viable particles that fit the bill. In the context of DE, while current observations favour a cosmological constant picture, there are other competing models that are equally likely. This paper reviews the different possible candidates for DM including exotic candidates and their possible detection. This review also covers the different models for DE and the possibility of unified models for DM and DE. Keeping in mind the negative results in some of the ongoing DM detection experiments, here we also review the possible alternatives to both DM and DE (such as MOND and modifications of general relativity) and possible means of observationally distinguishing between the alternatives.
... It has been shown that in out-of-equilibrium processes there are always two important independent quantities, namely the spectral function and the statistical propagator; this is related to the fact that propagators depend not only on the time difference between two events but also on the "center of mass" time. In equilibrium, these two quantities are related via the Kubo-Martin-Schwinger relation [3], or the Fluctuation-Dissipation Theorem [4,5]; these conditions do not apply in an out-of-equilibrium scenario. ...
Preprint
We show how to compute the spectral function for a scalar theory in two different scenarios: one which disregards back-reaction, i.e., the response of the environment to the external particle, and one where back-reaction is considered. The calculation is performed using the Kadanoff-Baym equation through the Keldysh formalism. When back-reaction is neglected, the spectral function equals the equilibrium one, which can be represented as a Breit-Wigner distribution. When back-reaction is introduced, we observe a damping in the spectral function of the thermal bath. Such behavior modifies the damping rate for particles created within the bath, implying phenomenological consequences right after the Big Bang, when the primordial bath was created.
... The derivation of the temperature in thermodynamics is related to the maximum of the total entropy of a system (under various constraints). In this way its reciprocal, $1/T$, is an integrating factor for the heat, yielding a total differential of the entropy [42,43]. Here we follow the same strategy, considering a vectorial integrating factor $A_a$: ...
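Concretely, the classical statement being generalized is that division by $T$ turns the inexact heat differential into an exact one:

$$dS = \frac{\delta Q_{\rm rev}}{T}.$$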
Preprint
Relativistic thermodynamics is constructed from the point of view of special relativistic hydrodynamics. A relativistic four-current for heat and a general treatment of thermal equilibrium between moving bodies is presented. The different temperature transformation formulas of Planck and Einstein, Ott, Landsberg and Doppler appear upon particular assumptions about internal heat current.
... The probability $\Pi(x)$ is dominated by the configurations $x = \{\rho_q\}$ that maximize the sum $\sum_q f(\rho_q)$ under the constraint of a fixed $\rho_0 = \frac{1}{Q}\sum_{q=1}^{Q} \rho_q$. To perform this maximization procedure, we follow standard physics methods used in the study of phase transitions (like liquid-vapor coexistence [13]), which can be summarized as follows. If $f(\rho)$ coincides with its concave hull at a given density $\rho_0$, then the state of the city is homogeneous, and all blocks have a density $\rho_0$. ...
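A hedged numerical sketch of the concave-hull test described in this snippet, with a hypothetical block-utility density standing in for the model's actual $f(\rho)$:

```python
import numpy as np

def upper_concave_hull(x, y):
    """Upper convex envelope (concave hull) via Andrew's monotone chain."""
    pts = sorted(zip(x, y))
    hull = []
    for px, py in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # pop the middle point if it lies on or below the chord
            if (x2 - x1) * (py - y1) - (px - x1) * (y2 - y1) >= 0:
                hull.pop()
            else:
                break
        hull.append((px, py))
    return np.array(hull)

rho = np.linspace(0.0, 1.0, 401)
f = 0.5 * rho + 0.5 * rho ** 2 - rho ** 3   # hypothetical f: convex below rho = 1/6
hull = upper_concave_hull(rho, f)
f_hull = np.interp(rho, hull[:, 0], hull[:, 1])

# Where f sits strictly below its concave hull, a uniform state at rho is
# unstable and separates into blocks at the hull's contact densities
# (the analogue of liquid-vapor coexistence).
separated = f_hull - f > 1e-9
print("phase separation for rho in "
      f"[{rho[separated].min():.3f}, {rho[separated].max():.3f}]")
```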
Preprint
Linking the microscopic and macroscopic behavior is at the heart of many natural and social sciences. This apparent similarity conceals essential differences across disciplines: while physical particles are assumed to optimize the global energy, economic agents maximize their own utility. Here, we solve exactly a Schelling-like segregation model, which interpolates continuously between cooperative and individual dynamics. We show that increasing the degree of cooperativity induces a qualitative transition from a segregated phase of low utility towards a mixed phase of high utility. By introducing a simple function which links the individual and global levels, we pave the way to a rigorous approach of a wide class of systems, where dynamics is governed by individual strategies.
Preprint
In this paper, we explore extreme distance problems in classical point processes, focusing on the asymptotic behavior of the maximum and minimum distances between points in a point process. We investigate several well-known point processes, such as Poisson processes and renewal processes, and analyze the distribution of extreme distances. Our results provide insights into the behavior of point processes in various limiting regimes, with applications to fields such as spatial statistics and random geometry.
Chapter
Full-text available
In the two preceding chapters, the energy densities of elastic deformation and of the electrostatic field were derived as two physical quantities of central importance for the present chapter. Closely tied to the physical description of elastic dielectrics with piezoelectric properties is a deeper understanding of the coupled thermodynamic, electrical, and mechanical behavior of matter at the macroscopic level. Beginning with an introduction of the most important basic thermodynamic concepts, the focus then turns to the first law of thermodynamics, which serves to examine the electromechanical behavior of a solid from a different perspective within the context of thermodynamics. This viewpoint ultimately makes it possible to derive the thermodynamic potentials appropriate to the application at hand, the resulting piezoelectric equations of state, and the associated material coefficients. Based on the matrices of material coefficients of an arbitrary anisotropic material (triclinic, no center of symmetry), the corresponding matrices of material coefficients for the lead zirconate-lead titanate system PZT are presented at the end of the chapter.
Article
Full-text available
Starting from Ioffe’s description of a thermoelectric converter, we recover the optimal working points of conversion: the point of maximum efficiency and the one of maximal power. Inspired by biological converters’ optimization, we compute a third optimal point associated with cost of energy (COE). This alternative cost function corresponds to the amount of heat exchanged with the cold reservoir per unit of electric current used. This work emphasizes the symmetry between the efficiency and performance coefficient of the electric generator and heat pump modes. It also reveals the relation between their optimal working points.
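For context, and hedged to the standard textbook treatment rather than this paper's notation: Ioffe's description characterizes the converter by the dimensionless figure of merit, and the maximum-efficiency working point takes the familiar form

$$ZT = \frac{\sigma S^2 T}{\kappa}, \qquad \eta_{\max} = \eta_C\,\frac{\sqrt{1 + ZT} - 1}{\sqrt{1 + ZT} + T_c/T_h},$$

where $S$ is the Seebeck coefficient, $\sigma$ and $\kappa$ the electrical and thermal conductivities, and $\eta_C$ the Carnot efficiency.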
Book
Full-text available
This work introduces the Core–Ring Photon Model (CRPM), a novel framework that reinterprets the photon as a composite system consisting of a mass-bearing core and an orbiting massless charge. By integrating classical electromagnetism, quantum mechanics, and general relativity, the CRPM naturally derives fundamental relations such as E = h f and p = h/λ, while addressing experimental anomalies in polarization, scattering, and gravitational interactions. Extensions of the model incorporate thermodynamic and advanced algebraic concepts, laying a robust foundation for a unified theory that spans multiple disciplines.
Article
Full-text available
Trans-membrane water transport and co-transport is ubiquitous in cell biology. Integrated over all the cell’s H2O transporters and co-transporters, the rate of homeostatic, bidirectional trans-cytolemmal water “exchange” is synchronized with the metabolic rate of the crucial Na⁺,K⁺-ATPase (NKA) enzyme: the active trans-membrane water cycling (AWC) phenomenon. Is AWC futile, or is it consequential? Conservatively representative literature metabolomic and proteinomic results enable comprehensive free energy (ΔG) calculations for the many transport reactions with known water stoichiometries. Including established intracellular pressure (Pi) magnitudes, these reveal an outward trans-membrane H2O barochemical ΔG gradient comparable to that of the well-known inward Na⁺ electrochemical ΔG gradient. For most co-influxers, these two gradients are finely balanced to maintain intracellular metabolite concentration values near their consuming enzyme Michaelis constants. Our analyses include glucose, glutamate⁻, gamma-aminobutyric acid (GABA), and lactate⁻ transporters. 2%–4% Pi alterations can lead to disastrous metabolite concentrations. For the neurotransmitters glutamate⁻ and GABA, very small astrocytic Pi changes can allow/disallow synaptic transmission. Unlike the Na⁺ and K⁺ electrochemical steady-states, the H2O barochemical steady-state is in (or near) chemical equilibrium. The analyses show why the presence of aquaporins (AQPs) does not dissipate trans-membrane pressure gradients. A feedback loop inherent in the opposing Na⁺ electrochemical and H2O barochemical gradients regulates AQP-catalyzed water flux as integral to AWC. A re-consideration of the underlying nature of Pi is also necessary. AWC is not a futile cycle but is inherent to the cell’s “NKA system”—a new, fundamental aspect of biology. Metabolic energy is stored in the trans-membrane water barochemical gradient.
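The free-energy bookkeeping sketched above combines, per transported species, the standard chemical, electrical, and pressure (barochemical) contributions; in the usual textbook form (assumed here, not quoted from the paper),

$$\Delta G = RT \ln\frac{c_{\rm in}}{c_{\rm out}} + zF\,\Delta\psi + \bar{V}\,\Delta P,$$

with $z$ the charge, $F$ Faraday's constant, $\Delta\psi$ the membrane potential, $\bar{V}$ the partial molar volume, and $\Delta P$ the trans-membrane pressure difference.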
Chapter
Aluminum matrix composites (AMCs) are widely used in the aerospace, automotive, and medical industries due to their low cost, high mechanical properties, and excellent design performance. While the significance of research into the primary fabrication processes and property characterization of these materials is well recognized, research into their secondary processing technologies—such as machining, welding/joining, and plastic forging—has also become critically important and requires urgent attention. This chapter addresses techniques for welding AMCs, specifically SiCp/A356, with the aim of pinpointing key information related to the characteristics of welded joints in AMCs.
Preprint
We present a class of relativistic fluid models for cold and dense matter with bulk viscosity, whose equilibrium equation of state is polytropic. These models reduce to Israel-Stewart theory for small values of the viscous stress $\Pi$. However, when $\Pi$ becomes comparable to the equilibrium pressure P, the evolution equations "adjust" to prevent the onset of far-from-equilibrium pathologies that would otherwise plague Israel-Stewart. Specifically, the equations of motion remain symmetric hyperbolic and causal at all times along any continuously differentiable flow, and across the whole thermodynamic state space. This means that, no matter how fast the fluid expands or contracts, the hydrodynamic equations are always well-behaved (away from singularities). The second law of thermodynamics is enforced exactly. Near equilibrium, these models can accommodate an arbitrarily complicated dependence of the bulk viscosity coefficient $\zeta$ on both density and temperature.
Chapter
Supramolecular structures encompass molecular assemblies without formation of covalent bonds. The assemblies are held together by hydrogen bonding or by electrostatic forces. These weak binding forces enable incessant formation and dissociation of the supramolecular complex, implying that the supramolecular systems are dynamic even after they achieve equilibrium. Formation and dissociation of supramolecular complexes follow the general laws of kinetics and equilibrium for chemical reactions, which are explained in this chapter.
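For the simplest bimolecular association $A + B \rightleftharpoons AB$, the general laws referred to reduce to

$$\frac{d[AB]}{dt} = k_{\rm on}[A][B] - k_{\rm off}[AB], \qquad K_{\rm eq} = \frac{[AB]}{[A][B]} = \frac{k_{\rm on}}{k_{\rm off}},$$

so at equilibrium formation and dissociation continue at equal rates, which is why supramolecular assemblies remain dynamic.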