Nonequilibrium Statistical Physics - Science topic

Explore the latest questions and answers in Nonequilibrium Statistical Physics, and find Nonequilibrium Statistical Physics experts.
Questions related to Nonequilibrium Statistical Physics
  • asked a question related to Nonequilibrium Statistical Physics
Question
45 answers
Dear RG community members, this pedagogical thread concerns what is perhaps the most difficult subject among the fields physics uses to describe nature: physical kinetics (PK). As a subject, physical kinetics is defined as a “method to study physical systems involving a huge number of particles out of equilibrium”.
The key role is given by two physical quantities:
  • The distribution function f(r, p, t), where r is the position vector, p the linear momentum, and t the time; f describes the particles of an ensemble.
  • The collision or scattering term W(p, p′), which gives the probability of a particle changing its linear momentum from the value p to the value p′ during a collision.
If the identity df(r, p, t)/dt = 0 is satisfied for the distribution function, then we can directly link PK to the Liouville equation, in the case where the distribution function does not depend on time explicitly. Physics students are tested on this at the end of an advanced course in classical mechanics, when reading about the Poisson brackets.
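For reference (my addition, just the standard Poisson-bracket form of the statement above): for a Hamiltonian H, the condition df/dt = 0 is the Liouville equation,
```latex
\frac{df}{dt} \;=\; \frac{\partial f}{\partial t} + \{f, H\} \;=\; 0 ,
```
and a stationary distribution f(r, p) is one whose Poisson bracket with H vanishes.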
However, it is important to notice that not all physical systems are stationary, and the identity df/dt = 0 does not always hold; i.e., the distribution function f is not always time-independent. The form f(r, p) is valid only in some cases in classical and non-relativistic quantum mechanics, and the explicit time dependence t is crucial for the majority of cases in our universe, since the universe is out of equilibrium.
In addition, physical kinetics as a “method to study many-particle systems” involves the knowledge of 4 physics subjects: classical mechanics, electrodynamics, non-relativistic quantum mechanics & statistical mechanics.
The most important fact is that PK studies the scattering/collision of particles without conservation of the linear momentum p, and both the time dependence and the presence of external fields are crucial for studying any particular physical phenomenon. That means that PK is the natural method for studying out-of-equilibrium processes, where the volume of the scattering phase space is not conserved and particles interact/collide with each other.
If the scattering phase-space volume is not conserved, then we have the so-called out-of-equilibrium distribution function, which follows the general equation:
df(r, p, t)/dt = W(p, p′),   (1)
where d/dt = ∂/∂t + ṙ·∂/∂r + ṗ·∂/∂p, with units of t⁻¹ or ω/(2π); ṙ and ṗ in d/dt are the time derivatives of r and p, and p′ in W is the momentum after the collision.
The father of physical kinetics is Prof. Ludwig Eduard Boltzmann (1844 – 1906) [1]. He established the H-theorem, which is the basis of the PK subject, and he also wrote the main equation (1), i.e., the Boltzmann equation, to describe the out-of-equilibrium dynamics of an ideal gas.
Another physicist who established the first deep understanding and condensed the subject into a book was Prof. Lev Emmanuilovich Gurevich (1904 – 1990). He was the first to point out that the kinetic effects in solids, i.e., metals and semiconductors, are determined by the "phonon wind", i.e., by the phonon system being in an unbalanced state [2].
Physical kinetics has 3 main approaches:
  • The qualitative approach, which involves estimating several physical quantities by taking into account the order of magnitude of each of them.
  • The theoretical approach, which involves complicated analytical solutions of the kinetic equation using different approximations for the scattering integral, such as the τ (relaxation-time) approximation; a minimal numerical illustration of this approximation is sketched after this list. For graduate courses I follow [8], an excellent textbook by Prof. Frederick Reif. For undergraduate teaching, I followed the brief introduction at the end of Vol. V of the Berkeley Physics Course.
  • The numerical approach, since most problems involving PK require extensive and complicated self-consistent numerical calculations.
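As a concrete and deliberately minimal illustration of the τ approximation mentioned above (my own sketch, not taken from the references), here is the spatially homogeneous BGK form of equation (1), df/dt = −(f − f_eq)/τ, relaxing an arbitrary initial velocity distribution toward a Maxwellian:
```python
# Minimal sketch (assumption: spatially homogeneous gas, 1D velocity grid,
# dimensionless thermal units): relaxation-time (BGK) approximation
# df/dt = -(f - f_eq)/tau, stepped with explicit Euler.
import numpy as np

v = np.linspace(-5.0, 5.0, 201)          # velocity grid
dv = v[1] - v[0]
tau, dt, n_steps = 1.0, 0.01, 500        # relaxation time and time stepping

f = np.where(np.abs(v - 2.0) < 0.5, 1.0, 0.0)   # far-from-equilibrium initial f
f /= f.sum() * dv                                # normalise to unit density

def maxwellian(f):
    """Maxwellian with the same density, drift and temperature as f."""
    n = f.sum() * dv
    u = (f * v).sum() * dv / n
    T = (f * (v - u) ** 2).sum() * dv / n
    return n / np.sqrt(2 * np.pi * T) * np.exp(-(v - u) ** 2 / (2 * T))

f_eq = maxwellian(f)              # BGK conserves these moments, so f_eq is fixed
for _ in range(n_steps):
    f += -dt * (f - f_eq) / tau   # Euler step of the relaxation-time equation

print("max deviation from Maxwellian:", np.abs(f - f_eq).max())
```
The real power of the method only appears once spatial gradients, external fields, and self-consistency are added, which is where the extensive numerical work mentioned in the third bullet comes in.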
The fields where PK is useful are many:
  • The physics of normal metals and semiconductors out of equilibrium.
  • The hydrodynamics of reacting gases & liquids, quantum liquids, and quantum gases at very low temperatures.
  • The physics of superconductors, phase transitions, and plasma physics among others.
There is a quantum analog of the classical Boltzmann equation; we ought to mention three cases: the density-matrix equation for random fields, the density-matrix equation for quantum particles, and the Wigner distribution function. Main graph 1 is adapted from [4] into English, the LB picture is from [7], and the LG picture is from [3].
Any contributions to this thread are welcome, thank you all.
References:
2. Gurevich L., Fundamentals of Physical Kinetics. State Publishing House of Technical and Theoretical Literature, 1940, 242 pp.
3. Kaganov M. I. et al., Lev Emmanuilovich Gurevich. Memories of Friends, Colleagues, and Students. Selected Works (1997), 318 pp. ISBN 5-86763-117-6. Publishing House of the Petersburg Institute of Nuclear Physics, RAS.
4. Belinicher V. V., Physical Kinetics. Novosibirsk State University Press, Novosibirsk, 1996 (in Russian).
5. Lifshitz E., Pitaevskii L., Physical Kinetics. Vol. 10 (Pergamon Press, 1981).
6. Thorne K. S., Blandford R. D., Modern Classical Physics: Optics, Fluids, Plasmas, Elasticity, Relativity, and Statistical Physics (Princeton University Press, 2017).
8. Reif F., Fundamentals of Statistical and Thermal Physics. McGraw-Hill, 1965.
Relevant answer
Answer
Yes, Prof. Zachary Knutson, you are right. Perturbation theory works sometimes, but we must agree that there is not a unique approach to solving the kinetic equation.
I apologize for the late reply.
The reason why perturbation theory does not always work is that many physical phenomena involve out-of-equilibrium processes with chaoticity and randomness, in addition to:
  • Nonlinearity.
  • Self-consistency.
  • Many-body effects.
  • Scattering involving those many bodies.
So if the problem can be linearized, then perturbation theory can sometimes be used, but sometimes it cannot.
Consider, for example, superconductivity at T different from 0 K (the Matsubara-frequency formalism, as it is usually and correctly named), keeping only the first term in the zero-temperature energy gap (which takes self-consistency into account). Another example is the Vlasov equation for plasmas in the linearized case, a wonderful example of linearization where perturbation theory applies, as you stated.
Best Regards.
  • asked a question related to Nonequilibrium Statistical Physics
Question
9 answers
A phase transition of order k is mathematically characterized by a loss of regularity of the free energy f: f is (k-1)-times differentiable but not k-times differentiable. There are many examples of first- and second-order phase transitions in experiments and in models. There are also cases where f is C^{\infty} but not analytic (Griffiths singularities).
But are there known examples of phase transitions of order k, with k > 2?
A third-order phase transition would mean that quantities like the susceptibility or the heat capacity are not differentiable with respect to parameter variations. But I have no idea what this means physically.
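To restate the classification above in formulas (my addition, nothing beyond the definition already given, taking temperature as the control parameter): a transition of order k in the Ehrenfest sense means
```latex
\frac{\partial^{j} f}{\partial T^{j}} \ \text{continuous for } j < k,
\qquad
\frac{\partial^{k} f}{\partial T^{k}} \ \text{discontinuous at } T_c ,
```
so for k = 3 the heat capacity (a second derivative of f) would be continuous at T_c but would have a discontinuous slope there, i.e. a kink rather than a jump.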
Relevant answer
Answer
Dear Prof. Bruno Cessac, in addition to all the interesting answers in this thread, there is a paper that explains the historical evolution of the Ehrenfest classification of phase transitions. It might be good to add it, since it discusses Pippard's extension of the classification to cases where the specific heat has singular points at Tc, as in the ferromagnetic/antiferromagnetic-to-paramagnetic transitions in Ni (ferromagnetic), MnO (antiferromagnetic), and other crystals.
  • asked a question related to Nonequilibrium Statistical Physics
Question
8 answers
Dear all:
I hope this question seems interesting to many. I believe I'm not the only one who is confused by many aspects of the so-called physical property 'entropy'.
This time I want to speak about thermodynamic entropy; hopefully a few of us can gain more understanding by thinking a little more deeply about questions like these.
The thermodynamic entropy is defined as: Delta(S) >= Delta(Q)/(T2-T1). This property is only properly defined for (macroscopic) systems which are in thermodynamic equilibrium (i.e., thermal eq. + chemical eq. + mechanical eq.).
So my question is:
In terms of numerical values of S (or, perhaps better said, values of Delta(S), since we know that only changes in entropy can be computed, not an absolute entropy of a system, with the exception of one at the absolute zero (0 K) of temperature):
It is easy and straightforward to compute the changes in entropy of, let's say, a chair, a table, or your car, etc., since all these objects can be considered macroscopic systems in thermodynamic equilibrium. So, just use the classical definition of entropy (the formula above) and the second law of thermodynamics, and that's it.
But what about macroscopic objects (or systems) which are not in thermal equilibrium? We are often tempted to apply the classical thermodynamic definition of entropy to these macroscopic systems, which from a macroscopic point of view seem to be in thermodynamic equilibrium but in reality still have ongoing physical processes that keep them from complete thermal equilibrium.
What I want to say is: what are the limits of the classical thermodynamic definition of entropy when used in calculations for systems that seem to be in thermodynamic equilibrium but really are not? Perhaps this question can also be extended to the so-called regime of near-equilibrium thermodynamics.
Kind Regards all !
Relevant answer
Answer
Dear Franklin Uriel Parás Hernández, some comments about your interesting thread:
1. At very low temperatures, entropy behaves according to Nernst's theorem.
I copy the Wikipedia information, but you can also find the same in Academicians L. Landau and E. Lifshitz, Vol. 5:
The third law of thermodynamics, or Nernst theorem, states that the entropy of a system at zero absolute temperature is a well-defined constant. This is because a system at zero temperature exists in its ground state, so its entropy is determined only by the degeneracy of that ground state; systems with more than one state of the same lowest energy have a non-vanishing "zero-point entropy".
2. Let's try to put Delta Q = m C Delta T into the expression Delta(S) >= Delta(Q)/(T2-T1). What do we obtain? Is something missing then?
You see, physical chemistry and statistical physics look at entropy in subtly different ways.
3. Delta S = kB ln(W2/W1), where W is the total number of microstates of the system; then what are W1 and W2 in relation to Delta S?
4. Finally, look at the following paper by Prof. Leo Kadanoff concerning the meaning of entropy in physical kinetics (out of equilibrium systems): https://jfi.uchicago.edu/~leop/SciencePapers/Entropy_is3.pdf
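To make point 2 concrete, here is a short worked step (my addition, assuming a constant heat capacity C and a reversible heating path): inserting δQ = m C dT into the Clausius form dS = δQ/T and integrating gives
```latex
\Delta S \;=\; \int_{T_1}^{T_2} \frac{\delta Q_{\mathrm{rev}}}{T}
         \;=\; \int_{T_1}^{T_2} \frac{m\,C\,dT}{T}
         \;=\; m\,C\,\ln\frac{T_2}{T_1},
```
which is not the same as Delta(Q)/(T2-T1); the missing ingredient hinted at in point 2 is the integration over temperature.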
  • asked a question related to Nonequilibrium Statistical Physics
Question
3 answers
Let's say I have a material with a stable phase A that transitions to another stable phase B at temperature T. If the material also has a metastable phase A', I know that if I cool B fast enough I would get:
B -> A'
and if I cool very slowly I'd get
B-> A.
If I were to look at the interface between B and the new phase and somehow extract all possible information (energy/mass transfer, etc), what signal would indicate to me that I would expect A' (metastable) to form over A?
Relevant answer
Answer
When you refer to thermal gradients you probably refer to solidification processes. For these, the temperature gradients are very important, as the interface velocity is usually determined crucially by the transport of the latent heat. Here, of course, specialized literature exists, but beyond what is taught in general lectures I cannot help much. Your reference to "metallurgy books" suggests that you looked at the solidification sections there.
If you are more interested in solid-solid transformations, the thermal gradient is usually less important, simply because the latent heat associated with these transformations is much lower than for solidification (the invariant parameter is actually the transformation entropy, connected with the transformation enthalpy via the equilibrium transformation temperature). In that case there is a huge number of concepts for describing phase-transformation kinetics. Porter and Easterling is a good starter (it also has solidification sections). Models exist, of course, on very different levels. If you want to find much more, you may refer to Christian.
  • asked a question related to Nonequilibrium Statistical Physics
Question
1 answer
If the particles inside the material collide within a very short time, we also have a very small time uncertainty Delta t (the time uncertainty is the uncertainty in the instant of time at which the event happens). By Heisenberg's uncertainty relation
Delta E * Delta t >= hbar/2
the energy uncertainty Delta E becomes correspondingly large. We can therefore only place the particle in some energy state around the energy it would classically have (we have a band broadening given by Delta E). I know of the quantum Zeno effect, where a particle that interacts extremely frequently with another undergoes (almost) no time evolution. Because Delta E is very big in this case, collisions with a classical energy transfer epsilon will have no significant and observable effect when
epsilon < Delta E.
So only collisions with a really high value of epsilon contribute to observable effects; however, these kinds of collisions are rare. Therefore, very little temporal change will be observed.
Question: Let the particle be trapped in an external potential whose magnitude is higher than the particle's classical, "on-shell" expected kinetic energy. If the energy spectrum of a particle is broadened in a material with very high collision frequencies, will there be a higher probability of quantum tunneling out of the potential?
I think yes, for a short time. Matter-antimatter pairs can pop up for a very short time; these pairs consist of virtual particles. But virtual particles have effects on the dynamics. Maybe there are also effects outside the barrier due to virtual particles tunneling out (what would this effect look like?).
Which other quantum effects do we observe in a material with a high collision frequency? Maybe also vacuum polarization, even if the characteristic energy scales for that effect are far lower?
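For orientation, here is a rough numerical estimate of the broadening in the question (my addition, with an assumed, purely illustrative collision time):
```python
# Energy broadening Delta E >= hbar/(2*Delta t) for a very short collision time.
hbar = 1.054571817e-34        # J s
eV = 1.602176634e-19          # J

dt = 1e-14                    # s, hypothetical mean time between collisions (10 fs)
dE = hbar / (2 * dt)          # J
print(f"Delta E >= {dE:.2e} J = {dE / eV * 1e3:.1f} meV")
# ~33 meV for dt = 10 fs, i.e. comparable to the room-temperature thermal
# energy scale of ~25 meV.
```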
Relevant answer
Answer
I believe we do observe quantum tunneling in materials with high collision frequency, but it will not be observable, at least not with our current technology. During the collision the time interval is extremely small and thus will affect the emission of the colliding constituents. The collision will create a disturbance in the surroundings, and this disturbance will create favorable conditions for the creation of matter-antimatter pairs, which diminish as the virtual particles break apart.
  • asked a question related to Nonequilibrium Statistical Physics
Question
21 answers
I want to know whether the negative-T state is only conceptually eye-catching or truly significant in helping us understand thermodynamics.
  • In the early days, Purcell et al., and also my university textbooks, mentioned negative T in the spin degree of freedom of an NMR system;
  • In 2013, S. Braun et al. performed an experiment in cold atoms and realized an inverted energy-level population for motional degrees of freedom. (http://science.sciencemag.org/content/339/6115/52)
  • There are many disputes about the Boltzmann entropy versus the Gibbs entropy, as far as I know, especially by Jörn Dunkel et al. (https://www.nature.com/articles/nphys2815); they insist that the Gibbs entropy is the physical one and argue that negative T is wrong.
  • After that, many debates emerged; I read several papers, and they all agree with the conventional Boltzmann entropy.
Does anyone have comments about this field?
Is it truly fascinating, or just trivial, to realize a population-inversion state, i.e., negative temperature?
Or does anyone have a clarification of the Carnot engine working between negative-T and positive-T substances?
Any comments and discussions are welcome.
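For context (my addition, the standard two-level counting argument, with ε an assumed level spacing): for N spins with n in the upper level, the Boltzmann entropy S = kB ln[N!/(n!(N-n)!)] and energy E = nε give, via Stirling's approximation,
```latex
\frac{1}{T} \;=\; \frac{\partial S}{\partial E}
            \;=\; \frac{k_B}{\varepsilon}\,\ln\frac{N-n}{n},
```
which is negative precisely when the population is inverted (n > N/2); this is the sense in which the NMR and cold-atom experiments above report negative temperatures.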
Relevant answer
Answer
I just read this interesting and lively discussion on negative temperatures. I must confess that I have not read the paper by Abraham and Penrose, and I promise to do so during the holidays. I am glad that discussions on the purely thermodynamic issue have arisen, independently of the entropy formulae of statistical physics (by the way, in my view there is only one "entropy", Boltzmann's, as discussed say in Landau and Lifshitz, but that's another discussion!). So, it is nice to hear arguments based on thermodynamics only.
Although I have not been able to follow the whole thread, I think I side with Struchtrup's point of view. I have always had trouble with extending "equilibrium" concepts (such as entropy and temperature) to non-equilibrium and/or metastable states. I believe it is dangerous. Certainly, there are many situations where there are no ambiguities, but when there are, one should refrain from simply extending the well-founded equilibrium concepts, with all their assumptions. About ten years ago there were claims in respected journals about negative heat capacities in nano systems, for instance.
I do not think I have much to say here now, except to recall that, indeed, in order to accommodate negative temperatures, Ramsey had to change the Kelvin-Planck statement of the second law. Actually, he inverted it (this I attempted to explain in the appendix of my paper). Then, of course, with the assumption of negative-temperature reservoirs, everything follows logically. However, before arguing about their stability, note that something very strange happens if equilibrium negative temperatures exist: heat can be converted into work as a sole result (with efficiency one), and the opposite is impossible! Hence, there is no need for Carnot engines at all and friction cannot happen! We should be suspicious of this. The argument that reservoirs at negative temperatures are unstable allows us to find the root of the failure, namely, the tacit assumption by Ramsey that stable negative-temperature reservoirs exist.
Certainly, the states created in the experiments by Purcell et al., and in the many later ones, do exist; they are metastable states and can have very long relaxation times to "normal" equilibrium states. But this does not suffice to say that they are in equilibrium for that time, and even less that they have actually achieved negative temperatures. It may be useful to use those terms, but it may also be deeply misleading.
  • asked a question related to Nonequilibrium Statistical Physics
Question
3 answers
I have read that a drawback with Edgeworth series expansion is that "... they can be inaccurate, especially in the tails, due to mainly two reasons: (1) They are obtained under a Taylor series around the mean. (2) They guarantee (asymptotically) an absolute error, not a relative one. This is an issue when one wants to approximate very small quantities, for which the absolute error might be small, but the relative error important."
So my resulting question is if there are any attractive alternative ways of approximating stochastic variables with some corresponding method that still is useful in the tails of the distribution, and does not (for example) result in negative probabilities (which is mentioned as another drawback with the approach).
Relevant answer
Answer
Another criticism of the Edgeworth (and Gram-Charlier) series is that they do not always (usually?) converge; they are similar to asymptotic expansions.
Instead of expanding the density, you could expand a function that transforms it to normality. The Cornish-Fisher expansions (for such a transformation and its inverse) achieve this. These are based on the Edgeworth expansion. They might also fail to converge but still give approximations.
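As a concrete illustration of the Cornish-Fisher idea (my addition, a minimal sketch using the standard fourth-order expansion in terms of skewness and excess kurtosis; the function name and example numbers are mine):
```python
# Fourth-order Cornish-Fisher quantile approximation: correct a standard-normal
# quantile z using the skewness s and excess kurtosis k of the target variable.
from scipy.stats import norm

def cornish_fisher_quantile(q, mu, sigma, skew, ex_kurt):
    """Approximate the q-quantile of a distribution with given mean,
    standard deviation, skewness, and excess kurtosis."""
    z = norm.ppf(q)                      # standard normal quantile
    z_cf = (z
            + (z**2 - 1) * skew / 6
            + (z**3 - 3*z) * ex_kurt / 24
            - (2*z**3 - 5*z) * skew**2 / 36)
    return mu + sigma * z_cf

# Example: 1% quantile of a mildly skewed, heavy-tailed variable
print(cornish_fisher_quantile(0.01, mu=0.0, sigma=1.0, skew=0.5, ex_kurt=1.0))
```
Like the Edgeworth series itself, this expansion can misbehave far in the tails (e.g. lose monotonicity for large skewness/kurtosis), so it is an approximation to be checked rather than a cure-all.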
  • asked a question related to Nonequilibrium Statistical Physics
Question
2 answers
E.g., for the function f(x,y,z) = xyz - xy - xz - yz, where x, y, z all belong to (0,1): when f takes its extremum, we get x = y = z. Is this property valid for every symmetric polynomial?
Relevant answer
Answer
Thanks for your reply. I also found the counter-example to my problem, which is just a special case of a symmetric polynomial.
  • asked a question related to Nonequilibrium Statistical Physics
Question
2 answers
In simulations of TFETs, most of the references refer to the self-consistent solution of the continuity equation and the Poisson equation for the carrier-statistics calculation. However, is there any problem in following the NEGF formalism used for normal MOSFETs, given that the normal MOSFET and the TFET are similar structures, one being N-i-N and the other P-i-N?
The final current will be calculated using band-to-band tunneling equations.
Relevant answer
Answer
Write to SATISH TURKANE PATIL <satish_turkane@yahoo.co.in>.
  • asked a question related to Nonequilibrium Statistical Physics
Question
2 answers
Why can't we use KAM theory to obtain the recurrence time?
Relevant answer
Answer
First of all, there isn't any ``paradox''. Next, the KAM theorem doesn't provide a way of obtaining the recurrence time of any system; it implies that, under certain assumptions, this time isn't infinite, that's all. How to actually compute it is another issue. So the answer is that, while the KAM theorem may apply to the FPU system, what it implies may not be useful for addressing certain issues. The reason is that, while the theorem implies the existence of tori that describe the periodic motion, whose period is the recurrence time, it doesn't provide any way of expressing the tori in terms of the original phase-space coordinates, i.e., of constructing them.
  • asked a question related to Nonequilibrium Statistical Physics
Question
5 answers
The thermal rate coefficient can be obtained from the reactive cross section (σ(Ecoll)):
k(T) = c(T)×∫P(T,Ecoll)Ecollσ(Ecoll)dEcoll
where Ecoll is the relative collision energy, c(T) is a constant at a given temperature, and P(T,Ecoll) is the statistical weight.
In the normal case, Boltzmann statistics are used for the calculation of the statistical weights. But Boltzmann statistics are valid when the temperature is high and the particles are distinguishable. At ultralow temperatures (T < 10 K) we should use the appropriate quantum statistics (Fermi or Bose).
What kind of quantum statistic should be used in the collision of a
radical[spin = 1/2] + closed shell molecule (spin=0)
at ultralow temperatures?
What is the form of P(T,Ecoll) in this case?
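For reference, here is a minimal numerical sketch of the classical limit of the integral above (my addition; the reduced mass and the constant model cross section are made-up illustrative values), i.e. P(T,Ecoll) taken as the Boltzmann weight exp(-Ecoll/kB T) and c(T) = sqrt(8/(pi*mu)) (kB T)^(-3/2):
```python
# k(T) = sqrt(8/(pi*mu)) * (kB*T)**(-3/2) * Integral[ sigma(E) * E * exp(-E/kB T), E=0..inf ]
import numpy as np
from scipy.integrate import quad

kB = 1.380649e-23          # J/K
mu = 1.0e-26               # kg, assumed reduced mass (illustrative only)

def sigma(E):
    """Model reactive cross section in m^2 (hypothetical, energy independent)."""
    return 1.0e-19

def k_rate(T):
    c = np.sqrt(8.0 / (np.pi * mu)) * (kB * T) ** (-1.5)
    integrand = lambda E: sigma(E) * E * np.exp(-E / (kB * T))
    integral, _ = quad(integrand, 0.0, 50.0 * kB * T)   # upper cutoff ~50 kB T
    return c * integral          # m^3 s^-1 per colliding pair

print(k_rate(300.0))
```
The question of which quantum statistical weight replaces the Boltzmann factor at ultralow temperature is exactly what the answers below address.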
Relevant answer
Answer
First, let me stress that the activation energy is not a well defined microscopic quantity, but a convenient parameter in which we can hide our ignorance on the details of the many different possibilities for the individual reactions. 
This having been said, the importance of quantum effects at a given temperature can be assessed by comparing the corresponding thermal energy with the characteristic energies for the different statistics, that is, the zero point energy for enclosing the particle in a given volume, which, for Fermions is known as the Fermi energy and for Bosons as the condensation energy.
As an example, let us take the fermion case. For electrons in sodium metal, with a particle density of 2.65 x 10^28 /m^3, the Fermi energy is 3.24 eV and the Fermi temperature is 37,700 K; the Fermi energy is proportional to (N/V)^(2/3) and inversely proportional to the mass of the particle. I am not an expert in solution chemistry (if the concept makes sense at all at such low temperatures), but I think it is fair to assume that, just on steric grounds, the number density of the reacting molecules is at least two orders of magnitude smaller, which divides the sodium result by a factor of the order of 20. Then, we know that the mass of the nucleon is 1836 times larger than that of the electron, and already for a small molecule like methane this means division by a further factor, this time of the order of 30,000. So you see that the degeneracy temperature at which quantum effects become important is below 0.1 K. Exactly the same reasoning holds for the boson case.
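A short numerical check of this estimate (my addition; the "100x lower density, methane-like mass" inputs are just the rough assumptions made in the paragraph above):
```python
# Fermi temperature T_F = E_F/kB with E_F = hbar^2 (3 pi^2 n)^(2/3) / (2 m),
# scaled from electrons in sodium to a small molecule at much lower density.
import numpy as np

hbar = 1.054571817e-34   # J s
kB   = 1.380649e-23      # J/K
me   = 9.1093837e-31     # kg

def fermi_temperature(n, m):
    """Fermi temperature in K for number density n (1/m^3) and mass m (kg)."""
    EF = hbar**2 * (3 * np.pi**2 * n)**(2/3) / (2 * m)
    return EF / kB

T_F_Na = fermi_temperature(2.65e28, me)               # electrons in sodium
print(f"Na electrons: T_F ~ {T_F_Na:.0f} K")          # ~ 3.8e4 K, as quoted

T_F_mol = fermi_temperature(2.65e26, 16 * 1836 * me)  # ~methane mass, 100x lower n
print(f"Molecular estimate: T_F ~ {T_F_mol:.3f} K")   # well below 0.1 K
```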
In summary, there is no need to change anything in your treatment.
Best regards, René Monnier
  • asked a question related to Nonequilibrium Statistical Physics
Question
3 answers
I'm studying a wave-system-inspired Hamiltonian model with short- and long-range interactions. In particular, I've found a curious size-dependent phase transition depending on how strong my short-range coupling is. When N is large and the short-range coupling remains the same as for small N (where I found the phase transition), the system remains homogeneous. Is there a simple explanation for that?
Relevant answer
Answer
One possibility which is worth considering (but no guarantee that it applies to your case), is that by changing the domain size you alter the range of unstable wavenumbers, so that some may disappear for certain ranges of box size. This is a common phenomenon in Vlasov-like equations. You could try to perform such an analysis for your particular case, and see if it explains the behavior.
  • asked a question related to Nonequilibrium Statistical Physics
Question
3 answers
Is there any reference on calculating the diffusion coefficient of a colloidal system under a temperature gradient?
Relevant answer
Answer
While not as specific as the answer above, I would look at books on non-equilibrium thermodynamics, particularly those discussing the Onsager reciprocity relations. Under a thermal gradient, the diffusion coefficient contains an ordinary Fick's-law component but also a thermodiffusion coefficient, usually an order of magnitude or more smaller, which describes an additional diffusion flux induced by the nonequilibrium coupling between the thermal gradient and the concentration gradient. This is the coefficient in the Onsager reciprocity relation that appears as the cross-term in the system of two equations (i.e., the off-diagonal X12 and X21 components of the thermodynamic-force coefficient matrix; X11 and X22 are the normal Fickian diffusion and Fourier heat-equation coefficients).
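Schematically (my addition, written in the loose X-matrix notation of the answer above rather than the strict entropy-production form), the coupled flux equations look like
```latex
\begin{pmatrix} J_{m} \\ J_{q} \end{pmatrix}
\;=\;
-\begin{pmatrix} X_{11} & X_{12} \\ X_{21} & X_{22} \end{pmatrix}
\begin{pmatrix} \nabla c \\ \nabla T \end{pmatrix},
```
with X11 the Fickian diffusion coefficient, X22 related to the thermal conductivity, X12 the thermodiffusion (Soret) term and X21 the Dufour term; Onsager reciprocity relates the two cross-coefficients once the fluxes and forces are written in their proper thermodynamic form.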
  • asked a question related to Nonequilibrium Statistical Physics
Question
5 answers
Let x and y be the state vectors of two dynamical systems, respectively and we have:
dx(t)/dt=F(x(t))
dy(t)/dt=G(y(t),x(t)),
where x is the driver system and y is the response system.
According to the literature (e.g. http://journals.aps.org/pre/abstract/10.1103/PhysRevE.61.5142), given a certain coupling strength between the driver and the response, the maximal Lyapunov exponent (correction: it should be conditional Lyapunov exponent) is negative when the response system synchronizes with the driver system. See the attached figure (from the reference paper). 
Now my question is: What algorithm should I use to calculate conditional Lyapunov exponent for the purpose of detecting synchronization?
Relevant answer
Answer
I believe that "conditional Lyapunov exponents" and "transversal Lyapunov exponents" are the same quantity. If so, you want to compute all Lyapunov exponents associated with directions transversal to the synchronization manifold (S). Since you are dealing with a master-slave coupling, the linearised system (which provides the master stability function, obtained by linearising the vector field around a typical trajectory embedded in S) is block triangular. Then the conditional Lyapunov exponents would be given by the eigenvalues associated with the lower-right block (\partial \dot{y}/\partial y, i.e. \partial G/\partial y). If your coupling function is simple enough, you can probably find an algebraic expression for the conditional Lyapunov exponents in terms of the (ordinary) Lyapunov spectrum of the x dynamical system. You can compute these numerically using the above-mentioned Wolf algorithm.
If you wish, you can also compute the Lyapunov spectrum of the full system (dim = dim(x) + dim(y)) and exclude from your analysis the Lyapunov exponents of the driver. Basically, if the driver dynamics has n positive Lyapunov exponents (n = 1 for a chaotic system and n > 1 for a hyper-chaotic system), it should be sufficient to compute, by Wolf's algorithm or any other, the n+1 largest Lyapunov exponents for a reference trajectory embedded in S. If the (n+1)-th exponent is negative, then you know that all transverse (conditional) Lyapunov exponents are negative.
It is worth remembering that you should consider "typical" perturbations to the reference trajectory. In other words, let the initial conditions (say, z(0) = (x(0), 0)) of the reference trajectory be embedded in S, and let the initial conditions of all other trajectories be around z(0) but with non-null components in the y directions. Technically, answering your question: no, I don't know any specific algorithm to compute conditional Lyapunov exponents, but I believe, as Shekatkar said, it should not be that hard to adapt any existing algorithm that computes ordinary Lyapunov exponents.
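Along the lines of "adapting an existing algorithm", here is a minimal sketch (my addition, not from the paper cited in the question): evolve two copies of the response system under the same drive signal and renormalise their separation, Benettin-style, to estimate the largest conditional Lyapunov exponent. The example drive is the Lorenz system with Pecora-Carroll x-driving; any F and G fit the same template.
```python
import numpy as np

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0

def driver_rhs(x):                      # dx/dt = F(x): the Lorenz system
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

def response_rhs(y, x):                 # dy/dt = G(y, x): (y, z) subsystem driven by x[0]
    return np.array([x[0] * (rho - y[1]) - y[0],
                     x[0] * y[0] - beta * y[1]])

def rk4(f, u, dt, *args):               # one Runge-Kutta 4 step
    k1 = f(u, *args); k2 = f(u + 0.5 * dt * k1, *args)
    k3 = f(u + 0.5 * dt * k2, *args); k4 = f(u + dt * k3, *args)
    return u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

dt, n_steps, d0 = 0.01, 100_000, 1e-8
x = np.array([1.0, 1.0, 1.0])           # driver state
y1 = np.array([0.5, 0.5])               # reference response
y2 = y1 + d0 * np.array([1.0, 0.0])     # perturbed copy, driven by the SAME signal
log_sum = 0.0

for _ in range(n_steps):
    x_old = x
    x = rk4(driver_rhs, x, dt)
    y1 = rk4(response_rhs, y1, dt, x_old)   # drive held over the step (sketch-level accuracy)
    y2 = rk4(response_rhs, y2, dt, x_old)
    d = np.linalg.norm(y2 - y1)
    log_sum += np.log(d / d0)
    y2 = y1 + (d0 / d) * (y2 - y1)          # renormalise the separation

print("largest conditional Lyapunov exponent ~", log_sum / (n_steps * dt))
```
For this particular drive the result should come out negative, consistent with the known synchronization of the Lorenz (y, z) subsystem under x-driving; a short initial transient is ignored here for simplicity.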
  • asked a question related to Nonequilibrium Statistical Physics
Question
6 answers
I wanted to look at the analytical solutions of some quantum mechanical systems and follow along with the math step by step.
I was able to find (at least I think) just that for the H2+ ion, but that has a single electron, i.e., no electron correlation. Has this been solved for H2 or some other multi-electron system, and if so, could I be pointed to the relevant solutions?
Relevant answer
Answer
A general analytical solution of the 3-body problem cannot be obtained in principle, as the general solution is unstable (highly sensitive to small changes in the initial conditions). The understanding of this fact led to the dynamical-chaos paradigm and to the development of statistical methods for analyzing such solutions (e.g., S. M. Ulam, "On some statistical properties of dynamical systems", 1961, and 55 years of active investigation since).
  • asked a question related to Nonequilibrium Statistical Physics
Question
5 answers
Here's an excerpt from 'Into The Cool: Energy Flow, Thermodynamics, and Life' that got me thinking:
"Trained as a physical chemist, Alfred Lotka worked for an insurance company as a statistical analyst and in his spare time was a student of biology. Almost a generation ahead of his peers, Lotka suggested that life was a dissipative metastable process. By this he meant that, although stable and mistaken for a 'thing,' life was really a process. Living matter was in continuous flux, kept from equilibrium by energy provided by the sun. Lotka stressed that life on Earth was an open system."
Just how far along is our understanding of open systems?
Relevant answer
Answer
"life was really a process", "life on Earth was an open system."
First impression is that somebody messed up definitions or did not care about them. A system can be a lot of things. Here, one could think of a system in the sense of thermodynamics, or, more philosophically, of the concept from systems theory. Either way, a system is surely not a process. Also, things like "almost a generation ahead of his peers" are usually unjustified praise solely intended to impress the reader. Call me close-minded, but after this short paragraph I don't think anything interesting will follow. Probably the usual violation of the concept of entropy.
  • asked a question related to Nonequilibrium Statistical Physics
Question
7 answers
I want to know whether a Markov process far from equilibrium corresponds to a non-equilibrium thermodynamic process, or whether they just have something in common.
Relevant answer
Answer
Actually, a time-homogeneous Markov chain rigorously converges to a unique probability distribution, which can be fixed, e.g., by the detailed balance condition. If this distribution is chosen to be the Boltzmann distribution, the Markov chain converges to the Gibbs equilibrium measure, i.e., thermal equilibrium. You can also choose this distribution to be something completely different, e.g., a steady-state non-equilibrium distribution corresponding to some physical system. This is how you make a connection to non-equilibrium thermodynamics.
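As a minimal illustration of the first case (my addition; the level energies and temperature are arbitrary toy values), here is a Metropolis Markov chain whose detailed-balance condition is built from the Boltzmann factor, so its stationary distribution is the Gibbs equilibrium measure of a small discrete system:
```python
import numpy as np

rng = np.random.default_rng(0)
energies = np.array([0.0, 1.0, 2.0, 3.0])   # toy level energies (units of kB*T)
beta = 1.0                                   # inverse temperature

counts = np.zeros_like(energies)
state = 0
for _ in range(200_000):
    proposal = rng.integers(len(energies))            # symmetric proposal
    dE = energies[proposal] - energies[state]
    if dE <= 0 or rng.random() < np.exp(-beta * dE):  # Metropolis acceptance
        state = proposal
    counts[state] += 1

empirical = counts / counts.sum()
boltzmann = np.exp(-beta * energies) / np.exp(-beta * energies).sum()
print("empirical :", np.round(empirical, 3))
print("Boltzmann :", np.round(boltzmann, 3))
```
Replacing the Boltzmann factor in the acceptance rule by any other target ratio gives a chain with a different stationary distribution, which is exactly the freedom mentioned above for non-equilibrium steady states.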