Nonequilibrium Statistical Physics - Science topic
Questions related to Nonequilibrium Statistical Physics
Dear RG community members, this pedagogical thread is devoted to what is arguably the most difficult of the subjects physics uses to describe nature: physical kinetics (PK). Physical kinetics as a subject is defined as a “method to study physical systems involving a huge number of particles out of equilibrium”.
The key role is given by two physical quantities:
- The distribution function f(r, p, t), where r is the position vector, p is the linear momentum, and t is time; f describes the particles of an ensemble.
- The collision or scattering term W(p, p′), which gives the probability that a particle changes its linear momentum from the value p to the value p′ during a collision.
If the distribution function satisfies the identity df(r, p, t)/dt = 0, then we can directly link PK to the Liouville equation, in the case where the distribution function does not depend on time explicitly. Physics students are tested on this at the end of an advanced course in classical mechanics, when reading about the Poisson brackets.
However, it is important to notice that not all physical systems are stationary, and the identity df/dt = 0 does not always hold; i.e., the distribution function f is not always time-independent. The form f(r, p) is valid only in some cases of classical and non-relativistic quantum mechanics, and the explicit time dependence t is crucial for the majority of cases in our universe, since it is out of equilibrium.
In addition, physical kinetics as a “method to study many-particle systems” requires knowledge of four physics subjects: classical mechanics, electrodynamics, non-relativistic quantum mechanics, and statistical mechanics.
Most importantly, it studies the scattering/collision of particles without conservation of the linear momentum p, where the time dependence and the presence of external fields are crucial for the study of any particular physical phenomenon. That means PK is the natural method to study out-of-equilibrium processes, where the volume of the scattering phase space is not conserved and particles interact/collide with each other.
If the scattering phase-space volume is not conserved, then we have the so-called out-of-equilibrium distribution function, which obeys the general equation:
df(r, p, t)/dt = W(p, p′), (1)
where d/dt = ∂/∂t + ṙ·∂/∂r + ṗ·∂/∂p, with units of t⁻¹ (equivalently ω/(2π)); here ṙ and ṗ are the time derivatives of r and p, and p′ in W is the outgoing momentum.
The father of physical kinetics is Prof. Ludwig Eduard Boltzmann (1844–1906) [1]. He established the H theorem, which is the basis of the PK subject, and he also wrote the main equation (1), i.e., the Boltzmann equation, to describe the out-of-equilibrium dynamics of an ideal gas.
Another physicist, who established the first deep understanding and condensed the subject into a book, was Prof. Lev Emmanuilovich Gurevich (1904–1990). He was the first to point out that the kinetic effects in solids, i.e., metals and semiconductors, are determined by the "phonon wind", i.e., by the phonon system being in an unbalanced state [2].
Physical kinetics has 3 main approaches:
- The qualitative approach involves estimating several physical magnitudes, taking into account the order of magnitude of each of them.
- The second approach is the theoretical one, which involves complicated theoretical solutions of the kinetic equation using different approximations for the scattering integral, such as the τ (relaxation-time) approximation. For graduate courses, I follow [8], an excellent textbook by Prof. Frederick Reif. For undergraduate teaching, I followed the brief introduction at the end of Vol. V of the Berkeley Physics Course.
- The numerical approach, since most problems involving PK require extensive numerical and complicated self-consistent calculations; a minimal numerical sketch follows this list.
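To make the τ approximation concrete, here is a minimal numerical sketch in Python (the grid, time step, and initial box distribution are illustrative choices of mine, not taken from [8] or the Berkeley course). It relaxes a spatially homogeneous, non-Maxwellian velocity distribution toward a Maxwellian via df/dt = -(f - f_eq)/τ and monitors Boltzmann's H function, which should decrease monotonically, in line with the H theorem mentioned above:

```python
import numpy as np

# Velocity grid (dimensionless units: m = kB*T = 1)
v = np.linspace(-5.0, 5.0, 401)
dv = v[1] - v[0]

# Target Maxwellian and an initial "box" distribution with the same norm
f_eq = np.exp(-v**2 / 2.0) / np.sqrt(2.0 * np.pi)
f = np.where(np.abs(v) < 2.0, 0.25, 0.0)

tau, dt, steps = 1.0, 0.01, 500

def H_functional(f):
    """Boltzmann H = integral of f ln f dv (summed only where f > 0)."""
    mask = f > 0
    return np.sum(f[mask] * np.log(f[mask])) * dv

for n in range(steps + 1):
    if n % 100 == 0:
        print(f"t = {n*dt:5.2f}   H = {H_functional(f):+.4f}")
    # BGK / relaxation-time collision term: df/dt = -(f - f_eq)/tau
    f += -dt * (f - f_eq) / tau
```

The design point of the τ approximation is visible here: the collision integral W is replaced by a single exponential relaxation channel, so each velocity component decays independently toward f_eq; a genuine scattering integral would mix different momenta.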
The fields where PK is useful are many:
- The physics of normal metals and semiconductors out of equilibrium.
- The hydrodynamics of reacting gases & liquids, quantum liquids, and quantum gases at very low temperatures.
- The physics of superconductors, phase transitions, and plasma physics among others.
There is a quantum analog of the classical Boltzmann equation; we ought to mention three cases: the density-matrix equation for random fields, the density-matrix equation for quantum particles, and the Wigner distribution function. The main graph 1 is adapted from [4] into English; the LB picture is from [7], and the LG picture is from [3].
Any contributions to this thread are welcome, thank you all.
References:
2. L. Gurevich, Fundamentals of Physical Kinetics. State Publishing House of Technical and Theoretical Literature, 1940, 242 pp.
3. M. I. Kaganov et al., Lev Emmanuilovich Gurevich: Memories of Friends, Colleagues, and Students. Selected Works. Publishing House of the Petersburg Nuclear Physics Institute, RAS, 1997, 318 pp. ISBN 5-86763-117-6.
4. V. V. Belinicher, Physical Kinetics. NSU Publishing House, Novosibirsk, 1996.
5. E. Lifshitz and L. Pitaevskii, Physical Kinetics, Vol. 10. Pergamon Press, 1981.
6. K. S. Thorne and R. D. Blandford, Modern Classical Physics: Optics, Fluids, Plasmas, Elasticity, Relativity, and Statistical Physics. Princeton University Press, 2017.
8. F. Reif, Fundamentals of Statistical and Thermal Physics. McGraw-Hill, 1965.
A phase transition of order k is mathematically characterized by a loss of regularity of the free energy f: f is (k-1)-times differentiable but not k-times differentiable. There are many examples of first- and second-order phase transitions in experiments and in models. There are also cases where f is C^{\infty} but not analytic (Griffiths singularities).
But are there known examples of phase transitions of order k, k > 2?
A third-order phase transition would mean that quantities like the susceptibility or the heat capacity are not differentiable with respect to parameter variations. But I have no idea what this means physically.
Dear all:
I hope this question seems interesting to many. I believe I'm not the only one who is confused by many aspects of the so-called physical property 'Entropy'.
This time I want to speak about thermodynamic entropy; hopefully a few of us can gain more understanding by trying to think a little more deeply about questions like these.
Thermodynamic entropy is defined through the Clausius relation: Delta(S) >= Delta(Q)/T, with equality for reversible processes. This property is only properly defined for (macroscopic) systems which are in thermodynamic equilibrium (i.e., thermal eq. + chemical eq. + mechanical eq.).
So my question is:
In terms of numerical values of S (or, better said, values of Delta(S), since we know that only changes in entropy are computable, not an absolute entropy of a system, with the exception of a system at the absolute zero (0 K) of temperature):
It is easy and straightforward to compute the changes in entropy of, let's say, a chair, a table, or your car, etc., since all these objects can be considered macroscopic systems in thermodynamic equilibrium. So, just use the classical definition of entropy (the formula above) and the Second Law of Thermodynamics, and that's it.
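To make that "easy" equilibrium case concrete, a minimal sketch (Python; the heat capacities and temperatures are made-up values, and I assume temperature-independent heat capacities) for two solid blocks brought into thermal contact, integrating dS = C dT/T for each:

```python
import numpy as np

# Two blocks with temperature-independent heat capacities (made-up values)
C1, T1 = 450.0, 350.0   # J/K, K
C2, T2 = 900.0, 290.0   # J/K, K

# Final common temperature from energy conservation
Tf = (C1 * T1 + C2 * T2) / (C1 + C2)

# Entropy changes from integrating dS = C dT / T
dS1 = C1 * np.log(Tf / T1)   # negative: this block cools
dS2 = C2 * np.log(Tf / T2)   # positive: this block warms

print(f"Tf = {Tf:.2f} K, dS1 = {dS1:+.3f} J/K, dS2 = {dS2:+.3f} J/K")
print(f"total dS = {dS1 + dS2:+.3f} J/K  (>= 0 by the Second Law)")
```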
But what about macroscopic objects (or systems) which are not in thermal equilibrium? We are often tempted to apply the classical thermodynamic definition of entropy to macroscopic systems which, from a macroscopic point of view, seem to be in thermodynamic equilibrium, but which in reality still have ongoing physical processes that keep them from complete thermal equilibrium.
What I want to say is: what are the limits of the classical thermodynamic definition of entropy when used in calculations for systems that seem to be in thermodynamic equilibrium but really aren't? Perhaps this question can also be extended to the so-called regime of near-equilibrium thermodynamics.
Kind regards to all!
Let's say I have a material with a stable phase A that transitions to another stable phase B at temperature T. If the material also has a metastable phase A', I know that if I cool B fast enough I would get:
B -> A'
and if I cool very slowly I'd get
B -> A.
If I were to look at the interface between B and the new phase, and could somehow extract all possible information (energy/mass transfer, etc.), what signal would indicate that I should expect A' (metastable) to form instead of A?
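Not a full answer, but one standard, idealized way to quantify this is classical nucleation theory, where the phase with the lower nucleation barrier ΔG* = 16πγ³/(3Δg²) forms first; a metastable A' with a smaller interfacial energy γ against B can then win even though its bulk driving force |Δg| is smaller (Ostwald's step rule). A sketch with purely hypothetical parameter values:

```python
import numpy as np

kT = 4.1e-21  # J, thermal energy at ~300 K

def barrier(gamma, dg):
    """Classical nucleation barrier for a spherical nucleus (J).
    gamma: interfacial energy (J/m^2), dg: bulk driving force (J/m^3)."""
    return 16.0 * np.pi * gamma**3 / (3.0 * dg**2)

# Hypothetical numbers: stable A has a larger driving force but a
# higher-energy interface with B than metastable A' does.
dG_A  = barrier(gamma=0.120, dg=4.0e8)   # B -> A
dG_Ap = barrier(gamma=0.060, dg=2.0e8)   # B -> A'

for name, dG in [("A ", dG_A), ("A'", dG_Ap)]:
    print(f"{name}: dG* = {dG/kT:8.1f} kT, rate ~ exp(-dG*/kT) = {np.exp(-dG/kT):.3e}")
```

In this picture, the interfacial "signal" you are asking about would be the combination of γ and Δg that sets the lower barrier; a fast quench increases the driving forces while plausibly leaving the γ ordering intact, further favoring the low-barrier phase.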
If the particles inside the material collide within a very small time, we will also have a very small time uncertainty Delta t (the time uncertainty is the uncertainty in the instant of time at which the event happens). By Heisenberg's uncertainty relation
Delta E * Delta t >= hbar/2
the energy uncertainty Delta E becomes large. We can therefore expect the particle to be in some energy state around the energy it would have classically (we have a band broadening given by Delta E). I know of the quantum Zeno effect, where a particle undergoing extremely frequent interactions with another particle shows (almost) no time evolution. Because Delta E is very big in this case, collisions with a classical energy transfer epsilon will have no significant, observable effect when
epsilon < Delta E.
So only collisions with really high values of epsilon contribute to observable effects; however, these kinds of collisions are rare. Therefore, very little temporal change will be observed.
Question: Let the particle be trapped in an external potential with higher magnitude than the particle's classical, "on-shell" kinetic energy. If the energy spectrum of a particle is broadened in a material with very high collision frequencies, will there be a higher probability of quantum tunneling out of the potential?
I think yes, for a short time. Matter-antimatter pairs can pop out for a very short time; these pairs consist of virtual particles. But virtual particles have effects on the dynamics. Maybe there are also effects outside the barrier due to virtual particles tunneling out (what would this effect look like?).
Which other quantum effects would we observe in a material with a high collision frequency? Maybe also vacuum polarization, even if the characteristic energy scales for that effect are far lower?
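Regarding the tunneling part, here is a toy numerical check (a sketch, not a derivation; my own assumptions: the collisional broadening is modeled as a normalized Lorentzian distribution of incident energies, the trap as a 1D square barrier in the WKB approximation, and hbar = m = 1):

```python
import numpy as np

hbar = m = 1.0
V0, L = 5.0, 2.0   # barrier height and width (toy units)
E0 = 2.0           # classical "on-shell" energy of the particle, E0 < V0

def T_wkb(E):
    # WKB transmission through a square barrier; taken as ~1 above the top
    kappa = np.sqrt(np.clip(2.0 * m * (V0 - E), 0.0, None)) / hbar
    return np.where(E < V0, np.exp(-2.0 * kappa * L), 1.0)

E = np.linspace(1e-4, 4.0 * V0, 20001)
dE_grid = E[1] - E[0]
sharp = float(np.exp(-2.0 * np.sqrt(2.0 * m * (V0 - E0)) * L / hbar))

for dE in [0.05, 0.5, 1.0, 2.0]:
    w = 1.0 / ((E - E0) ** 2 + (dE / 2.0) ** 2)  # Lorentzian, FWHM = dE
    w /= w.sum() * dE_grid                        # normalize on the grid
    T_avg = float((w * T_wkb(E)).sum() * dE_grid)
    print(f"Delta E = {dE:4.2f}  ->  <T> = {T_avg:.3e}   (sharp energy: {sharp:.3e})")
```

In this toy model the averaged transmission indeed grows with Delta E, mainly because the Lorentzian tails reach energies near and above the barrier top; it says nothing, of course, about the virtual-particle effects you mention.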
I want to know: does the negative-T state merely catch one's eye conceptually, or is it truly significant in helping us understand thermodynamics?
- In the early days, Purcell et al., and also my university textbooks, discussed negative T for the spin degree of freedom in an NMR system;
- In 2013, S. Braun et al. performed an experiment in cold atoms and realized an inverted energy-level population for the motional degrees of freedom. (http://science.sciencemag.org/content/339/6115/52)
- There are many disputes about the Boltzmann entropy versus the Gibbs entropy, as far as I know, especially by Jörn Dunkel et al. (https://www.nature.com/articles/nphys2815); they insist that the Gibbs entropy is the physical one and argue that negative T is wrong.
- After that, many debates emerged; I read several papers, and they all agree with the conventional Boltzmann entropy.
Does anyone have comments about this field?
Is it truly fascinating, or just trivial, to realize a population-inversion state, i.e., negative temperature?
Or can anyone clarify how a Carnot engine would work between a negative-T and a positive-T substance?
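A small numerical illustration of the core of that dispute (a sketch with kB = 1, N two-level systems of level spacing 1, and the approximations dE = 1 for the finite difference and dVol/dE ≈ Omega): the Boltzmann temperature changes sign at the population inversion, while the Gibbs (volume-entropy) temperature stays positive by construction:

```python
from itertools import accumulate
from math import comb, log

# N two-level systems, level spacing 1 (units: kB = 1); total energy E = n
N = 1000
Omega = [comb(N, n) for n in range(N + 1)]   # microcanonical degeneracy
Vol = list(accumulate(Omega))                # Gibbs phase-space volume up to E

for n in (250, 500, 750):
    # Boltzmann: S_B = ln Omega(E);  1/T_B = dS_B/dE (central difference)
    dSB = (log(Omega[n + 1]) - log(Omega[n - 1])) / 2.0
    T_B = 1.0 / dSB if dSB != 0.0 else float("inf")
    # Gibbs: S_G = ln Vol(E);  dS_G/dE = Omega(E)/Vol(E) > 0 always
    T_G = Vol[n] / Omega[n]
    print(f"n = {n:4d}:  T_Boltzmann = {T_B:+.4g}   T_Gibbs = {T_G:.4g}")
```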
Any comments and discussions are welcome.
I have read that a drawback of the Edgeworth series expansion is that "... they can be inaccurate, especially in the tails, due to mainly two reasons: (1) They are obtained under a Taylor series around the mean. (2) They guarantee (asymptotically) an absolute error, not a relative one. This is an issue when one wants to approximate very small quantities, for which the absolute error might be small, but the relative error important."
So my resulting question is whether there are any attractive alternative ways of approximating distributions of stochastic variables, with some corresponding method that is still useful in the tails of the distribution and does not (for example) produce negative probabilities (which is mentioned as another drawback of the approach).
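One classical alternative worth considering is the saddlepoint (Daniels / Lugannani-Rice) approximation: it expands around a point adapted to the tail rather than around the mean, is positive by construction, and has an asymptotically uniform relative error. A minimal sketch for the sum of n unit-mean exponentials, where the exact law is Gamma(n, 1) and the cumulant generating function is K(t) = -n ln(1 - t):

```python
import numpy as np
from scipy.stats import gamma

n = 10  # number of i.i.d. Exp(1) summands; exact sum is Gamma(n, scale=1)

def saddlepoint_density(x):
    """Daniels' saddlepoint density for S = sum of n Exp(1) variables.
    K(t) = -n ln(1-t);  K'(t_hat) = n/(1-t_hat) = x  =>  t_hat = 1 - n/x."""
    t_hat = 1.0 - n / x
    K = -n * np.log(1.0 - t_hat)
    K2 = n / (1.0 - t_hat) ** 2          # K''(t_hat)
    return np.exp(K - t_hat * x) / np.sqrt(2.0 * np.pi * K2)

for x in [10.0, 20.0, 30.0, 50.0]:       # far tail: the mean is n = 10
    exact = gamma.pdf(x, a=n)
    approx = saddlepoint_density(x)
    print(f"x = {x:4.0f}: exact = {exact:.3e}, saddlepoint = {approx:.3e}, "
          f"rel. err = {abs(approx - exact) / exact:.2%}")
```

The relative error stays at the ~1% Stirling-correction level uniformly far into the tail, which is exactly the property the Edgeworth expansion lacks.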
E.g., for the function f(x,y,z) = xyz - xy - xz - yz, where x, y, z all belong to (0,1): when f attains its extremum, we get x = y = z. Does this property hold for every symmetric polynomial?
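Not an answer, but a quick numerical sketch (Python with scipy; the start point is arbitrary) that locates extrema of this particular f on the closed cube [0,1]^3. It is only a sanity check for the example, not a proof of the general claim; note that the maximum here (value 0) is attained on a whole degenerate set that includes non-symmetric points:

```python
import numpy as np
from scipy.optimize import minimize

def f(p):
    x, y, z = p
    return x * y * z - x * y - x * z - y * z

bounds = [(0.0, 1.0)] * 3
x0 = np.array([0.3, 0.6, 0.9])  # arbitrary start point

fmin = minimize(f, x0, bounds=bounds)                  # minimize f
fmax = minimize(lambda p: -f(p), x0, bounds=bounds)    # maximize f

print("minimizer:", np.round(fmin.x, 4), " f =", round(fmin.fun, 4))
print("maximizer:", np.round(fmax.x, 4), " f =", round(-fmax.fun, 4))
```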
In the simulation of TFETs, most of the references out there refer to a self-consistent solution of the continuity equation and the Poisson equation for the carrier-statistics calculation. However, is there any problem in following the NEGF formalism used in normal MOSFETs, given that the normal MOSFET and the TFET are similar structures (one is N-i-N and the other is P-i-N)?
The final current will be calculated using band-to-band tunneling equations.
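Not an authoritative answer, but one point worth separating out: the coherent NEGF machinery itself is structure-agnostic, so nothing breaks for a P-i-N geometry; the real question is whether the Hamiltonian can represent band-to-band tunneling (which needs at least a two-band model). As a hedged toy sketch, here is a minimal single-band 1D tight-binding NEGF transmission calculation through a potential profile; a real TFET study would swap in a multi-band Hamiltonian, add the Poisson self-consistency you mention, and use realistic contacts:

```python
import numpy as np

t = 1.0                      # hopping energy (eV), toy value
N = 60                       # device sites
U = np.zeros(N)
U[20:40] = 0.6               # toy potential barrier (eV) in the channel

# Device Hamiltonian: nearest-neighbour tight-binding chain
H = np.diag(2 * t + U) - t * (np.eye(N, k=1) + np.eye(N, k=-1))

def surface_g(E, eta=1e-9):
    """Retarded surface Green's function of a semi-infinite 1D lead
    (onsite 2t, hopping -t), via the standard closed form."""
    z = E + 1j * eta - 2 * t
    g = (z - np.sqrt(z**2 - 4 * t**2 + 0j)) / (2 * t**2)
    if g.imag > 0:           # pick the retarded branch (Im g <= 0)
        g = (z + np.sqrt(z**2 - 4 * t**2 + 0j)) / (2 * t**2)
    return g

def transmission(E):
    gs = surface_g(E)
    SigL = np.zeros((N, N), complex); SigL[0, 0] = t**2 * gs
    SigR = np.zeros((N, N), complex); SigR[-1, -1] = t**2 * gs
    G = np.linalg.inv((E + 1e-9j) * np.eye(N) - H - SigL - SigR)
    GamL = 1j * (SigL - SigL.conj().T)
    GamR = 1j * (SigR - SigR.conj().T)
    return np.trace(GamL @ G @ GamR @ G.conj().T).real

for E in [0.2, 0.4, 0.6, 0.8, 1.0]:
    print(f"E = {E:.1f} eV   T(E) = {transmission(E):.4f}")
```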
Why can't we use KAM theory to obtain the recurrence time?
The thermal rate coefficient can be obtained from the reactive cross section (σ(Ecoll)):
k(T) = c(T)×∫P(T,Ecoll)Ecollσ(Ecoll)dEcoll
where Ecoll is the relative collision energy, c(T) is a constant at a given temperature, and P(T,Ecoll) is the statistical weight.
Normally, Boltzmann statistics is used for the calculation of the statistical weights. But Boltzmann statistics is valid when the temperature is high and the particles are distinguishable. At ultralow temperatures (T < 10 K) we should use the appropriate quantum statistics (Fermi or Bose).
What kind of quantum statistics should be used for the collision of a
radical [spin = 1/2] + closed-shell molecule (spin = 0)
at ultralow temperatures?
What is the form of P(T,Ecoll) in this case?
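For the classical (high-temperature) limit of your formula, a sketch is below (reduced units mu = kB = 1; the Langevin-like model cross section sigma(E) = sigma0/sqrt(E) is my assumption). It should reproduce the known temperature-independent capture rate, and it is precisely what breaks down at ultralow T, where the Fermi/Bose statistics you ask about enter:

```python
import numpy as np
from scipy.integrate import quad

sigma0 = 1.0

def sigma(E):
    # Model Langevin-like capture cross section, sigma(E) ~ E^(-1/2)
    return sigma0 / np.sqrt(E)

def k_rate(T, mu=1.0):
    # Maxwell-Boltzmann averaged rate coefficient (reduced units, kB = 1):
    # k(T) = sqrt(8/(pi*mu)) * T**(-3/2) * Int sigma(E) * E * exp(-E/T) dE
    integral, _ = quad(lambda E: sigma(E) * E * np.exp(-E / T), 0.0, np.inf)
    return np.sqrt(8.0 / (np.pi * mu)) * T ** -1.5 * integral

for T in [1.0, 10.0, 100.0, 300.0]:
    print(f"T = {T:6.1f}   k(T) = {k_rate(T):.6f}"
          f"   (Langevin analytic: {np.sqrt(2.0) * sigma0:.6f})")
```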
I'm studying a Hamiltonian model, inspired by a wave system, with short- and long-range interactions. In particular, I've found a curious size-dependent phase transition, depending on how strong my short-range coupling is. When N is large, and the short-range coupling remains the same as for small N (where I found the phase transition), the system remains homogeneous. Is there a simple explanation for that?
Any reference on calculating the diffusion coefficient of a colloidal system under a temperature gradient?
Let x and y be the state vectors of two dynamical systems, respectively and we have:
dx(t)/dt=F(x(t))
dy(t)/dt=G(y(t),x(t)),
where x is the driver system and y is the response system.
According to the literature (e.g. http://journals.aps.org/pre/abstract/10.1103/PhysRevE.61.5142), given a certain coupling strength between the driver and the response, the maximal Lyapunov exponent (correction: it should be the conditional Lyapunov exponent) is negative when the response system synchronizes with the driver system. See the attached figure (from the referenced paper).
Now my question is: what algorithm should I use to calculate the conditional Lyapunov exponent for the purpose of detecting synchronization?
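A common recipe is Benettin-style: evolve the driver once, evolve two copies of the response driven by the same signal, and average the exponential growth rate of their separation with periodic renormalization (no Jacobian needed). A minimal sketch for x-coupled Lorenz systems; the parameters, coupling values, and step sizes are illustrative, and you should see the exponent cross zero at the synchronization threshold:

```python
import numpy as np

S, R, B = 10.0, 28.0, 8.0 / 3.0  # Lorenz parameters

def drv(d):
    x, y, z = d
    return np.array([S * (y - x), x * (R - z) - y, x * y - B * z])

def rsp(r, xd, c):
    # Response Lorenz, diffusively driven by the driver's x-variable
    x, y, z = r
    return np.array([S * (y - x) + c * (xd - x), x * (R - z) - y, x * y - B * z])

def rhs(s, c):
    d, r1, r2 = s[:3], s[3:6], s[6:9]
    return np.concatenate([drv(d), rsp(r1, d[0], c), rsp(r2, d[0], c)])

def rk4(s, dt, c):
    k1 = rhs(s, c)
    k2 = rhs(s + 0.5 * dt * k1, c)
    k3 = rhs(s + 0.5 * dt * k2, c)
    k4 = rhs(s + dt * k3, c)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def cle(c, dt=0.01, n_renorm=5000, steps_per_renorm=10, d0=1e-8):
    """Conditional Lyapunov exponent via two response copies (Benettin)."""
    rng = np.random.default_rng(1)
    s = rng.standard_normal(9)
    s[6:9] = s[3:6]
    s[6] += d0                               # tiny initial separation
    lam = 0.0
    for k in range(n_renorm + 500):          # first 500 blocks = transient
        for _ in range(steps_per_renorm):
            s = rk4(s, dt, c)
        delta = s[6:9] - s[3:6]
        dist = np.linalg.norm(delta)
        if k >= 500:
            lam += np.log(dist / d0)
        s[6:9] = s[3:6] + (d0 / dist) * delta  # renormalize the separation
    return lam / (n_renorm * steps_per_renorm * dt)

for c in [0.0, 2.0, 8.0]:
    print(f"c = {c:3.1f}   conditional Lyapunov exponent ~ {cle(c):+.3f}")
```

If you prefer the tangent-space version, replace the second response copy by the variational equation linearized around the first; the two-copy variant above avoids deriving the Jacobian.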

I wanted to look at the analytical solutions of some quantum mechanical systems and follow along with the math step by step.
I was able to find (at least I think) just that for the H2+ ion, but that is a single-electron problem, i.e., there is no electron correlation. Has this been solved for H2 or some other multi-electron system, and if so, could I be pointed to the relevant solutions?
Here's an excerpt from 'Into The Cool: Energy Flow, Thermodynamics, and Life' that got me thinking:
"Trained as a physical chemist, Alfred Lotka worked for an insurance company as a statistical analyst and in his spare time was a student of biology. Almost a generation ahead of his peers, Lotka suggested that life was a dissipative metastable process. By this he meant that, although stable and mistaken for a 'thing,' life was really a process. Living matter was in continuous flux, kept from equilibrium by energy provided by the sun. Lotka stressed that life on Earth was an open system."
Just how far along is our understanding of open systems?
I want to know whether a Markov process far from equilibrium corresponds to a non-equilibrium thermodynamic process, or whether the two at least have something in common.
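One concrete bridge between the two languages is stochastic thermodynamics: for a Markov chain one can compute the stationary (Schnakenberg) entropy production rate, which vanishes exactly when detailed balance holds (an "equilibrium" process) and is positive for a genuinely non-equilibrium steady state. A minimal sketch for a three-state chain with a biased cycle (the transition probabilities are made up):

```python
import numpy as np

def stationary(P):
    """Stationary distribution of a row-stochastic matrix P."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

def entropy_production(P):
    """Schnakenberg entropy production rate (per step, in units of kB)."""
    pi = stationary(P)
    sigma = 0.0
    n = len(pi)
    for i in range(n):
        for j in range(n):
            if i != j and P[i, j] > 0 and P[j, i] > 0:
                Jij = pi[i] * P[i, j]   # stationary probability flux i -> j
                Jji = pi[j] * P[j, i]
                sigma += 0.5 * (Jij - Jji) * np.log(Jij / Jji)
    return sigma

# Detailed-balance chain (symmetric): equilibrium, sigma = 0
P_eq = np.array([[0.8, 0.1, 0.1],
                 [0.1, 0.8, 0.1],
                 [0.1, 0.1, 0.8]])

# Biased cycle 1 -> 2 -> 3 -> 1: non-equilibrium steady state, sigma > 0
P_neq = np.array([[0.6, 0.3, 0.1],
                  [0.1, 0.6, 0.3],
                  [0.3, 0.1, 0.6]])

print("sigma (detailed balance):", entropy_production(P_eq))
print("sigma (biased cycle):    ", entropy_production(P_neq))
```

So a Markov process "far from equilibrium" in this sense is precisely one whose stationary fluxes break detailed balance; the thermodynamic interpretation then hinges on identifying log-ratios of rates with physical entropy flows (local detailed balance), which is model-specific.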