Science topic

# Non-Equilibrium Physics - Science topic

Explore the latest questions and answers in Non-Equilibrium Physics, and find Non-Equilibrium Physics experts.

Questions related to Non-Equilibrium Physics

Feynman's parton model (as presented by, for example, W.-Y. P. Hwang, 1992, enclosed) seems to bridge both conceptions, but they do come across as mutually exclusive theories. The S-matrix program, which goes back to Wheeler and Heisenberg (see D. Bombardelli's Lectures on S-matrices and integrability, 2016, enclosed), is promising because, unlike parton theories, it does not make use of perturbation theory or other mathematically flawed procedures (cf. Dirac's criticism of QFT in the latter half of his life).

Needless to say, we do not question the *usefulness* of the quark hypothesis for classifying the zoo of unstable particles (Particle Data Group), nor the massive investment behind the precise measurements involved in the study of high-energy reactions (as synthesized in the Annual Reviews of the Particle Data Group). However, the award of the Nobel Prize in Physics to CERN researchers Carlo Rubbia and Simon van der Meer (1984), or to Englert and Higgs (2013), seems to reward 'smoking gun physics' only, rather than providing any ontological proof for virtual particles.

To trigger the discussion, we attach our own opinion. For a concise original/eminent opinion on this issue, we refer to Feynman's discussion of high-energy reactions involving kaons (https://www.feynmanlectures.caltech.edu/III_11.html#Ch11-S5), in which Feynman (writing in the early 1960s and well aware of the new law of conservation of strangeness as presented by Gell-Mann, Pais and Nishijima) seems to favor a mathematical concept of strangeness or, more to the point, a property of particles rather than an existential/ontological concept. Our own views on the issue are summarized in

Preprint Ontology and physics

(see the Annexes for our approach to modeling reactions involving kaons).

Dear all:

I hope this question seems interesting to many. I believe I'm not the only one who is confused by many aspects of the so-called physical property 'Entropy'.

This time I want to speak about **Thermodynamic Entropy**; hopefully a few of us can gain more understanding by thinking a little more deeply about questions like these. Thermodynamic Entropy is defined as:

**Delta(S) >= Delta(Q)/(T_{2}-T_{1})**. This property is only properly **defined for (macroscopic) systems** which are **in Thermodynamic Equilibrium** (i.e. thermal eq. + chemical eq. + mechanical eq.). So

**my question is**: in terms of numerical values of S (or, perhaps better said, values of Delta(S), since we know that only changes in Entropy are computable, not the absolute Entropy of a system, with the exception of a system at the Absolute Zero (0 K) point of temperature):

It is easy and straightforward to compute the change in Entropy of, let's say, a chair, a table, or your car, etc., since all these objects can be considered macroscopic systems in Thermodynamic Equilibrium. So, just use the **classical definition of Entropy** (the formula above) **and the Second Law of Thermodynamics**, and that's it.

But what about macroscopic objects (or systems) which are **not in Thermal Equilibrium**? We are often tempted to apply the classical thermodynamic definition of Entropy to macroscopic systems which, from a macroscopic point of view, seem to be in Thermodynamic Equilibrium, but which in reality still have **ongoing physical processes that keep them from complete thermal equilibrium**. What I want to say is:

**What would be the limits of the classical Thermodynamic definition of Entropy** when used in calculations for systems that **seem to be in Thermodynamic Equilibrium but aren't** really? Perhaps this question can also be extended to the so-called regime of Near-Equilibrium Thermodynamics. Kind regards all!
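As a concrete illustration of the classical definition applied to an everyday object that stays (approximately) in equilibrium, here is a minimal sketch computing the entropy change of an object heated reversibly at constant pressure, using the standard textbook result Delta(S) = integral(dQ/T) = m·c·ln(T2/T1) for a constant specific heat. The material parameters are assumed values for illustration, not measured data.

```python
import math

def delta_S_heating(mass_kg, c_J_per_kgK, T1_K, T2_K):
    """Entropy change of an object heated reversibly from T1 to T2
    at constant pressure, assuming a constant specific heat c:
        Delta(S) = integral(dQ/T) = m * c * ln(T2/T1)."""
    return mass_kg * c_J_per_kgK * math.log(T2_K / T1_K)

# Example: a ~10 kg wooden chair (c ~ 1700 J/(kg K), an assumed value)
# warming from 290 K to 300 K:
dS = delta_S_heating(10.0, 1700.0, 290.0, 300.0)
print(f"Delta S = {dS:.1f} J/K")
```

For systems that are only *near* equilibrium, the usual workaround is local equilibrium: one assumes T is well defined in each small subregion and integrates dQ/T region by region, which is exactly the assumption whose limits the question asks about.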

The asymptotic values of the intensity-intensity correlation function (g2(t)-1) are supposed to be related to beta in the Siegert relation below. From my reading of the literature, this is supposed to be instrument dependent. For our instrument, an ALV DLS instrument with an ALV 5000 correlator, the value is 0.7-0.8. For a polymer in a good solvent we observe the value to be 0.7. I have noticed that for my gel samples swelling in 0.1 M NaCl the value I am obtaining is ~0.3-0.4, which is definitely below the truly acceptable value for the instrument.

- What could possibly be the reason for the low asymptotic values of the intensity-intensity correlation function (g2(t)-1) in the case of my gels?
- What are the factors that control the asymptotic values of the intensity-intensity correlation function (g2(t)-1) ?
- What could be a suggested remedy to improve this value to the accepted value?
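One way to quantify the intercept is to fit the Siegert relation g2(τ) − 1 = β·|g1(τ)|² to the correlator output; the sketch below assumes a single-exponential field correlation g1(τ) = exp(−Γτ) and uses synthetic data (all numbers are made up for illustration). For nonergodic media such as gels, a suppressed time-averaged intercept is commonly attributed to a frozen-in (static) scattering component, so a low apparent β may reflect the sample rather than the instrument.

```python
import numpy as np
from scipy.optimize import curve_fit

def siegert(tau, beta, gamma):
    """Siegert relation with a single-exponential field correlation:
    g2(tau) - 1 = beta * exp(-2 * gamma * tau)."""
    return beta * np.exp(-2.0 * gamma * tau)

# Synthetic correlator output (assumed values, for illustration only)
tau = np.logspace(-6, 0, 200)            # lag times in seconds
true_beta, true_gamma = 0.35, 2.0e3      # suppressed intercept, decay rate
g2m1 = siegert(tau, true_beta, true_gamma)
g2m1 += np.random.default_rng(0).normal(0.0, 1e-3, tau.size)  # detector noise

(beta_fit, gamma_fit), _ = curve_fit(siegert, tau, g2m1, p0=[0.5, 1e3])
print(f"beta = {beta_fit:.3f}, Gamma = {gamma_fit:.0f} 1/s")
```

For a real gel, ensemble averaging (rotating/translating the sample, or averaging over many speckles) is the usual remedy explored in the nonergodic-DLS literature.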

These two economic analyses are based on the second law of thermodynamics.

Dear Research-Gaters,

It might be a very trivial question for you: 'What does the term 'wrong dynamics' actually mean?'. I have heard that term often when somebody presented their results. As it seems to me, 'wrong dynamics' is an argument often brought up to suggest that a simulation result might not be very useful. But what does that argument mean in terms of physical quantities? Is that argument related to measures such as correlation functions, e.g. the velocity autocorrelation, H-bond autocorrelation, or radial distribution functions? Can 'wrong dynamics' be visualized as a too-fast decay in any of those correlation functions in comparison with other equilibrium simulations, or can it simply be measured by deviations of the potential energy, kinetic energy, and/or the root-mean-square deviation from the starting structure? At the same time, thermodynamic quantities such as free energies might not be affected by 'wrong dynamics'. Finally, I would like to ask what the term 'wrong dynamics' means if I use non-equilibrium simulations which are actually completely non-Markovian, i.e. history-dependent, and out of equilibrium (Metadynamics, Hyperdynamics). Thank you for your answers. Emanuel

I am looking to combine Monte Carlo and Molecular Dynamics in a simulation. How can they be combined? In general, how does one keep the time evolution of the system correct?

Santo
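One common scheme is a hybrid alternation: run short MD segments (which carry the physical time) and interleave Metropolis trial moves (which sample the Boltzmann distribution but carry no time), redrawing velocities from the Maxwell-Boltzmann distribution after an accepted move so the next MD segment starts consistently. The sketch below does this for a 1D harmonic oscillator; all parameters are assumed toy values, not a production setup.

```python
import math, random

random.seed(1)
k, m, kT, dt = 1.0, 1.0, 1.0, 0.01   # spring constant, mass, temperature, MD step
U = lambda x: 0.5 * k * x * x        # potential energy
F = lambda x: -k * x                 # force

def md_segment(x, v, nsteps):
    """Velocity-Verlet integration: the only part that advances physical time."""
    for _ in range(nsteps):
        v += 0.5 * dt * F(x) / m
        x += dt * v
        v += 0.5 * dt * F(x) / m
    return x, v

def mc_move(x, step=0.5):
    """Metropolis trial displacement: samples the Boltzmann distribution,
    but does not correspond to any physical time interval."""
    xn = x + random.uniform(-step, step)
    if random.random() < math.exp(-(U(xn) - U(x)) / kT):
        return xn, True
    return x, False

x, v = 1.0, 0.0
for cycle in range(1000):
    x, v = md_segment(x, v, 20)      # 20 MD steps of physical evolution
    x, accepted = mc_move(x)
    if accepted:                     # redraw velocity after an accepted jump
        v = random.gauss(0.0, math.sqrt(kT / m))
print(f"final x = {x:.3f}")
```

The caveat implicit in the question stands: once MC moves are mixed in, only the MD segments between moves have a kinetic interpretation; the combined trajectory samples the correct ensemble but is not a physical time evolution across MC jumps.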

In the case of non-Brownian suspensions, how does one confirm the presence of shear banding? I am using a Couette geometry. While reading through the literature I came across mention of a "positive slope" in the yielding region of the flow curve which appears due to the curvature of the Couette geometry. For parallel-plate and cone-plate geometries this region is flat or displays a negative slope. However, what I am confused about is whether that positive slope needs to have a certain "minimum" value, or whether any change in slope qualifies as banding? How much does the slope change vis-à-vis the initial low-shear yielding slope? Is there perhaps some ratio below or above which it qualifies as shear banding?

The stress-ramp flow curve shows three regions in my case: a gradually yielding region at lower shear rates, a lower-slope region around yielding, and again a high-slope region beyond yielding. Am I definitely seeing shear banding, or are there other confirmations I need to address before I conclude so? Also, what interpretations other than shear banding exist for such systems?

Dansgaard [1964] says that if evaporation takes place under non-equilibrium conditions, e.g. fast evaporation, the evaporating surface water probably has a negative d-excess. Besides, a relatively low mean condensation temperature for the heaviest rain (i.e. very high clouds) also causes a negative d-excess. If so, heavy rainfall from a tropical cyclone (very high clouds) should have a negative d-excess. Am I right? If yes, will the surface seawater have a negative d-excess due to the heavy rainfall? For example, tropical surface seawater subject to heavy rainfall in the ITCZ?

Note the negative d-excess in seawater of the Gulf of Mexico (-3 ‰) and especially the Mediterranean Sea (down to -9 ‰) in the attached figure [Schmidt et al., 2007, JGR]. As I understand it, these may be ascribed, respectively, to the annual precipitation of the Gulf of Mexico exceeding 1000 mm and to the intense (non-equilibrium) evaporation in the Mediterranean Sea. Is that right?

BTW, can anyone send me some literature about d-excess of the seawater? Thanks a lot!

Equilibrium is the most important method of analysis in economics. It has a long tradition that started in the 18th century with French scholars such as A.-N. Isnard and N.-F. Canard and was elaborated by L. Walras into a true method of analysis. The existence of an equilibrium began to be studied in Vienna in the 1930s and was settled by Arrow and Debreu's demonstration. Equilibrium still remains the major framework of almost all economic analysis today. State-of-the-art macroeconomics is normally discussed using a Dynamic Stochastic General Equilibrium (DSGE) model.

P. Krugman once argued that, without models, economics becomes a collection of metaphors and historical details (Krugman 1997, p.75). To avoid this, it is necessary to make a formal mathematical model which, in his opinion, usually contains two principles: maximization and equilibrium.

Despite Krugman's compelling argument, we see

**many economists contest the usefulness of the equilibrium concept**. They sometimes argue that the equilibrium framework is the very source of all the derailments of present-day economics. My long-held opinion is that

- it is necessary to replace equilibrium by some other concept, and
- the best solution would be the concept of
**dissipative structure**.

Do you agree with me? Or do you have any other ideas?

We are trying to use optical emission spectroscopy to determine our n_{e}, T_{e}, and gas temperatures. There is some evidence that you can use Boltzmann statistics and analyses on non-thermal-equilibrium plasmas, BUT the arguments are not clear and there are no references to back the claim. Any thoughts?

Is there any reference on calculating the diffusion coefficient of a colloidal system under a temperature gradient?
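On the optical emission spectroscopy question above: the standard Boltzmann-plot analysis assumes the *excited-state* populations follow a Boltzmann distribution (partial LTE among the emitting levels), which is exactly the assumption in question for non-thermal plasmas, and which should be checked by the linearity of the plot. A minimal sketch with invented line data (the intensities, wavelengths, g, A, and E values below are illustrative, not real atomic data):

```python
import numpy as np

kB_eV = 8.617e-5  # Boltzmann constant in eV/K

# Hypothetical emission lines: (intensity I, wavelength lam [nm],
# statistical weight g, transition probability A [1/s], upper-level energy E [eV]).
lines = np.array([
    #   I,    lam,  g,    A,     E
    [1000.0, 400.0, 3, 1.0e7, 3.0],
    [ 420.0, 420.0, 5, 8.0e6, 3.5],
    [ 170.0, 450.0, 3, 9.0e6, 4.0],
    [  70.0, 480.0, 7, 5.0e6, 4.5],
])
I, lam, g, A, E = lines.T

# Boltzmann plot: ln(I * lam / (g * A)) = const - E / (kB * Te)
y = np.log(I * lam / (g * A))
slope, intercept = np.polyfit(E, y, 1)
Te = -1.0 / (kB_eV * slope)
print(f"excitation temperature Te ~ {Te:.0f} K")
```

Note this yields an *excitation* temperature; in a non-equilibrium plasma it need not equal T_e or the gas temperature, which is precisely why the validity arguments matter.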

The basis of phase field modeling is the minimization of a functional representing some thermodynamic potential (e.g., the free energy) of the system considered. Therefore, the time evolution of the system described by the phase field is not the real kinetics, but just some pathway along which the system proceeds to the equilibrium state.

It is like using the Metropolis Monte Carlo for minimization of the energy of a system. The final state might correspond to some local or global minimum, but the way the system relaxes to it is not the real kinetics. The real kinetic pathway should be described by the kinetic Monte Carlo approach.

Therefore, the question of applicability of the phase field methods to non-equilibrium problems arises.

**Are these methods applicable to micro-structure evolution under irradiation?** I am aware of the large number of publications on void and gas-bubble growth under irradiation. However,

**I am interested in a justification of this approach from first principles**, not just *"use it because the others do so"*. I would enjoy a discussion on the topic; many thanks for your replies.
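The distinction the question draws (Metropolis relaxation versus real kinetics) can be made concrete with the rejection-free residence-time algorithm (BKL/Gillespie-type kinetic Monte Carlo), where each step advances physical time by an exponentially distributed increment set by the total event rate. A minimal sketch with assumed toy rates:

```python
import math, random

random.seed(0)

def kmc_step(rates, t):
    """One rejection-free kinetic Monte Carlo step (residence-time algorithm).
    Picks event i with probability r_i / R and advances physical time by
    dt = -ln(u) / R, where R is the sum of all event rates."""
    R = sum(rates)
    u = random.random() * R
    acc, event = 0.0, 0
    for i, r in enumerate(rates):
        acc += r
        if u <= acc:
            event = i
            break
    dt = -math.log(random.random()) / R
    return event, t + dt

# Toy example: three competing thermally activated events (assumed rates, 1/s)
rates = [1.0e3, 5.0e2, 1.0e1]
t = 0.0
for _ in range(10):
    event, t = kmc_step(rates, t)
print(f"elapsed physical time after 10 events: {t:.2e} s")
```

Unlike a Metropolis trajectory or a phase-field relaxation path, the time axis here is physically meaningful, which is the crux of the applicability question.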

I am looking for examples of studies of non-equilibrium systems, preferably where maximum entropy production has been used (applied studies or purely mathematical).

Assuming quasi-static processes, we showed universal properties of the efficiency for small temperature differences in the paper arXiv:1412.0547.

Here's an excerpt from 'Into The Cool: Energy Flow, Thermodynamics, and Life' that got me thinking:

"Trained as a physical chemist, Alfred Lotka worked for an insurance company as a statistical analyst and in his spare time was a student of biology. Almost a generation ahead of his peers, Lotka suggested that life was a dissipative metastable process. By this he meant that, although stable and mistaken for a 'thing,' life was really a process. Living matter was in continuous flux, kept from equilibrium by energy provided by the sun. Lotka stressed that life on Earth was an open system."

Just how far along is our understanding of open systems?

Given any positive integer N > 0, define the set S = {1, 2, 3, ..., 2N}. Suppose we randomly draw N numbers (without replacement) out of this set S and compute their sum, call it A. Clearly A has finite support, from N(N+1)/2 (the sum of the first N numbers) to N(3N+1)/2 (the sum of the last N numbers). What is the distribution of A?
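The distribution can be explored numerically; by the central limit theorem for sampling without replacement it should approach a Gaussian for large N, with mean N(2N+1)/2. A Monte Carlo sketch (trial count and seed are arbitrary choices):

```python
import random
from collections import Counter

def sample_sums(N, trials=100_000, seed=0):
    """Monte Carlo estimate of the distribution of A, the sum of N numbers
    drawn without replacement from S = {1, ..., 2N}."""
    rng = random.Random(seed)
    S = range(1, 2 * N + 1)
    return Counter(sum(rng.sample(S, N)) for _ in range(trials))

N = 5
counts = sample_sums(N)
lo, hi = N * (N + 1) // 2, N * (3 * N + 1) // 2   # support: 15 .. 40 for N = 5
mean = sum(a * c for a, c in counts.items()) / sum(counts.values())
print(f"support [{lo}, {hi}], empirical mean ~ {mean:.2f} (exact: {N * (2 * N + 1) / 2})")
```

The exact probabilities are P(A = a) = p(a) / C(2N, N), where p(a) counts the N-subsets of {1, ..., 2N} with sum a; these can be tabulated by a subset-sum dynamic program if the full distribution is needed.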

Specifically, I want to set Neumann boundary conditions on the Poisson equation, but the resulting matrix is singular and hence cannot be inverted.

I believe nearly all books on statistical mechanics cover Hamiltonian systems. Then, naturally, Liouville's theorem and the Boltzmann-Gibbs distribution are discussed. I don't see the connection between a Hamiltonian system and the Boltzmann-Gibbs distribution. The former describes the deterministic dynamics of one particle, while the latter refers to the statistical law of a large number of particles, which seems to be random.