Non-Equilibrium Physics - Science topic

Explore the latest questions and answers in Non-Equilibrium Physics, and find Non-Equilibrium Physics experts.
Questions related to Non-Equilibrium Physics
  • asked a question related to Non-Equilibrium Physics
Question
17 answers
Feynman's parton model (as presented by, for example, W.-Y. P. Hwang, 1992, enclosed) seems to bridge both conceptions, but they do come across as mutually exclusive theories. The S-matrix program, which goes back to Wheeler and Heisenberg (see D. Bombardelli's Lectures on S-matrices and integrability, 2016, enclosed), is promising because - unlike parton theories - it does not make use of perturbation theory or other mathematically flawed procedures (cf. Dirac's criticism of QFT in the latter half of his life).
Needless to say, we do not question the usefulness of the quark hypothesis for classifying the zoo of unstable particles (Particle Data Group), nor the massive investment needed to arrive at the precise measurements involved in the study of high-energy reactions (as synthesized in the Annual Reviews of the Particle Data Group). However, the award of the Nobel Prize in Physics to CERN researchers Carlo Rubbia and Simon van der Meer (1984), or to Englert and Higgs (2013), seems to reward 'smoking-gun physics' only, rather than to provide any ontological proof for virtual particles.
To trigger the discussion, we attach our own opinion. For a concise original/eminent opinion on this issue, we refer to Feynman's discussion of high-energy reactions involving kaons (https://www.feynmanlectures.caltech.edu/III_11.html#Ch11-S5), in which Feynman (writing in the early 1960s and well aware of the new law of conservation of strangeness as presented by Gell-Mann, Pais and Nishijima) seems to favor a mathematical concept of strangeness or, more to the point, a property of particles rather than an existential/ontological concept. Our own views on the issue are summarized in the Annexes, where we present our approach to modeling reactions involving kaons.
Relevant answer
Answer
In the first half of the 20th century some theorists (e.g. Heisenberg, Brouwer) tried to develop the concept that space itself has a metric (a minimal length scale of discrete space). The size of this metric was thought to be ≈ 1 × 10⁻¹⁵ m, based on the minimal wavelength of electromagnetic waves and the diameter of particles. However, 1 × 10⁻¹⁵ m is too large in relation to the two amplitudes of one electromagnetic wave, so the minimal length scale must be ≈ 0.5 × 10⁻¹⁵ m or a bit smaller. The consequence is that we cannot detect phenomena smaller than ≈ 0.5 × 10⁻¹⁵ m.
We are aware of the existence of discrete space because without a spatial structure there would be no observable differences in the universe. But observable reality is created by discrete space, and we know it through the spatial differentiation of force fields (the general concept of QFT). The consequence is that everything we can observe and detect is, in its nature, mutual relations (“proved” by the formalism of QM). Thus we don’t measure the bare existence of the spatial units of discrete space; we measure the mutual interactions between the units, the exchange of variable properties.
It is obvious that these mutual interactions of the units of discrete space cannot “split” a unit of discrete space (even the ancient Greek philosophers reasoned some 2500 years ago that there is a limit to reductionism). The magic word in theoretical physics to solve the problem is “asymptotic freedom”. We really like to give problems a fuzzy name so we can keep our ambiguous concepts.
Recently astronomers have observed large regions of gravitational polarization in the early universe (Cosmic Microwave Background radiation, BICEP2 Collaboration). That is a problem, because the CMB radiation is the exchange of electromagnetic waves between the hydrogen atoms in the early universe. So how is it possible that there are already regions of huge gravitational fields if there are no stars, etc.? There are also observations of “full-grown” galaxies that already existed about 0.7 billion years after the proposed “big bang”. These galaxies have an enormous black hole in the centre, so cosmologists have termed these black holes “primordial” black holes. But thanks to the BICEP2 measurements we now know that our universe created the enormous black holes first, before the creation of hydrogen atoms.
Now there is your question about the existence of quarks. Are they real or are they the result of tricky theoretical constructions with the help of “asymptotic freedom”? Moreover, can QCD elucidate why vacuum space created enormous black holes before the Hydrogen atoms emerged from vacuum space around the black holes? I will read your paper about ontology and physics. ;-)
With kind regards, Sydney
  • asked a question related to Non-Equilibrium Physics
Question
8 answers
Dear all:
I hope this question seems interesting to many. I believe I'm not the only one who is confused by many aspects of the so-called physical property 'entropy'.
This time I want to speak about thermodynamic entropy; hopefully a few of us can gain more understanding by thinking a little more deeply about questions like these.
Thermodynamic entropy is defined through the Clausius relation: ΔS ≥ ∫δQ/T (with equality for a reversible process). This property is only properly defined for (macroscopic) systems which are in thermodynamic equilibrium (i.e. thermal + chemical + mechanical equilibrium).
So my question is:
In terms of numerical values of S (or, better said, of ΔS, since only changes in entropy are computable, not the absolute entropy of a system, except for one at absolute zero, 0 K):
It is easy and straightforward to compute the change in entropy of, let's say, a chair, a table, or your car, since all these objects can be considered macroscopic systems in thermodynamic equilibrium. So just use the classical definition of entropy (the formula above) and the second law of thermodynamics, and that's it.
But what about macroscopic objects (or systems) which are not in thermal equilibrium? We are often tempted to apply the classical thermodynamic definition of entropy to such systems, which from a macroscopic point of view seem to be in thermodynamic equilibrium but in reality still have ongoing physical processes that keep them from complete thermal equilibrium.
What I want to say is: what are the limits of the classical thermodynamic definition of entropy when used in calculations for systems that seem to be in thermodynamic equilibrium but really are not? Perhaps this question can also be extended to the so-called regime of near-equilibrium thermodynamics.
Kind regards to all!
Relevant answer
Answer
Partition function - a dual character.
  • asked a question related to Non-Equilibrium Physics
Question
13 answers
The asymptotic value of the intensity-intensity correlation function (g2(t)-1) is supposed to be related to beta in the Siegert relation below. From my reading of the literature, this is supposed to be instrument dependent. For our instrument, an ALV DLS instrument with an ALV 5000 correlator, the value is 0.7-0.8. For a polymer in a good solvent we observe the value to be 0.7. I have noticed that for my gel samples swelling in 0.1 M NaCl the value I am obtaining is ~0.3-0.4, which is definitely below the truly acceptable value for the instrument.
  1. What could possibly be the reason for the low asymptotic values of the intensity-intensity correlation function (g2(t)-1) in the case of my gels?
  2. What are the factors that control the asymptotic values of the intensity-intensity correlation function (g2(t)-1) ?
  3. What could be a suggested remedy to improve this value to the accepted value?
Relevant answer
Answer
The intercept beta is not related to the actual signal-to-noise ratio, as was shown by Hocker, Litster and Smith more than 40 years ago. Beta is determined by how many coherence areas you are looking at. However, if you open or close the apertures in your collecting optics, within reason, while leaving the laser power and photomultiplier tube gain the same (the latter has different phrasings for different photodetectors), beta will change, but the accuracy of your spectral measurement will not. If you open the apertures and lower the laser power (as with filters), the count rate may stay the same but the accuracy of the measurement will go down. If you want more accuracy, increase the measurement time.
The fundamental signal-to-noise ratio is determined by photocounts per relaxation time per coherence area. (Note that coherence is not an on-off thing, so we say per coherence area, but the idea that there is 'one' coherence area misses the point.) As a corollary, scattering power goes inversely as wavelength to the fourth power, but shorter-wavelength photons each carry more energy, and the relaxation time and the size of the coherence areas each scale as wavelength squared, so all else being the same you are on net better off at longer wavelengths, an effect often swamped by the wavelength sensitivity of your detector.
I would be more concerned that you are using some normalization and claiming that the long-time limit of your spectrum is "1". There is noise in the measurement used to normalize, and as a result this claim is imprecise.
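For readers who want to check their own data, here is a minimal sketch (hypothetical lag times and amplitudes, assuming the standard Siegert form g2(t) - 1 = beta*|g1(t)|^2 with a single-exponential g1) of fitting the intercept beta and the decay rate in Python:

import numpy as np
from scipy.optimize import curve_fit

def siegert(t, beta, gamma):
    # Siegert relation with a single-exponential field correlation g1(t) = exp(-gamma * t)
    return beta * np.exp(-2.0 * gamma * t)

# t: lag times in seconds; g2m1: measured g2(t) - 1 exported from the correlator
t = np.logspace(-6, 0, 200)                    # hypothetical lag-time axis
g2m1 = 0.35 * np.exp(-2.0 * 500.0 * t)         # hypothetical gel-like data with beta ~ 0.35
g2m1 += np.random.normal(0.0, 0.005, t.size)   # add a little noise

popt, _ = curve_fit(siegert, t, g2m1, p0=[0.5, 100.0])
beta_fit, gamma_fit = popt
print(f"beta = {beta_fit:.3f}, decay rate = {gamma_fit:.1f} 1/s")

For nonergodic samples such as gels, the time-averaged intercept commonly appears lower than the value obtained on an ergodic sample with the same instrument, which is one frequently cited reason for apparent beta values of 0.3-0.4.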
  • asked a question related to Non-Equilibrium Physics
Question
27 answers
These two economic analyses are based on the second law of thermodynamics. 
Relevant answer
  • asked a question related to Non-Equilibrium Physics
Question
7 answers
Dear Research-Gaters,
It might be a very trivial question to you: what does the term 'wrong dynamics' actually mean? I have heard that term many times when somebody presented their results. As it seems to me, 'wrong dynamics' is an argument often brought up to suggest that a simulation result might not be very useful. But what does that argument mean in terms of physical quantities? Is it related to measures such as correlation functions, e.g. the velocity autocorrelation, H-bond autocorrelation or radial distribution functions? Can 'wrong dynamics' be visualized as a too-fast decay in any of those correlation functions in comparison with other equilibrium simulations, or can it simply be measured by deviations of the potential energies, kinetic energies and/or the root-mean-square deviation from the starting structure? At the same time, thermodynamic quantities such as free energies might not be affected by 'wrong dynamics'. Finally, I would like to ask what the term 'wrong dynamics' means if I use non-equilibrium simulations which are actually non-Markovian, i.e. history-dependent and out of equilibrium (metadynamics, hyperdynamics). Thank you for your answers. Emanuel
Relevant answer
Answer
Start with an obvious remark: the fact that two given Markov processes tend to the same (given) equilibrium does not mean that they do so in the same way. In particular, their dynamical behaviour may be quite distinct.
Now, Newtonian mechanics will generate some kind of equilibrium. It is often easier to reach that same equilibrium by a stochastic process (Monte Carlo). Then we are guaranteed that the equilibrium properties will in fact be the same. Static correlation functions, for example, will be correct. However, the dynamical properties need not be. Thus the velocity autocorrelation function (the product of v at time t with v at time t + tau, averaged over t) in the two schemes need not have any clear connection. Indeed, in some MC models there are no velocities! To the extent that the system is classical, the Newtonian dynamics is the ``correct'' one and the MC dynamics is fake. The results that should be trusted are thus the Newtonian ones.
However, in many cases, MC will give dynamics that are, in some sense, qualitatively close to what Newtonian dynamics gives. Nevertheless, such issues must be treated with considerable care.
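As a concrete illustration of one dynamical observable mentioned above, here is a minimal sketch (hypothetical velocity array, assuming equally spaced frames) of computing the velocity autocorrelation function from a stored MD trajectory; applying the same estimator to an MC "trajectory" is exactly where the comparison breaks down:

import numpy as np

def vacf(v, max_lag):
    # Velocity autocorrelation C(tau) = <v(t) . v(t + tau)>, averaged over
    # time origins t and over particles; v has shape (n_frames, n_particles, 3).
    n_frames = v.shape[0]
    c = np.zeros(max_lag)
    for lag in range(max_lag):
        prod = np.sum(v[:n_frames - lag] * v[lag:], axis=-1)  # per-particle dot products
        c[lag] = prod.mean()
    return c / c[0]  # normalize so that C(0) = 1

# Hypothetical usage: v = np.load("velocities.npy"); c = vacf(v, max_lag=500)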
  • asked a question related to Non-Equilibrium Physics
Question
11 answers
I am looking to combine Monte Carlo and molecular dynamics in a simulation. How can they be combined? In general, how does one keep the time evolution of the system correct?
Santo
Relevant answer
Answer
Yes, it is possible; you can also check the following article.
Regards,
Ender
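For completeness, a minimal sketch of one common hybrid scheme (hypothetical force and energy callables, unit masses, not taken from the article above): run short blocks of velocity-Verlet MD and intersperse Metropolis Monte Carlo moves. Note the caveat relevant to the question: once MC moves are mixed in, the run still samples the equilibrium ensemble, but the sequence of configurations no longer represents the true time evolution.

import numpy as np

def hybrid_mc_md(x, v, force, energy, dt, kT, n_cycles, md_steps, mc_disp):
    # Hypothetical hybrid loop: md_steps of velocity-Verlet MD (unit masses),
    # followed by one Metropolis displacement move per cycle.
    # x, v are (N, 3) arrays; force(x) and energy(x) are user-supplied callables.
    for _ in range(n_cycles):
        f = force(x)
        for _ in range(md_steps):          # --- MD block ---
            v = v + 0.5 * dt * f
            x = x + dt * v
            f = force(x)
            v = v + 0.5 * dt * f
        i = np.random.randint(len(x))      # --- MC block: trial move of one particle ---
        x_trial = x.copy()
        x_trial[i] += np.random.uniform(-mc_disp, mc_disp, size=3)
        dE = energy(x_trial) - energy(x)
        if dE <= 0.0 or np.random.rand() < np.exp(-dE / kT):
            x = x_trial                    # accept according to the Metropolis criterion
    return x, v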
  • asked a question related to Non-Equilibrium Physics
Question
3 answers
In the case of non-Brownian suspensions, how does one confirm the presence of shear banding? I am using a Couette geometry. While reading through the literature I came across the mention of a "positive slope" in the yielding region of the flow curve, which appears due to the curvature of the Couette geometry. For parallel-plate and cone-plate geometries this region is flat or displays a negative slope. However, what I am confused about is whether that positive slope needs to have a certain "minimum" value or whether any change in slope qualifies as banding. How much does the slope change vis-à-vis the initial low-shear yielding slope? Is there maybe some ratio below or above which it qualifies as shear banding?
The stress-ramp flow curve shows three regions in my case: a gradually yielding region at lower shear rates, a lower-slope region around yielding, and again a high-slope region beyond yielding. Am I definitely seeing shear banding, or are there other confirmations that I need to address before I conclude so? Also, what interpretations other than shear banding are there for such systems?
Relevant answer
Answer
You mean like the figure below? If yes, see the reference there.
  • asked a question related to Non-Equilibrium Physics
Question
1 answer
Dansgaard [1964] says that if evaporation takes place under non-equilibrium conditions, e.g. fast evaporation, the evaporating surface water probably has a negative d-excess. Besides, a relatively low mean condensation temperature for the heaviest rain (i.e. very high clouds) also causes a negative d-excess. If so, heavy rainfall from tropical cyclones (very high clouds) would thereby have a negative d-excess. Am I right? If yes, will the surface seawater have a negative d-excess due to the heavy rainfall? For example, tropical surface seawater subject to heavy rainfall in the ITCZ?
Note the negative d-excess in the seawater of the Gulf of Mexico (-3 ‰) and especially the Mediterranean Sea (down to -9 ‰) in the attached figure [Schmidt et al., 2007, JGR]. As I understand it, these may be ascribed, respectively, to the annual precipitation over the Gulf of Mexico exceeding 1000 mm and to the intense (non-equilibrium) evaporation in the Mediterranean Sea. Is that right?
BTW, can anyone send me some literature about the d-excess of seawater? Thanks a lot!
Relevant answer
Answer
I think the negative d-excess in surface seawater is related to the intensity of sea-surface evaporation, not to precipitation. We know that the volume of precipitation is much smaller than the volume of the sea.
  • asked a question related to Non-Equilibrium Physics
Question
44 answers
Equilibrium is the most important method of analysis in economics. It has a long tradition that started in the 18th century with French scholars such as A.-N. Isnard and N.-F. Canard and was elaborated by L. Walras into a real method of analysis. The existence of an equilibrium began to be studied in 1930s Vienna, and the question was settled by Arrow and Debreu's demonstration. Equilibrium still remains today the major framework of almost all economic analyses. State-of-the-art macroeconomics is normally discussed within a Dynamic Stochastic General Equilibrium (DSGE) model.
P. Krugman once argued that, without models, economics becomes a collection of metaphors and historical details (Krugman 1997, p.75). To avoid this, it is necessary to make a formal mathematical model which, in his opinion, usually contains two principles: maximization and equilibrium.
Despite Krugman's compelling argument, we see many economists contest the usefulness of the equilibrium concept. They sometimes argue that the equilibrium framework is the very source of all the derailments of present-day economics.
My long-held opinion is that
  1. it is necessary to replace equilibrium by some other concept, and
  2. the best solution would be the concept of dissipative structure.
 Do you agree with me? Or do you have any other ideas?
Relevant answer
Answer
Dear Yoshinori and company,
All of you are making a common mistake about the concept of equilibrium in modern economics. There are two concepts that should not be used interchangeably: equilibrium in the real world outside our windows, and equilibrium as a property of formal mathematical models.
I would say that the main problem in modern economics is that our departments have been taken over by the mathematics department's culture. Realism is less important than elegant proofs.
I have discussed some of the problems of A-D GE models in Chapter 2 of my 2014 book, Model Building in Economics: Its Purposes and Limitations (Cambridge U.P.), and I am currently working on a book that expands on that chapter. If any of you are interested, contact me at bolandla@yahoo.ca and I will send you the Outline and Preface.
LB
  • asked a question related to Non-Equilibrium Physics
Question
8 answers
We are trying to use optical emission spectroscopy to determine our n_e, T_e, and gas temperatures. There has been some evidence that you can use Boltzmann statistics and analyses on non-thermal-equilibrium plasmas, but the arguments are not clear and there are no references to back the claim. Any thoughts?
Relevant answer
Answer
The Boltzmann plot technique is only an approach to relate the spectra to the level populations. This can always be done. What you do not know is the ground-state population; therefore it is not possible to determine the full distribution and, as a consequence, the temperature. Moreover, you can measure the electron gas properties through Stark broadening, but there is no correlation with the gas temperature.
Even if you have a Boltzmann distribution from the spectra, you are missing information on the ground state and therefore you cannot say that the plasma is in equilibrium.
I have attached a reference showing the evolution of the distributions in the plasma under LIBS conditions.
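For reference, a minimal sketch of the standard Boltzmann-plot construction, in which ln(I*lambda/(g*A)) is fit linearly against the upper-level energy and the slope gives -1/(k_B*T_exc). All line data below (wavelengths, intensities, degeneracies, transition probabilities, upper-level energies) are placeholders, not real spectroscopic constants:

import numpy as np

k_B = 8.617e-5  # Boltzmann constant in eV/K

# Hypothetical emission-line data for one species (all numbers are placeholders)
wavelength = np.array([400.0e-9, 420.0e-9, 450.0e-9])  # wavelengths, m
intensity  = np.array([1.0e4, 6.0e3, 2.5e3])           # integrated line intensities, arb. units
g_upper    = np.array([5.0, 7.0, 9.0])                 # upper-level degeneracies
A_ki       = np.array([4.0e7, 2.5e7, 1.0e7])           # transition probabilities, 1/s
E_upper    = np.array([3.1, 3.6, 4.2])                 # upper-level energies, eV

# Boltzmann plot: ln(I * lambda / (g * A)) versus E_upper; the slope is -1/(k_B * T_exc)
y = np.log(intensity * wavelength / (g_upper * A_ki))
slope, intercept = np.polyfit(E_upper, y, 1)
T_exc = -1.0 / (k_B * slope)
print(f"Excitation temperature ~ {T_exc:.0f} K")

As the answer above stresses, this yields an excitation temperature of the emitting levels only; it says nothing about the ground-state population or about overall equilibrium.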
  • asked a question related to Non-Equilibrium Physics
Question
3 answers
Is there any reference for calculating the diffusion coefficient of a colloidal system under a temperature gradient?
Relevant answer
Answer
While not as specific as the answer above, I would look at books on non-equilibrium thermodynamics, particularly those discussing the Onsager reciprocity relations. Under a thermal gradient, the diffusion flux contains an ordinary Fick's-law component but also a thermodiffusion contribution, usually an order of magnitude or more smaller, which is an additional flux induced by the non-equilibrium coupling between the thermal gradient and the concentration gradient. This is the coefficient appearing in the Onsager reciprocity relation as the cross-term in the system of two equations (i.e. the off-diagonal X12 and X21 components of the coefficient matrix; X11 and X22 are the normal Fickian diffusion and Fourier heat-equation coefficients).
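For concreteness, a sketch of the coupled flux equations as usually quoted for thermodiffusion (the Soret effect), written here in the same X-notation as above; sign conventions and the exact choice of forces vary between texts, and strictly speaking the Onsager symmetry holds for fluxes written against their proper conjugate forces (gradients of chemical potential and of 1/T):

\begin{aligned}
J_c &= -X_{11}\,\nabla c \;-\; X_{12}\,\nabla T,\\
J_q &= -X_{21}\,\nabla c \;-\; X_{22}\,\nabla T,\qquad X_{12} = X_{21}\ \text{(Onsager reciprocity)}.
\end{aligned}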
  • asked a question related to Non-Equilibrium Physics
Question
6 answers
The basis of phase-field modeling is the minimization of a functional representing some thermodynamic potential (e.g., the free energy) of the system considered. Therefore, the time evolution described by the phase field is not the real kinetics, but just some pathway along which the system proceeds to the equilibrium state.
It is like using Metropolis Monte Carlo to minimize the energy of a system. The final state might correspond to some local or global minimum, but the way the system relaxes to it is not the real kinetics. The real kinetic pathway should be described by the kinetic Monte Carlo approach.
Therefore, the question of the applicability of phase-field methods to non-equilibrium problems arises. Are these methods applicable to microstructure evolution under irradiation?
I am aware of the large number of publications on void and gas-bubble growth under irradiation. However, I am interested in a justification of this approach from first principles, not just "use it because the others do so".
I would enjoy discussion on the topic, many thanks for your replies.
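For reference, the standard phase-field evolution laws are purely relaxational gradient flows of the free-energy functional F[phi], which is the point made above: the trajectory is whatever path lowers F, not necessarily the physical kinetic path. In the usual notation (M a mobility):

\begin{aligned}
\frac{\partial \phi}{\partial t} &= -M\,\frac{\delta F}{\delta \phi} &&\text{(non-conserved order parameter, Allen-Cahn)},\\
\frac{\partial \phi}{\partial t} &= \nabla\cdot\!\left(M\,\nabla\frac{\delta F}{\delta \phi}\right) &&\text{(conserved order parameter, Cahn-Hilliard)}.
\end{aligned}

Whether M and the driving potential can be given a physically meaningful form under irradiation is essentially the justification issue raised in the question.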
Relevant answer
Answer
Both the second- and the first-order phase transitions are defined and treated in the framework of thermodynamics. The major difference between the two types of transitions is the existence of a thermodynamic barrier (>> kT) in the case of first-order transitions. Sufficiently large stochastic fluctuations are required for such transitions to occur. Stochastic fluctuations can also be treated thermodynamically through the fluctuation-dissipation theorem. The thermodynamic description of first-order transitions is applicable to the cases of both sharp and diffuse interfaces between the ambient phase and a new-phase nucleus. For example, in the theory of crystallization the diffuse-interface approach is more accurate than the sharp-interface approximation, which is valid only when the interface thickness is negligible compared with the nucleus size. Under irradiation conditions, two kinds of "forces" are continuously competing with each other. The first one is the external irradiation, which drives the system away from equilibrium. The second is the internal thermodynamic force moving the system towards equilibrium. This force can be described thermodynamically through, for example, the chemical potential. In this description both the diffuse- and the sharp-interface methodologies can be employed. The major problem with the phase field in its present state is the correct formulation of the thermodynamic "force" through the corresponding thermodynamic potential.
  • asked a question related to Non-Equilibrium Physics
Question
11 answers
I am looking for examples of studies of non-equilibrium systems, preferably where maximum entropy production has been used (applied studies or purely mathematical).
Relevant answer
Answer
Dear Joakim, all our studies over the last fifteen years related to surfaces and interfaces rely on irreversible thermodynamics. We have published about thirty articles dealing with the application of the theory to various problems in microelectronics, grain boundary grooving, quantum dots, etc. Our paper on grain boundary grooving under uniaxial tension shows that the initial stage is controlled by maximum entropy production, but later it switches to the minimum entropy production hypothesis, where the system approaches the non-equilibrium stationary state, as claimed by Prigogine for good reason (see the PhilMag paper). Therefore, Ogurtani's theory shows clearly that the maximum and minimum entropy production hypotheses are not mutually exclusive; just the opposite, they follow each other consecutively as the dynamical system goes from the transition state to the non-equilibrium stationary or chaotic state, which ends up with the catastrophic failure of the system, such as grain boundary fracture under a uniaxial tension stress system.
The thermo-mathematical foundation of this rigorous theory has been established in papers no. 2, 3, and 4 below using two different methods for the formulation of the irreversible thermodynamics of surfaces and interfaces: the micro discrete elements method (2 and 3) and the variational formulation. In paper 7 I have extended the variational method to cover the thickness-dependent specific surface Helmholtz free energy case, which is needed for quantum dots. In paper 4, I have treated the effects of elastostatic (stress) and electrostatic (electromigration) fields on surface morphological evolution, which properly addresses certain inconsistencies in the literature related to the treatment of isobaric and isochoric systems. Best regards.
  • asked a question related to Non-Equilibrium Physics
Question
2 answers
Assuming quasi-static processes, we showed universal properties of the efficiency for small temperature differences in the paper arXiv:1412.0547.
Relevant answer
Answer
Classical thermodynamics tells us that the work extracted is maximal for a reversible process. Thus we should focus on a process which preserves the total entropy of the source and sink. Secondly, as @Remi Cornwall mentioned, the temperatures of the two finite reservoirs approach each other, so we may continue to extract work until the two systems come to a common temperature. These two facts should be sufficient to compute the work extracted (from energy conservation) and the heat lost by the source, which would determine the efficiency of the "optimal" process.
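As a concrete check of this reasoning, consider two finite reservoirs with equal, temperature-independent heat capacities C at initial temperatures T_1 > T_2 (a standard textbook case, sketched here under those assumptions). Requiring zero total entropy change fixes the common final temperature, and energy conservation then gives the maximum work and the corresponding efficiency:

\begin{aligned}
\Delta S &= C\ln\frac{T_f}{T_1} + C\ln\frac{T_f}{T_2} = 0 \;\;\Rightarrow\;\; T_f = \sqrt{T_1 T_2},\\
W_{\max} &= C\left(T_1 + T_2 - 2\sqrt{T_1 T_2}\right),\qquad
\eta = \frac{W_{\max}}{C\,(T_1 - T_f)} = 1 - \sqrt{\frac{T_2}{T_1}}.
\end{aligned}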
  • asked a question related to Non-Equilibrium Physics
Question
5 answers
Here's an excerpt from 'Into The Cool: Energy Flow, Thermodynamics, and Life' that got me thinking:
"Trained as a physical chemist, Alfred Lotka worked for an insurance company as a statistical analyst and in his spare time was a student of biology. Almost a generation ahead of his peers, Lotka suggested that life was a dissipative metastable process. By this he meant that, although stable and mistaken for a 'thing,' life was really a process. Living matter was in continuous flux, kept from equilibrium by energy provided by the sun. Lotka stressed that life on Earth was an open system."
Just how far along is our understanding of open systems?
Relevant answer
Answer
"life was really a process", "life on Earth was an open system."
My first impression is that somebody messed up the definitions or did not care about them. A system can be a lot of things: here, one could think of a system in the sense of thermodynamics, or more philosophically of the concept from systems theory. Either way, a system is surely not a process. Also, phrases like "almost a generation ahead of his peers" are usually unjustified praise solely intended to impress the reader. Call me closed-minded, but after this short paragraph I don't think anything interesting will follow. Probably the usual violation of the concept of entropy.
  • asked a question related to Non-Equilibrium Physics
Question
19 answers
Given any positive integer N > 0, define the set S = {1, 2, 3, ..., 2N}. Suppose we randomly draw N numbers (without replacement) out of this set S and compute the sum of these N numbers; call it A. Apparently A has finite support (from N(N+1)/2, the sum of the N smallest numbers, to N(3N+1)/2, the sum of the N largest numbers). What is the distribution of A?
Relevant answer
Answer
It wouldn't be one of your common distributions, e.g., binomial or Poisson. This is a special case of sampling without replacement from a finite population. No doubt the distribution has been characterized; check the internet and/or Johnson and Kotz. It would be easy to compute the mean and standard deviation. I would guess that the limiting distribution as N -> infinity is normal.
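For what it's worth, the mean and variance follow directly from the standard formulas for the sum of n draws without replacement from a finite population of size M (here n = N, M = 2N, population mean (2N+1)/2, population variance (4N^2-1)/12), including the finite-population correction factor (M-n)/(M-1):

\begin{aligned}
\mathbb{E}[A] &= N\cdot\frac{2N+1}{2} = \frac{N(2N+1)}{2},\\
\operatorname{Var}(A) &= N\cdot\frac{4N^2-1}{12}\cdot\frac{2N-N}{2N-1} = \frac{N^2(2N+1)}{12}.
\end{aligned}

The normal limit guessed above is consistent with the central limit theorem for sampling without replacement.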
  • asked a question related to Non-Equilibrium Physics
Question
7 answers
Specifically, I want to set Neumann boundary conditions on the Poisson equation, but the resulting matrix is singular and hence cannot be inverted.
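A generic way to deal with the singularity, independent of the transport-specific discussion below: with pure Neumann conditions the potential is only determined up to an additive constant, so the discrete Laplacian has a nullspace of constant vectors (and the source must satisfy a compatibility condition). Below is a minimal 1D sketch (hypothetical grid and source) that restores invertibility by pinning the potential at one node; an alternative is to append a Lagrange-multiplier row enforcing zero mean potential.

import numpy as np

# Hypothetical 1D Poisson problem  -phi'' = rho  with zero-flux (Neumann) ends.
n, h = 101, 0.01
x = np.linspace(0.0, 1.0, n)
rho = np.sin(2.0 * np.pi * x)          # zero-mean source (Neumann compatibility condition)

# Standard 3-point Laplacian; mirror rows at the ends make the matrix singular.
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = 2.0
    if i > 0:
        A[i, i - 1] = -1.0
    if i < n - 1:
        A[i, i + 1] = -1.0
A[0, 1] = -2.0       # mirror (zero-flux) condition at the left end
A[-1, -2] = -2.0     # mirror (zero-flux) condition at the right end
b = rho * h * h

# Remove the constant nullspace by pinning phi at one node.
A[0, :] = 0.0
A[0, 0] = 1.0
b[0] = 0.0           # phi(0) = 0 fixes the arbitrary additive constant

phi = np.linalg.solve(A, b)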
Relevant answer
Answer
Hi Akshay,
You may want to check out the following article by Supriyo Datta (S. Datta, Superlattices and Microstructures 28, 253 (2000)), which provides some discussion of the potential in the leads.
Datta's two books would also be a good source.
For transport calculations where there is a potential drop of V across the device, you can specify that the chemical potentials in the two leads are shifted by +V/2 and -V/2, respectively. Then, when you make the charge and potential self-consistent in the system, the potential at the two leads is able to adjust to the appropriate levels. Please remember to include enough layers of your metal leads in your "device region" so that the self-energies for your leads are truly representative of the bulk materials.
I also recall that Datta may have another paper where he discusses the difference between fixed and floating potentials in greater detail.
Best regards,
Derek
  • asked a question related to Non-Equilibrium Physics
Question
11 answers
I believe nearly all books on statistical mechanics cover Hamiltonian systems. Then, naturally, Liouville's theorem and the Boltzmann-Gibbs distribution are discussed. I don't see the connection between a Hamiltonian system and the Boltzmann-Gibbs distribution. The former describes the deterministic dynamics of one particle, while the latter refers to a statistical law for a large number of particles, which seems to be random.
Relevant answer
Answer
The answer lies in ensemble theory. You can derive the canonical and grand canonical ensembles from the microcanonical one; see, e.g., the book by L. E. Reichl. The key step is to assume ergodicity; however, it should be noted that this cannot be rigorously proven in general.
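A one-line sketch of the standard step that connects the two pictures: treat the system of interest plus a large bath as a single isolated (microcanonical) Hamiltonian system. The probability of finding the system in a microstate of energy E_i is proportional to the number of compatible bath states, and expanding the bath entropy to first order in E_i gives the Boltzmann-Gibbs weight:

\begin{aligned}
P(E_i)\;\propto\;\Omega_{\text{bath}}(E_{\text{tot}}-E_i)
= e^{S_{\text{bath}}(E_{\text{tot}}-E_i)/k_B}
\;\approx\; e^{S_{\text{bath}}(E_{\text{tot}})/k_B}\,e^{-E_i/(k_B T)},
\qquad \frac{1}{T}=\frac{\partial S_{\text{bath}}}{\partial E}.
\end{aligned}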