# Statistical Physics - Science topic

Explore the latest questions and answers in Statistical Physics, and find Statistical Physics experts.
Questions related to Statistical Physics
• asked a question related to Statistical Physics
Question
Dear all,
I have a technical question regarding the self-diffusion coefficient of water in an equilibrium state, computed from the Einstein relation in a molecular dynamics simulation. If we consider an equilibrated water/polymer medium, water molecules undergo Brownian motion as a result of thermal fluctuations. So their self-diffusive movements, related by the Einstein relation between the diffusion coefficient and mobility, are fully accounted for. But in addition to thermal fluctuations, an equilibrium fluid system has pressure fluctuations. At any instant, the pressure on one side of a volume element is not the same as the pressure on the opposite surface, and the volume element will move as a whole in the direction of lower pressure. These pressure fluctuations are not included in the simulations. In macroscopic (but linear, i.e., small forces and flows) flow conditions, they would give rise to a flow described by the linearized Navier-Stokes equation. Isn't this correct? How does the Einstein relation account for this? Is it logical to use the Einstein relation in this situation? Could you discuss it briefly?
Thanks a lot
Try the following article*; there you have two different substances and a surface that plays a role, but not two isotopes of the same substance, for which the questions of self-diffusion and pressure gradients are relevant:
* Belashchenko D., Polyanskii R. & Pavlov R., "Isotope effect for self-diffusion in liquid lithium and tin", Russian Journal of Physical Chemistry A, 2002, Vol. 76, No. 3, pp. 454-461
We ought to remember that:
• Fick's 1st law applies to diffusion
• Fick's 2nd law applies to convection
I do not see how Fick's 2nd law (convection) can be used for your particular system.
Best Regards.
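As a practical footnote to the question, in MD practice the Einstein relation extracts D from the slope of the mean-squared displacement, MSD(t) = 2dDt. A minimal sketch, assuming unwrapped coordinates, and using a synthetic Brownian trajectory with known D in place of real MD output (all parameter values are illustrative):

```python
import numpy as np

def diffusion_coefficient(positions, dt):
    """Estimate D from the Einstein relation MSD(t) = 2*d*D*t (slope of a linear fit).

    positions: (n_frames, n_particles, d) array of unwrapped coordinates.
    dt: time between frames.
    """
    n_frames, _, d = positions.shape
    lags = np.arange(1, 201)
    # Average squared displacement over all time origins and particles, per lag
    msd = np.array([np.mean(np.sum((positions[lag:] - positions[:-lag]) ** 2, axis=-1))
                    for lag in lags])
    slope, _ = np.polyfit(lags * dt, msd, 1)
    return slope / (2 * d)

# Synthetic 3D Brownian trajectories with known D = 0.5 (per-axis step variance 2*D*dt)
rng = np.random.default_rng(0)
dt, D_true = 1.0, 0.5
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), size=(2000, 50, 3))
traj = np.cumsum(steps, axis=0)
D_est = diffusion_coefficient(traj, dt)   # close to 0.5
```

Note that this estimator only sees the single-particle displacement statistics; the collective-flow question raised above would show up in other observables (e.g., current correlations), not in this fit.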
• asked a question related to Statistical Physics
Question
Dear All,
Coagulating (aggregating, coalescing) systems surround us. Gravitational accretion of matter, blood coagulation, traffic jams, food processing, cloud formation - these are all examples of coagulation, and we use the effects of these processes every day.
From a statistical physics point of view, to have full information on an aggregating system we need its cluster size distribution (the number of clusters of a given size) at every moment in time. However, surprisingly, obtaining such information for most (real) aggregating systems is very hard.
An example of the aggregating system for which observing (counting) cluster size distribution is feasible is the so-called electrorheological fluid (see https://www.youtube.com/watch?v=ybyeMw1b0L4 ). Here, we can simply observe clusters under the microscope and count the statistics for subsequent points in time.
However, simple observation and counting fail for other real systems, for instance:
• Milk curdling into cream - the system is dense and not transparent; maybe infrared observation could be effective?
• Blood coagulation - the same problem; moreover, there are difficulties with accessing living tissue. Maybe X-rays could be used, but I suppose the resolution would be low; also, the observation should be (at least semi-)continuous;
• Water vapor condensation and the formation of clouds - this looks like an easy laboratory problem, but I suppose that is not really the case. Spectroscopic methods allow one to observe particles of a given size (and so estimate their number), but I do not know of a spectroscopic system that could observe particles of different (namely, very different: 1, 10, 10^2, ..., 10^5, ...) sizes at the same time (?);
• There are other difficulties for giant systems, like cars aggregating into jams on a motorway (maybe data from Google Maps or another navigation system, but not all drivers use them) or matter aggregating to form discs or planets (can we observe such matter at high enough resolution to really see the clustering?).
I am curious what you think of the above issues.
Do you know any other systems where cluster size distributions are easily observed?
Best regards,
Michal
Dear Johan!
I want to recommend the papers of my husband)) He studies the aggregation of blood particles by scanning flow cytometry and has also produced (I think, a new and fruitful) kinetic model for these processes. This is his profile https://www.researchgate.net/profile/Vyacheslav-Nekrasov
As I understand it, the key papers are
1 Brownian aggregation rate of colloid particles with several active sites
2 Kinetic turbidimetry of patchy colloids aggregation: latex particles immunoagglutination
3 Mathematical modeling the kinetics of cell distribution in the process of ligand–receptor binding
4 Kinetics of the initial stage of immunoagglutination studied with the scanning flow cytometer
But you can write him directly))
Best wishes, Anna
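For the cases where direct imaging does work (like the electrorheological fluid in the question), the counting step itself is straightforward. A minimal sketch, assuming a thresholded binary micrograph and 4-connectivity (the toy image is illustrative):

```python
import numpy as np

def cluster_size_distribution(binary_image):
    """Count 4-connected clusters of 1s via flood fill; return {cluster size: count}."""
    img = np.asarray(binary_image, dtype=bool)
    seen = np.zeros_like(img)
    rows, cols = img.shape
    sizes = []
    for r in range(rows):
        for c in range(cols):
            if img[r, c] and not seen[r, c]:
                stack, size = [(r, c)], 0       # flood-fill one cluster
                seen[r, c] = True
                while stack:
                    i, j = stack.pop()
                    size += 1
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < rows and 0 <= nj < cols \
                                and img[ni, nj] and not seen[ni, nj]:
                            seen[ni, nj] = True
                            stack.append((ni, nj))
                sizes.append(size)
    size, count = np.unique(sizes, return_counts=True)
    return dict(zip(size.tolist(), count.tolist()))

# Toy "micrograph": two isolated particles and one three-particle chain
img = np.zeros((6, 6), dtype=int)
img[0, 0] = 1
img[5, 5] = 1
img[2, 2:5] = 1
dist = cluster_size_distribution(img)   # {1: 2, 3: 1}
```

Repeating this per video frame gives the cluster size distribution as a function of time, which is exactly the quantity the question asks about.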
• asked a question related to Statistical Physics
Question
Once I obtain the Riccati equations to solve the moment equations, I can't find the values of the constants. How can I obtain the values of these constants? Have these values already been reported for titanium dioxide?
You may possibly mean the method of moments (deriving moments of mass) to solve a set of Smoluchowski coagulation equations as described in, e.g., "A Kinetic View of Statistical Physics" (Chapter 5) by Krapivsky et al.?
Definitely, you should provide more details and/or the mentioned equations themselves. A lot of different expressions are called Smoluchowski equations.
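In case the constant-kernel Smoluchowski system is indeed what is meant, the zeroth moment already shows where a Riccati-type closure comes from. A minimal sketch with illustrative constants (K and the initial concentration are assumptions, not values specific to titanium dioxide):

```python
# For the constant kernel K(i, j) = K, the zeroth moment M0 (total cluster
# concentration) of the Smoluchowski equations closes into a Riccati-type ODE:
#     dM0/dt = -(K/2) * M0^2,  with solution  M0(t) = M0(0) / (1 + K*M0(0)*t/2)
K, M0_0, T_end, n_steps = 1.0, 1.0, 10.0, 100_000
dt = T_end / n_steps

M0 = M0_0
for _ in range(n_steps):          # explicit Euler integration of the moment ODE
    M0 -= dt * 0.5 * K * M0 ** 2

M0_analytic = M0_0 / (1 + 0.5 * K * M0_0 * T_end)   # = 1/6 for these values
```

Here the "constants" are just the kernel magnitude and the initial condition; for a physical system they would come from the aggregation physics (or a fit to data), which is why the details requested above matter.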
• asked a question related to Statistical Physics
Question
Imagine there is a surface, with points randomly spread all over it. We know the surface area S, and the number of points N, therefore we also know the point density "p".
If I blindly draw a square/rectangle (area A) over such a surface, what is the probability that it will encompass at least one of those points?
P.s.: I need to solve this "puzzle" as part of a random-walk problem, where a "searcher" looks for targets in a 2D space. I'll use it to calculate the probability the searcher has of finding a target at each one of his steps.
Thank you!
@Jochen Wilhelm, the solutions are not equivalent because
For Poisson: P(at least one point) = 1 - P(K=0) = 1 - e^(-(N/S)·A)
For Binomial: P(at least one point) = 1 - ( (S - A)/S )^N
The general formula for the Binomial case is the following:
P(the rectangle encompasses k points)=(N choose k) ( A/S )^k ( (S - A)/S )^(N - k)
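Both formulas above are easy to check by simulation. A sketch with illustrative numbers (note the probe square is kept fully inside the surface, so each point independently lands in it with probability A/S):

```python
import numpy as np

rng = np.random.default_rng(1)
S_side, A_side, N = 10.0, 2.0, 50     # surface 10x10 (S = 100), probe 2x2 (A = 4)
S, A = S_side ** 2, A_side ** 2

p_binom = 1 - ((S - A) / S) ** N      # exact: each point misses the probe w.p. (S-A)/S
p_poisson = 1 - np.exp(-N * A / S)    # Poisson approximation with density N/S

# Monte Carlo check: scatter N uniform points, drop the probe square at a
# uniform position fully inside the surface, ask if any point falls inside it
trials = 50_000
pts = rng.uniform(0, S_side, size=(trials, N, 2))
corner = rng.uniform(0, S_side - A_side, size=(trials, 1, 2))
inside = np.all((pts >= corner) & (pts <= corner + A_side), axis=-1)
p_mc = inside.any(axis=1).mean()
```

Since (1 - x) <= e^(-x), the binomial answer is always at least the Poisson one; the two converge for large N with N·A/S fixed, which is why the Poisson form is a convenient per-step approximation in a random-walk search model.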
• asked a question related to Statistical Physics
Question
Hello dear colleagues,
It seems to me this could be an interesting thread for discussion.
I would like to center the discussion around the concept of Entropy, addressing the explanation-description-exemplification part of the concept.
I.e., what do you think is a good, helpful explanation of the concept of Entropy (at a technical level, of course)?
A way (or ways) of explaining it that settles the concept as clearly as possible. Maybe first in a more general scenario, and then (if required) in a more specific one.
Kind regards!
Dear F. Hernandes
The Entropy (Greek ἐντροπία - transformation, conversion, reformation, change) establishes the direct link between the MICRO-scopic state (in other words, the orbital) of some (any) system and its MACRO-scopic state parameters (temperature, pressure, etc.).
This is the Concept (from capital letter).
Its main feature: it is the ONLY entity in the natural sciences that shows the development trend of any self-sustained natural process. It is a state function, not a transition function. That is why the entropy is independent of the transition route; it depends only on the initial state A and the final state B of the system under consideration. Entropy has many meanings.
In the mathematical statistics, the entropy is the measure of uncertainty of the probability distribution.
In statistical physics, it presents the probability of existence of some (given) microscopic state (its *statistical weight*) under the same macroscopic characteristics. This means that the system may contain different amounts of information while the macroscopic parameters stay the same.
In the information approach, it deals with the information capacity of the system. That is why the father of information theory, Claude Elwood Shannon, believed that the words *entropy* and *information* are synonyms. He defined entropy as the ratio of the lost information to the total information volume.
In the quantum physics, this is the number of orbitals for the same (macro)-state parameters.
In the management theory, the entropy is the measure of uncertainty of the system behavior.
In the theory of the dynamic systems, it is the measure of the chaotic deviation of the transition routes.
In thermodynamics, the entropy presents the measure of irreversible energy loss. In other words, it reflects the system's efficiency (capacity for work). This provides the additivity property for two independent systems.
Gnoseologically, the entropy is the inter-disciplinary measure of the energy (information) devaluation (not the price, but rather the very devaluation).
Thus, the entropy is a many-sided Concept, and this gives it unusual features.
What is the dimension of entropy? The right answer depends on the approach. It is a dimensionless figure in the information approach (Shannon defined it as the ratio of two uniform values; therefore it is dimensionless by definition). On the contrary, in the thermodynamic approach it has a dimension (energy over temperature, J/K).
Is entropy a parameter (a fixed number) or a function? Once again, the proper answer depends on the approach (point of view). It is a number in mathematical statistics (the logarithm of the number of admissible (unprohibited) system states, the well-known sigma σ). At the same time, it is a function in quantum statistics. Etc., etc.
So, be very cautious when you are operating with entropy.
Best wishes,
Emeritus Professor V. Dimitrov vasili@tauex.tau.ac.il
• asked a question related to Statistical Physics
Question
I'm writing my dissertation on the economic dynamics of inequality, and I'm going to use econophysics as an empirical method.
Dear Mehmet,
Some formal similarities between equilibrium statistical mechanics and economics may exist, but we should be very suspicious of any direct comparisons. Of course, in some instances the mathematical solutions used in statistical mechanics may be of some practical use in economics, but I would not read too much into this. My sense is that what the two share is incomplete information about the microscopic state of the system. See, e.g., this paper by my advisor:
There is a fair bit of literature on using entropy to model systems in physics and economics.
• asked a question related to Statistical Physics
Question
Tragically, in 1906, Boltzmann committed suicide, and many believe that statistical mechanics was the cause. He provided the current definition of entropy, interpreted as a measure of the statistical disorder of a system. His student, Paul Ehrenfest, carrying on Boltzmann's work, died similarly in 1933. William James was found dead in his room in 1910, reportedly a possible suicide. Bridgman, the high-pressure physics pioneer, committed suicide in 1961. Gilbert Lewis took cyanide in 1946 after not getting a Nobel prize.
I agree with you, Dr. Jiří Kroc; thank you for the explanation. I taught Statistical Mechanics for several years at both the undergraduate and graduate levels. I still remember how hard it was to introduce.
• asked a question related to Statistical Physics
Question
The occupation number of bosons can be any number from zero to infinity, guiding us to the Bose-Einstein statistics. On the other hand, for example, a classical wave can be considered a superposition of any number of sine or cosine waves. Isn't it similar to say the occupation number of a classical wave can be any number from zero to infinity and utilizing Bose-Einstein statistics for classical waves in particular and classical fields in general?
Dear Rasoul Kheiri, your question is interesting and two-fold; let me explain:
As you state, the occupation number of bosons (and the ground boson state) can allow infinite numbers of particles (infinite modes for classical waves), but the spin of bosons is an integer. This is what I mean by two-fold.
To elaborate: in solid-state physics, phonons have zero spin, but for electromagnetic waves seen as photons, the spin is equal to one. Phonons are longitudinal in nature; photons are transversal.
For EM waves, we have the coherent states of photons, which are quantum in nature but show features such as a Poisson distribution, not the Bose-Einstein distribution that reflects the integer spin. These fields are all within the harmonic oscillator (HO) approximation.
In addition, second quantization in terms of creation and annihilation operators with commuting algebraic properties is needed.
Furthermore, to exhibit the spin nature (structure) of the photon, QED should be used, given its relativistic origin, or at least the Klein-Gordon equation instead of the Schrödinger equation.
• asked a question related to Statistical Physics
Question
In information theory, the entropy of a variable is the amount of information contained in the variable. One way to understand the concept of the amount of information is to tie it to how difficult or easy it is to guess the value. The easier it is to guess the value of the variable, the less “surprise” in the variable and so the less information the variable has.
The Rényi entropy of order q is defined for q ≥ 0, q ≠ 1, by the equation
S_q = (1/(1-q)) log(Σ_i p_i^q)
As the order q increases, the entropy decreases.
Why are we concerned about higher orders? What is the physical significance of the order when calculating the entropy?
You may look at other entropies, search articles from Prof. Michèle Basseville on entropy (of probability measures)
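A small numeric sketch may make the role of q concrete (the distribution and the orders below are illustrative choices; natural logarithms are assumed): larger q weights the most probable outcomes more heavily, so S_q decreases monotonically with q, and the Shannon entropy is recovered in the limit q → 1.

```python
import numpy as np

def renyi_entropy(p, q):
    """Rényi entropy S_q = log(sum_i p_i^q) / (1 - q), natural log."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(q, 1.0):                  # q -> 1 limit is the Shannon entropy
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p ** q)) / (1.0 - q)

p = [0.5, 0.25, 0.125, 0.125]               # an illustrative distribution
orders = [0.5, 1.0, 2.0, 5.0]
values = [renyi_entropy(p, q) for q in orders]   # monotonically non-increasing in q
```

In the q → ∞ limit only the largest probability survives (the min-entropy), which is why higher orders are used when worst-case, rather than average, uncertainty matters.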
• asked a question related to Statistical Physics
Question
May I ask a question on thermodynamics? We know that U(V,T) (the caloric equation of state) and S(P,V) (the thermodynamic equation of state) can both be derived from the thermodynamic potentials (U, F, G, H) and the fundamental relations. However, U(V,T) does not hold the full thermodynamic information of the system as U(S,V) does, yet S(P,V) does hold the full thermodynamic information.
At which step does the derivation of U(T,V) from U(S,V) lose the thermodynamic information? (Briefly, the derivation is: 1. differentiate dU = TdS - PdV with respect to V; 2. replace the derivative using a Maxwell relation; 3. finally substitute the ideal gas or van der Waals equation.)
Why does the similar derivation for S(P,V) retain the full thermodynamic information?
Even if we only have U(T,V), can't we get P using the ideal gas equation, and then calculate S by designing reversible processes from (P0,V0,T0) to (P',V',T')? If we can still get S, why doesn't U(T,V) hold the full thermodynamic information?
Natural variables for U: S, V, Ni (for simple systems)
Natural variables for S: U, V, Ni (for simple systems)
T is the partial derivative of U with respect to S, maintaining V and Ni constants:
T=(∂U/∂S)V,Ni
If, from the fundamental relationship U=U(S,V,N), you replace S by T just by solving for S in T=T(S,V,N) and substituting into U=U(S,V,N), then you lose information, because you are replacing a variable with a derivative with respect to that variable.
This problem is solved with the Legendre transform. If you search the internet, you may find simple examples of how to change y=f(x) to z=g(p), where p is ∂y/∂x (e.g., 2x for y=x²), in the two ways: the incorrect one (calculating p and plainly substituting it, removing x) and the correct one (calculating p and applying the Legendre transform, z=g(p)=px-f). In fact, because you do not lose information with the Legendre transform, you may go backwards from z to y, which is not possible the incorrect way.
Therefore, it is a mathematical "trick".
Applying the Legendre transform to U, with respect to T and S, you get a new thermodynamic potential F=U-TS, the Helmholtz energy, whose natural variables are T, V, and Ni. Beware of the minus sign applied to the Legendre transform (i.e., F is not equal to TS-U, but U-TS).
For a system at constant U, V, and Ni, any possible process will maximize S.
For a system at constant S, V, and Ni, any possible process will minimize U, but not F.
For a system at constant T, V, and Ni, any possible process will minimize F, but not U.
The Legendre transform connecting two thermodynamic potentials parallels the Laplace transform connecting the corresponding partition functions.
The Legendre transform is not only employed in Thermodynamics and Statistical Physics, but also in Classical Mechanics and other fields.
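The information-loss point above can be demonstrated numerically with the y = x² example from the text: plain substitution of p = dy/dx maps two different functions onto the same curve, while the Legendre transform z = px − f keeps them distinct and is invertible (the second function is an illustrative shifted copy):

```python
import numpy as np

# f1 and f2 differ, yet the naive "express y in terms of the slope p" move
# gives the identical curve y(p) = p^2/4 for both -- information is lost.
f1 = lambda x: x ** 2            # slope p = 2x      -> x = p/2
f2 = lambda x: (x - 1) ** 2      # slope p = 2(x-1)  -> x = p/2 + 1

p = np.linspace(-2.0, 2.0, 9)
y1_naive = f1(p / 2)             # = p^2/4
y2_naive = f2(p / 2 + 1)         # = p^2/4 as well
naive_identical = np.allclose(y1_naive, y2_naive)

# The Legendre transform g(p) = p*x(p) - f(x(p)) keeps the information:
g1 = p * (p / 2) - f1(p / 2)             # = p^2/4
g2 = p * (p / 2 + 1) - f2(p / 2 + 1)     # = p^2/4 + p  (distinct!)
legendre_distinct = not np.allclose(g1, g2)

# And it inverts: f(x) = x*p(x) - g(p(x)), with p(x) = f1'(x) = 2x
x = np.linspace(-3.0, 3.0, 13)
f1_back = x * (2 * x) - ((2 * x) ** 2) / 4   # uses g1(p) = p^2/4
recovered = np.allclose(f1_back, f1(x))
```

This mirrors the thermodynamic case: U(T,V) alone cannot distinguish potentials that differ in exactly the way f1 and f2 do, whereas F = U − TS can.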
• asked a question related to Statistical Physics
Question
The Maxwell-Boltzmann distribution is n_i/g_i = e^{-(ε_i−µ)/kT}. In the quantum mechanical case, ±1 is added in the denominator: the (+) sign gives the Fermi-Dirac distribution and the (−) sign the Bose-Einstein distribution. I want to know the physical significance of these signs and how we can relate them to the classical (Boltzmann) distribution.
The partition functions derived for bosons and fermions differ in the allowed occupation numbers per state, and the entropies of B.E. and F.D. statistics differ accordingly. REF: https://demonstrations.wolfram.com/BoseEinsteinFermiDiracAndMaxwellBoltzmannStatistics/
For a given energy above the chemical potential, the F.D. and B.E. occupations differ (the F.D. one is the lower of the two), so the expected entropies of the two statistics differ as well.
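The effect of the ±1 is easy to see numerically: the +1 (Fermi-Dirac) suppresses occupation relative to Maxwell-Boltzmann (Pauli exclusion), the −1 (Bose-Einstein) enhances it (bosonic bunching), and both reduce to the classical distribution when (ε − µ) >> kT. A sketch in illustrative units (µ = 0, kT = 1):

```python
import numpy as np

def occupation(e, mu, kT, kind):
    """Mean occupation number per state: MB, FD (+1) or BE (-1) in the denominator."""
    x = (e - mu) / kT
    if kind == "MB":
        return np.exp(-x)                 # classical Maxwell-Boltzmann
    shift = 1.0 if kind == "FD" else -1.0
    return 1.0 / (np.exp(x) + shift)      # FD: e^x + 1,  BE: e^x - 1

e = np.linspace(0.5, 5.0, 10)             # energies above the chemical potential
mu, kT = 0.0, 1.0
n_mb = occupation(e, mu, kT, "MB")
n_fd = occupation(e, mu, kT, "FD")        # always below MB (Pauli exclusion)
n_be = occupation(e, mu, kT, "BE")        # always above MB (bosonic bunching)
```

At the highest energy shown, all three agree to about 1%, which is the classical limit the question asks about.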
• asked a question related to Statistical Physics
Question
Dear all:
I hope this question seems interesting to many. I believe I'm not the only one who is confused by many aspects of the so-called physical property 'Entropy'.
This time I want to speak about thermodynamic entropy; hopefully a few of us can gain more understanding by thinking a little more deeply about questions like these.
The thermodynamic entropy is defined as: Delta(S) >= Delta(Q)/(T2-T1). This property is only properly defined for (macroscopic) systems which are in thermodynamic equilibrium (i.e., thermal + chemical + mechanical equilibrium).
So my question is:
In terms of numerical values of S (or, better said, values of Delta(S), since we know that only changes in entropy can be computed, not an absolute entropy of a system, with the exception of a system at the absolute zero (0 K) of temperature):
It is easy and straightforward to compute the change in entropy of, let's say, a chair, a table, or your car, etc., since all these objects can be considered macroscopic systems in thermodynamic equilibrium. So, just use the classical definition of entropy (the formula above) and the Second Law of Thermodynamics, and that's it.
But what about macroscopic objects (or systems) which are not in thermal equilibrium? We are often tempted to apply the classical definition of thermodynamic entropy to macroscopic systems which, from a macroscopic point of view, seem to be in thermodynamic equilibrium, but which in reality still have ongoing physical processes that keep them out of complete thermal equilibrium.
What I want to say is: what are the limits of the classical thermodynamic definition of entropy when used in calculations for systems that seem to be in thermodynamic equilibrium but really aren't? Perhaps this question can also be extended to the so-called regime of near-equilibrium thermodynamics.
Kind Regards all !
1. At very low temperatures, entropy behaves according to Nernst's theorem.
I copy the Wikipedia information, but you can also find the same in L. Landau and E. Lifshitz, Course of Theoretical Physics, Vol. 5:
The third law of thermodynamics, or Nernst's theorem, states that the entropy of a system at zero absolute temperature is a well-defined constant. Some systems have more than one state with the same lowest energy and thus have a non-vanishing "zero-point entropy".
2. Let's try to put Delta Q = m C Delta T into the expression Delta(S) >= Delta(Q)/(T2-T1). What do we obtain? Is something missing, then?
You see, physical chemistry and statistical physics look at entropy in subtly different ways.
3. Delta S = kB ln(W2/W1), where W is the total number of microstates of the system; then what are W1 and W2 in Delta S?
4. Finally, look at the following paper by Prof. Leo Kadanoff concerning the meaning of entropy in physical kinetics (out of equilibrium systems): https://jfi.uchicago.edu/~leop/SciencePapers/Entropy_is3.pdf
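Point 2 above can be made concrete with a small calculation: for reversible heating, the entropy change is the integral of m·c·dT/T, i.e. m·c·ln(T2/T1), whereas Delta(Q)/(T2 − T1) merely returns the heat capacity m·c. A sketch with illustrative values for water:

```python
import numpy as np

# Reversibly heating m = 1 kg of water (c = 4186 J/(kg K)) from T1 to T2.
# dS = dQ/T = m*c*dT/T  integrates to  Delta_S = m*c*ln(T2/T1);
# Delta(Q)/(T2 - T1) would instead give back m*c, a heat capacity, not Delta_S.
m, c = 1.0, 4186.0
T1, T2 = 300.0, 350.0

dS_exact = m * c * np.log(T2 / T1)          # ~ 645.3 J/K

# Numerical check: midpoint sum of m*c*dT/T over small temperature steps
n = 10_000
dT = (T2 - T1) / n
T_mid = T1 + dT * (np.arange(n) + 0.5)
dS_numeric = np.sum(m * c * dT / T_mid)
```

The reversible path is a calculational device; the resulting Delta_S applies to any process between the same equilibrium end states, which is exactly why the definition needs those end states to be equilibrium ones.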
• asked a question related to Statistical Physics
Question
I had my BSc in Physics, and my MSc and PhD in Energy Systems Engineering. I worked as an assistant lecturer and then a lecturer for three years, teaching different mathematics courses, statistics, and physics. I had to move from where I was lecturing because of my family, and now I am searching for a lecturing or post-doc opportunity. Most of the job adverts I see are specific to a particular field. I am beginning to wonder if my diversification is a disadvantage.
Also, if there is a post-doc or lecturing opportunity at your university, I won't mind applying.
Dear Prof. Olusola Bamisile, you made an important point not always seen in academia.
In my case, I have a PhD in theoretical solid-state physics, but I worked for the oil industry, in cross-listed subjects.
Nowadays, with the use of artificial intelligence and deep learning, cross-knowledge fields will be more relevant to R&D. Regards.
• asked a question related to Statistical Physics
Question
Considering that mean field theory approaches have been used for neuronal dynamics, and that renormalization group theory has been used in other networks to describe their properties, I wanted to know whether it is useful or interesting to describe the behavior of a neuronal system based on its critical exponents. Thank you in advance.
Critical exponents describe properties that don't depend on whether the network describes brain function or some other system.
If the network does show scale free behavior, then critical exponents can be defined and they characterize the behavior of certain quantities.
That is why the right question is not whether it is useful or interesting to describe the behavior of a neuronal system through its critical exponents, but whether it has such critical properties at all.
• asked a question related to Statistical Physics
Question
Let's just say we're looking at the classical continuous canonical ensemble of a harmonic oscillator, where
H = p^2 / (2m) + (1/2) m omega^2 x^2
and the partition function (omitting the integrals over phase space here) is defined as
Z = Exp[-H / (kb * T)]
and the average energy can be calculated as proportional to the derivative of ln[Z].
The equipartition theorem says that each independent quadratic coordinate must contribute kT/2 to the system's energy, so for a 3D oscillator we should get 3kT. My question is: does equipartition break down if the frequency is temperature dependent?
Let's say omega = omega[T]; then, when you take the derivative of Z to calculate the average energy, if omega'[T] is not zero, it will either add to or subtract from the average energy and therefore disagree with equipartition. Is this correct?
Drew> Z = Exp[-H / (kb * T)], and the average energy can be calculated as proportional to the derivative of ln[Z].
The exact formula, easy to prove, is ⟨H⟩ = -∂ln(Z)/∂β, where β = 1/(kB·T). However, as you probably have already noted, it is mathematically correct only when H is independent of β (i.e., of temperature T).
One may easily imagine situations where the parameters of the Hamiltonian actually depend on temperature, because one is dealing with a phenomenological "effective" description^*, not taking into account the physics which leads to this temperature dependence. However, if such a dependence is large enough to make any difference, the standard thermodynamic interpretation^** of ln Z breaks down, and thereby all sacred relations of thermodynamics. Which is the absolutely last thing we should consider violating in physics.
If you want to escape the usual equipartition principle, this is easily violated by non-quadratic terms in a classical Hamiltonian, or introduction of quantum mechanics (without which even Hell would freeze over, due to its infinite heat capacity).
^*) Which in practise is always the case, since we don't even know what is going on at extremely small scales, and (mostly) don't have to worry about sub-atomic scales.
^**) ln Z = -β F = -β(U-TS), where F is the Helmholtz free energy.
PS. The very first answer to this question should be viewed as an attempt to repeat the notorious Sokal hoax, https://en.wikipedia.org/wiki/Sokal_affair (often perpetrated on RG).
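The point about a temperature-dependent ω can be checked numerically from ⟨H⟩ = kB·T²·∂ln(Z)/∂T (equivalent to −∂ln(Z)/∂β). A sketch in reduced units (m = h = kB = 1; the ω(T) form is an arbitrary illustration): with fixed ω, the classical 1D oscillator gives kB·T, while naively differentiating through ω(T) shifts the result, as discussed above.

```python
import numpy as np

kB = 1.0

def lnZ(T, omega):
    """Classical 1D harmonic oscillator: Z = 2*pi*kB*T/omega (units with m = h = 1)."""
    return np.log(2 * np.pi * kB * T / omega)

def avg_energy(T, omega_of_T, dT=1e-5):
    """<H> = kB*T^2 * d(lnZ)/dT, naively differentiating through omega(T)."""
    hi = lnZ(T + dT, omega_of_T(T + dT))
    lo = lnZ(T - dT, omega_of_T(T - dT))
    return kB * T ** 2 * (hi - lo) / (2 * dT)

T = 2.0
E_const = avg_energy(T, lambda T: 1.0)            # fixed omega: <H> = kB*T = 2.0
E_tdep = avg_energy(T, lambda T: 1.0 + 0.5 * T)   # omega(T): extra -T^2 * omega'/omega term
```

For this ω(T) the naive derivative gives kB·T − T²·ω'/ω = 1.0 instead of 2.0, illustrating why a genuinely temperature-dependent Hamiltonian parameter breaks the standard thermodynamic reading of ln Z rather than equipartition itself.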
• asked a question related to Statistical Physics
Question
These 'entropies' depend upon a parameter which can be varied between two limits; in those limits they reduce to the Shannon-Gibbs and Hartley-Boltzmann entropies. If such entropies did exist, they could be derived from the maximum-entropy formalism, where the Lagrange multiplier would be identified with the parameter. Then, like all the other Lagrange multipliers, the parameter would have to be given a thermodynamic interpretation as an intensive variable, uniform and common to all systems, like the temperature and chemical potential. The Rényi and Havrda-Charvát entropies cannot be derived from the maximum-entropy formalism. Thus, there can be no entropy that is parameter dependent and whose parameter would be different for different systems.
What if we have several parameters? Then the situation can be described with fuzzy Shannon entropy ( see my paper in Journal of Physics & Astronomy, 2016- Approach with different entropies...)
• asked a question related to Statistical Physics
Question
Statistical physics uses the thermostat idea to describe small energy variations in a big system. Can a thermostat be a set of real oscillators with a linear interaction with the statistical system?
• asked a question related to Statistical Physics
Question
One of the central themes in Dynamical Systems and Ergodic Theory is that of recurrence, which is a circle of results concerning how points in measurable dynamical systems return close to themselves under iteration. There are several types of recurrent behavior (exact recurrence, Poincaré recurrence, coherent recurrence, ...) for some classes of measurability-preserving discrete time dynamical systems. P. Johnson and A. Sklar in [Recurrence and dispersion under iteration of Čebyšev polynomials. J. Math. Anal. Appl. 54 (1976), no. 3, 752-771] regard the third type ("coherent recurrence" for measurability-preserving transformations) as being of at least equal physical significance, and this type of recurrence fails for Čebyšev polynomials. They also found that there is considerable evidence to support a conjecture that no (strongly) mixing transformation can exhibit coherent recurrence. (This conjecture was proved by R. E. Rice in [On mixing transformations. Aequationes Math. 17 (1978), no. 1, 104-108].)
For the definition of coherent recurrence (for measure-/measurability-preserving transformations), see, e.g.: 1) [P. Johnson and A. Sklar, J. Math. Anal. Appl. 54 (1976), no. 3, 752-771]; 2) [R. E. Rice, Aequationes Math. 17 (1978), no. 1, 104-108]; 3) H. Fatkić, "O vjerovatnosnim metričkim prostorima i ergodičnim transformacijama" ("On probabilistic metric spaces and ergodic transformations"; with a summary in English) on ResearchGate; 4) [B. Schweizer, A. Sklar, Probabilistic Metric Spaces, North-Holland Ser. Probab. Appl. Math., North-Holland, New York, 1983; second edition, Dover, Mineola, NY, 2005].
• asked a question related to Statistical Physics
Question
Suggest the model and methodology to estimate the diffusion coefficient of Fission Products in nuclear fuel in scenario like breach of clad etc.
• asked a question related to Statistical Physics
Question
I see lots of papers dealing with the application of statistical physics to financial systems. But what are the basic models? There is little point in defining a model, solving it, and finding an answer. Can anybody give me a good starting point?
I guess the two simplest models would be a local/current trend plus one of the two most basic stochastic processes. For the current trend, you could assume that the current growth, stagnation, or loss continues into the near future. The stochastic process is then either Gaussian white noise fitted to the past, or its generalisation, alpha-stable noise fitted to the past (Gaussian corresponding to alpha=2).
The basic philosophical question you have to answer for yourself is then the following: do you assume that sudden jumps in prices are due to single unpredictable events (crises, rumors, etc.) that have to be built in by hand later on? (Then you choose Gaussian white noise plus manually added jumps.) Or do you assume that smaller and larger crises happen from time to time, and you want to model how often they happen and how big they are? (In this case you choose alpha-stable noise with the parameter alpha fitted so that the frequency and typical height of the jumps describe the past well enough.)
These would be the two most simple models that come to my mind.
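A minimal sketch of the two kinds of model described above. Since alpha-stable sampling needs extra tooling, a Student-t distribution (nu = 3) is used here as a stand-in heavy-tailed noise; all parameter values and the seed are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
n_days, mu, sigma = 1000, 0.0005, 0.01    # trend and scale of daily log-returns

# Model 1: local trend + Gaussian white noise
r_gauss = mu + sigma * rng.normal(size=n_days)

# Model 2: local trend + heavy-tailed noise (Student-t, nu = 3, standing in
# for alpha-stable noise: same scale parameter, much fatter tails)
r_heavy = mu + sigma * rng.standard_t(3.0, size=n_days)

# Price paths from cumulated log-returns
p_gauss = 100 * np.exp(np.cumsum(r_gauss))
p_heavy = 100 * np.exp(np.cumsum(r_heavy))

# The heavy-tailed model produces far more extreme daily moves ("crises")
extreme_gauss = np.mean(np.abs(r_gauss - mu) > 4 * sigma)
extreme_heavy = np.mean(np.abs(r_heavy - mu) > 4 * sigma)
```

Comparing the frequency of 4-sigma moves in the two simulated paths makes the philosophical choice above tangible: the Gaussian model essentially never produces them, while the heavy-tailed model produces them routinely.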
• asked a question related to Statistical Physics
Question
We know the definition of ergodicity, and we know ergodic mappings. But what is an ergodic process?
A random process is said to be ergodic if the time averages of the process tend to the appropriate ensemble averages. This definition implies that with probability 1, any ensemble average of {X(t)} can be determined from a single sample function of {X(t)}. Clearly, for a process to be ergodic, it has to necessarily be stationary. But not all stationary processes are ergodic.
• asked a question related to Statistical Physics
Question
There are similar resummations in statistical physics: see
Dear Mykola
Best regards
Dikeos Mario
• asked a question related to Statistical Physics
Question
Hi all,
To calculate the residence time from a potential of mean force (PMF), we use the stable-states picture. Here a reactant state and a product state are defined; this is done from the radial distribution function. The time taken to move from the reactant state to the product state is designated t, and the residence time is given by
1 - P(t) = e^{-t/tau}, where tau is the residence time and
P(t) is the probability that the system moves from the reactant state to the product state within time t.
How do I calculate P(t)?
The way I read it, it should simply be P(t)=1-e^{-t/tau}.
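Given that reading, P(t) is estimated by histogramming first-passage (escape) times over many events; tau then follows from fitting the survival probability 1 − P(t), or simply as the mean escape time. A sketch with synthetic exponential escape times standing in for simulation data (tau and the sample size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
tau_true = 5.0

# Simulated first-passage times from the reactant to the product state
times = rng.exponential(tau_true, size=20_000)

# Empirical survival probability 1 - P(t) on a time grid
t_grid = np.linspace(0.1, 15.0, 50)
survival = np.array([(times > t).mean() for t in t_grid])

# ln(1 - P(t)) = -t/tau: least-squares line through the origin gives tau
slope = np.sum(t_grid * np.log(survival)) / np.sum(t_grid ** 2)
tau_fit = -1.0 / slope

# For a purely exponential survival, the maximum-likelihood estimate is the mean
tau_mle = times.mean()
```

In practice, deviations of ln(1 − P(t)) from a straight line signal that the single-exponential (stable-states) assumption itself is breaking down.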
• asked a question related to Statistical Physics
Question
Dear Research-Gaters,
It might be a very trivial question to you: what does the term 'wrong dynamics' actually mean? I have heard that term often when somebody presented their results. As it seems to me, 'wrong dynamics' is an argument often brought up to suggest that a simulation result might not be very useful. But what does that argument mean in terms of physical quantities? Is it related to measures such as correlation functions, e.g., the velocity autocorrelation, H-bond autocorrelation, or radial distribution functions? Can 'wrong dynamics' be visualized as a too-fast decay in any of those correlation functions in comparison with other equilibrium simulations, or can it simply be measured by deviations of the potential energies, kinetic energies, and/or the root-mean-square deviation from the starting structure? At the same time, thermodynamic quantities such as free energies might not be affected by 'wrong dynamics'. Finally, I would like to ask what the term 'wrong dynamics' means if I use non-equilibrium simulations which are completely non-Markovian, i.e., history-dependent, and out of equilibrium (metadynamics, hyperdynamics). Thank you for your answers. Emanuel
Start with an obvious remark: the fact that two given Markov processes tend to the same (given) equilibrium does not mean that they do so in the same way. In particular, their dynamical behaviour may be quite distinct.
Now Newtonian mechanics will generate some kind of equilibrium. It is often easier to reach that same equilibrium by a stochastic process (Monte Carlo). Then we are guaranteed that the equilibrium properties will in fact be the same. Static correlation functions, for example, will be correct. However, the dynamical properties need not be. Thus the velocity autocorrelation function (the product of v at time t with v at time t + tau, averaged over t) need not have any clear connection. Indeed, in some MC models there are no velocities! To the extent that the system is classical, the Newtonian dynamics is the "correct" one and the MC is fake. The results that should be trusted are thus the Newtonian ones.
However, in many cases, MC will give dynamics that are, in some sense, qualitatively close to what Newtonian dynamics gives. Nevertheless, such issues must be treated with considerable care.
• asked a question related to Statistical Physics
Question
How can I find open-source models of landslide-related disasters? I want to learn about the process of developing such models, including statistical or physics-based models.
• asked a question related to Statistical Physics
Question
Dear All
What is the best/simplest sampling method in Monte Carlo simulation (MCS)? Do different sampling methods significantly differ in the computational time of an MCS? What is the best stopping criterion for an MCS?
Kind Regards
Speaking of pseudorandom number generators for computer-based simulation using the Monte Carlo method: in my research https://www.researchgate.net/publication/314232345_MEX_function_for_multivariate_analysis_of_reliability_indices_depending_on_maintenance_periodicity_of_radio_communication_equipment, I apply a generator based on the L'Ecuyer algorithm with a long period of about 10^8. Inverse transform sampling is then used to convert random numbers from a uniform distribution into the required probability distribution.
Kind regards,
Alexander
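For illustration, here is a minimal sketch of the inverse transform sampling step Alexander describes, assuming an exponential target distribution. The generator and parameters below are stand-ins, not the ones from the cited paper:

```python
import numpy as np

rng = np.random.default_rng(42)  # stand-in for the L'Ecuyer generator
u = rng.random(100_000)          # uniform(0, 1) draws

# Inverse transform sampling: x = F^{-1}(u) maps uniform draws onto the
# target distribution; for Exponential(lam), F^{-1}(u) = -ln(1 - u)/lam.
lam = 2.0
x = -np.log1p(-u) / lam

print(x.mean())  # should approach 1/lam = 0.5
```

The same recipe works for any distribution whose CDF can be inverted, either analytically or numerically.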
• asked a question related to Statistical Physics
Question
For magnetic systems, the Rushbrooke inequality is a direct consequence of the thermodynamic relation between CH, CV, and the isothermal susceptibility, their positivity, and the definition of the critical exponent alpha as controlling the behavior of CH as a function of the reduced distance from the critical temperature.
In the case of fluid system, the usual definition of alpha refers to the constant volume specific heat (CV).
However, the role played by CV in the thermodynamic relation between CP, CV, and the isothermal compressibility is not the same as that of CH. Does some additional hypothesis have to be made in order to derive the Rushbrooke inequality for fluid systems, or am I missing something trivial?
Wasn't this issue addressed for PVT systems in an open access article published by Elsner (2014) in Engineering 6, 789-826?
• asked a question related to Statistical Physics
Question
Brain research utilizes diverse measurement techniques which probe diverse spatial scales of neural activity. The majority of human brain research occurs at macroscopic scales, using techniques like functional magnetic resonance imaging (fMRI) and electroencephalography (EEG), while microscopic electrophysiology and imaging studies in animals probe scales down to single neurons.  A major challenge in brain research is to reconcile observations at these different scales of measurement.  Can we identify principles of neural network dynamics that are consistent across different observational length scales?
In recent experimental studies at different scales of observations, power-law distributed observables and other evidence suggest that the cerebral cortex operates in a dynamical regime near a critical point.  Scale-invariance - a fundamental feature of critical phenomena - implies that dynamical properties of the system are independent of the scale of observation (with appropriate scaling).  Thus, if the cortex operates at criticality, then we expect self-similar dynamical structure across a wide-range of spatial scales. Renormalization group is a mathematical tool that is used to study the scale invariance in equilibrium systems and recently, in dynamical systems with non-equilibrium critical steady-state. In the context of neural dynamics,  renormalization group ideas suggest that the dynamical rules governing the large-scale cortical dynamics may be the same as dynamics at smaller spatial scales (with appropriate coarse graining procedures).
• asked a question related to Statistical Physics
Question
I came up with this question because I see a difference when simulating the same A + B <-> C type reaction using Copasi and a particle-based stochastic simulator. Copasi is for well-mixed systems: it does not consider the diffusion rate of the reactants and solves ODEs to get the steady state. I used reaction rate constants from the literature and noticed a difference between the steady states output by the two simulators. I wonder whether, theoretically, such a discrepancy should exist; in other words, whether a well-mixed steady state would be affected by the reactants' diffusion speed? Thanks.
It is well known that, if you look at such reactions, for example in the case in which the initial concentrations of A and B are the same, there arise spontaneous separation of the two species in growing domains. This leads to a slower decay than what would be predicted for a homogeneous system. Thus, for a homogeneous system in the case of equal initial concentrations, one has a concentration decaying as 1/t, whereas in the case of finite diffusivities, one has an asymptotic decay law of t^(-d/4), where d is the dimension of the system. It turns out to be rather difficult to observe this in 3 dimensions, but in one and 2 dimensions, it is rather straightforward. See among others
F. Leyvraz and S. Redner, "Spatial structure in diffusion-limited two-species annihilation", Phys. Rev. A 46, 3132 (1992)
and references therein.
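As a sanity check on the well-mixed limit that Copasi assumes, one can integrate the mass-action ODEs directly. This sketch uses hypothetical rate constants and verifies that the steady state satisfies the balance condition kf·[A][B] = kr·[C]:

```python
import numpy as np
from scipy.integrate import solve_ivp

kf, kr = 1.0, 0.5   # hypothetical forward/backward rate constants

def rhs(t, y):
    a, b, c = y
    flux = kf * a * b - kr * c   # mass-action net flux of A + B <-> C
    return [-flux, -flux, flux]

sol = solve_ivp(rhs, (0.0, 200.0), [1.0, 1.0, 0.0], rtol=1e-10, atol=1e-12)
a, b, c = sol.y[:, -1]
print(a, b, c)  # at steady state, kf*a*b = kr*c
```

Any deviation of a particle-based simulation from this value is then attributable to finite diffusivity (or stochastic finite-size effects), not to the rate constants themselves.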
• asked a question related to Statistical Physics
Question
I am looking to combine Monte Carlo and molecular dynamics in a simulation. How can they be combined? In general, how does one keep the time evolution of the system correct?
Santo
Yes, it is possible, you can also check the following article.
Regards,
Ender
• asked a question related to Statistical Physics
Question
I am interested in the determination of the atomic pressure in solids.
I read some papers about the virial stress, which was introduced by Lutsko (J. Appl. Phys. 64 (3), 1988) using the local momentum flux:
dp(r)/dt = - div s(r)
where p(r) is the momentum and s(r) the stress
Following the calculation, it is not clear to me whether Lutsko uses the Lagrangian or the Eulerian description, but I assumed a Lagrangian description. In that case, however, I am not sure of the physical meaning of s(r). This point has been discussed by Zhou (Proc. R. Soc. Lond. A (2003) 459, 2347–2392) and on this website :
In the absence of volumic forces, in continuum mechanics, the Newton's law is:
\rho d2u(r)/dt2 = - div s(r)
with u(r) and s(r) the displacement and the Cauchy stress. This equation is valid in the Euler description.
I am confused about the right way to get the atomic stress.
Does someone know about that point ? How can I determine the atomic stress properly ?
Dear Mirko,
Expressing the equation in Lagrangian or Eulerian description is just the matter of pulling back or pushing forward the equation.
As you mentioned, the traditional balance of linear momentum that you have written is in Eulerian description and it can be certainly written in Lagrangian form.
In fact, it depends on how you treat the other variables in your formulation. If the main variables, like displacement or velocity, are considered in the material configuration and all the derivatives in an equation are taken with respect to material coordinates, your equation will be Lagrangian. The Eulerian description, of course, refers to spatial coordinates.
Depending on your equation, the evaluated stress will be either Eulerian, updated Lagrangian, or total Lagrangian.
• asked a question related to Statistical Physics
Question
The Feynman-Kac formula points out that an SDE corresponds to an FPE. Given an SDE, I can simulate it numerically by following the equation and then plot the histogram of the simulated single particle. However, the FPE describes a probabilistic evolution, which is based on a large ensemble of particles. Do these two results match? And does ergodicity guarantee the matching?
Check out Gardiner's handbook; it helped me a lot with understanding this link:
Crispin W. Gardiner, Handbook of stochastic methods: For Physics, Chemistry and the Natural Sciences
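To see the SDE/FPE correspondence concretely, here is a sketch for the Ornstein-Uhlenbeck process, where the stationary Fokker-Planck solution is known in closed form. Ergodicity is exactly what lets the time average over one long trajectory stand in for the ensemble average (parameters are illustrative):

```python
import numpy as np

# Euler-Maruyama for the Ornstein-Uhlenbeck SDE  dx = -theta*x dt + sqrt(2D) dW.
# The corresponding Fokker-Planck equation has a stationary Gaussian solution
# with variance D/theta; by ergodicity the long-time histogram of a SINGLE
# trajectory should match that ensemble-level solution.
rng = np.random.default_rng(1)
theta, D, dt, n = 1.0, 0.5, 1e-3, 1_000_000

noise = np.sqrt(2.0 * D * dt) * rng.standard_normal(n)
x = 0.0
samples = np.empty(n)
for i in range(n):
    x += -theta * x * dt + noise[i]
    samples[i] = x

var_est = samples[50_000:].var()   # drop a short transient
print(var_est)                     # stationary FPE predicts D/theta = 0.5
</n```

For non-ergodic dynamics (e.g. multiple disconnected attractors) the single-trajectory histogram would sample only one component of the FPE solution, so the match is conditional on ergodicity.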
• asked a question related to Statistical Physics
Question
I am trying to symbolically evaluate the expectation of a complicated multivariate random variable expression. The multivariate distribution is Gaussian which is easily (but abstractly) specified
Hi Dr Kleeman,
I believe that the attached link is exactly what you are looking for. It will give you the exact expectation symbolically in terms of the mean and variance for the random variables. I am also attaching the relevant reference to the toolbox in the link.
Feel free to contact me should you require further information. I have MATLAB and Mathematica source codes to the toolbox.
My best wishes to you,
Arvind
• asked a question related to Statistical Physics
Question
e.g., the function f(x,y,z) = xyz - xy - xz - yz, where x, y, z all belong to (0,1). When f takes its extremum, we get x = y = z. Does this property hold for every symmetric polynomial?
Thanks for your reply. I also found the counter-example to my problem, which is just a special case of a symmetric polynomial.
• asked a question related to Statistical Physics
Question
When a system is described in terms of transfer functions, it is widely known that the closed-loop transfer function is Y(s)/X(s) = G(s)/(1 + G(s)H(s)). I'm looking for the equivalent equation in the state-space representation.
I attach a suggestion, please let me know whether this is like what you were looking for.
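For the special case of unity negative feedback around a strictly proper plant (H(s) = 1, D = 0), closing the loop in state space simply gives A_cl = A - BC, with B and C unchanged. A quick numerical check against the transfer-function formula, using made-up matrices:

```python
import numpy as np

# Plant G(s) = C (sI - A)^(-1) B, strictly proper (D = 0); matrices invented.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Unity negative feedback (H(s) = 1): u_plant = r - y, hence
#   xdot = A x + B (r - C x) = (A - B C) x + B r,   y = C x
A_cl = A - B @ C

def tf(Am, s):
    """Evaluate C (sI - Am)^(-1) B at a complex frequency s."""
    return (C @ np.linalg.inv(s * np.eye(2) - Am) @ B)[0, 0]

s = 1.0 + 2.0j
G = tf(A, s)
closed_direct = tf(A_cl, s)      # closed loop straight from state space
closed_formula = G / (1.0 + G)   # classical transfer-function formula
print(abs(closed_direct - closed_formula))  # agree to machine precision
```

For a dynamic H(s) with its own realization, the same idea applies but the closed-loop state vector stacks the plant and feedback states.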
• asked a question related to Statistical Physics
Question
A noise term eta is added to the Cahn-Hilliard equation, which is as follows.
d(phi)/dt=a1 (nabla^2) (phi) -a2 (phi) -a4 (phi^3) + eta
and eta is usually defined as
<eta(x,t) eta(x',t')> = A diracdelta(x-x') diracdelta(t-t')
which suggests that the noise terms are not correlated in time and space.
However, I am confused about how to implement this in the original equation. Do we just use a random number generator?
Many thanks.
Many thanks. I'll have a look at the references.
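For what it's worth, the usual recipe is indeed a Gaussian random number generator, but scaled so that the discrete noise reproduces the delta correlations on the grid. A sketch in 2D (grid size, A, dx, dt are all placeholder values):

```python
import numpy as np

# Delta-correlated noise <eta(x,t) eta(x',t')> = A d(x-x') d(t-t') discretized
# on a 2D grid: the delta functions become 1/dx^2 and 1/dt, so each grid point
# gets an INDEPENDENT Gaussian of variance A/(dx^2 * dt) at every time step.
rng = np.random.default_rng(7)
A_noise, dx, dt, N = 2.0, 0.5, 0.01, 128   # placeholder values

eta = np.sqrt(A_noise / (dx**2 * dt)) * rng.standard_normal((N, N))

# an explicit Euler update would then read: phi += dt * (deterministic + eta)
print(eta.var())  # should be close to A/(dx^2 dt) = 800
```

A fresh, independent noise field is drawn at every time step; reusing it would introduce spurious temporal correlations.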
• asked a question related to Statistical Physics
Question
Why we can't use KAM theory to obtain the recurrence time?
First of all, there isn't any "paradox". Next, the KAM theorem doesn't provide a way to obtain the recurrence time of any system; it implies that, under certain assumptions, this time isn't infinite, that's all. How to actually compute it is another issue. So the answer is that, while the KAM theorem may apply to the FPU system, what it implies may not be useful for addressing certain issues. The reason is that while the theorem implies the existence of tori that describe the periodic motion, whose period is the recurrence time, it doesn't provide any way of expressing the tori in terms of the original phase-space coordinates, i.e., of constructing them.
• asked a question related to Statistical Physics
Question
Why do we need to calculate the inverse participation ratio (IPR) for our system? What does it tell us physically about the system, and why should we calculate it? Is there any specific physical reason to calculate this quantity?
Thanks, Nikos, for your kind reply. I have seen many people calculate the IPR (or the PR) in graphene systems as well, with applied strain, and some use the GIPR or NIPR. When looking at the PR across sample sites, one sees that states are localized near the band center only, which is physically acceptable. Can you please refer me to some papers where people have calculated the IPR, with a proper explanation?
Warm regards
Surender Pratap
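For concreteness, here is a minimal sketch of the IPR for a 1D Anderson (tight-binding) chain; the disorder strength and system size are arbitrary illustrative choices. A perfectly extended state gives IPR = 1/N, while localized states give values of order 1/(localization length), which is why the IPR is a convenient localization diagnostic:

```python
import numpy as np

rng = np.random.default_rng(3)
N, W = 400, 3.0   # chain length and disorder strength (illustrative)

# 1D Anderson tight-binding Hamiltonian: random on-site energies, hopping t = 1
H = (np.diag(rng.uniform(-W / 2, W / 2, N))
     + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1))
energies, states = np.linalg.eigh(H)

# IPR = sum_i |psi_i|^4 for each normalized eigenstate (columns of `states`)
ipr = np.sum(np.abs(states) ** 4, axis=0)
print(ipr.min(), ipr.max())
# a perfectly extended state would give 1/N = 0.0025; localized states give much more
```

Scanning the IPR versus energy then shows directly which parts of the spectrum are localized.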
• asked a question related to Statistical Physics
Question
some items use 5-point Likert scale, some 7-point Likert scale, etc., and some items are in semantic differential scale
Hi,
To use EFA, the items in the scale should have the same number of response points. Thus, you can't use items with 5 points together with items with 7 points.
Regards
• asked a question related to Statistical Physics
Question
Does the Thouless formula work in the quasi-one-dimensional case or in any dimension?
I am trying the quasi-1D case, and the localization length is not coming out exactly.
I want to treat the case where disorder is present in the system.
thanks
• asked a question related to Statistical Physics
Question
The thermal rate coefficient can be obtained from the reactive cross section (σ(Ecoll)):
k(T) = c(T)×∫P(T,Ecoll)Ecollσ(Ecoll)dEcoll
where Ecoll is the relative collision energy, c(T) is a constant at a given temperature, and P(T,Ecoll) is the statistical weight.
In the normal case, Boltzmann statistics are used for the calculation of the statistical weights. But Boltzmann statistics are valid when the temperature is high and the particles are distinguishable. At ultralow temperatures (T < 10 K) we should use the appropriate quantum statistics (Fermi or Bose).
What kind of quantum statistic should be used in the collision of a
radical[spin = 1/2] + closed shell molecule (spin=0)
at ultralow temperatures?
What is the form of P(T,Ecoll) in this case?
First, let me stress that the activation energy is not a well defined microscopic quantity, but a convenient parameter in which we can hide our ignorance on the details of the many different possibilities for the individual reactions.
This having been said, the importance of quantum effects at a given temperature can be assessed by comparing the corresponding thermal energy with the characteristic energies for the different statistics, that is, the zero point energy for enclosing the particle in a given volume, which, for Fermions is known as the Fermi energy and for Bosons as the condensation energy.
As an example, let us take the fermion case. For electrons in sodium metal, with a particle density of 2.65 x 10^28 /m^3, the Fermi energy is 3.24 eV and the Fermi temperature is 37,700 K; it is proportional to (N/V)^(2/3) and inversely proportional to the mass of the particle. I am not an expert in solution chemistry (if the concept makes sense at all at such low temperatures), but I think it is fair to assume that, just on steric grounds, the number density of the reacting molecules is at least two orders of magnitude smaller, which divides the sodium result by a factor of the order of 20. Then, we know that the mass of the nucleon is 1836 times larger than that of the electron, and already for a small molecule like methane, this means division by a further factor, this time of the order of 30,000. So you see that the degeneracy temperature at which quantum effects become important is below 0.1 K. Exactly the same reasoning holds for the boson case.
In summary, there is no need to change anything in your treatment.
Best regards, René Monnier
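As a purely numerical aside, the classical Boltzmann-weighted rate integral from the question can be evaluated directly. This sketch assumes a hypothetical constant cross section so the result can be checked against the analytic value sigma0·(kB·T)^2:

```python
import numpy as np
from scipy.integrate import quad

kB_T = 1.0     # work in units where k_B*T = 1
sigma0 = 2.0   # hypothetical constant cross section

def integrand(E):
    # classical Boltzmann weighting: P(T,E) ~ exp(-E/kB_T), times E * sigma(E)
    return E * sigma0 * np.exp(-E / kB_T)

integral, _ = quad(integrand, 0.0, np.inf)
print(integral)  # analytic value: sigma0 * (kB_T)**2 = 2.0
```

With a realistic, energy-dependent sigma(E) the same quadrature applies; only the prefactor c(T) and the units change.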
• asked a question related to Statistical Physics
Question
In Galit Shmueli's "To Explain or Predict," https://www.researchgate.net/publication/48178170_To_Explain_or_to_Predict, on pages 5 and 6, there is a reference to Hastie, Tibshirani, and Friedman (2009), for statistical learning, which breaks the expected squared prediction error into two parts: the variance of the prediction error and the squared bias due to model misspecification. (The variance-bias tradeoff is discussed in Hastie et al. and other sources.)
An example of another kind of variance-bias tradeoff that comes to mind would be the use of cutoff or quasi-cutoff sampling for highly skewed establishment surveys using model-based estimation (i.e., prediction from regression in such a cross-sectional survey of a finite population). The much smaller variance obtained is partially traded for a higher bias applied to small members of the population, which should not contribute very much to the population totals (as may be studied by cross-validation and other means). Thus some model misspecification will often not be crucial, especially if applied to carefully grouped (stratified) data.
[Note that if a BLUE (best linear unbiased estimator) is considered desirable, it is the estimator with the best variance, so bias must be considered under control, or you have to do something about it.]
Other means to trade off variance and bias seem apparent. General examples include various small area estimation (SAE) methods.
Shrinkage estimators trade off increased bias for lower variance.
Are there other general categories of applications that come to mind?
Do you have any specific applications that you might share?
Perhaps you may have a paper on ResearchGate that relates to this.
Any example of any kind of bias variance tradeoff would be of possible interest.
Thank you.
• asked a question related to Statistical Physics
Question
Hello, I am trying to carry out a spatio-temporal analysis of some data recorded at monthly intervals at different locations using the gstat package of Edzer Pebesma. The data are recorded once a month for a few years. The problem is that, despite following all the instructions in many documents, e.g., st.pdf by Edzer Pebesma and many variants such as Benedikt's, I am still unable to create the required spatio-temporal object. Can anybody help me in this regard? I will upload the data if somebody turns up to help. The data are simple readings of SO2 at 33 places, 12 readings per year for 4 years. What I want is an analysis like this link: http://www.r-bloggers.com/spatio-temporal-kriging-in-r/.
Have you tried the geoR package?
• asked a question related to Statistical Physics
Question
I want to know how to calculate the maximal Lyapunov exponent of a Hamiltonian system numerically. Specifically, yes, I want to calculate the maximal Lyapunov exponent of particle trajectories.
I'm using this paper as a reference: http://chaos.utexas.edu/manuscripts/1085774778.pdf. The logarithm of the phase-space separation grows linearly in time, with slope given by the maximal Lyapunov exponent (MLE). Then, for a chaotic system, we should see a straight line when plotting the log separation vs. time.
I found something like a log curve with saturation at MLE ~ 0.2 and a little 'noise' (figure). I investigated the behavior of this system with Poincaré sections, which show chaos.
Is the behavior that I found in the figure due to some computational problem, like finite-size effects or the time scale? How can I interpret this behavior?
P.S. I'm limiting the separation of the particles to 10^-4 and starting the test system (a copy, except for one particle with its position shifted by 10^-2) after the reference system has relaxed.
As Kevin correctly stated, to calculate the maximal exponent you don't need to worry about any orthogonalization schemes. This is because if one Lyapunov vector has a greater exponent than another, then its growth is exponentially greater than the other, so the maximal exponent will quickly dominate over all others since it has the most positive value. Thus, any random initial perturbation will give you the maximal exponent (unless, of course, your initial perturbation has absolutely no components along the maximal vector).
Now, the problem of saturation can be handled in one of two ways.
The most obvious method of finding the MLE is to evolve one (or more) MD simulation with a slight perturbation in initial conditions compared to an "ideal" (i.e. unperturbed) simulation, and then track the norm of the phase space difference vector between each perturbed simulation and the ideal. The growth rate of this vector is an estimate of the MLE. However, this may saturate or give you non-ideal behavior due to 1) numerical overflow, 2) nonlinear effects if perturbations grow too large, or 3) the fact that perturbations can't grow larger than the chaotic attractor of the system (though this won't be relevant in Hamiltonian systems which don't have an attractor). To avoid all of this, you can periodically rescale your perturbation to a small-enough size and calculate its growth rate anew, then take the average of the short-time growth rates as your estimate of the MLE. However, the Lyapunov analysis looks at the local dynamics of your system, thus you need to use infinitesimally small perturbations. Consequently, ANY finite-sized perturbation that you evolve in MD will have nonlinear effects and will give you just an approximation of the linear behavior, even with small forces and timesteps.
The second, more robust method is to work in tangent space, and this doesn't have the caveat mentioned at the end of the paragraph above. To work in tangent space, you would linearize your force and directly evolve a set of perturbations using this force. Since you're working with linearized forces now, you don't need to worry about nonlinear effects due to perturbation size (so you can always rescale to 1 or whatever size suits your program) and you won't get saturation when the perturbation reaches the size of your attractor (if you have a dissipative system), since you're always looking at the local behavior of your system. I believe this is the method of Benettin et al mentioned by Igor above. You can find details of the numerical recipe for doing this in the very accessible book by Nayfeh and Balachandran:
Ali H.Nayfeh and Balakumar Balachandran, Applied Nonlinear Dynamics: Analytical, Computational, and Experimental Methods, Wiley-Interscience, NY, Jan 1995
Hope this helps. All the best!
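A minimal sketch of the periodic-rescaling procedure described above, applied to the Chirikov standard map rather than a full MD system (the parameters are illustrative; for large K the MLE is known to approach ln(K/2)):

```python
import numpy as np

K = 10.0    # kick strength; strongly chaotic, MLE expected near ln(K/2) ~ 1.61
d0 = 1e-8   # fixed small separation
steps = 20_000

def step(z):
    theta, p = z
    p = p + K * np.sin(theta)
    return np.array([(theta + p) % (2.0 * np.pi), p])

x = np.array([0.1, 0.2])            # reference trajectory (theta, p)
y = x + np.array([d0, 0.0])         # perturbed companion

log_growth = 0.0
for _ in range(steps):
    x, y = step(x), step(y)
    d = np.linalg.norm(y - x)
    log_growth += np.log(d / d0)
    y = x + (y - x) * (d0 / d)      # rescale the separation back to d0

mle = log_growth / steps
print(mle)
```

Rescaling after every step keeps the separation small, which is exactly what prevents the saturation you observed; in an MD code the same loop runs over short simulation segments instead of map iterations.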
• asked a question related to Statistical Physics
Question
Correlated sources of noise live in between regularity and randomness. How to define a bath in context?
The notion of temperature is problematic if the system is not in thermodynamic equilibrium. At thermodynamic equilibrium, different sources of noise are normally uncorrelated. Therefore your question may not have any proper answer. At the very least, you have to specify your system in more detail if there is to be a meaningful answer. For instance, your proposal might be valid if the correlation is very weak, so that the system is close to equilibrium.
• asked a question related to Statistical Physics
Question
In the 2D case, we cannot neglect high-order terms in the expansion, because they have the same canonical dimensions, therefore all terms are essential in the renormalization procedure (they are relevant). Thus, from that point of view, the phi^4-approximation  breaks down in 2D.
Stam, I have become familiar with the papers sent by you. Thank you!
• asked a question related to Statistical Physics
Question
I am studying kappa-statistics; however, I'm having difficulty expanding the kappa-exponential and writing it in a more compact form (in terms of sums and/or products).
Ok. Thanks!
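In case it is still useful: the kappa-exponential can be expanded symbolically, e.g. with SymPy, using the identity exp_k(x) = exp(asinh(kappa·x)/kappa). A sketch:

```python
import sympy as sp

x = sp.symbols('x')
kappa = sp.symbols('kappa', positive=True)

# Kaniadakis kappa-exponential, exp_k(x) = (kappa*x + sqrt(1 + kappa^2 x^2))^(1/kappa),
# rewritten via the identity exp_k(x) = exp(asinh(kappa*x)/kappa)
exp_k = sp.exp(sp.asinh(kappa * x) / kappa)

series = sp.series(exp_k, x, 0, 4).removeO().expand()
print(series)
# 1 + x + x**2/2 + (1 - kappa**2)*x**3/6, reducing to exp(x) as kappa -> 0
```

Higher orders follow by raising the truncation order; the kappa-dependence enters only at cubic order and beyond.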
• asked a question related to Statistical Physics
Question
The basis for the phase field modeling is minimization of a functional presenting some thermodynamic potential (e.g., free energy), which is conserved in the system considered. Therefore, time evolution of the system described by the phase field is not the real kinetics, but just some pathway the system proceeds to the equilibrium state.
It is like using the Metropolis Monte Carlo for minimization of the energy of a system. The final state might correspond to some local or global minimum, but the way the system relaxes to it is not the real kinetics. The real kinetic pathway should be described by the kinetic Monte Carlo approach.
Therefore, the question of applicability of the phase field methods to non-equilibrium problems arises. Are these methods applicable for micro-structure evolution under irradiation?
I am aware of the large number of publications on void and gas bubble growth under irradiation. However, I am interested in a justification of this approach from first principles, not just "use it because the others do so".
I would enjoy discussion on the topic, many thanks for your replies.
Both the second and the first order phase transitions are defined and treated in the framework of thermodynamics. The major difference between the two types of transitions is the existence of the thermodynamic barrier (>> kT) in the case of first order transitions. Sufficiently large stochastic fluctuations are required for such transitions to occur. Stochastic fluctuations can be also treated thermodynamically through the fluctuation-dissipation theorem. Thermodynamic description of the first order transitions is applicable  for the cases of both the sharp and diffuse interfaces between the ambient phase and a new phase nucleus. For example, in the theory of crystallization the diffuse interface approach is more accurate than the sharp interface approximation, which is valid only when the interface thickness is negligible compared with the nucleus size. Under irradiation conditions, two kinds of "forces" are continuously competing with each other. The first one is the external irradiation, which drives the system away from the equilibrium. The second is the internal thermodynamic force moving the system towards the equilibrium. This force can be described thermodynamically through, for example, the chemical potential. In this description both the diffuse and the sharp interface methodologies can be employed. The major problem with the phase-field in the present state is the correct formulation of the thermodynamic "force" through the corresponding thermodynamic potential.
• asked a question related to Statistical Physics
Question
How do I plot a graph of the hydrodynamic radius distribution function vs. hydrodynamic radius? As I have read in the literature, there are two ways: either do a cumulant analysis or do a Laplace inversion. Suppose I do a cumulant analysis; I get the mean size, polydispersity, and skewness. So I have the x axis of the graph (hydrodynamic radius); how do I find the y axis (hydrodynamic radius distribution) from this cumulant-fitted data?
Thanks for the help and time in advance.
A cumulant analysis gives you the moments of the distribution. So you need to 1) find the equation for the distribution in terms of the moments (mean, variance, skewness...), 2) Plug the moments that your analysis gives into the equation and 3) Calculate the distribution at an appropriate range of radii.
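As an illustration of steps 1) to 3), here is a sketch assuming a lognormal shape (a common, but not the only, choice for DLS size distributions), parameterized by the cumulant mean and a polydispersity index defined as variance/mean^2. All numbers are invented:

```python
import numpy as np

# Hypothetical cumulant-fit results: mean hydrodynamic radius and PDI.
mean_rh = 50.0    # nm
pdi = 0.1         # polydispersity index = variance / mean^2

# Lognormal with these first two moments:
#   mean = exp(mu + s2/2),  variance = mean^2 * (exp(s2) - 1)
s2 = np.log(1.0 + pdi)
mu = np.log(mean_rh) - 0.5 * s2

r = np.linspace(1.0, 200.0, 2000)     # x axis: hydrodynamic radius
pdf = np.exp(-(np.log(r) - mu) ** 2 / (2.0 * s2)) / (r * np.sqrt(2.0 * np.pi * s2))

# sanity check: the constructed distribution reproduces the cumulant mean
dr = r[1] - r[0]
recovered_mean = np.sum(r * pdf) * dr
print(recovered_mean)  # ~50
```

Plotting pdf against r then gives the distribution curve; if skewness from the third cumulant matters, a more flexible functional form than the lognormal would be needed.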
• asked a question related to Statistical Physics
Question
Molecules of an ordinary gas move stochastically. This stochasticity is a result of the interaction between gas molecules (collisions of molecules). Let us imagine that the molecules of the gas interact by means of some force field. In this case the character of the stochasticity of the molecular motion changes. Let us take into account that the wave function is a method of describing any ideal fluid motion (see "Spin and wave function as attributes of ideal fluid," J. Math. Phys. 40, 256-278 (1999); electronic version http://gasdyn-ipm.ipmnet.ru/~rylov/swfaif4.pdf). Is it possible to choose a force field such that a gas with this interaction between molecules is described by the Klein-Gordon (or Schroedinger) equation? In other words, is a classical description of quantum particles possible?
Dear Humam,
You are right. There is a connection between quantum phenomena and the classical ones. But I should like to stress, that the quantum phenomena can be explained from the viewpoint of classical dynamics. It is very important, because in this case there is no necessity to quantize all in the world. In particular, one does not need to consider quantum gravitation, which is considered by some theorists as the main problem of  the elementary particle theory.
• asked a question related to Statistical Physics
Question
I know about the standard error, but I do not have a clear idea of the asymptotic standard error and how it is related to the standard error.
Asymptotic standard error is an approximation to the standard error, based upon some mathematical simplification.
For example, we know from the Central Limit Theorem that the mean of n samples taken from independent identically distributed random numbers with finite variance converges in distribution to a normal distribution. The theorem doesn't guarantee that the means of a finite sample are normally distributed, but we often calculate the standard error of the mean under the simplifying assumption that the means ARE normally distributed. Emmanuel's formula for the standard error is one such approximation.
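A quick numerical illustration of why the approximation is usually harmless even for clearly non-normal data (all values invented):

```python
import numpy as np

rng = np.random.default_rng(5)
data = rng.exponential(2.0, size=10_000)   # skewed, clearly non-normal data

# Asymptotic standard error of the mean: s / sqrt(n).  "Asymptotic" because it
# leans on the Central Limit Theorem holding approximately at finite n, not on
# the data themselves being normal.
se_asymptotic = data.std(ddof=1) / np.sqrt(len(data))

# Compare with a bootstrap estimate of the same standard error
boot_means = np.array([rng.choice(data, size=len(data)).mean() for _ in range(500)])
se_bootstrap = boot_means.std(ddof=1)
print(se_asymptotic, se_bootstrap)  # the two should be close
```

For small n or heavy-tailed data the two estimates can diverge, which is exactly when the "asymptotic" qualifier matters.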
• asked a question related to Statistical Physics
Question
It is an old Adler-Weiss result that the Boole mapping R∋x→x-1/x∈R is Lebesgue-measure-preserving and ergodic. What can one state about the mappings R²∋(x,y)→(x-1/y,y+1/x)∈R² and R²∋(x,y)→(y-1/x,x+1/y)∈R²?
The conjecture was recently stated in the attached note.
Thanks! Yes, I really overlooked and missed the x! I have corrected it!
Regards!
• asked a question related to Statistical Physics
Question
My experience with regression is primarily with regression through the origin, not necessarily with one regressor, but let us consider that. This is generally the case when examining establishment survey data, but do not limit yourself to such applications. As the size variable, the predictor x, becomes larger, one expects that the variance of y will become larger. Thus a coefficient of heteroscedasticity, gamma, may be used in regression weights of the form w = 1/x^(2*gamma), so that, for example, for the classical ratio estimator where gamma = 0.5, we have w = 1/x. As Ken Brewer (formerly, Australian National University) has theorized, and I and perhaps others have found experimentally, for an establishment survey such a gamma should generally be estimated to be between 0.5 and 1.0. (There are multiple methods for estimating gamma; see the papers at the links below.) This becomes an important part of the error structure.
But what of cases where data are not so limited to the first quadrant; cases where y may often be negative and so might the regressor, x?  OLS is often used, and discussion may instead be on influential data points. I would like to hear some example applications of the natural occurrence, or non-occurrence, of heteroscedasticity under such circumstances. I suppose that various subject matter applications may influence whether or not there will be heteroscedasticity.
Note that just because heteroscedasticity is not generally considered in a given area of application, does not mean that it does not exist, and perhaps should be considered in the error structure.
What are your experiences; thoughts? Thank you.
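To make the weighting in the question concrete, here is a sketch of weighted least squares through the origin with w = 1/x^(2·gamma) on synthetic establishment-style data (gamma and all values are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic establishment-style data: the variance of y grows with size x.
gamma = 0.75        # hypothetical coefficient of heteroscedasticity
beta_true = 3.0
x = rng.uniform(1.0, 100.0, 500)
y = beta_true * x + rng.standard_normal(500) * x**gamma

# Weighted least squares through the origin with w = 1/x^(2*gamma):
# minimize sum(w * (y - beta*x)^2)  =>  beta_hat = sum(w*x*y) / sum(w*x^2)
w = 1.0 / x ** (2 * gamma)
beta_hat = np.sum(w * x * y) / np.sum(w * x**2)
print(beta_hat)  # close to beta_true = 3.0
```

With the correct gamma the weights match the error variance, which is what makes this estimator efficient relative to OLS on such data.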
I come at this from a rather different perspective - that of random coefficient or as it is sometimes called multilevel model. There it is quite natural to specify and model a variance function that depends on explanatory variables.
Imagine you have pupils at level 1 nested in schools at level 2 and the response is a Maths score now and the predictor is the Maths score 3 years earlier on entry to the school. In the fixed part of the you would have the general mean line across all pupils and across all schools. At level 2 one could have (what is known as a random intercepts and random slopes model) such that there are bigger differences between schools in progress for children of lower ability on entry - that is your are explicitly modelling the between-school heterogeneity of the earlier Maths score.
Much less well- known is that you can also simultaneously model within-school, between pupil heterogeneity. Thus, the lower ability pupils may be more variable in their progress - this seems perfectly natural to me. It is important to do this as it is substantively interesting and you can get confounding of heterogeneity across levels, so the between variance can be mis-estimated when it is really within heteogeneity.
The gamma model is a general procedure that effectively works on the response and all the predictors (a constant-coefficient-of-variation model, so that the variance is a function of the mean). In that regard it is like a logit/binomial model, which has inbuilt heterogeneity: greater heterogeneity is expected when the probability of a positive outcome is around 0.5, with narrower variation as the probability gets closer to 0 or 1 - heterogeneity is an integral part of the estimation of such models.
What I am talking about, however, allows differential heterogeneity for different predictors, and it is even possible to have differential heterogeneity for categorical predictors - such as differential variability for boys and girls in their progress.
In such work, heterogeneity is expected and is not an aberration. E.g., you can have impact heterogeneity where two drug treatments have the same mean lowering of blood pressure but one is more variable in its effect than the other. Of course you would try to account for this differential impact, but it is also useful simply to find that it is going on.
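As a rough illustration of modelling a variance function that depends on a predictor, here is a crude two-step sketch (not the full likelihood-based machinery of a multilevel package such as MLwiN; the log-linear variance model and all parameter values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.uniform(0.0, 2.0, n)
c0, c1 = -0.5, 0.5                       # assumed log-linear variance function
eps = rng.normal(0.0, 1.0, n) * np.exp(c0 + c1 * x)
y = 1.0 + 3.0 * x + eps                  # mean model

# Step 1: ordinary least squares for the mean.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Step 2: regress log squared residuals on x; the slope estimates 2*c1,
# i.e. how the log of the residual variance changes with the predictor.
g, *_ = np.linalg.lstsq(X, np.log(resid**2), rcond=None)
c1_hat = g[1] / 2.0
```

A positive c1_hat is direct evidence that the residual variation grows with the predictor, which is exactly the kind of substantively interesting heterogeneity discussed above.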
Here is a paper on Rgate that is a tutorial on these type of models
and here is a training manual for MLwin that can fit such models  with complex variance functions at any level
• asked a question related to Statistical Physics
Question
Especially in the Bayesian method or the generalized least squares method.
Hello -
Perhaps if you were to reissue your question in a more data- and statistics-oriented context, and included regression in the list of topics, you might get more suggestions from the 'statistical community' (where I belong). If you have any releasable data, especially if it could be shown graphically, that might be helpful in any discussion. However, as someone (Theo Dijkstra) pointed out in a discussion of another question under regression, just evaluating data can be misleading, so make sure any advice you get is not spurious and fits with some theory in your subject matter.
I was encouraged that you appear to have realized that heteroscedasticity may be important. Many statisticians don't remember, or perhaps never knew, that. OLS is dreadfully overused. :-)
Finally, if someone suggests a hypothesis or 'significance' test, another historical accident of statistics, note that a p-value is a function of sample size and can be very misleading if used alone. I think it far more useful/practical to look at your regression coefficients and their standard errors - comparatively. But note that if you use multiple regression, interactions can result in some regressors masking others, etc.
Whatever the state of your question now, or any new one, I suggest you reach out to the statistical community, but take whatever you hear 'with a grain of salt.'
Could be fun!
Best wishes - Jim
• asked a question related to Statistical Physics
Question
I know there is a technique to measure the second virial coefficient for the interaction of dilute colloidal particles in a mixed solvent, as described by pioneers like M. L. Kurnaz, J. V. Maher, etc.; however, I need someone to explain the technique in its simplest form because, to some extent, I am unfamiliar with the expressions and methods in this area. I would appreciate it if someone could help me with this problem.
Thanks a lot dear Dr. Farid
• asked a question related to Statistical Physics
Question
What changes in the transfer laws (transfer of energy, mass, momentum) when dealing with phenomena on short time scales (about a few ms)?
I observed that Fourier's law, Fick's law, and Laplace's law for bubbles are not enough to describe reality when I try to simulate brief phenomena. It seems that all transport laws derive from the Boltzmann equation.
My question is how to adapt these laws when I face a short-time phenomenon.
Well, LBM is a kind of direct numerical simulation in which you do not need to use constitutive relationships for processes like heat or mass transfer. All the physical phenomena are captured by the "collision integral" and the "relaxation time". Since, at the core, all physical phenomena can be explained by molecular motion and collisions between molecules (e.g., heat transfer by conduction), such macroscopic representations can be avoided if LBM or DNS is used.
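To give a flavour of the collision-relaxation picture, here is a minimal D1Q3 lattice-Boltzmann (BGK) sketch for pure diffusion in lattice units; the lattice size, relaxation time, and initial pulse are illustrative choices, not part of any particular published scheme:

```python
import numpy as np

# Minimal D1Q3 lattice-Boltzmann (BGK) solver for pure diffusion, in
# lattice units. An illustrative sketch, not production code.
nx, steps, tau = 200, 500, 1.0
w = np.array([4/6, 1/6, 1/6])        # weights for velocities 0, +1, -1
rho = np.zeros(nx)
rho[nx // 2] = 1.0                   # initial pulse of "mass"
f = w[:, None] * rho[None, :]        # start from equilibrium

for _ in range(steps):
    rho = f.sum(axis=0)
    feq = w[:, None] * rho[None, :]  # equilibrium distribution
    f += (feq - f) / tau             # BGK collision: relax toward feq
    f[1] = np.roll(f[1], 1)          # stream velocity +1
    f[2] = np.roll(f[2], -1)         # stream velocity -1

rho = f.sum(axis=0)
# Effective diffusivity in lattice units: D = c_s^2 * (tau - 1/2), c_s^2 = 1/3.
```

No Fick's law is coded anywhere: diffusion emerges from the relaxation step, which is the point being made above.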
• asked a question related to Statistical Physics
Question
I wanted to look at the analytical solutions to some quantum mechanical systems and follow along with the math step by step.
I was able to find (at least I think) one for the H2+ ion, but that involves a single electron, i.e., no electron correlation. Has this been solved for H2 or some other multi-electron system, and if so, could I be pointed to the relevant solutions?
A general analytical solution of the 3-body problem cannot be obtained in principle, as the general solution is unstable (highly sensitive to small changes of the initial conditions). The understanding of this fact led to the Dynamical Chaos paradigm and the development of statistical methods of analysis of such solutions (e.g., S. M. Ulam, On some statistical properties of dynamical systems, 1961, and 55 years of active investigations).
• asked a question related to Statistical Physics
Question
Here 'water' means a collection of molecules which can actually 'boil', i.e., is able to feature a liquid-gas phase transition. I believe the answer should be a finite number rather than infinity; cf. the definite answer for an ideal gas in the publication below.
Some years ago I was computing the long range interaction forces between colloidal particles and thin films using Lifshitz theory. The calculations used dielectric data taken from measurements of large macroscopic material. As I recall the results were meaningful for colloidal particles of diameter ~30 Angstrom. Assuming a molecule is ~1A this would suggest a droplet of a thousand or so water molecules might behave as a macroscopic system.
• asked a question related to Statistical Physics
Question
Can anybody recommend a statistical mechanics book that:
1) is suited for self-teaching in an undergrad level
2) includes solved problems
3) includes the math needed for each topic
What I have in mind is something similar to what "Quantum Chemistry" by Ira Levine does to QM: each of the chapters includes an intro explaining all the math you'll need for the chapter subject, considering the reader has only a good calculus background. I'm looking for something similar for statistical mechanics, aimed at those interested in molecular dynamics.
• asked a question related to Statistical Physics
Question
I'm going to perform a classical MD simulation on a graphene oxide nanosheet. Everything is OK when I run the simulation without Coulomb interactions, but when I include them, the volume diverges (it constantly increases toward infinity). What is going on here?
Every time the Coulomb potential is included in the calculation, it introduces infinities. When you consider the partition function of the hydrogen atom, it diverges; if you consider the Coulomb cross sections, they diverge; and so on. This stems from the fact that the Coulomb potential remains significant even at large distances, but you can also have problems at short distances. So you have to limit the potential in some way. At short distances you cannot have a point particle; instead, you should consider these particles as spheres of the size of the de Broglie length. At long distances you can cut the potential off at the Debye length, or you can use the Debye potential, which avoids the divergences. Attached you will find a paper treating a similar problem.
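A quick numerical comparison of the bare Coulomb form 1/r with the screened Debye form (1/r)*exp(-r/lambda_D) shows how screening removes the long-range tail while leaving short distances essentially unchanged. The prefactor and the screening length lambda_D below are illustrative assumptions, not values for any specific system:

```python
import numpy as np

# Bare Coulomb potential vs. Debye (screened) potential, in arbitrary
# units. The prefactor a and screening length lam are illustrative.
def coulomb(r, a=1.0):
    return a / r

def debye(r, a=1.0, lam=2.0):
    return (a / r) * np.exp(-r / lam)

r = np.linspace(0.1, 20.0, 400)
vc, vd = coulomb(r), debye(r)
# At r << lam the two agree; at r >> lam the Debye form is exponentially small.
```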
• asked a question related to Statistical Physics
Question
I heard this idea and I wanted to check, but there was no good source on the internet. A good reference would be appreciated
Let me just add a brief comment to Christian's answer. He refers to the statement that for two-dimensional systems with a continuous order parameter and sufficiently short range interactions the Mermin-Wagner theorem states that there cannot be any long-range order. This is not to be confused with a phase transition, though, because we know of examples where such systems undergo phase transitions without breaking their continuous symmetry. The best-known example is the Kosterlitz-Thouless transition in the classical XY model, which amounts to a vortex binding/unbinding transition that does not induce overall long range order.
• asked a question related to Statistical Physics
Question
Recent (November 2013) public debate videos are up on youtube; for description, see: http://www.ece.tamu.edu/~noise/HotPI_2013/HotPI_2013.html
Yes, today's science is often not about seeking the scientific truth but instead about pushing an agenda and keeping the grants. As a friend (Mark Dykman) says, people are often not interested in seeking the truth about nature but instead in publishing in Nature. But these things must sometime change back to truth-seeking, because humanity needs real science; otherwise civilization will have a catastrophic end. Where and when, who knows? One thing is sure: if, in the present situation, agencies were to double all the research money, it would be wasted, because it would mostly grow the influence of "self-justifiers", publication costs, and the grant-size expectations needed to get tenured positions, thus further diminishing the output and influence of truth seekers. It would be much healthier to cut all existing funding and give $10k to everybody who is publishing, just to survive.
• asked a question related to Statistical Physics
Question
Gennadiy, I think the answer to your question about the nature of the trial density matrix is contained in pages 128-129 of Tanaka's book, where the cumulant expansion is described. Note that the largest cluster function G^N(1,2,...,N) is nothing but the entropy term in the variational potential, which is expressed in terms of the trial density matrix of the whole system (see equation 6.12 on page 129 of the book). This cluster function can, in turn, be expressed as a sum of cumulant functions of 1, 2, 3, ..., N sites. This representation is exact in the sense that, by minimizing the variational potential with respect to the N-point density matrix, one should obtain the exact equilibrium density. In practice one has to truncate the expansion at a low order, equivalent to the size of the maximum cluster that can be solved with the numerical resources at hand. There is no a priori form for the trial density. More precisely, the form of the trial density is encoded in the cumulant expansion. But, in practice, you don't have to worry about it: just minimize the variational potential with respect to all the reduced densities and then apply the normalization and reducibility conditions, as explained in the book. Hope this helps.
• asked a question related to Statistical Physics
Question
I am working with a network composed of polymeric Gaussian chains. I would like to use the replica formalism to study the deformation properties of the network. My network is deformed affinely.
This is done in Phys. Rev. E 58, R24-R27 (1998), Phys. Rev. E 62, 8159 (2000).
• asked a question related to Statistical Physics
Question
I mean that most of the potentials (Lennard-Jones, DLVO, etc.) have spherical symmetry and thus are not appropriate when getting close to a surface/interface. Indeed, the physics of a surface is not the same as that of the bulk.
It depends on the system and the level of approximation you want. Some potentials that are used in classical MD simulations include:
The Tersoff potentials, used for simulations of carbon, silicon, germanium and a range of other materials, which include three-atom interactions.
The embedded-atom method (and its modified version) and tight-binding second-moment-approximation potentials, which include the calculation of the electron density around an atom from the contributions of all surrounding atoms.
Some references:
J. Tersoff, Phys. Rev. B 39, 5566 (1989)
Daw et al., Mater. Sci. Eng. Rep. 9, 251 (1993)
Cleri and Rosato, Phys. Rev. B 48 (1993)
Baskes, Phys. Rev. B 46, 2727 (1992)
Ab initio MD simulations could be another choice, but of course it depends on the system size, since they may be too computationally expensive.
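Another common way to handle the loss of spherical symmetry near a flat surface is an effective wall potential, e.g. the Lennard-Jones 9-3 form obtained by integrating the 12-6 pair potential over a half-space. The eps and sigma values below are illustrative, not fitted to any material:

```python
import numpy as np

# Lennard-Jones 9-3 wall potential: the interaction of a particle at
# height z above a semi-infinite flat solid, obtained by integrating the
# 12-6 LJ pair potential over the half-space. eps, sigma are illustrative.
def lj_93(z, eps=1.0, sigma=1.0):
    s = sigma / z
    return eps * ((2.0 / 15.0) * s**9 - s**3)

z = np.linspace(0.7, 5.0, 2000)
v = lj_93(z)
z_min = z[np.argmin(v)]   # analytic minimum is at (2/5)**(1/6) * sigma
```

The potential depends only on the distance to the surface, not on a pair separation, which is precisely what distinguishes it from the bulk spherically symmetric forms.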
• asked a question related to Statistical Physics
Question
The physical properties (for example, of a material) at the nanoscale are not the same as they are at the macroscopic scale. Thus, can we use thermodynamics and statistical physics to explain nanophysics properties? An example: the demonstration of the fluctuation-dissipation theorem uses temperature, but is it possible to define a temperature at the nanoscale? Another example: what about the ergodic assumption?
A collection of nanoparticles, with or without interactions between them, is not the same as an individual isolated nanoparticle. The real interest in nanophysics and nanotechnology lies in isolated or semi-isolated nanoparticles. The application of statistical mechanics to real small nanoparticles is limited and approximate, whereas statistical mechanics and thermodynamics can still be used intelligently, albeit approximately, for assemblies of nanoparticles with weak and strong interactions.
A lot of research money has already been wasted on studying assemblies of nanoparticles. This is really a very old problem of chemistry. Chemists often produced assemblies of nano-sized particles in chemical reactions, which showed broadened X-ray diffraction peaks. They then tried to produce larger (bulk) particles for X-ray diffraction studies. Many of them did not have the possibility of using electron microscopy, but some of them did use it. There was less research money then. Now a huge number of materials scientists are just repeating the work of old solid-state chemistry and flooding the literature with mostly uninteresting and known results.
For real nanoparticle research one should concentrate on isolated and semi-isolated particles. Here the application of traditional statistical mechanics or thermodynamics is questionable and can only be done with modifications of definitions, assumptions, and techniques. There are already many attempts in this direction.
I was just reading a very nice article by W.T. Coffey and Y.P. Kalmykov entitled "Thermal fluctuations of magnetic nanoparticles: Fifty years after Brown". Here it is
• asked a question related to Statistical Physics
Question
Suppose you have two non-stationary, interacting systems with weak coupling (or one spatially extended, e.g. brain, but with signals measured at different locations). Since systems are coupled, you obtain some degree of synchronization. Since systems are non-stationary, synchronization is time dependent. What parameters can be used to quantify this time-dependency of synchronization? What insight do they give regarding structure and dynamics of the system?
My favourite tool for synch is the Kuramoto order parameter (http://en.wikipedia.org/wiki/Kuramoto_model).
The main advantage of the Kuramoto order parameter is in fact its flexibility: it is capable of capturing the "degree of synchronization" for rotators, i.e., oscillators that can be mapped onto a circle. In principle it works for infinitely many coupled oscillators; I have never tried it with just two, but I think it can give some indication. Or perhaps, if you have an extended system, you could try to sample the system at several points, not just two. In any case, it is a relatively simple tool; I think it is worth trying.
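Computing the time-resolved Kuramoto order parameter r(t) = |(1/N) sum_k exp(i*theta_k(t))| from measured phases is essentially a one-liner; the two synthetic phase series below are purely illustrative:

```python
import numpy as np

# Time-resolved Kuramoto order parameter r(t) = |mean_k exp(i*theta_k(t))|,
# demonstrated on two illustrative synthetic phase series.
def order_parameter(phases):
    """phases: array of shape (n_oscillators, n_times); returns r(t)."""
    return np.abs(np.exp(1j * phases).mean(axis=0))

t = np.linspace(0.0, 10.0, 1000)
# Fully synchronized pair: identical phases -> r(t) = 1.
sync = np.vstack([2 * np.pi * t, 2 * np.pi * t])
# Anti-phase pair: phases pi apart -> r(t) = 0.
anti = np.vstack([2 * np.pi * t, 2 * np.pi * t + np.pi])

r_sync = order_parameter(sync)
r_anti = order_parameter(anti)
```

For non-stationary data, the time dependence of r(t) itself (e.g. computed over sliding windows of instantaneous phases from a Hilbert transform) is what quantifies how the synchronization evolves.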
• asked a question related to Statistical Physics
Question
One could do that via the virial theorem and the radial correlation function, but I was wondering if there is something more efficient available.
This paper gives an expression for the pressure of hard spheres obtained from computer simulations; you may find it relevant to your study.
Miguel and Jackson, Mol. Phys., 104, 22, 2006.
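For a pairwise-additive fluid, the virial route gives P = (N*kT + (1/3) * sum_{i<j} r_ij . f_ij) / V, which is cheap to accumulate during the force loop. A minimal sketch in reduced Lennard-Jones units (random configuration, no periodic images, purely illustrative):

```python
import numpy as np

# Virial-route pressure for a pairwise-additive system in reduced LJ units:
# P = (N*kT + (1/3) * sum_{i<j} r_ij . f_ij) / V. Illustrative sketch only.
def lj_force(rij):
    """Force on particle i due to j for the 12-6 LJ potential (eps=sigma=1)."""
    r2 = np.dot(rij, rij)
    inv6 = 1.0 / r2**3
    return 24.0 * (2.0 * inv6**2 - inv6) / r2 * rij

def virial_pressure(pos, box_volume, kT=1.0):
    n = len(pos)
    virial = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            virial += np.dot(rij, lj_force(rij))
    return (n * kT + virial / 3.0) / box_volume

rng = np.random.default_rng(2)
pos = rng.uniform(0.0, 10.0, size=(20, 3))
p = virial_pressure(pos, box_volume=1000.0)
```

In the dilute limit the virial term vanishes and the expression reduces to the ideal-gas value N*kT/V, which is a useful sanity check.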
• asked a question related to Statistical Physics
Question
Out-of-equilibrium thermodynamics often studies changes in a structure over a long time scale. Could first-order transitions give information about out-of-equilibrium evolving structures? I mean: the time scale in a first-order transition tends to zero (for example, the liquid-solid transition), but the systems involved are often simpler. I would suggest that scientists study first-order transitions as if they were out of equilibrium (for the duration of the transition).
That's the reason why Becker-Döring cluster theory and its modifications (coagulation-fragmentation equations) are still quite popular amongst mathematicians and physicists. One investigates, e.g., the first-order phase transition "gas -> liquid" via the formation of larger and larger clusters, formed out of a monomer bath (i.e., the gaseous phase). Here a cluster consists of several monomers sticking together. Once a critical cluster size is exceeded, one identifies these "large" clusters with the occurrence of droplets (i.e., the liquid phase).
The stochastic processes of cluster formation are usually described by master equations. For Markovian stochastic processes these master equations are represented by infinite systems of ordinary differential equations of first order in the time parameter t. And as you observed correctly, under certain conditions one recovers a time scale tending to zero, i.e., metastable states. Furthermore, as you also mentioned, the physical systems are far from equilibrium. Hence Onsager's reciprocity-relations approach is not applicable. Oliver Penrose, a former post-doc of Lars Onsager, dedicated quite some time to researching the issues you mentioned in the 70s, 80s and 90s; so did Kurt Binder, Joel Lebowitz and Enzo Olivieri, to name but a few pioneers.
A drawback of this mathematical picture, however, is the need of a micro-physical theory for the transition rates. If one really had a serious theory of the transition rates for water and the "liquid -> solid" phase transition ("freezing"), say, then one could investigate a possible theoretical explanation for the Mpemba effect.
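The Becker-Döring system described above can be sketched in a few lines; the constant attachment/detachment rates, the forward-Euler time stepping, and the truncation at a maximum cluster size are all simplifying assumptions for illustration:

```python
import numpy as np

# Minimal Becker-Döring (coagulation-fragmentation) sketch, illustrative:
# J_n = a*c1*c_n - b*c_{n+1} is the net flux from n-clusters to
# (n+1)-clusters via monomer attachment (a) and detachment (b).
N, a, b, dt, steps = 50, 1.0, 0.5, 1e-3, 5000
c = np.zeros(N + 1)            # c[n]: concentration of n-clusters (c[0] unused)
c[1] = 1.0                     # start from pure monomer

for _ in range(steps):
    J = a * c[1] * c[1:N] - b * c[2:N + 1]   # J[k] corresponds to J_{k+1}
    dc = np.zeros_like(c)
    dc[2:N + 1] += J                          # each J_n creates an (n+1)-cluster
    dc[2:N] -= J[1:]                          # ...and consumes an n-cluster (n >= 2)
    dc[1] = -J[0] - J.sum()                   # monomer balance: dc1 = -J_1 - sum_n J_n
    c += dt * dc

mass = np.arange(N + 1) @ c                   # total mass sum_n n*c_n is conserved
```

The whole physical content sits in the rate coefficients a and b; as noted above, a serious micro-physical theory of those rates is exactly what is missing for real systems like freezing water.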
• asked a question related to Statistical Physics
Question
I believe nearly all books on statistical mechanics cover Hamiltonian systems. Then, naturally, Liouville's theorem and the Boltzmann-Gibbs distribution are discussed. I don't see the connection between a Hamiltonian system and the Boltzmann-Gibbs distribution. The former describes the deterministic dynamics of one particle, while the latter refers to the statistical law of a large number of particles, which seems to be random.
The answer lies in the ensemble theory. You can derive the canonical and grand canonical ensembles from the microcanonical one, see e.g. the book by L. E. Reichl. The key step is to assume ergodicity; however, it should be noted that this cannot be rigorously proven in general.
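As a toy illustration of the ergodic assumption, consider a 1D harmonic oscillator: its energy surface is a single closed orbit, so the long-time average of an observable along the trajectory equals the microcanonical average over the energy shell. A minimal numerical check (amplitude and frequency are arbitrary choices; this is the trivially ergodic case, not a proof for interacting systems):

```python
import numpy as np

# Toy ergodicity check for a 1D harmonic oscillator: the energy surface is
# one periodic orbit, so the time average of x^2 along the trajectory must
# equal the microcanonical (uniform phase-angle) average. Both are A**2/2.
A, omega = 1.5, 2.0
t = np.linspace(0.0, 1000.0, 2_000_001)       # long trajectory
time_avg = np.mean((A * np.cos(omega * t))**2)

phi = np.linspace(0.0, 2 * np.pi, 100_000, endpoint=False)
ensemble_avg = np.mean((A * np.cos(phi))**2)  # uniform measure on the orbit
```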
• asked a question related to Statistical Physics
Question
Is there a relation between the convergence rate of a Monte Carlo simulation and the entropy of the simulated system ?
With canonical MC there are multiple-spin sampling schemes, such as the Wolff cluster algorithm, that beat critical slowing down. There are also many variants of MC schemes, such as Wang-Landau, multicanonical MC, etc., that can be much more efficient than canonical MC.
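For reference, a minimal sketch of the Wolff single-cluster update for the 2D Ising model (J = 1, periodic boundaries; the lattice size, temperature, and number of updates below are arbitrary illustrative choices):

```python
import numpy as np

# Wolff single-cluster update for the 2D Ising model, J = 1, periodic
# boundaries. Bond-addition probability p_add = 1 - exp(-2*beta*J).
def wolff_step(spins, beta, rng):
    L = spins.shape[0]
    p_add = 1.0 - np.exp(-2.0 * beta)
    i, j = rng.integers(L), rng.integers(L)
    seed_spin = spins[i, j]
    stack, cluster = [(i, j)], {(i, j)}
    while stack:
        x, y = stack.pop()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nx % L, ny % L
            if (nx, ny) not in cluster and spins[nx, ny] == seed_spin \
               and rng.random() < p_add:
                cluster.add((nx, ny))
                stack.append((nx, ny))
    for x, y in cluster:
        spins[x, y] = -seed_spin          # flip the whole cluster at once
    return len(cluster)

rng = np.random.default_rng(3)
L, beta = 16, 0.5                          # beta above the critical ~0.4407
spins = rng.choice([-1, 1], size=(L, L))
for _ in range(200):
    wolff_step(spins, beta, rng)
m = abs(spins.mean())
```

Because whole correlated clusters flip in one move, successive configurations decorrelate far faster near criticality than with single-spin Metropolis updates.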
• asked a question related to Statistical Physics
Question