Science topic

Statistical Mechanics - Science topic

Statistical mechanics provides a framework for relating the microscopic properties of individual atoms and molecules to the macroscopic bulk properties of materials that can be observed in everyday life.
Questions related to Statistical Mechanics
  • asked a question related to Statistical Mechanics
Question
1 answer
How do the non-trivial zeros of the Riemann zeta function relate to the quantum chaotic behavior of high-dimensional systems, and what implications might this have for the study of quantum eigenstate thermalization hypothesis (ETH)?
The Riemann zeta function is a fundamental object in number theory, known for its deep connection to the distribution of prime numbers. One of the most intriguing aspects of this function is its non-trivial zeros, which lie along the "critical line" in the complex plane. The Riemann Hypothesis posits that all these zeros have a real part of 1/2, though this remains unproven. Interestingly, the behavior of these zeros has been found to share striking similarities with the statistical properties of eigenvalues in quantum systems, particularly in systems exhibiting quantum chaos. Quantum chaotic systems are those that, despite being governed by deterministic laws, display unpredictable behavior akin to classical chaotic systems when viewed in the quantum regime.
The quantum eigenstate thermalization hypothesis (ETH) is a concept in statistical mechanics that seeks to explain how isolated quantum systems can exhibit thermal equilibrium behavior, despite being in a pure quantum state. According to ETH, the individual eigenstates of a quantum system should mimic the properties of a thermal ensemble in the appropriate limit. The relevance of the question about the connection between the non-trivial zeros of the Riemann zeta function and quantum chaotic behavior arises from the possibility that insights from number theory might provide new perspectives on the statistical mechanics of quantum systems. If the distribution of these zeros is related to quantum chaotic systems, it could offer a novel approach to understanding the emergence of thermal behavior in quantum systems and even further our understanding of quantum-to-classical transitions.
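As a side note on the "striking similarities" mentioned above: the level-spacing statistics of quantum-chaotic systems (and, conjecturally, of the zeta zeros, per the Montgomery-Odlyzko work) can be illustrated by sampling GUE random matrices. A minimal Python sketch — matrix sizes, trial counts, and the crude unfolding are all illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def gue_spacings(n=200, trials=50):
    """Unfolded nearest-neighbour spacings from the bulk of GUE spectra."""
    out = []
    for _ in range(trials):
        a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        h = (a + a.conj().T) / 2                  # Hermitian (GUE-type) matrix
        ev = np.linalg.eigvalsh(h)
        bulk = np.diff(ev[n // 4: 3 * n // 4])    # stay away from spectrum edges
        out.append(bulk / bulk.mean())            # crude local unfolding
    return np.concatenate(out)

s = gue_spacings()
# Wigner surmise for GUE: P(s) = (32/pi^2) s^2 exp(-4 s^2/pi); mean spacing 1,
# with strong suppression of small s ("level repulsion"), as for zeta zeros.
print(round(s.mean(), 3))                         # 1.0 by construction
print((s < 0.1).mean() < 0.05)                    # True: few tiny spacings
```

The suppression of small spacings (level repulsion) is the qualitative signature shared by chaotic spectra and the zeta zeros; a full comparison would use proper unfolding against the semicircle density.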
Relevant answer
  • asked a question related to Statistical Mechanics
Question
4 answers
The short answer is yes.
Scientific education in the West throughout the 20th century was based on the assumption that Schrödinger's PDE is the only unified theory of energy fields (microscopic and macroscopic), which is false.
Schrödinger's PDE is fundamentally incomplete because it lives and operates in 3D geometric space plus real time as an external controller.
Our mother nature lives and functions in a unitary 4D x-t space with time as a dimensionless integer woven into geometric space.
We assume that a serious improvement in scientific teaching and research can be achieved by describing the 4D x-t unit space via a 4D statistical mechanics unit space or any other adequate representation.
Relevant answer
Answer
The Schrödinger Equation is right and our current Mathematics is incomplete.
The differentiation of discontinuous functions exists and is easy, to any order. See it online.
This opens new solutions to the Schrödinger equation, while keeping the old ones.
We live in 6D, but a 3D slice is accurate when comoving. Then mass represents the amount of matter. Even in 3D, mass can be transported by massless particles -- even when they are isolated. A single photon can make an atom recoil.
  • asked a question related to Statistical Mechanics
Question
4 answers
The short answer is yes, it is absolutely true.
We first affirm that the matrix mechanics of W. Heisenberg, M. Born and P. Jordan (H-B-J) was born dead, destroyed by the Schrödinger equation within three years.
This is not surprising, since the HBJ matrix is designed to resolve the energy levels of the hydrogen atom and is therefore only a subset of the SE PDE.
On the other hand, modern statistical mechanics (B matrix chains and Cairo techniques in 2020) has established itself as a giant capable of solving almost all physics and mathematics problems.
Here we can say that it is a unified field theory and Schrödinger's PDE is one of its subsets and not vice versa.
We predict that future scientific research over the next ten years will increasingly focus on the area of modern matrix mechanics, at the cost of eliminating the Schrödinger equation.
Relevant answer
Answer
This is just a brief answer which aims on the one hand to clarify the question and its answer and on the other hand to thank our fellow contributors for their helpful answers.
It is true that extraordinary allegations require extraordinary evidence.
Unlike the Schrödinger equation and its derivatives, B matrix chains and Cairo statistical techniques are capable of numerically solving almost all energy fields [1,2,3,4,5], both in quantum physics and in classical physics (the Poisson and Laplace PDEs, the heat diffusion equation, sound intensity in audio rooms), in addition to pure mathematical problems such as differentiation and integration.
We assume that this makes it a unified field theory.
The references:
1- An efficient reformulation of Schrödinger's partial differential equation. Full text available. May 2024, ResearchGate, IJISRT journal.
2- Is it time to reformulate the Poisson and Laplace partial differential equations? Full text available. June 2023, ResearchGate, IJISRT journal.
3- A statistical numerical solution for the time-dependent 3D heat diffusion problem without the need for the PDE heat equation or its FDM techniques. Full text available. July 2021, ResearchGate, IJISRT journal.
4- Theory and design of audio rooms - A statistical view. Full text available. July 2023, ResearchGate, IJISRT journal.
5- Effective unconventional approach to statistical differentiation and statistical integration. Full text available. November 2022, ResearchGate, IJISRT journal.
  • asked a question related to Statistical Mechanics
Question
7 answers
Schrödinger's PDE is old (100+ years) and fundamentally incomplete, but it has proven itself in almost all scientific fields.
It cannot die or be discarded all at once, at least not within the next ten years.
In the long term, however, it is very likely to gradually disappear, replaced by the elegant and brilliant theories of modern statistical mechanics.
Relevant answer
Answer
Answer V (continued)
Once again, nature is beautiful and powerful.
It is capable of solving all of its mathematical and physical problems in the simplest and fastest ways, and the 4D x-t matrix-B series solution is equally beautiful and streamlined.
In particular, the probabilities and statistics that are well defined in the four-dimensional unit space of the B matrix chains, but belong to a missing part of classical theoretical mathematics and physics, are the underlying reason for the superiority of the former.
The following example nicely illustrates the power of the matrix-B transition-series solution for finding the sum of infinite series, a power which clearly exceeds that of the Schrödinger equation or any of its derivatives.
Here we consider the statistical-physical solution of a purely mathematical question, given by:
Using matrix algebra [3], how can we show that the infinite power series Σ_{N≥1} [(1+x)/2]^N is equal to (1+x)/(1−x), ∀x∈[0,1)?
We assume that all mathematicians and physicists know that mathematics is the language of physics. However, not all mathematicians and physicists admit that modern physics (classical physics supplemented by B-matrix strings) can be the language of mathematics and replace it in certain areas/situations, as in the case of this question.
A numerical example for validation: the sum of the series 0.99 + 0.99^2 + 0.99^3 + ... + 0.99^N tends to 99 as N tends to infinity (here (1+x)/2 = 0.99, i.e. x = 0.98, so (1+x)/(1−x) = 1.98/0.02 = 99). The proof of this question is based on the transfer matrix D, plus the following rule (principle 1) [14]: [For positive symmetric physical power matrices, the sum of their eigenvalues is equal to the eigenvalue of their power-series sum.] The question arises whether a classical mathematical proof can also be found.
We assume the proposed axiom or mathematical principle:
[For real statistical transition matrices of the form
D(N) = B + B^2 + B^3 + ... + B^N,
the sum of the eigenvalues (λ + λ^2 + λ^3 + ..., where λ ∈ [0,1)) is equal to the eigenvalue of the series sum, i.e. the eigenvalue of D(N).] ... Principle (1)
Obviously λ is an eigenvalue of the transition matrix B, λ^2 is the corresponding eigenvalue of B^2, and so on.
Note that principle (1) is true and can be validated by numerical calculation.
However, to our knowledge, there is no rigorous mathematical proof of principle (1).
Additionally, using matrix algebra and principle 1, one can prove mathematical formulas such as:
i- the infinite power series Σ_{N≥1} [(1+x)/2]^N is equal to (1+x)/(1−x), ∀x∈[0,1);
ii- the infinite power series Σ_{N≥1} [(1+Mx)/(1+M)]^N is equal to (1+Mx)/(M(1−x)), ∀x∈[0,1), with M a positive integer.
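These two closed forms are ordinary geometric series sums (Σ_{N≥1} r^N = r/(1−r)) and can be checked numerically, independently of the matrix machinery; a quick sketch:

```python
def geom_sum(r, terms=10_000):
    """Partial sum of the geometric series r + r^2 + ... + r^terms."""
    return sum(r ** n for n in range(1, terms + 1))

# First closed form: r = (1+x)/2 gives (1+x)/(1-x)
x = 0.5
assert abs(geom_sum((1 + x) / 2) - (1 + x) / (1 - x)) < 1e-9

# Second closed form: r = (1+M*x)/(1+M) gives (1+M*x)/(M*(1-x))
M, x = 3, 0.25
assert abs(geom_sum((1 + M * x) / (1 + M)) - (1 + M * x) / (M * (1 - x))) < 1e-9

# The thread's numerical example: r = 0.99 (i.e. x = 0.98) sums to 99
print(round(geom_sum(0.99), 6))   # 99.0
```

Substituting r = (1+x)/2 into r/(1−r) gives ((1+x)/2)/((1−x)/2) = (1+x)/(1−x) directly, so the identities hold for any x ∈ [0,1) regardless of the proof route.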
The five examples above show beyond doubt that Schrödinger PDE is a subset of statistical matrix mechanics and not the other way around.
Statistical matrix mechanics is concerned with the nature of quantum particles and their associated energy density, but Schrödinger's PDE describes the probability distribution of the energy density of a quantum system over possible energy levels.
This means that Schrödinger's PDE and statistical matrix mechanics are apparently two different subjects. However, Schrödinger's PDE remains a subset of statistical matrix mechanics, in the sense that any solution for the energy density of a quantum system found via Schrödinger's PDE can also be obtained via the appropriate statistical matrix mechanics. Moreover, many physical and mathematical solutions missing from Schrödinger's PDE are well defined and explained in statistical matrix mechanics.
3- Using matrix algebra, how to show that the infinite power series [(1+x)/2]^N is equal to (1+x)/(1−x), ∀x∈[0,1)? ResearchGate, IJISRT journal, November 2023.
To be continued.
  • asked a question related to Statistical Mechanics
Question
22 answers
Contrary to the opinion of the iron guards of the SE, who believe it is the only unified field theory, the truth is that the modern statistical mechanics of Cairo techniques and its B-matrix-chain products is the unified field theory. This means it can be the source of all solutions of the SE, or of any other time-dependent PDE.
There are many validation examples, though they are rather difficult to find.
Relevant answer
Answer
Fundamentally, and this is what I find fascinating about the SE: its solution contains discrete electron levels. The electron CANNOT sit at an arbitrary continuous distance from the nucleus; it CAN ONLY occupy discrete levels/distances. This leads to the known periodic table of elements, so the SE predicts the existence of the elements on the periodic table as per the standard model.
  • asked a question related to Statistical Mechanics
Question
4 answers
We assume this form is inappropriate and misleading in many situations.
Numerical statistical mechanics that works efficiently to solve the heat diffusion equation as well as Schrödinger's PDE predicts a more appropriate eigenmatrix form,
( [B] + Constant. V[I] )= λ ( [B] + Constant. V[I] )
with the principal eigenvalue λ = 1 which is equivalent to the principle of least action.
It is clear that the vector V replaces Ψ^2.
Relevant answer
Answer
For interested contributors:
Please take a look at,
Cairo Technics Solution of Schrödinger's Partial Differential Equation - Time Dependence, Researchgate, IJISRT Journal, March 24.
We assume there is a detailed answer.
  • asked a question related to Statistical Mechanics
Question
21 answers
Elementary explanations of the second law of thermodynamics refer to probabilities of system states and seem convincing. But not when considering time-reversals, because the same statistical arguments should also apply there, yet they produce contradictions regarding entropy increases with time. (I think the difficulty is in whether or not the assumption of statistical randomness is appropriate, because it depends on what is given and maybe also on the direction of time; but I'm not an expert and this doesn't answer my question anyway.)

While reading some literature about the direction of time I learned that the direction of time and the second law of thermodynamics both come from a very low entropy immediately after the big bang, with increasing entropy produced by things that include gravitational clumping (e.g., the formation of black holes and the merging of black holes to produce larger black holes). I learned that this is responsible for the second law of thermodynamics, but it seems to me that this is an incredibly large-scale thing.

Given this explanation it seems amazing to me that we can randomly select a tiny piece of matter (large enough to be macroscopic but tiny from the point of view of human perception) and find that it obeys the laws of thermodynamics. Is there an explanation of how such large influences on entropy (e.g., objects produced by gravity clumping) can produce a second law that is so incredibly homogeneous that we find the law obeyed by every tiny speck of material?
Relevant answer
Answer
Not able to answer these I'm afraid, but just one comment.
There are many entropies. The Second Law refers to the Clausius entropy. This entropy is meaningful only at thermal equilibrium. Since the universe is not at equilibrium, we cannot define a quantity called the Clausius entropy of the whole universe.
It is widely assumed that the Boltzmann entropy is numerically equal to the Clausius entropy. This is a theoretical assumption, since the BE can only be calculated for ideal gases. But even here, it is only at equilibrium that BE = CE. As the system evolves towards equilibrium, S = k ln W is not the entropy of that system. The CE has no meaning out of equilibrium. This is understood by reminding ourselves that CE is a state function and so is independent of the route taken by the system to that state. On the other hand, W depends on the rate of equilibration, which can be controlled. It is only at equilibrium that W is independent of kinetics.
  • asked a question related to Statistical Mechanics
Question
3 answers
Anybody is having solution to problems of Statistical Mechanics of Phase Transitions by J. M. Yeomans?
Relevant answer
Answer
You should do it yourself as an essential part of the learning process!
  • asked a question related to Statistical Mechanics
Question
2 answers
A similarity transformation that makes a Hamiltonian diagonal looks like U^{\dagger}HU. What about an observable and the density matrix? How do they transform?
Relevant answer
Answer
Hermitian, unitary, and skew-Hermitian operators are all unitarily diagonalizable.
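To address the transformation question directly: in the basis that diagonalizes H, an observable transforms as A → U†AU and a density matrix as ρ → U†ρU, so expectation values Tr(ρA) are basis-independent. A small numpy sketch with random, purely illustrative matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# Random Hermitian "Hamiltonian" and a real-symmetric observable (illustrative)
m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (m + m.conj().T) / 2
b = rng.normal(size=(n, n))
A = (b + b.T) / 2

evals, U = np.linalg.eigh(H)             # H = U @ diag(evals) @ U^dagger
H_diag = U.conj().T @ H @ U              # U^dagger H U: diagonal

# A thermal density matrix, written first in the eigenbasis of H
beta = 0.7
rho_diag = np.diag(np.exp(-beta * evals))
rho_diag /= np.trace(rho_diag)
rho = U @ rho_diag @ U.conj().T          # the same state in the original basis
A_diag = U.conj().T @ A @ U              # observable, transformed the same way

print(np.allclose(H_diag, np.diag(np.diag(H_diag))))            # True
print(np.isclose(np.trace(rho @ A).real,
                 np.trace(rho_diag @ A_diag).real))             # True
```

The second check is the key point: because the state and the observable transform with the same U, all physical predictions are unchanged by the diagonalizing transformation.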
  • asked a question related to Statistical Mechanics
Question
3 answers
Attaching mathematical expressions here is problematic. I am attaching the link to the question here.
Relevant answer
Answer
Yes, the DoS can be calculated for a discrete number of states: for a small number of atoms or molecules, when they are in a single quantum level, for example the ground energy level or the first excited state.
The number of states can be discrete, a few or even only one; if the DoS is zero it means that the number of states is constant and belongs to the same type of degrees of freedom.
Please check:
Reif, F. 1966. Statistical Physics. Berkeley Physics Course. McGraw-Hill, New York, USA. Volume 5. pp.398.
Lu, T. and Chen, F. 2012. Multiwfn: A multifunctional wave function analyzer. Journal of Computational Chemistry. 33(5):580-592.
Kind Regards.
  • asked a question related to Statistical Mechanics
Question
3 answers
And can you reference articles or texts giving answers to this question?
Reviewing the literature would be helpful before considering whether to update the 2015 ideas.
Relevant answer
Answer
...Just a bit more to the answer by colleague V. V. Vedenyapin:
in the Boltzmann-Planck relation S = k·ln(W), take W = 1 + (T/Ts)^K, where T stands for the Kelvin absolute temperature, Ts is a temperature scale, and K is the efficiency of the process under study.
About 100 years ago, in the Journal of American Chemical Society, Dr. George Augustus Linhart has published the formal statistical inference of the above fact.
I have tried to answer the poser you have posted here - consequently and in detail:
Shorter versions:
  • asked a question related to Statistical Mechanics
Question
37 answers
When studying statistical mechanics for the first time (about 5 decades ago) I learned an interesting postulate of equilibrium statistical mechanics, which is: "The probability of a system being in a given state is the same for all states having the same energy." But I ask: why energy, instead of some other quantity?

When I was learning this topic I was under the impression that the postulates of equilibrium statistical mechanics should be derivable from more fundamental laws of physics (which I had supposedly already learned before studying this topic), but the problem is that nobody has figured out how to do that derivation yet. If somebody figures out how to derive the postulates from more fundamental laws, we will have an answer to the question "Why energy instead of some other quantity?" Until then, we have to accept the postulate as a postulate rather than a derived conclusion.

The question I am asking 5 decades later is: has somebody figured it out yet? I'm not an expert on statistical mechanics, so I hope the answers can be simple enough to be understood by non-experts.
Relevant answer
Answer
You are totally right & thank you for that clarifying answer:
Non-equilibrium statistical mechanics has a lot to do with damping, for instance in the particular case of springs; and yes, the Boltzmann equation for the decay (relaxation) of a dilute gas is one of the examples (even, say, where the temperature gradient can be neglected).
But when there are gradients of temperature, the exercise becomes more interesting as a numerical problem.
It took me 28 years since I took my first course on non equilibrium statistical mechanics to understand your remarkable statement.
Kind Regards.
  • asked a question related to Statistical Mechanics
Question
5 answers
One might argue: Animals increase their survivability by increasing the degrees of freedom available to them in interacting with their environment and other members of their species.
Right, wrong, or in between? Your views?
Are there articles discussing this?
Relevant answer
Answer
Also check please the following useful RG link:
  • asked a question related to Statistical Mechanics
Question
3 answers
Let's say the 12-6 equation were perfect; then the most favoured distance between two Ar atoms is r(min).
My understanding is that vdW radii are measured from experiments, while the 12-6 potential is more of an approximation. But if I want to link the vdW radius to a distance in the 12-6 form (sigma, r, or r(min)), which one should it be?
The vdW radius is defined from the closest approach distance of two atoms, so should it be somewhere slightly smaller than sigma?
On the website, however, the sigma value is referred to as the vdW radius.
Relevant answer
Answer
The van der Waals radius is itself an approximation of a sort, since atoms are not hard spheres in reality. Different experimental methods of estimation yield different values for the same species. One could say the LJ sigma parameter is in the same order of magnitude as the vdW radius, but sigma should be picked to produce the desired interatomic distance or some other property in the simulation, rather than precisely equal to the vdW radius.
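For reference, the analytic relation behind the question: for V(r) = 4ε[(σ/r)¹² − (σ/r)⁶], the potential crosses zero exactly at r = σ and its minimum sits at r_min = 2^(1/6)·σ ≈ 1.122σ, so σ lies slightly inside the most-favoured separation. A quick sketch (the σ value below is illustrative, not a fitted argon parameter):

```python
import math

def lj(r, sigma=3.4, eps=1.0):
    """12-6 Lennard-Jones potential; sigma/eps values are illustrative."""
    sr6 = (sigma / r) ** 6
    return 4 * eps * (sr6 ** 2 - sr6)

sigma = 3.4
r_min = 2 ** (1 / 6) * sigma             # analytic minimum of the 12-6 form

print(lj(sigma) == 0.0)                  # True: V crosses zero at r = sigma
print(round(r_min / sigma, 3))           # 1.122: r_min sits ~12% beyond sigma
print(lj(r_min) < lj(sigma))             # True: r_min is the favoured distance
```

This is why σ and a tabulated vdW radius need not coincide: one is a zero-crossing parameter of a model potential, the other an experimentally derived contact distance.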
  • asked a question related to Statistical Mechanics
Question
5 answers
No way to find out unless you do the actual proposed experiment:
This guy here:
claims he made an interference Galton board https://en.wikipedia.org/wiki/Galton_board experiment and got an interference pattern. Would this show that quantum randomness originates from determinism and results from local hidden variables in the photon's environment, in contrast to the Bell-inequality EPR experiment?
Note:
As a reminder this is nothing extremely new, actually a deterministic explanation of the quantum DS single photon experiment was previously demonstrated by this experimental application of the pilot-wave theory using bouncing droplets:
Photons, which are epicenters of electromagnetic distortions, distort the EM mass field of the environment they move through as they translate in space. These distortions of the mass-field environment are fed back to the photon as alterations of its motion trajectory. Photons, as massless particles, may pass through each other without being affected, but the dynamic EM flux coming from the mass field of the environment they interact with can affect their trajectory.
Relevant answer
Answer
Good points. However, I cannot see why the EM wake the photon leaves behind as it passes, interacting with the matter field of its environment, cannot be regarded as a local hidden variable.
  • asked a question related to Statistical Mechanics
Question
5 answers
Hamiltonian mechanics is a theory developed as a reformulation of classical mechanics and predicts the same outcomes as non-Hamiltonian classical mechanics. It uses a different mathematical formalism, providing a more abstract understanding of the theory. Historically, it was an important reformulation of classical mechanics, which later contributed to the formulation of statistical mechanics and quantum mechanics.
Relevant answer
Answer
This is my humble opinion about your interesting question:
  • In classical mechanics, we start from the Hamilton formulation leading us to the concept of the phase space, which is used in the microcanonical ensemble widely studied in statistical mechanics, & where energy must be preserved in order to have some classical perspective of what happens with millions of microscopical states.
  • In quantum mechanics, we have a sort of similar approach when we use elastic scattering theory which uses an energy conservation principle and a phase space, & which allows observing a classical perspective of the quantum micro world.
Interesting question, Best Regards.
  • asked a question related to Statistical Mechanics
Question
1 answer
Statistical mechanics considering interaction is attached to the second law of thermodynamics. Considering the influence of temperature on the interaction potential, statistical mechanics can prove that the second law of thermodynamics is wrong.
Relevant answer
Answer
If you apply quantum mechanics consistently, without semi-classical approximations, you get for the partition function
Z = Σ_n exp(−β E_n),
with n running over all N-particle energy eigenstates of the system. The E_n are temperature-independent, and no conflict arises with the second law of thermodynamics. I assumed here that the system is confined within an external potential.
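To make the formula concrete, a toy numerical sketch: the spectrum below is an illustrative harmonic-oscillator ladder (not the system under discussion), with temperature-independent E_n, and the thermal average follows from ⟨E⟩ = −∂ ln Z/∂β:

```python
import math

# Illustrative harmonic-oscillator-like spectrum, E_n = n + 1/2 (hbar*omega = 1);
# the E_n carry no temperature dependence, as in the answer above.
E = [n + 0.5 for n in range(200)]

def Z(beta):
    """Canonical partition function Z = sum_n exp(-beta * E_n)."""
    return sum(math.exp(-beta * e) for e in E)

def mean_energy(beta, h=1e-6):
    """<E> = -d ln Z / d beta, by central finite difference."""
    return -(math.log(Z(beta + h)) - math.log(Z(beta - h))) / (2 * h)

beta = 2.0
exact = 0.5 + 1.0 / math.expm1(beta)     # known oscillator result, for comparison
print(abs(mean_energy(beta) - exact) < 1e-4)   # True
```

All temperature dependence lives in the Boltzmann weights, never in the eigenvalues, which is exactly the point of the answer.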
  • asked a question related to Statistical Mechanics
Question
18 answers
A system of ideal gas can always be made to obey classical statistical mechanics by varying the temperature and the density. Now a wave packet for each particle is known to expand with time. Therefore after sufficient time has elapsed the gas would become an assembly of interacting wavelets and hence its properties would change since now it would require a quantum mechanical rather than classical description. The fact that a transition in properties is taking place without outside interference may point to some flaw in quantum mechanics. Any comments on how to explain this.
Relevant answer
Answer
Wigner probabilistic distributions, Prof. Sohail Khan
Best Regards.
  • asked a question related to Statistical Mechanics
Question
15 answers
I am asking this question on the supposition that a classical body may be broken down into particles so small that quantum mechanics applies to each of them. Here the number of particles tends to be uncountably large (keeping number/volume constant).
Now, statistical mechanics applies if a practically infinite number of particles is present. So if a practically infinite number of infinitely small particles is present, quantum statistical mechanics may be applied to this collection. (Please correct me if I have a wrong notion.)
But this collection of infinitesimally small particles makes up the bulky body, which can be studied using classical mechanics.
Relevant answer
Answer
There is no difference, Prof. Manish Khare; we have two windows through which to watch the physical world, the classical and the quantum approaches, but there is a third window: the Wigner probability distributions.
Best Regards.
  • asked a question related to Statistical Mechanics
Question
4 answers
It is suggested that the Zero Point Energy (that causes measurable effects like the Casimir force and Van der Waals force) cannot be a source of energy for energy harvesting devices, because the ZPE entropy cannot be raised, as it is already maximal in general, and one cannot violate the second law of statistical mechanics. However, I am not aware of a good theoretical or empirical proof that ZPE entropy is at its highest value always and everywhere. So I assume that ZPE can be used as a source of energy in order to power all our technology. Am I wrong or right?
Relevant answer
Answer
It isn't the ``zero point energy'' that is the origin of either the Casimir force or the van der Waals force. First of all, these two forces don't have anything to do with each other: The van der Waals force is the classical force between electric dipoles, the Casimir force is the force that expresses the fluctuations of energy about its average value in the state where this average value is equal to zero.
That's why zero-point energy is a misnomer.
The entropy of any physical system, in flat spacetime, in the vacuum state vanishes, since the vacuum state of a quantum system in flat spacetime is unique.
  • asked a question related to Statistical Mechanics
Question
2 answers
If MD simulations converge to the Boltzmann distribution ρ ∼ exp(−βε) after sufficiently long time, why do we need MD simulations at all, since all macroscopic quantities can be computed from the Boltzmann distribution itself? I am asking this for short peptides of a few amino acids (tripeptide, tetrapeptide, etc.).
For instance, in the paper linked above, MD is used to generate Ramachandran distributions of conformations of a pentapeptide at constant temperature. This should obey statistical mechanics, and therefore satisfy the Boltzmann distribution, so I should be able to write the distribution using the Boltzmann weight as
ρ({ϕi,ψi}) ∼ exp(−βV({ϕi,ψi})),
where {ϕi,ψi} is the set of Ramachandran angle coordinates of the pentapeptide.
Why should I run MD to get the same distributions?
Relevant answer
Answer
As Behnam Farid pointed out, you cannot know all the relationships (the functional form V({ϕi,ψi})) between the different amino acids (steric clashes, interactions based on charges or hydrophobicity) needed to predict the energetically favourable combinations of phi and psi, and hence you need to sample them. The state distribution is affected by the amino acid sequence, may differ between force fields and simulation methods, and depends on the solvent (ions etc.) and temperature.
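The argument can be illustrated with a toy example: for a known one-dimensional "torsion" potential the Boltzmann density is available in closed form, and a Metropolis sampler (standing in for MD) merely reproduces it; for a real peptide only the sampling route exists because V({ϕi,ψi}) is not known in closed form. The potential, temperature, and bin below are all illustrative:

```python
import math
import random

random.seed(0)
beta = 1.0

def V(phi):
    """Illustrative 1D periodic 'torsion' potential (not a real force field)."""
    return math.cos(3 * phi) + 0.5 * math.cos(phi)

# Metropolis sampling: needs only V pointwise, like an MD force field
phi, samples = 0.0, []
for _ in range(400_000):
    trial = (phi + random.uniform(-0.5, 0.5)) % (2 * math.pi)
    if random.random() < math.exp(-beta * (V(trial) - V(phi))):
        phi = trial
    samples.append(phi)

# Direct route: the closed-form Boltzmann weight exp(-beta*V), normalized
grid = [2 * math.pi * k / 10_000 for k in range(10_000)]
weights = [math.exp(-beta * V(p)) for p in grid]
lo, hi = 0.8, 1.3                      # occupancy of one basin
sampled = sum(lo < p < hi for p in samples) / len(samples)
direct = sum(w for p, w in zip(grid, weights) if lo < p < hi) / sum(weights)
print(round(sampled, 2), round(direct, 2))   # the two agree closely
```

In one dimension the direct integral is trivial; for a pentapeptide the integral is over a high-dimensional coupled landscape, which is exactly why sampling (MD or MC) is the practical route.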
Have a look here to see how complex the conformational space of small peptides (13-15) can already be:
Bests
  • asked a question related to Statistical Mechanics
Question
9 answers
How do I calculate the mean-square angular displacement? Do we need periodic boundary conditions if at each step the angle is updated using theta(t+dt) = theta(t) + eta(t), where eta(t) is a Gaussian noise? Please describe the procedure for calculating this quantity.
Relevant answer
Answer
Souvik Sadhukhan Yes: the way to do that is by relating the angular mean-square displacement, expressed through ⟨sin²θ⟩ (more precisely, ⟨sinθ(t)sinθ(t′)⟩), to the 2-point function of the noise, ⟨η(t)η(t′)⟩. This is the relation that defines the diffusion coefficient (strictly speaking, in the approximation where sinθ(t) can be assumed to be drawn from a Gaussian distribution; otherwise it is more complicated), and it is obtained by computing the probability distribution of sinθ(t) from the probability distribution of η(t) and the relation dθ(t)/dt = η(t).
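A minimal numerical sketch of one common convention (all parameters illustrative): integrate the update rule from the question with the angle kept unwrapped, in which case no periodic boundary treatment is needed for the MSAD itself, and check ⟨Δθ²(t)⟩ = 2·Dr·t:

```python
import numpy as np

rng = np.random.default_rng(2)
Dr, dt = 0.5, 0.01                      # rotational diffusion coeff., timestep
n_part, n_steps = 1_000, 2_000

# eta ~ N(0, 2*Dr*dt): the Gaussian kicks from the update rule in the question
eta = rng.normal(0.0, np.sqrt(2 * Dr * dt), size=(n_part, n_steps))
theta = np.cumsum(eta, axis=1)          # UNWRAPPED angles; no 2*pi wrapping

ratios = {}
for lag in (10, 100, 1000):
    dtheta = theta[:, lag:] - theta[:, :-lag]     # all time origins at once
    msad = (dtheta ** 2).mean()
    ratios[lag] = msad / (2 * Dr * lag * dt)      # ~1 if MSAD = 2*Dr*t holds
    print(lag * dt, round(ratios[lag], 2))
```

Wrapping θ into [0, 2π) before differencing would corrupt the MSAD at long times; wrapping is only needed when evaluating periodic observables such as sinθ or cosθ.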
  • asked a question related to Statistical Mechanics
Question
5 answers
Is there any code available to calculate Spin Spin Spatial correlation function in 1d Ising model?
Relevant answer
Answer
Hello. I wrote a code for this; I can share it with you.
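In the meantime, here is a minimal sketch (not the poster's code): for an open 1D Ising chain in zero field, the spin-spin correlation is exactly ⟨s_i s_{i+r}⟩ = tanh(βJ)^r, which brute-force enumeration of a small chain confirms:

```python
import itertools
import math

def correlation(n, beta, J, i, r):
    """<s_i s_{i+r}> for an open 1D Ising chain, by brute-force enumeration."""
    num = den = 0.0
    for spins in itertools.product((-1, 1), repeat=n):
        energy = -J * sum(spins[k] * spins[k + 1] for k in range(n - 1))
        w = math.exp(-beta * energy)
        num += w * spins[i] * spins[i + r]
        den += w
    return num / den

beta, J = 0.8, 1.0
for r in (1, 2, 3):
    print(r, round(correlation(10, beta, J, 2, r), 6),
          round(math.tanh(beta * J) ** r, 6))   # the two columns agree
```

The closed form follows by rewriting the chain in bond variables τ_k = s_k·s_{k+1}, which are independent with ⟨τ⟩ = tanh(βJ); the product of r consecutive bonds gives tanh(βJ)^r, so the correlation length is ξ = −1/ln tanh(βJ).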
  • asked a question related to Statistical Mechanics
Question
6 answers
Recently I was selected for an ICTP program named Physics of Complex Systems, but I have a keen interest in particle physics and quantum networks. Statistical mechanics is deeply involved in complex systems, and one of my professors said that statistical mechanics could also be a helpful tool for particle physics.
Relevant answer
Answer
Dear Lutfa Rahman,
Greetings, Sorry, I mean " System of Particles".
Regards, Saeed
  • asked a question related to Statistical Mechanics
Question
4 answers
I mean, in Williamson-Hall analysis, should we take the theta values of a set of parallel planes, or all the peak positions corresponding to the most intense peaks?
Relevant answer
Answer
Suresh Guduru The Williamson-Hall plot is used to calculate crystallite size and microstrain from complex XRD data. When both the crystallite size and the microstrain vary as a function of the Bragg angle, we can only extract these parameters from the XRD data using a W-H plot. I have provided the practice file (Origin file) as well as the calculation file (Excel file) in the video description. Thanks.
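A minimal sketch of the fit itself, β·cosθ = Kλ/D + ε·(4 sinθ), using every measured reflection rather than one family of planes. The peak list and FWHMs below are synthetic, generated from assumed D and ε, so the fit simply recovers those values:

```python
import math

K, lam = 0.9, 1.5406e-10                # shape factor, Cu K-alpha wavelength (m)
D_true, eps_true = 40e-9, 2e-3          # assumed crystallite size / microstrain

# Synthetic peak list (degrees 2-theta, illustrative positions); the FWHMs are
# generated from D_true and eps_true, so the fit should recover those values.
two_theta = [38.2, 44.4, 64.6, 77.5, 81.7]
theta = [math.radians(t / 2) for t in two_theta]
beta_fwhm = [K * lam / (D_true * math.cos(t)) + 4 * eps_true * math.tan(t)
             for t in theta]            # FWHMs in radians

# Williamson-Hall: y = beta*cos(theta) versus x = 4*sin(theta), fit a line
xs = [4 * math.sin(t) for t in theta]
ys = [b * math.cos(t) for b, t in zip(beta_fwhm, theta)]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
intercept = ybar - slope * xbar

print(round(K * lam / intercept * 1e9, 1))   # 40.0 (nm): size from intercept
print(round(slope * 1e3, 2))                 # 2.0 (x10^-3): strain from slope
```

With real data, β must be the instrument-corrected FWHM converted to radians, and using as many reflections as possible (not one hkl family) is what separates the 1/cosθ size term from the tanθ strain term.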
  • asked a question related to Statistical Mechanics
Question
3 answers
Dear All;
If you are well-experienced in one of the fields Complex Networks, Human Genetics, or Statistical Mechanics and would like to collaborate with us in our project please contact me at:
Basim Mahmood
Project title: Statistical Mechanics of Human Genes Interactions
Regards
Relevant answer
Answer
Basim Mahmood Interesting question, and I am sure people and experts from your domain will look at this and have some discussions with you. However, my core area is biogas, and I would be happy to collaborate on anything related to biogas.
  • asked a question related to Statistical Mechanics
Question
9 answers
I would like to calculate the non-Gaussian parameter from the MSD. I think I am making a mistake in calculating it:
NGP: α₂(t) = 3⟨r(t)⁴⟩ / (5⟨r(t)²⟩²) − 1
In some articles it is Δr(t) instead. I am a bit confused. Could someone please help me calculate it by explaining the terms?
Thank you
Relevant answer
Answer
You cannot calculate the non-Gaussian parameter (NGP) from the averaged mean-squared displacement alone. Calculate δr²(t) = (r(t) − r(0))² and δr⁴(t) = δr²(t)·δr²(t) at each time difference t for each particle. Averaged over time origins and over the number of particles, these give the NGP as α₂(t) = 3⟨δr⁴(t)⟩ / (5⟨δr²(t)⟩²) − 1. Hope this helps!
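A minimal sketch of that recipe for 3D Brownian trajectories, where α₂(t) should be close to zero because the displacements are Gaussian (all parameters illustrative, and averaging here is over particles from a single time origin for brevity):

```python
import numpy as np

rng = np.random.default_rng(3)
n_part, n_steps, dt, D = 2_000, 300, 0.01, 1.0

# 3D Brownian trajectories: Gaussian steps with variance 2*D*dt per component
steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_part, n_steps, 3))
r = np.cumsum(steps, axis=1)            # positions relative to r(0) = 0

def alpha2(lag):
    """NGP alpha_2 = 3<dr^4>/(5<dr^2>^2) - 1 at t = lag*dt (3D convention)."""
    dr2 = (r[:, lag - 1, :] ** 2).sum(axis=1)     # squared displacements
    return 3 * (dr2 ** 2).mean() / (5 * dr2.mean() ** 2) - 1

for lag in (10, 100, 300):
    print(lag * dt, round(alpha2(lag), 2))        # close to 0 for Gaussian steps
```

For a glassy or caged system the same code applied to real trajectories gives α₂ > 0 at intermediate times; the prefactor 3/5 is specific to three dimensions (it is 1/2 in 2D).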
  • asked a question related to Statistical Mechanics
Question
113 answers
Hello dear colleagues,
It seems to me this could be an interesting thread for discussion. I would like to center it on the concept of entropy, and specifically on the explanation-description-exemplification side of the concept.
That is: what do you think is a good, helpful explanation of the concept of entropy (at a technical level, of course)?
A way (or ways) of explaining it that settles the concept as clearly as possible, perhaps first in a general scenario and then, if required, in a more specific one.
Kind regards!
Relevant answer
Dear F. Hernandes
The Entropy (Greek - ἐντροπία-transformation, conversion, reformation, change) establishes the direct link between MICRO-scopic state (in other words orbital) of some (any) system and its MACRO-scopic state parameters (temperature, pressure, etc).
This is the Concept (with a capital letter).
Its main feature: it is the ONLY entity in the natural sciences that shows the development trend of any self-sustained natural process. It is a state function, not a transition function. That is why entropy is independent of the transition route; it depends only on the initial state A and the final state B of the system under consideration. Entropy has many senses.
In the mathematical statistics, the entropy is the measure of uncertainty of the probability distribution.
In statistical physics, it presents the probability (the so-called *statistical sum*) of the existence of some given microscopic state (its *statistical weight*) under the same macroscopic characteristics. This means that the system may carry different amounts of information while its macroscopic parameters remain the same.
In the information approach, it deals with the information capacity of the system. That is why the father of information theory, Claude Elwood Shannon, believed that the words *entropy* and *information* are synonyms. He defined entropy as the ratio of the lost information to the whole information volume.
In the quantum physics, this is the number of orbitals for the same (macro)-state parameters.
In the management theory, the entropy is the measure of uncertainty of the system behavior.
In the theory of the dynamic systems, it is the measure of the chaotic deviation of the transition routes.
In the thermodynamics, the entropy presents the measure of the irreversible energy loss. In other words, it presents system’s efficiency (capacity for work). This provides the additivity properties for two independent systems.
Gnoseologically, the entropy is the inter-disciplinary measure of the energy (information) devaluation (not the price, but rather the very devaluation).
This way, the entropy is many-sided Concept. This provides unusual features of entropy.
What is the dimension of entropy? The right answer depends on the approach. It is a dimensionless figure in the information approach (Shannon defined it as the ratio of two uniform quantities; therefore it is dimensionless by definition). On the contrary, in the thermodynamic approach it has a dimension (energy over temperature, J/K).
Is entropy a parameter (a fixed number) or a function? Once again, the proper answer depends on the approach (point of view). It is a number in mathematical statistics (the logarithm of the number of admissible (unprohibited) system states, the well-known sigma σ). At the same time, it is a function in quantum statistics. Etc., etc.
So, be very cautious when you are operating with entropy.
Best wishes,
Emeritus Professor V. Dimitrov vasili@tauex.tau.ac.il
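The dimension question raised in the answer above can be made concrete with a tiny sketch: the Shannon entropy of a discrete distribution is dimensionless (nats), and multiplying the same number by k_B expresses it in thermodynamic units (J/K). The two four-state distributions below are made up for illustration.

```python
import math

kB = 1.380649e-23  # Boltzmann constant, J/K

def shannon(p):
    """Shannon entropy in nats (dimensionless)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

# Made-up four-state distributions: uniform (maximal uncertainty) vs biased.
p_uniform = [0.25] * 4
p_biased = [0.7, 0.1, 0.1, 0.1]
H_u, H_b = shannon(p_uniform), shannon(p_biased)

print(round(H_u, 4), round(H_b, 4))  # uniform gives the maximum ln 4 ≈ 1.3863
print(kB * H_u)                      # the same uncertainty expressed in J/K
```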
  • asked a question related to Statistical Mechanics
Question
4 answers
This is to understand how the concepts of statistical mechanics are applied in astrophysics.
Relevant answer
Answer
It depends on which subject of Statistical Mechanics.
For neutron stars, say, you can follow the chapter "Properties of Matter at Very High Density" (Chap. XI) in the book:
Landau, L. D., & Lifshitz, E. M. 1980, Statistical Physics (Elsevier Ltd.).
For gas dynamics and fluctuations in interstellar media you can follow:
1. Spitzer, L. 1962, Physics of Fully Ionized Gases (New York: Wiley)
2. Braginskii, S. I. 1965, RvPP, 1, 205
3. Parker, E. N. 1953, ApJ, 117, 431
Best Regards.
  • asked a question related to Statistical Mechanics
Question
10 answers
I am looking for some quality materials for learning the molecular dynamics theory and the use of LAMMPS. Besides the LAMMPS manual from Sandia National Laboratory, which sources can I use for learning LAMMPS?
  • asked a question related to Statistical Mechanics
Question
5 answers
This question relates to my recently posted question: What are the best proofs (derivations) of Stefan’s Law?
Stefan’s Law is E is proportional to T^4.
The standard derivation includes use of the concepts of entropy and temperature, and use of calculus.
Suppose we consider counting numbers and, in geometry, triangles, as level 1 concepts, simple and in a sense fundamental. Entropy and temperature are concepts built up from simpler ideas which historically took time to develop. Clausius’s derivation of entropy is itself complex.
The derivation of entropy in Clausius’s text, The Mechanical Theory of Heat (1867) is in the Fourth Memoir which begins at page 111 and concludes at page 135.
Why does the power relationship E proportional to T^4 need to use the concept of entropy, let alone other level 3 concepts, which takes Clausius 24 pages to develop in his aforementioned text book?
Does this reasoning validly suggest that the standard derivation of Stefan’s Law, as in Planck’s text The Theory of Heat Radiation (Masius translation) is not a minimally complex derivation?
In principle, is the standard derivation too complicated?
Relevant answer
Answer
Good morning.
It is really simple to deduce if we start from the energy density in a cavity (the Planck distribution). I note that the Planck distribution can itself be deduced simply from Bose-Einstein statistics, knowing the value of Planck's constant.
I am sending you this deduction, with the caveat that the work is written in Italian; I think you can follow it through the sequence of formulas.
Of course, there is also Boltzmann's deduction, published a year after Stefan's experimental work.
Have a good day and stay safe.
  • asked a question related to Statistical Mechanics
Question
11 answers
I'm writing my dissertation on the economic dynamics of inequality, and I'm going to use econophysics as an empirical method.
Relevant answer
Answer
Dear Mehmet,
Some formal similarities between equilibrium statistical mechanics and economics may exist, but we should be very suspicious of any direct comparisons. Of course, in some instances the mathematical methods of statistical mechanics may be of practical use in economics, but I would not read too much into this. My sense is that the common ground is that in both cases there is incomplete information about the microscopic state of the system. See e.g. this paper by my advisor:
There is a fair bit of literature on using entropy to model systems in physics and economics.
  • asked a question related to Statistical Mechanics
Question
6 answers
Or is the concept inapplicable?
If it were applicable, could statistical mechanical methods apply? Does entropy?
Relevant answer
Answer
Amino acids in proteins are, of course, not free to move independently like a molecule in solution. First, they are connected to 2 other amino acids (or one other if they are the first or last in the chain, or up to 3 others for a cysteine in a disulfide bond). Second, they are subject to a variety of forces exerted by their surroundings, such as charge-charge interactions, hydrogen bonding, van der Waals interactions, and pi stacking (in the case of aromatic amino acids). Computational methods, particularly molecular dynamics, can be used to model the movement of amino acids in proteins over short time scales.
  • asked a question related to Statistical Mechanics
Question
2 answers
Some excerpts from the article
Comparing methods for comparing networks Scientific Reports volume 9, Article number: 17557 (2019)
By Mattia Tantardini, Francesca Ieva, Lucia Tajoli & Carlo Piccard
are:
To effectively compare networks, we need to move to inexact graph matching, i.e., define a real-valued distance which, as a minimal requirement, has the property of converging to zero as the networks approach isomorphism.
we expect that whatever distance we use, it should tend to zero when the perturbations tend to zero
the diameter distance, which remains zero on a broad range of perturbations for most network models, thus proving inadequate as a network distance
Virtually all methods demonstrated a fairly good behaviour under perturbation tests (the diameter distance being the only exception), in the sense that all distances tend to zero as the similarity of the networks increases.
If achieving thermodynamic efficiency is the benchmark criterion for all kinds of networks, then their topologies should converge to the same model. If they all converge to the same model when optimally efficient, does that cast doubt on topology as a way to evaluate and differentiate networks?
Relevant answer
Answer
Google translation of Alyaa Khudhair Nov 2, 2020 reply:
Probably yes.
  • asked a question related to Statistical Mechanics
Question
4 answers
See for example, Statistical mechanics of networks, Physical Review E 70, 066117 (2004).
The architecture and topology of networks seem analogous to graphs.
Perhaps though the significant aspect of networks is not their architecture but their thermodynamics, how energy is distributed via networks.
Perhaps network linkages are only means of optimizing energy distributions. If so, then network entropy (C log(n)) might be more fundamental (and much simpler to use) than the means by which network entropy is maximized. If that were so, then the network analogy to graphs might lead to a sub-optimal conceptual reference frame.
Dimensional capacity is arguably a better conceptual reference frame.
Your views?
Relevant answer
Answer
In my personal view, it is one of the best as far as conceptualization is concerned.
  • asked a question related to Statistical Mechanics
Question
4 answers
This question is prompted by the books review in the September 2020 Physics Today of The Evolution of Knowledge: Rethinking Science for the Anthropocene, by Jürgen Renn.
I suspect that there is such an equation. It is related to thermodynamics and statistical mechanics, and might be characterized, partly, as network entropy.
Two articles that relate to the question are:
and also there is a book, the ideas in which preceded the two articles, above:
The Intelligence of Language.
The question is related somewhat distantly to an idea of Isaac Asimov in his science fiction, The Foundation Trilogy, psychohistory.
Relevant answer
Answer
Antonio Fernandez Guerrero
Thank you for kindly mentioning the articles by K. Friston on free energy. I don't recall previously running across those articles or his name. It is marvelous to learn something new, and it is a credit to ResearchGate that it affords so many opportunities for learning, even in these pandemic times. Thank you for taking the time to make your knowledge available to other people.
I read the 2010 article: The free-energy principle: a unified brain theory?
The 2010 Friston article seeks to model acquisition of knowledge by a human brain. Some of its assumptions follow (subject to my having missed understanding more than what I understood when I read the article). The brain seeks to apply inference to sensory perceptions in a way that is maximally efficient, or equivalently, uses the minimal amount of energy necessary to accomplish that purpose. The concepts of free energy and entropy from thermodynamics and statistical mechanics are adapted to apply to a model of how neurons seek to gain information. Part of the brain’s inference processes uses previously acquired data or information.
The 2009 article on A Theory of Intelligence that I mention in the question supposes that most of what an average person knows is learned from problems already collectively solved by society, forming society’s store of knowledge accumulated over (probably) hundreds of generations. The average person learns speech, how to write, counting, facts and methods of problem solving from society’s accumulated knowledge. Knowledge can be considered to consist of solutions to what were once problems that society, or some subset of society, obtained.
The 2010 Friston article inquires about the individual brain. The 2009 article the question refers to focuses on a collection of networked brains; there being a network, statistical mechanics can apply.
The 2009 article asks how much greater the problem-solving capacity of society is compared to the problem-solving capacity of an average individual. For example, 350 million modern English speakers at about 1990 had an estimated 72 times more problem-solving degrees of freedom. Since 72 degrees of freedom can be expressed as an exponent, the difference between the problem-solving capacity of society and that of the individual is in effect 72 orders of magnitude (based roughly on the mean path length of a network as the base of a logarithmic function). The number 72 is obtained by developing the concept of network entropy.
The 72 orders of magnitude difference in favor of collective problem solving capacity compared to meager average individual capacity implies that the primary inference engine at work is possessed by society and moreover, possessed by the cumulative problem solving capacities of all human societies that ever existed. In fact, one may suspect that some of our collective knowledge might pre-exist speech and come from forbears pre-existing homo sapiens. While inference capacity that an individual brain has may guide that individual’s behavior, most knowledge is learned, and an individual brain as an inference engine is mostly involved in figuring out how to learn from knowledge that already exists.
The motivation to seek out a function like network entropy is based on a book called The Intelligence of Language that I mostly wrote in 2006 and published as an e-book on Kindle in 2016.
Regards.
  • asked a question related to Statistical Mechanics
Question
6 answers
There are two ways to derive Boltzmann exponential probability distribution of ensemble:
1) Microcanonical Ensemble: We assume a system S(E,V,N)
E= internal Energy, V=volume, N=number of molecules or entities. 
We have different energy states that the molecules can occupy, but the total energy E of the system is fixed: whatever the distribution of molecules over the energy levels, the energy of the overall system is fixed. We then maximize the entropy of the system to find the equilibrium probability distribution of molecules over the energy levels, introducing two Lagrange multipliers for the two constraints: the total probability is unity, and the total energy is the constant E. What we get is an exponential distribution.
2) Canonical Ensemble: We have a system with N molecules. The Helmholtz energy is defined as F = F(T,V,N), so this time the energy is not fixed but the temperature is. Instead of different energy states for the molecules, we now consider different energy levels of the entire system. By minimizing F we get the equilibrium probability distribution of the system over these levels; this time the only constraint is that the total probability is unity. The distribution we get is again an exponential one.
Now the question is:
How can the probability distribution of the canonical ensemble give the population distribution of molecules over energy states, which is rather what one finds from the microcanonical ensemble?
In the book Molecular driving forces (Ken A Dill) Chapter 10. Equation 10.11 says something similar.
Relevant answer
Answer
Dear Prof. Rituraj Borah, in addition to all interesting answers to this thread, I would like to add that the microcanonical ensemble yields entropy as a function of energy and volume S(U, V) (at fixed particle number N).
Now, for the canonical distribution, where there is an exchange of energy with the heat bath, the system is described by the Helmholtz free energy F = ⟨U⟩ − TS, whose differential is dF = −S dT − P dV; hence S = −(∂F/∂T)_V and P = −(∂F/∂V)_T, where ⟨U⟩ is the internal energy averaged over the canonical distribution.
It means that in the canonical ensemble, using these relations, one again obtains the entropy as a function of energy and volume, S(⟨U⟩, V), as in the microcanonical ensemble where the probability distribution is used.
For instance, for the whole derivation see pp. 41-42 of L. Landau and E. Lifshitz, Statistical Physics, Part I (Course of Theoretical Physics, Vol. 5), Pergamon, 1980, Chapter II.
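The microcanonical route described in the question (entropy maximization under the two constraints, via Lagrange multipliers) can be sketched numerically. The energy levels and the fixed mean energy below are illustrative assumptions; the Lagrange-multiplier solution is p_i ∝ exp(−βE_i), with β fixed by the mean-energy constraint, and any other admissible distribution has lower entropy.

```python
import numpy as np

# Illustrative discrete levels and fixed mean energy U (assumptions).
E = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
U = 1.2

# The maximum-entropy solution is p_i ∝ exp(-beta*E_i);
# find beta by bisection so that the constraint <E> = U is met.
def mean_energy(beta):
    w = np.exp(-beta * E)
    return (w @ E) / w.sum()

lo, hi = 0.0, 50.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if mean_energy(mid) > U:
        lo = mid          # <E> too high -> need larger beta
    else:
        hi = mid
beta = 0.5 * (lo + hi)
p = np.exp(-beta * E)
p /= p.sum()

entropy = lambda q: -np.sum(q * np.log(q))

# v preserves both constraints (sum(v) = 0 and E.v = 0), so p +/- eps*v are
# admissible distributions; their entropy is lower, as maximization demands.
v = np.array([1.0, -2.0, 1.0, 0.0, 0.0])
print(round(beta, 3), round(entropy(p), 4),
      round(entropy(p + 0.02 * v), 4), round(entropy(p - 0.02 * v), 4))
```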
  • asked a question related to Statistical Mechanics
Question
11 answers
The third rotation allegedly leaves the molecule unchanged no matter how much it is rotated, but is it really okay to assume this? A response in mathematics is welcomed, but if you can explain it in words that would be good, too.
Relevant answer
Answer
Dear Cory Camasta, three degrees of freedom come from translation and two from rotation, giving E = (5/2) k_B T (vibration is normally frozen out at ordinary temperatures). The remaining rotational degree of freedom, about the internuclear axis, has a very small moment of inertia, as some participants noted previously.
See the following external post for the full explanation regarding the separation of levels mentioned by Prof. Gert Van der Zwan:
  • asked a question related to Statistical Mechanics
Question
3 answers
Suppose the chemical composition of the compound, the temperature, and the pressure are known. The electronic structures of the constituent elements, from numerical solutions of quantum chemistry, are also known. Then:
  • There can be only 230 3D crystallographic space groups. But is there any limit on the motif that can be included in the lattice without violating stoichiometry? How do ab-initio calculations find the appropriate motif to place in the lattice to generate the crystal structure? Without finding motifs, it is impossible to find the crystal structures whose Gibbs free energy needs to be minimized.
  • Is there any mathematical method that finds out potential energy in an infinite 3D periodic lattice with distributed charges (say, theoretical calculation of Madelung constant)? What are the mathematical requirement/prerequisite to understand such formula?
  • How can the electron cloud density and the local potential energy of a molecule/motif/lattice point be linked to the total Gibbs free energy of the molecule/lattice, integrated over the whole structure? What are the statistical-mechanical formulas that relate the two, and what are the prerequisites for understanding them?
Suppose a reference point for zero Gibbs free energy is conveniently provided.
Relevant answer
Answer
I can answer your second point:
"Is there any mathematical method that finds out potential energy in an infinite 3D periodic lattice with distributed charges (say, theoretical calculation of Madelung constant)? What are the mathematical requirement/prerequisite to understand such formula?"
First, free elastic energies F give you an idea of how the potential energy in crystals is handled, because the potential term U(C_ij) has to be an expression invariant under the point-group symmetry considered, and group theory does that job for the different crystallographic classes.
Second, finding the Madelung constant is a different question, I guess, because it contains the electrostatic potential energy; it can be done in an easier way, please check:
& using the Ewald method to find the electrostatic energy:
Finally, look at how it can be done for cubic crystals:
  • asked a question related to Statistical Mechanics
Question
1 answer
Dear Colleagues :
Does anyone have literature referencing the diffusion process of carbon (I mean carbon atoms) into bismuth telluride (Bi2Te3) or into some similar compound, e.g. PbTe, (Sb,Se)Bi2Te3, Sb2Te3, etc.?
I'll really appreciate if someone can help me out
Kind Regards Sirs !
Relevant answer
  • asked a question related to Statistical Mechanics
Question
9 answers
In the question, Why is entropy a concept difficult to understand? (November 2019) Franklin Uriel Parás Hernández commences his reply as follows: "The first thing we have to understand is that there are many Entropies in nature."
His entire answer is worth reading.
It leads to this related question. I suspect the answer is yes: the common principle is degrees of freedom and dimensional capacity. Your views?
Relevant answer
Answer
Among all entropy definitions, the most difficult (I still don't understand it) but probably the most important one is the Kolmogorov-Sinai entropy.
The reason: Prof. Sinai and Acad. Kolmogorov were major architects of most of the bridges connecting the world of deterministic (dynamical) systems with the world of probabilistic (stochastic) systems.
  • asked a question related to Statistical Mechanics
Question
12 answers
By quasi-particle I mean in the sense of particles dressed with their interactions/correlations? If yes, any references would be helpful. 
Relevant answer
Answer
Dear Prof. Sandipan Dutta, in addition to all the interesting answers in this compelling thread, I will add a link to a book where the concept of quasiparticles is masterfully explained by two of the creators of the quasiparticle approach:
Quasiparticles, by Prof. M. I. Kaganov and Academician I. M. Lifshitz.
  • asked a question related to Statistical Mechanics
Question
8 answers
Dear all:
I hope this question seems interesting to many. I believe I'm not the only one who is confused by many aspects of the so-called physical property 'Entropy'.
This time I want to speak about Thermodynamic Entropy, hopefully a few of us can get more understanding trying to think a little more deeply in questions like these.
The thermodynamic entropy is defined through dS = δQ_rev/T, with ΔS ≥ ∫δQ/T for an irreversible process. This property is only properly defined for (macroscopic) systems which are in thermodynamic equilibrium (i.e. thermal eq. + chemical eq. + mechanical eq.).
So my question is:
In terms of numerical values of S (or better said, values of ΔS, since only changes in entropy are computable, not an absolute entropy of a system, with the exception of one at the absolute zero (0 K) of temperature):
It is easy and straightforward to compute the change in entropy of, let's say, a chair, a table, or your car, etc., since all these objects can be considered macroscopic systems in thermodynamic equilibrium. So just use the classical definition of entropy (the formula above) and the Second Law of Thermodynamics, and that's it.
But what about macroscopic objects (or systems) which are not in thermal equilibrium? We are often tempted to apply the classical thermodynamic definition of entropy to macroscopic systems which, from a macroscopic point of view, seem to be in thermodynamic equilibrium but which in reality still have ongoing physical processes that keep them from complete thermal equilibrium.
What I want to ask is: what are the limits of the classical thermodynamic definition of entropy when used in calculations for systems that seem to be in thermodynamic equilibrium but aren't really? Perhaps this question can also be extended to the so-called regime of near-equilibrium thermodynamics.
Kind Regards all !
Relevant answer
Answer
Dear Franklin Uriel Parás Hernández Some comments about your interesting thread:
1. At very low temperatures, entropy behaves according to Nernst's theorem.
I copy the wiki text, but you can also find the same information in Academicians L. Landau and E. Lifshitz, Vol. 5:
The third law of thermodynamics, or Nernst's theorem, states that the entropy of a system at zero absolute temperature is a well-defined constant. Some systems have more than one state with the same lowest energy and thus a non-vanishing "zero-point entropy".
2. Let's try to put ΔQ = m C ΔT into the expression ΔS ≥ ΔQ/(T2 − T1). What do we obtain? Is something missing then?
You see, physical chemistry and statistical physics look at entropy in subtly different ways.
3. ΔS = k_B ln(W2/W1), where W is the total number of microstates of the system. Then what are W1 and W2 in ΔS?
4. Finally, look at the following paper by Prof. Leo Kadanoff concerning the meaning of entropy in physical kinetics (out of equilibrium systems): https://jfi.uchicago.edu/~leop/SciencePapers/Entropy_is3.pdf
  • asked a question related to Statistical Mechanics
Question
8 answers
Is cancer (oncology) a field of biophysics?
Relevant answer
Answer
Yes, there is a relationship. I cannot explain it very well, though.
Cancer cells respond to pressure on the membrane by using it as a signal to grow all the more. Normal cells interpret the signal as a cue to stop growing. Besides cancer cells, other cells also deal with a lot of stress: heart cells, for example, experience great pressure as blood rushes in and out of them.
The pathways by which the various types of pressure waves on the membrane affect cancer are hard to describe, because integral membrane proteins send the signal through a kinase cascade, resulting in signaling transcriptional co-activators being imported into the nucleus, where they bind to transcription factors and drive the expression of genes that reprogram the cells.
Of course this is all rather vague. But what is interesting is that the level of nuclear expression of the transcriptional co-activator yes-associated protein (YAP) tracks the degree of cellular stretching. High pressure also activates YAP, resulting in much the same phenotype. How this happens is unknown. There is a sea of proteins involved in the Hippo pathway. It is also possible that the pressure waves are transmitted through the cell and flatten the nucleus. This has been studied only a little, but as you can imagine it is difficult to study the architecture of the nucleus inside a living cell with an intact external membrane.
This article doesn't mention YAP. It is a big puzzle.
  • asked a question related to Statistical Mechanics
Question
13 answers
Since the Gaussian is the maximal (Shannon) entropy distribution on unbounded real spaces (for fixed variance), I was wondering whether the tendency of cumulative statistical processes with the same mean to have a Gaussian as the limiting distribution can in some way be physically related to the increase of (Boltzmann's) entropy in thermodynamic processes.
In Johnson, O. (2004) Information Theory and The Central Limit Theorem, Imperial College Press, we can read:
"It is possible to view the CLT as an analogue of the Second Law of Thermodynamics, in that convergence to the normal distribution will be seen as an entropy maximisation result"
Could anyone elaborate on such relationship and perhaps point to other non-obvious ones?
Relevant answer
Answer
The central limit theorem concerns the Gaussian distribution, which is also the main modelling equation of random-coil thermodynamics. That randomness is the core significance of entropy, and entropy increase is another name for the second law of thermodynamics.
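A rough numerical illustration of that entropy-maximization reading of the CLT, using a crude histogram estimate of differential entropy (the bin count and sample size are arbitrary choices): standardized sums of n iid uniforms have entropy that grows with n toward the Gaussian maximum 0.5·ln(2πe) ≈ 1.419 nats for unit variance.

```python
import numpy as np

rng = np.random.default_rng(1)

def hist_entropy(x, bins=200):
    """Crude differential-entropy estimate (in nats) from a histogram."""
    p, edges = np.histogram(x, bins=bins, density=True)
    dx = edges[1] - edges[0]
    p = p[p > 0]
    return -np.sum(p * np.log(p)) * dx

# Standardized sums of n iid uniforms (unit variance by construction):
# the entropy grows with n toward the Gaussian maximum 0.5*ln(2*pi*e).
samples = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(200_000, 16))
ent = {}
for n in (1, 2, 16):
    s = samples[:, :n].sum(axis=1) / np.sqrt(n)
    ent[n] = hist_entropy(s)
    print(n, round(ent[n], 3))
print(round(0.5 * np.log(2 * np.pi * np.e), 3))  # the Gaussian bound
```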
  • asked a question related to Statistical Mechanics
Question
3 answers
I am trying to simulate a heterogeneous liquid mixture with single-site atoms (translational motion only) and multi-site rigid molecules (translational + rotational motion). The molecules also vary in mass and moment of inertia from species to species. Does anyone know how to calculate the initial magnitudes of the translational and rotational velocities in relation to the desired temperature?
I understand the widely-used MDS programs take care of this "under the hood", but I am interested to know what exactly the calculation is. I have found related texts, but they focus on uniform systems. Thank you in advance for any help.
Anne
Relevant answer
Answer
I would make sure that you have (1/2)kT of kinetic energy per degree of freedom on each molecule. In a rigid-body simulation you have 3 translational DoFs and 3 rotational DoFs; find the required velocity and angular velocity for each molecule and generate them randomly. If you want your center of mass not to be moving, you have to remove the center-of-mass velocity of the system from each molecule and then rescale the velocities to get your total required kinetic energy back. Also, scale the target translational kinetic energy by (3N − 3)/(3N) to reflect that you have 3 DoFs fewer due to the resting center of mass.
I would not bother to sample the velocities from a distribution, though, they should get there on their own after a short simulation.
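A minimal sketch of that procedure for a two-species mixture; the species counts, masses, and principal moments of inertia below are invented for illustration (zero moments mark the single-site, translation-only species). Each quadratic degree of freedom is drawn with variance k_B·T/m (or k_B·T/I), the center-of-mass drift is removed, and a final rescale hits the target temperature exactly.

```python
import numpy as np

rng = np.random.default_rng(2)
kB = 1.380649e-23          # J/K
T_target = 300.0           # K

# Hypothetical mixture: per-molecule masses (kg) and principal moments of
# inertia (kg m^2); zero moments mark single-site (translation-only) species.
masses = np.array([6.6e-26] * 500 + [3.0e-26] * 500)
inertia = np.vstack([np.zeros((500, 3)),
                     np.tile([1.0e-46, 1.5e-46, 2.0e-46], (500, 1))])

# Maxwell-Boltzmann: each velocity component ~ N(0, kB*T/m).
v = rng.normal(size=(1000, 3)) * np.sqrt(kB * T_target / masses)[:, None]

# Angular velocities about principal axes, only where inertia is nonzero.
w = np.zeros_like(inertia)
mask = inertia > 0
w[mask] = rng.normal(size=mask.sum()) * np.sqrt(kB * T_target / inertia[mask])

# Remove centre-of-mass drift, then rescale to hit T_target exactly.
v -= (masses[:, None] * v).sum(axis=0) / masses.sum()
n_dof = 3 * 1000 - 3 + mask.sum()      # minus 3 for the resting COM
ke = 0.5 * np.sum(masses[:, None] * v**2) + 0.5 * np.sum(inertia * w**2)
scale = np.sqrt(0.5 * n_dof * kB * T_target / ke)
v *= scale
w *= scale

ke = 0.5 * np.sum(masses[:, None] * v**2) + 0.5 * np.sum(inertia * w**2)
T_now = 2 * ke / (n_dof * kB)
print(T_now)  # ≈ 300.0
```

As the answer notes, sampling from the exact Maxwell-Boltzmann distribution is optional; a thermostat or a short equilibration run will redistribute the energy anyway.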
  • asked a question related to Statistical Mechanics
Question
37 answers
Dear all,
I have a number of Likert items with statements which have the following answer options
  • Less likely
  • No effect on likelihood
  • More likely
I am unable to find the answer to the following question, and therefore cannot seem to determine the overarching data analysis family, let alone the correct techniques, to analyse my data set.
Am I able to analyse my data with any quantitative methods, either descriptive or inferential, or do I need to use purely qualitative methods in analysing the data?
I understand that the data needs to be contextualised before the statistical mechanism is determined, for example in the parametric/non-parametric debate. However, I am struggling to determine whether the data type allows for quantitative analysis when I have 25 statements, each with 700 or so answers that indicate one of the above item answer options.
Also, I have a data set that pertains to one sample at one point in time. Can any correlational statistics or inferential statistics be done? Or do those methods only apply when you have either independent or paired samples? Or am I meant to compare one statement with another (Or a compilation of statements representing a theme with another compilation representing another theme) when running correlational tests?
Looking for that thread of information that ties all my misalignments together.
Kind regards,
Jameel
Relevant answer
Answer
Since I saw your comment just as I was about to sleep, I simply thought you had understood; please read it again. I am in bed already.
  • asked a question related to Statistical Mechanics
Question
7 answers
This question must be accompanied by provisos. One particular proviso simplifies the task: assume that the problem solving used throughout the development of language was of the same kind that has occurred at all times since. In other words, assume that it is valid to use averages over time, at least for the time period under consideration. In 2009 I used ideas relating to statistical mechanics to estimate, on certain assumptions (a language-like call `lexicon' of about 100 calls), that language began between about 141,000 and 154,000 years ago, in a couple of articles (at p. 74). The work in those articles is over 10 years old and there have been developments since. One involves the dispersion of phonemic diversity (Atkinson 2011). Are there other approaches?
Relevant answer
Answer
I read the 2011 paper by Atkinson on phonemic diversity in May 2019. It had not occurred to me in 2008 or since that there might be a way to find a rate of phonemic change. So the Atkinson paper is very interesting. Unfortunately, I could not find a way to align Atkinson's ideas with those in the 2008 lexical growth paper and in the 2009 intelligence paper, mentioned in the question above. But I did find a 2015 paper, Detecting Regular Sound Changes in Linguistics as Events of Concerted Evolution (Hruschka et al) and was pretty amazed at some of their data. I have thus added a paper on phonemic and lexical change which might be of interest.
  • asked a question related to Statistical Mechanics
Question
11 answers
Hi, for my statistical mechanics class I had to calculate all the thermochemistry data by myself and check whether I get the same results as Gaussian. Everything is good except for the zero-point energy contribution. For instance, in the Gaussian thermochemistry PDF they say that the ''Sum of electronic and thermal Free Energies'' (which you can find easily in the output or in GaussView) is supposed to be the Gibbs free energy. But it's not! In fact the zero-point energy is missing, and nowhere in the output or in GaussView can you see the correct value. The same holds for the enthalpy and the internal energy. We have to add the zero-point energy afterwards, which I find odd, since these are the values we really need. In their PDF Gaussian emphasizes that the ZPE is added everywhere by default, but that is not what I see. My teacher is also suspicious of the software. Are we missing something here? What do you take as your G, H and U, and where do you find them? I think we could all be wrong if we forget to add the zero-point energy, which is what I think Gaussian does.
Thanks
Relevant answer
Answer
Hello Mathieu
The Gaussian output gives you: 1. the electronic energy, 2. the zero-point correction, 3. the enthalpy correction, and 4. the entropy, from which you can calculate the free energy.
The enthalpy is: internal energy + zero-point correction + enthalpy correction. The free energy is: the enthalpy (calculated above) - T*entropy. What you have to be careful about is making sure the units are consistent in all cases.
Hope this leads you to the right answer.
Pansy
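Following the recipe above, the bookkeeping can be sketched in a few lines. All numbers below are hypothetical placeholders, not real Gaussian output, and whether a given thermal correction already contains the ZPE must be checked against your own output file:

```python
# Combining Gaussian-style thermochemistry terms (hypothetical values, in Hartree).
E_internal = -76.408953   # internal (electronic) energy
ZPE_corr   =   0.021045   # zero-point correction
H_corr     =   0.003811   # thermal enthalpy correction (assumed here to exclude ZPE)
T          = 298.15       # temperature, K
S          = 7.1418e-05   # total entropy, Hartree/K (converted beforehand)

H = E_internal + ZPE_corr + H_corr   # enthalpy, as in the answer above
G = H - T * S                        # Gibbs free energy
print(f"H = {H:.6f} Hartree, G = {G:.6f} Hartree")
```

The usual pitfall is mixing units: Gaussian prints energies in Hartree but entropies in cal/(mol·K), so one of them must be converted before the subtraction.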
  • asked a question related to Statistical Mechanics
Question
2 answers
I came across this question while studying Tuckerman's book on Statistical Mechanics for Molecular Dynamics.
Relevant answer
Thank you
Behnam Farid
, I was able to get it by following the steps you highlighted
  • asked a question related to Statistical Mechanics
Question
3 answers
Let's just say we're looking at the classical continuous canonical ensemble of a harmonic oscillator, where:
H = p^2 / 2m + 1/2 * m * omega^2 * x^2
and the partition function (omitting the integrals over phase space here) is defined as
Z = Exp[-H / (kb * T)]
and the average energy can be calculated as proportional to the derivative of ln[Z].
Equipartition theorem says that each independent quadratic coordinate must contribute R/2 to the system's energy, so in a 3D system we should get 3R. My question is: does equipartition break down if the frequency is temperature dependent?
Let's say omega = omega[T]; then when you take the derivative of Z to calculate the average energy, if omega'[T] is not zero, it will either add to or subtract from the average energy and will therefore disagree with equipartition. Is this correct?
Relevant answer
Answer
Drew> Z = Exp[-H / (kb * T)], and the average energy can be calculated as proportional to the derivative of ln[Z].
The exact formula, easy to prove, is ⟨H⟩ = -∂ln(Z)/∂β, where β = 1/(kBT). However, as you probably already have noted, that is mathematically correct only when H is independent of β (i.e. temperature T).
One may easily imagine situations where the parameters of the Hamiltonian actually depend on temperature, because one is dealing with a phenomenological "effective" description^*, not taking into account the physics which leads to this temperature dependence. However, if such a dependence is large enough to make any difference, the standard thermodynamic interpretation^** of ln Z breaks down, and thereby all sacred relations of thermodynamics. Which is the absolutely last thing we should consider violating in physics.
If you want to escape the usual equipartition principle, this is easily violated by non-quadratic terms in a classical Hamiltonian, or introduction of quantum mechanics (without which even Hell would freeze over, due to its infinite heat capacity).
^*) Which in practice is always the case, since we don't even know what is going on at extremely small scales, and (mostly) don't have to worry about sub-atomic scales.
^**) ln Z = -β F = -β(U-TS), where F is the Helmholtz free energy.
PS. The very first answer to this question should be viewed as an attempt to repeat the notorious Sokal hoax, https://en.wikipedia.org/wiki/Sokal_affair (often perpetrated on RG).
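The point that ⟨H⟩ = -∂ln(Z)/∂β holds only when H is independent of β can be checked symbolically. A minimal sketch for the 1D classical oscillator, where Z = 2π/(hβω) so that lnZ = -ln β - ln ω up to constants (the constants don't affect the derivative; the T-dependent ω chosen below is an arbitrary illustration):

```python
import sympy as sp

beta, c = sp.symbols('beta c', positive=True)

# Case 1: omega constant -> equipartition, <H> = 1/beta = kT.
lnZ_const = -sp.log(beta) - sp.log(c)
E_const = -sp.diff(lnZ_const, beta)
print(E_const)   # 1/beta, as equipartition demands

# Case 2: omega proportional to T, i.e. omega = c/beta.  Naively
# differentiating the beta-dependence hidden inside omega as well
# no longer reproduces the equipartition result:
lnZ_Tdep = -sp.log(beta) - sp.log(c / beta)
E_naive = -sp.diff(lnZ_Tdep, beta)
print(E_naive)   # 0 -- the omega'(T) term has cancelled the kT
```

This is exactly the discrepancy the question describes: the extra term from omega'[T] shifts the naive "average energy" away from kT, which signals that the standard thermodynamic interpretation of ln Z has broken down, not that equipartition itself has failed.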
  • asked a question related to Statistical Mechanics
Question
8 answers
According to statistical mechanics, the translational energy of a system of point particles is 3/2 NkT, and it is known that a single particle exhibits only translational energy. So can we obtain the energy of a single-particle system simply by substituting N=1? Because, as far as I remember, the first principles of statistical mechanics assume that the number of particles in a system is extremely large, so we can't directly apply those principles to a single-particle system.
Relevant answer
Answer
Yes, 1/2 kT per degree of freedom. Science should decide whether k is a fundamental property of nature or just a convenient conversion factor. Tolman treated it as an invariant conversion factor in describing relativistic thermodynamics. Others have treated it as a fundamental property of nature. Committees vote first one way and then the other.
  • asked a question related to Statistical Mechanics
Question
3 answers
Even gases like air are assumed (for sufficiently low flow velocities) to have constant density. Is it only because the hydrodynamic equations of motion are easier to solve when incompressibility is assumed? Or can statistical mechanics prove why incompressibility is so frequently assumed?
In the case of sound waves, small deviations in density are accounted for, and the kinetic energy of many sound waves is much lower than that of the air flow around a car at 100 mph. Does compressibility also come into play at low speeds, and if yes, why?
Relevant answer
Answer
That is a historical assumption in the fluid dynamics (and more often in aerodynamics) field. The assumption is not about a physical property of the fluid but rather about the characteristic velocity involved, compared to the speed of sound. Below a certain value of the Mach number v/a, the flow (not the fluid) problem is assumed to be governed by the simplified NSE equations. Historically, other assumptions were added, such as steady flow and inviscid flow, to get a simplified set of equations (see potential flows).
Actually, if the flow is assumed to be unsteady and viscous, the set of equations is somewhat more mathematically complicated than the corresponding set for fully compressible flows. Modern methods try to solve the compressible low-Mach equations.
Of course, the incompressible flow is a mathematical model, therefore you cannot describe some physical property such as pressure wave at finite velocity.
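The low-Mach argument can be made quantitative: for steady isentropic flow the relative density variation scales as Δρ/ρ ≈ M²/2. A rough sketch for the car example in the question, assuming a speed of sound of 340 m/s for air:

```python
# Relative density change for low-Mach flow, using the
# isentropic scaling d_rho/rho ~ M^2 / 2.
a = 340.0                      # assumed speed of sound in air, m/s
v = 100 * 0.44704              # 100 mph converted to m/s
M = v / a                      # Mach number
drho_over_rho = 0.5 * M**2
print(f"M = {M:.3f}, d_rho/rho ~ {drho_over_rho:.4f}")
# Under 1% density variation, so the constant-density model
# is an excellent approximation at these speeds.
```

This is why compressibility is usually neglected below roughly M = 0.3, where the density variation stays under a few percent.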
  • asked a question related to Statistical Mechanics
Question
66 answers
In the introduction to his text, A Student’s Guide to Entropy, Don Lemons has a quote “No one really knows what entropy is, so in a debate you will always have the advantage” and writes that entropy quantifies “the irreversibility of a thermodynamic process.” Bimalendu Roy in his text Fundamentals of Classical and Statistical Mechanics (2002) writes “The concept of entropy is, so to say, abstract and rather philosophical” (p. 29). In Feynman’s lectures (ch. 44-6): “Actually, S is the letter usually used for entropy, and it is numerically equal to the heat which we have called Q_S delivered to a 1∘-reservoir (entropy is not itself a heat, it is heat divided by a temperature, hence it is measured in joules per degree).” In thermodynamics there is the Clausius definition, which is a ratio of a quantity of heat Q to a degree Kelvin, Q/T, and the Boltzmann approach, k log(n). Shannon analogized information content to entropy; 2 as the base of the logarithm gives information content in bits. Eddington in The Nature of the Physical World (p. 80) wrote: “So far as physics is concerned time’s arrow is a property of entropy alone.” Thomas Gold, physicist and cosmologist, suggested that entropy manifests or relates to the expansion of the universe. There are reasons to suspect that entropy and the concept of degrees of freedom are closely related. How best do we understand entropy?
Relevant answer
Answer
This is a very relevant question.
First, you need to specify in which "domain" you speak of entropy.
I included a specific explanation in my following article:
I quote, page 3, 4, 5 :
"There are various equational forms of entropy, as we will now see. The first is the entropy used below, the Boltzmann entropy [6], which is written:
(look paper)
This equation defines the microcanonical entropy of a physical system at macroscopic equilibrium, but left free to evolve on the microscopic scale between Omega different micro-states (also called the number of complexions, or the number of system configurations). Its unit is the joule per kelvin (J/K).
Entropy is the key point of the second law of thermodynamics, which states: "Any transformation of a thermodynamic system is performed with an increase of the overall entropy, including the entropy of the system and of the external environment. We then say that there is creation of entropy."; "The entropy of an isolated system can only increase or remain constant."
There is also the Shannon formula [7]. The Shannon entropy, due to Claude Shannon, is a mathematical function that corresponds to the amount of information contained in or issued by a source of information. The more redundant the source, the less information it contains. Entropy is maximal for a source whose symbols are all equally likely. The Shannon entropy can be seen as measuring the amount of uncertainty of a random event, or more precisely of its distribution. Generally, the log is taken in base 2 (binary). Its formula is:
(look paper)
However, one can also define an entropy in quantum theory [9], particularly used in quantum cryptography (with the properties of entanglement), called the von Neumann entropy, noted:
(look paper)
With the density matrix and an orthonormal basis:
(look paper)
The von Neumann entropy is identical to that of Shannon, except that it uses the variable (look paper), a density matrix. As written by Serge Haroche, this equation can be used to calculate the degree of entanglement of two particles: if two particles are not entangled, the entropy is zero. Conversely, if the entanglement between two particles is maximal, the entropy is maximal, given that we do not have access to the subsystem. In classical mechanics zero entropy means that the events are certain (only one possibility), while in quantum mechanics it means that the density matrix is a pure state. But in quantum physics measurements are generally unpredictable, because the probability distribution depends on the wave function and the observable.
This is also explained by the Heisenberg uncertainty principle: indeed, if for example we have more information on the momentum of the particle (so less entropy), we have less information on its position (more entropy). This implies that quantum physics is always immersed in entropy, even when the entropy is low.
Now that we know the Boltzmann entropy and the Shannon entropy, we can merge the two, giving the Boltzmann-Shannon entropy, or statistical entropy [8]. If we consider a thermodynamic system that can be in several microscopic states with given probabilities, the statistical entropy is then:
Or the Boltzmann-von Neumann entropy, equivalent to the above equation:
(look paper)
This function is paramount, and it will be constantly used in our theory of gravitational entropy. Its units are the bit and the joule per kelvin. Let us note some properties of this function. We know that the entropy is maximal when the numbers of molecules in each compartment are equal. Entropy is minimal if all molecules are in one compartment. It is then 0, as the number of microscopic states is 1.
From the perspective of information theory, the thermodynamic system behaves like a source that does not send any message. Thus, the entropy measures "the missing information" at the receiver (or the uncertainty of the whole information).
If the entropy is maximal (the numbers of molecules in each compartment are equal), the missing information is maximal. If the entropy is minimal (all molecules are in the same compartment), then the missing information is zero.
In the end, the Shannon entropy and the Boltzmann entropy are the same concept."
In conclusion, entropy is a measure of uncertainty:
- in information theory -> bit uncertainty
- in quantum physics (Von Neumann) -> Uncertainty in qubit
- In thermodynamics -> Uncertainty of the contents of a thermodynamic system
- in statistical physics -> bit uncertainty of the contents of a thermodynamic system
There is another form of entropy, the entropy of flat curves, proposed by Michel Mendes. But that, I let you see;)
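The uniform-maximum / single-compartment-minimum property described above is easy to check numerically; a minimal sketch of the Shannon entropy in bits:

```python
import math

def shannon_entropy(p, base=2):
    """Shannon entropy of a discrete distribution; 0*log(0) is taken as 0."""
    return -sum(pi * math.log(pi, base) for pi in p if pi > 0)

# Equally likely symbols: entropy is maximal (log2(4) = 2 bits).
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))
# One certain outcome (all "molecules" in one compartment): entropy is 0.
print(shannon_entropy([1.0, 0.0, 0.0, 0.0]))
```

With the natural log and a factor kB instead of base 2, the same function is the Boltzmann-Shannon statistical entropy discussed above.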
  • asked a question related to Statistical Mechanics
Question
3 answers
Zipf's law (which is a power law) is the maximum-entropy distribution of a system of P particles in N boxes where P >> N. Its derivation is based on the microcanonical ensemble, in which the entropy is calculated for an isolated system. In the canonical ensemble the system is in contact with an external bath at a fixed temperature T. The macroscopic quantities of the canonical ensemble are calculated from its partition function, in which the probabilities decay exponentially with energy.
The question is: how can Zipf's law, a power law, be obtained from an exponential partition function?
Relevant answer
Answer
First, everything in the world is physical. Entropy is a quantity that represents the change in the world's uncertainty when an amount of energy is transferred between two bodies. The amount of transferred energy is the heat Q.
The temperature T of the emitting body defines the "grade" of the heat. The higher the grade, the greater the amount of work that can be extracted when the heat is absorbed by a body at a given lower temperature.
Work W may be viewed as heat emerging from an infinitely high-temperature source (e.g. laser light). You can liken a heat engine to a waterfall, where the height gap of the waterfall is analogous to the temperature gap of a heat engine. Therefore, when you replace Q by W in your Stirling engine you use two infinite temperatures. As you probably know, both zero and infinity are not true numbers and cannot be used in an arithmetical calculation. That is probably the reason for your erroneous conclusion.
  • asked a question related to Statistical Mechanics
Question
3 answers
Hi everyone, can anyone help me find the entropy index to measure diversification for a company using Σ Pi*ln(1/Pi)? I already have the total sales for each year and each segment's share of sales. N is the number of industry segments, and Pi is the percentage of the ith segment in total company sales.
Relevant answer
Answer
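The index in the question can be computed directly from the segment shares; a minimal sketch with hypothetical segment sales:

```python
import math

# Hypothetical segment sales for one year (same units throughout).
segment_sales = [500.0, 300.0, 150.0, 50.0]
total = sum(segment_sales)
shares = [s / total for s in segment_sales]    # the P_i

# Entropy index of diversification: sum of P_i * ln(1/P_i).
entropy_index = sum(p * math.log(1.0 / p) for p in shares if p > 0)
print(f"Entropy index = {entropy_index:.4f}")
# 0 for a single-segment firm; ln(N) for N equally sized segments.
```

Repeat the calculation per year with that year's segment shares to track how diversification evolves.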
  • asked a question related to Statistical Mechanics
Question
13 answers
I have recently started studying statistical mechanics, but it's really hard to understand some of the basic concepts, especially from the textbooks. I follow Statistical Physics by F. Reif, but the language is a little hard to understand. Please give me suggestions for simpler books.
Relevant answer
Answer
As you declared yourself a fresh mind, I propose as training this simulated experiment. It is an activity I use with my students, and I have had good results every time I proposed it to them. I apologize that it is in Italian, but I think the pictures and formulas will let you understand its main features.
  • asked a question related to Statistical Mechanics
Question
5 answers
I have to calculate the rate of tunnelling in a protein, for which I need the transmission coefficient. How do I calculate it? Or is there another way that does not require the transmission coefficient?
Relevant answer
Answer
Dear Dr Saluja
I think this is a very difficult problem, which at present can only be tackled by means of some rather rough quantum mechanical approximations. Look up first the Kronig-Penney model and the WKBJ quasi-classical approximation to start with, and then tunnelling in Josephson junctions, to get some feeling for the problem.
best regards
Dikeos Mario
  • asked a question related to Statistical Mechanics
Question
4 answers
Does anyone have experience estimating the formation enthalpy or atomization energy of molecules using Gaussian? I am trying to calculate this parameter for simple molecules like H2O and NH3; however, there is a significant error even at a decent level of theory. I am wondering how accurately this can be done. Is there any specific theory/method that performs better than others? (Currently I am using B3LYP/6-311++G(2d,2p).) I do not have any QM background, and any thoughts would be appreciated.
Here are some numbers I am getting from Gaussian, compared to the JANAF table values.
Molecule     Gaussian     JANAF (kJ)
H2O          1174         917
NH3          1432         1158
N2           1467         941
H2           434          432
O2           870          493
Javad
Relevant answer
Answer
Dear Javad,
It is very difficult to predict atomization energies with DFT methods. For high accuracy, coupled cluster is a requirement and, in order to save time, complete basis set extrapolation is another excellent option.
Even at this level, a significant part of the error arises from the theory, and different corrections and approximations must be applied.
To perform such calculations, only the W1 and W2 theories are capable of accuracy in the range of kJ/mol.
At this point, I invite you to read a very interesting (maybe essential) book and then apply the knowledge you will find there:
  1. Cioslowski, J., Quantum-Mechanical Prediction of Thermochemical Data; Springer Netherlands: Dordrecht, 2001
I hope it helps you,
Best regards,
Joaquim Rius
  • asked a question related to Statistical Mechanics
Question
2 answers
The characteristic frequency of thermal motion is around 7E12 Hz at room temperature (300 K), but how can we conclude from that information that the bonds are hard, i.e. that they don't vibrate?
Relevant answer
Answer
Dear Roshan,
The bonds involve the hopping energy, and this is much higher than the thermal energy. Notice that one eV is equivalent to a thermal energy of 11604.5 K! Thus the phonons (acoustic or optical) practically don't interact with the bond electrons.
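Both numbers in this thread follow directly from the physical constants; a quick check:

```python
# Characteristic thermal frequency kT/h and the eV-to-kelvin conversion.
kB = 1.380649e-23      # Boltzmann constant, J/K
h  = 6.62607015e-34    # Planck constant, J*s
eV = 1.602176634e-19   # electron-volt, J

f_thermal = kB * 300 / h     # thermal frequency scale at 300 K
T_per_eV  = eV / kB          # kelvin equivalent of 1 eV
print(f"kT/h at 300 K = {f_thermal:.3e} Hz")   # about 6.3e12 Hz
print(f"1 eV = {T_per_eV:.1f} K")              # about 11604.5 K
```

Since typical bond (electronic) energies are of order eV, i.e. tens of thousands of kelvin, the ~300 K thermal motion cannot excite them, which is the sense in which the bonds are "hard".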
  • asked a question related to Statistical Mechanics
Question
7 answers
Hi, I want to ask a question about the basic theory of molecular dynamics.
In MD simulations, we can calculate the temperature from the average kinetic energy of the system. For an ideal gas (pV = N kb T), I can derive the relationship between temperature and kinetic energy: 1/2 m v^2 = 3/2 kb T (in 3 dimensions). But when simulating a non-ideal gas or a liquid, how can I get the relationship between temperature and kinetic energy?
Could anyone give me some understandable explanations (I know little about quantum mechanics)? Any relevant material or link will be appreciated. Thanks!
Relevant answer
Answer
If you want to dig a little bit deeper you can check out the papers [1] and [2] (and related ones). It actually turns out that there are many more possible definitions of the thermodynamic temperature, i.e. the general expression is:
kT = ⟨ ∇H • B ⟩ / ⟨ ∇ • B ⟩
where H is the Hamiltonian and B is an arbitrary (within weak mathematical constraints) vector field that depends on the phase space variables. The expression you mentioned (called the kinetic temperature) is recovered from this by choosing B = ∇K with the kinetic energy K. But also you can make other choices for B such as B = ∇V with V the potential energy. The latter yields a temperature (called configurational temperature) that only depends on positions and is independent of momenta. There are some interesting things you can do with this, for instance facilitate rare events as e.g. in [3].
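The kinetic-temperature estimator itself is simple to state in code; a sketch for N identical particles with Maxwell-Boltzmann velocities (the mass is an assumed, roughly argon-like value):

```python
import numpy as np

kB = 1.380649e-23        # J/K
m = 6.63e-26             # kg, roughly an argon atom (assumed)
T_target = 300.0         # K
N = 100_000

rng = np.random.default_rng(0)
# Maxwell-Boltzmann: each velocity component is Gaussian with variance kB*T/m.
v = rng.normal(0.0, np.sqrt(kB * T_target / m), size=(N, 3))

# Kinetic temperature: T = 2<K> / (3 N kB), i.e. the B = grad K choice above.
K = 0.5 * m * np.sum(v**2)
T_kin = 2 * K / (3 * N * kB)
print(f"T_kin = {T_kin:.1f} K")   # recovers approximately 300 K
```

Note that this relation holds for any classical system with a quadratic kinetic term, interacting or not; interactions change the potential energy and the pressure, not the momentum distribution.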
  • asked a question related to Statistical Mechanics
Question
21 answers
I want to know: is the negative-T state merely a conceptual curiosity, or is it truly significant in helping us understand thermodynamics?
  • In the early days, Purcell et al. (and my university textbooks) discussed negative T for the spin degree of freedom in NMR systems;
  • In 2013, S. Braun et al. performed an experiment with cold atoms and realized an inverted energy-level population for the motional degrees of freedom. (http://science.sciencemag.org/content/339/6115/52)
  • There are many disputes about the Boltzmann entropy versus the Gibbs entropy, as far as I know, especially Jörn Dunkel et al. (https://www.nature.com/articles/nphys2815); they insist that the Gibbs entropy is physical and argue that negative T is wrong.
  • After that, many debates emerged; I read several papers, and they all agree with the conventional Boltzmann entropy.
Does anyone have comments on this field?
Is it truly fascinating, or just trivial, to realize a population-inversion state, i.e. negative temperature?
Or does anyone have a clarification of a Carnot engine working between a negative-T and a positive-T substance?
Any comments and discussions are welcome.
Relevant answer
Answer
I just read this interesting and lively discussion on negative temperatures. I must confess that I have not read the paper by Abraham and Penrose and I promise to do so during the holidays. I am glad that discussions on the pure thermodynamic issue has arisen, independently of the entropy formulae of statistical physics (by the way, in my view, there is only one “entropy”, Boltzmann’s, as discussed say in Landau and Lifshitz, but that’s another discussion!). So, it is nice to hear arguments based on thermodynamics only.
Although I have not been able to follow the whole thread, I think I side with Struchtrup point of view. I have always had trouble with extending “equilibrium” concepts (such as entropy and temperature) to non equilibrium and/or metastable states. I believe it is dangerous. Certainly, there are many situations where there are no ambiguities, but when there are one should refrain to simply extend the well founded equilibrium concepts, with all their assumptions … About ten years ago there were claims in respected journals about negative heat capacities in nano systems, for instance.
I do not think I have much to say here now, except to recall that, indeed, in order to accommodate negative temperatures, Ramsey had to change the Kelvin-Planck statement of the second law. Actually, he inverted it (this I attempted to explain in the appendix of my paper). Then, of course, with the assumption of negative-temperature reservoirs, everything logically follows. However, before arguing about their stability, something very strange happens if equilibrium negative temperatures exist: heat can be converted into work as the sole result (with efficiency one), and the opposite is impossible! Hence, there is no need for Carnot engines at all, and friction can't happen! We should be suspicious of this; the argument that reservoirs at negative temperatures are unstable allows us to find the root of the failure, namely, the tacit assumption by Ramsey that stable negative-temperature reservoirs exist.
Certainly, the states created in the experiments by Purcell et al., and the many later ones, do exist; they are metastable states and can have very long relaxation times to "normal" equilibrium states. But this does not suffice to say that they are in equilibrium for that time, and even less that they have actually achieved negative temperatures. It may be useful to use those terms, but it may also be deeply misleading.
  • asked a question related to Statistical Mechanics
Question
3 answers
Hi,
I apologize for this apparently silly question, but please, could you point me out if there is an underlying relationship between the defect driven phase transition and the directed percolation?
Secondly, is it possible to have a system which undergoes a KT transition at T1, generating free vortices, followed by a spatial spreading of disorder via directed percolation at T2?
Please, if there are any relevant examples and materials, do let me know.
Many thanks.
Wang Zhe
Relevant answer
Answer
Dear Zhe,
Yes, you can have a relation between percolation and the correlation length, where the correlation length is defined to be the distance at which the probability of a site being connected to the origin falls to a level 1/e . In other words, the correlation function G(r) gives the average correlation between two lattice sites (one at the origin, the other at position r). That is, loosely speaking, it describes how much more likely it is for the site at position r to belong to the same cluster as the origin than it would be for a site chosen randomly from across the whole lattice.
As you go above the critical point pc, the probability of being in a large finite cluster gets smaller and smaller; a site is likely to be either in the infinite cluster or in a very small finite cluster. Obviously, the average finite-cluster size decreases until p=1, when it becomes zero (everything is in the infinite cluster).
  • asked a question related to Statistical Mechanics
Question
17 answers
Hi,
The vortex-unbinding Kosterlitz-Thouless physics generally applies to two-dimensional systems and occasionally to three-dimensional solids.
I was wondering if there exists a one-dimensional analogue of the vortex unbinding that occurs in two dimensions. Could anyone point one out, please?
Thank you.
Very kind wishes,
Wang Zhe
Relevant answer
Answer
Dear Zhe,
If you base your research on this subject just on the observation that the velocity field induced by filiform vortex filaments falls off like 1/r, leading to a logarithmic divergence of the kinetic energy with system size, and thus expect that a Kosterlitz-Thouless transition may indeed take place in transitions dominated by filiform vortices, e.g. in plane Couette flow:
This analogy could be quite dangerous; for instance, the electric field also falls off as 1/r, and electric charges don't produce a KT phase transition by themselves. More ingredients are needed, such as a diverging correlation length inducing quasi-long-range order (with a non-trivial topological transformation behind it). But your analogy is not so bad if you think of a type-II superconductor, where you have Abrikosov-Nielsen vortices interacting (at distances shorter than the Peierls screening length) to prevent coherence between the Cooper pairs, which are described by a complex scalar field with a continuous local U(1) symmetry (notice that you don't have this symmetry to be broken, or at least I don't see it). Below the KT temperature you have superconductivity because there are no free vortices; only vortex-antivortex pairs can be present.
Thus your idea could be possible, but more is necessary than just the logarithmic dependence of the kinetic energy on relative distances. You need a correlation length and winding numbers different from zero (non-trivial topological solutions).
  • asked a question related to Statistical Mechanics
Question
7 answers
We know the definition of ergodicity and we know ergodic mappings. But what is an ergodic process?
Relevant answer
Answer
A random process is said to be ergodic if the time averages of the process tend to the appropriate ensemble averages. This definition implies that with probability 1, any ensemble average of {X(t)} can be determined from a single sample function of {X(t)}. Clearly, for a process to be ergodic, it has to necessarily be stationary. But not all stationary processes are ergodic.
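The distinction can be illustrated with two stationary processes, one ergodic and one not; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
mu = 2.0   # ensemble mean of both processes below

# Stationary AND ergodic: i.i.d. Gaussian noise around mu.
# The time average of one sample path converges to the ensemble mean.
x = rng.normal(mu, 1.0, n)
print(abs(x.mean() - mu))    # small, shrinking as n grows

# Stationary but NOT ergodic: each realization is a random constant A.
# The time average of a single sample path is A, not mu.
A = rng.normal(mu, 1.0)
y = np.full(n, A)
print(abs(y.mean() - mu))    # generally NOT small: stuck at |A - mu|
```

The second process is stationary (its marginal distribution is the same at all times) yet no single realization reveals the ensemble mean, which is exactly the failure of ergodicity described above.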
  • asked a question related to Statistical Mechanics
Question
1 answer
Why are Hamilton's equations not used for constructing the dynamical equations in liquid crystals?
Relevant answer
Answer
Theoretically all formulations of classical dynamics (Newton, Lagrange, Hamilton-Jacobi) are entirely equivalent. However, the Lagrange formulation is often more direct when there are constraints to be fulfilled during the motion.
  • asked a question related to Statistical Mechanics
Question
1 answer
Hi everyone,
I'm trying to solve this exercise (attached file) from the J. M. Yeomans book "Statistical Mechanics of Phase Transitions".
I understood how the expansion works for the same model without the field term, but here I have trouble figuring out which terms vanish and which do not, in order to answer the second question. Also, I don't get what the S_m(v,N) term represents...
Anyone's help is welcomed ! :)
Relevant answer
Answer
Why not ask Julia herself? You can find her coordinates on the web. She'll be delighted to illuminate you.
  • asked a question related to Statistical Mechanics
Question
20 answers
Hi!
Please, could anyone show me an intuitive way to understand the exponential divergence of the correlation length at the KT transition, in contrast to the usual algebraic divergence in ordinary critical phenomena?
Thank you.
Wang Zhe
Relevant answer
Answer
Dear Prof. Farid,
Thank you for the reply!
Regarding the high-temperature correlation length xi, please could you kindly explain a little the physical intuition behind the exponent (-1/2) in the following expression:
xi = exp[a*t^(-1/2)], where a is a constant.
In fact, I have consulted quite a few professors over the past week, but without a satisfactory answer.
Thank you.
Very kind wishes,
Wang Zhe 
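One way to see numerically what the expression in the question implies: the KT form exp(a*t^(-1/2)) outruns any power-law divergence t^(-nu) as the reduced temperature t goes to 0. A sketch, where a and nu are arbitrary illustrative values, not fitted constants:

```python
import math

a, nu = 1.0, 0.67   # illustrative constants only
ratios = []
for t in (1e-1, 1e-2, 1e-3, 1e-4):
    xi_kt = math.exp(a * t ** -0.5)   # KT essential singularity
    xi_alg = t ** -nu                 # ordinary algebraic divergence
    ratios.append(xi_kt / xi_alg)
    print(f"t={t:.0e}  xi_KT/xi_alg = {xi_kt / xi_alg:.3e}")

# The ratio grows without bound as t -> 0: the KT divergence is
# stronger than ANY power law, hence "essential singularity".
assert all(r2 > r1 for r1, r2 in zip(ratios, ratios[1:]))
```

This is why fitting a critical exponent to KT data fails: no finite nu can reproduce an essential singularity.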
  • asked a question related to Statistical Mechanics