Science topic

# Statistical Mechanics - Science topic

Statistical mechanics provides a framework for relating the microscopic properties of individual atoms and molecules to the macroscopic bulk properties of materials that can be observed in everyday life.

Questions related to Statistical Mechanics

And can you reference articles or texts giving answers to this question?

This is briefly considered in https://arxiv.org/abs/0804.1924 which is on RG as https://www.researchgate.net/publication/314079736_Entropy_and_its_relationship_to_allometry_v17.

Reviewing the literature would be helpful before considering whether to update the 2015 ideas.

When studying statistical mechanics for the first time (about 5 decades ago) I learned an interesting postulate of equilibrium statistical mechanics, which is: "The probability of a system being in a given state is the same for all states having the same energy." But I ask: "Why energy instead of some other quantity?" When I was learning this topic I was under the impression that the postulates of equilibrium statistical mechanics should be derivable from more fundamental laws of physics (which I supposedly had already learned before studying this topic), but the problem is that nobody has figured out how to do that derivation yet. If somebody figures out how to derive the postulates from more fundamental laws, we will have an answer to the question "Why energy instead of some other quantity?" Until somebody figures that out, we have to accept the postulate as a postulate instead of a derived conclusion. The question that I am asking 5 decades later is: has somebody figured it out yet? I'm not an expert on statistical mechanics, so I hope the answers can be simple enough to be understood by people who are not experts.

Why are the 3 random variables x, y, and z independent?

Consider a random vector R which has coordinates x, y, and z (3 random variables).

Prove that the random vector R will be isotropically distributed in the 3D space only if the 3 coordinates are independent and identically distributed random variables.

This case is generally referred to as i.i.d. variables (**I**ndependent and **I**dentically **D**istributed).

One might argue: Animals increase their survivability by increasing the degrees of freedom available to them in interacting with their environment and other members of their species.

Right, wrong, or in between? Your views?

Are there articles discussing this?

Let's say the 12-6 equation is perfect; then the most favored distance between two Ar atoms is r(min).

My understanding is that vdW radii are measured from experiments, while the 12-6 potential is more of an approximation. But if I want to link the vdW radius to a distance in the 12-6 potential (sigma, r, or r(min)), which one should it be?

The definition of the vdW radius is based on the closest distance of two atoms, so should it be somewhere slightly smaller than sigma?

On the website, however, the sigma value is referred to as the vdW radius.
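For reference, the relationship between sigma and r(min) in the 12-6 form can be made concrete. This is a minimal sketch; the Ar parameters below are approximate literature values, used here for illustration only. The minimum sits at r_min = 2^(1/6)·sigma ≈ 1.122·sigma, where V = −epsilon, while V(sigma) = 0, which is why the two length scales are easy to conflate.

```python
import math

def lj(r, epsilon, sigma):
    """12-6 Lennard-Jones potential V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    return 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

# Illustrative Ar-Ar parameters (approximate values, for demonstration only)
epsilon = 0.996   # kJ/mol
sigma = 3.405     # Angstrom

# The potential minimum sits at r_min = 2**(1/6) * sigma, not at sigma itself;
# V(sigma) = 0 and V(r_min) = -epsilon.
r_min = 2 ** (1 / 6) * sigma
print(r_min)                      # ~3.82 Angstrom
print(lj(r_min, epsilon, sigma))  # ~ -epsilon
```

So a vdW contact distance between two atoms is usually compared with r_min (and a per-atom vdW radius with half of it), whereas sigma is the smaller zero-crossing distance; conventions differ between sources, which may explain the website's usage.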

Hamiltonian mechanics is a theory developed as a reformulation of classical mechanics and predicts the same outcomes as non-Hamiltonian classical mechanics. It uses a different mathematical formalism, providing a more abstract understanding of the theory. Historically, it was an important reformulation of classical mechanics, which later contributed to the formulation of statistical mechanics and quantum mechanics.

Statistical mechanics considering interaction is attached to the second law of thermodynamics. Considering the influence of temperature on the interaction potential, statistical mechanics can prove that the second law of thermodynamics is wrong.

A system of ideal gas can always be made to obey classical statistical mechanics by varying the temperature and the density. Now a wave packet for each particle is known to expand with time. Therefore, after sufficient time has elapsed, the gas would become an assembly of interacting wavelets, and hence its properties would change, since it would now require a quantum mechanical rather than classical description. The fact that a transition in properties is taking place without outside interference may point to some flaw in quantum mechanics. Any comments on how to explain this?

I am asking this question on the supposition that a classical body may be broken down into particles which are so small that quantum mechanics is applicable to each of them. Here the number of particles tends to infinity (keeping number/volume constant).

Now statistical mechanics is applicable if a practically infinite number of particles is present. So if a practically infinite number of infinitesimally small particles is there, Quantum Statistical Mechanics may be applied to this collection. (Please correct me if I have a wrong notion.)

But this collection of infinitesimally small particles makes up the bulk body, which can be studied using classical mechanics.

It is suggested that the Zero Point Energy (that causes measurable effects like the Casimir force and Van der Waals force) cannot be a source of energy for energy harvesting devices, because the ZPE entropy cannot be raised, as it is already maximal in general, and one cannot violate the second law of statistical mechanics. However, I am not aware of a good theoretical or empirical proof that ZPE entropy is at its highest value always and everywhere. So I assume that ZPE can be used as a source of energy in order to power all our technology. Am I wrong or right?

If MD simulations converge to the Boltzmann distribution ρ∼exp(−βϵ) after a sufficiently long time, why do we need MD simulations, since all the macroscopic quantities can be computed from the Boltzmann distribution itself? I am asking this question for short peptides of a few amino acids (tripeptide, tetrapeptide, etc.).

For instance, in the paper linked above, they use MD to generate Ramachandran distributions of conformations of a pentapeptide at constant temperature. So this should obey statistical mechanics, and if so it should satisfy the Boltzmann distribution. So I should be able to write down the distribution using the Boltzmann weight as follows,

ρ({ϕi,ψi})∼exp(−βV({ϕi,ψi}))

Here, the set of Ramachandran angle coordinates of the pentapeptide is given by {ϕi,ψi}.

Why should I run MD to get the same distributions?
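As a minimal sketch of what "writing down the distribution" entails, here is the Boltzmann distribution of a single hypothetical torsion angle on a grid. The potential below is a toy threefold barrier, not a real force field; the comments note why the same recipe becomes impractical for a full pentapeptide.

```python
import numpy as np

beta = 1.0  # 1/(kB*T), reduced units

# Hypothetical 1D torsion potential, for illustration only (not a real force field)
phi = np.linspace(-np.pi, np.pi, 2000, endpoint=False)
dphi = phi[1] - phi[0]
V = 2.0 * (1.0 + np.cos(3.0 * phi))   # toy threefold torsional barrier

w = np.exp(-beta * V)                 # unnormalized Boltzmann weight
Z = w.sum() * dphi                    # in 1D the partition function is one integral
rho = w / Z                           # normalized distribution on the grid

# For a pentapeptide, V depends on ~10 coupled (phi_i, psi_i) angles, so Z becomes
# a ~10-dimensional integral over a coupled energy surface; grid integration is
# hopeless there, which is why one samples the distribution (MD/MC) instead of
# writing it down, even though the Boltzmann form is known in principle.
```

In other words: the functional form is known, but V({ϕi,ψi}) is only available point by point from the force field, and the normalizing integral over all coupled angles is exactly what MD sampling sidesteps.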

How do we calculate the mean square angular displacement? Do we need periodic boundary conditions if at each step the angle is updated as theta(t+dt) = theta(t) + eta(t), where eta(t) is Gaussian noise? Please describe the procedure to calculate this quantity.
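A sketch of the procedure, assuming the update rule above with noise variance 2·D_r·dt: no periodic wrapping is needed if you accumulate the raw, unwrapped angle; if your angles are stored wrapped into [−π, π), unwrap them first (e.g. with np.unwrap) before computing displacements.

```python
import numpy as np

rng = np.random.default_rng(0)

n_traj, n_steps, dt = 2000, 400, 1e-3
D_r = 1.0                                   # rotational diffusion coefficient (assumed)
eta = rng.normal(0.0, np.sqrt(2 * D_r * dt), size=(n_traj, n_steps))

# Unwrapped angle: do NOT wrap into [-pi, pi) before computing displacements
theta = np.cumsum(eta, axis=1)
theta = np.concatenate([np.zeros((n_traj, 1)), theta], axis=1)

# Mean square angular displacement vs lag time, averaged over trajectories
# (time-origin averaging is omitted here for brevity)
msad = np.mean((theta - theta[:, :1]) ** 2, axis=0)
t = np.arange(n_steps + 1) * dt
# For free rotational diffusion of one angle, msad ~ 2 * D_r * t
```

The tell-tale check is the linear growth msad ≈ 2·D_r·t; wrapping the angle before differencing would artificially saturate the curve.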

Is there any code available to calculate Spin Spin Spatial correlation function in 1d Ising model?
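I am not aware of a canonical package for this, but for the zero-field 1D Ising chain the spatial spin-spin correlation is known exactly: for an open chain, ⟨s_i s_{i+r}⟩ = tanh(βJ)^r. A minimal self-written check against brute-force enumeration (parameters are illustrative):

```python
import itertools
import math

def corr_bruteforce(N, r, beta, J):
    """<s_0 s_r> for an open 1D Ising chain, zero field, by full enumeration."""
    num = den = 0.0
    for s in itertools.product((-1, 1), repeat=N):
        E = -J * sum(s[i] * s[i + 1] for i in range(N - 1))
        w = math.exp(-beta * E)
        num += s[0] * s[r] * w
        den += w
    return num / den

beta, J, N = 0.7, 1.0, 12
for r in range(1, 5):
    exact = math.tanh(beta * J) ** r        # transfer-matrix result
    print(r, corr_bruteforce(N, r, beta, J), exact)
```

For large systems one would replace the enumeration with Monte Carlo sampling and average s_i*s_{i+r} over configurations, but the closed form above already answers the 1D case.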

Recently, I've been selected for an ICTP program named Physics of Complex Systems, but I have a keen interest in Particle Physics & Quantum Networks. Statistical mechanics is involved in complex systems, and one of my professors said that statistical mechanics could be a helpful tool for particle physics.

I mean, in Williamson-Hall analysis, should we take the theta of a set of parallel planes, or all the positions corresponding to the most intense peaks?

**Dear All;**

If you are well-experienced in one of the fields

**Complex Networks**, **Human Genetics**, or **Statistical Mechanics** and would like to collaborate with us in our project, please contact me at: **Basim Mahmood**

Project title: Statistical Mechanics of Human Genes Interactions

**Regards**

I would like to calculate the non-Gaussian parameter (NGP) from the MSD, but I think I am making a mistake in calculating it.

NGP: alpha(t) = 3<r(t)^4> / (5<r(t)^2>^2) - 1

In some articles it is written with delta r(t). I am a bit confused. Could someone please help me calculate it by explaining the terms in it?

Thank you
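A minimal implementation of the formula above, for 3D displacement vectors. Here Δr is the displacement magnitude |r(t0+t) − r(t0)| over the lag time t, averaged over particles and time origins, which is why many articles write delta r(t); r(t) alone only works if every particle starts at the origin. A Gaussian sanity check is included, since alpha should vanish for Gaussian displacements.

```python
import numpy as np

def ngp(displacements):
    """alpha_2(t) = 3<dr^4> / (5 <dr^2>^2) - 1 for 3D displacement vectors.

    displacements: array (n_samples, 3) of particle displacements over lag t,
    i.e. r(t0 + t) - r(t0), pooled over particles and time origins.
    """
    dr2 = np.sum(displacements ** 2, axis=1)   # squared displacement magnitudes
    return 3.0 * np.mean(dr2 ** 2) / (5.0 * np.mean(dr2) ** 2) - 1.0

# Sanity check: for a 3D Gaussian displacement field alpha_2 should be ~0
rng = np.random.default_rng(1)
disp = rng.normal(size=(200000, 3))
print(ngp(disp))   # close to 0
```

Positive values then signal non-Gaussian (e.g. heterogeneous or caged) dynamics at that lag time.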

Hello Dear colleagues:

it seems to me this could be an interesting thread for discussion:

I would like to center the discussion around the concept of Entropy, addressing the explanation-description-exemplification part of the concept.

i.e. What do you think is a good, helpful explanation of the concept of Entropy (at a technical level, of course)?

A manner (or manners) of explaining it that settles the concept down as clearly as possible. Maybe first in a more general scenario, and next (if required) in a more specific one ....

Kind regards !

This is to understand how the concepts of statistical mechanics are applied in astrophysics.

I am looking for some quality materials for learning the molecular dynamics theory and the use of LAMMPS. Besides the LAMMPS manual from Sandia National Laboratory, which sources can I use for learning LAMMPS?

This question relates to my recently posted question: What are the best proofs (derivations) of Stefan’s Law?

Stefan’s Law is E is proportional to T^4.

The standard derivation includes use of the concepts of entropy and temperature, and use of calculus.

Suppose we consider counting numbers and, in geometry, triangles, as level 1 concepts, simple and in a sense fundamental. Entropy and temperature are concepts built up from simpler ideas which historically took time to develop. Clausius’s derivation of entropy is itself complex.

The derivation of entropy in Clausius’s text, The Mechanical Theory of Heat (1867) is in the Fourth Memoir which begins at page 111 and concludes at page 135.

Why does the power relationship E proportional to T^4 need to use the concept of entropy, let alone other level 3 concepts, which takes Clausius 24 pages to develop in his aforementioned text book?

Does this reasoning validly suggest that the standard derivation of Stefan’s Law, as in Planck’s text The Theory of Heat Radiation (Masius translation) is not a minimally complex derivation?

In principle, is the standard derivation too complicated?

I'm writing my dissertation about the economic dynamics of inequality, and I'm going to use econophysics as an empirical method.

Or is the concept inapplicable?

If it were applicable, could statistical mechanical methods apply? Does entropy?

Some excerpts from the article

Comparing methods for comparing networks Scientific Reports volume 9, Article number: 17557 (2019)

By Mattia Tantardini, Francesca Ieva, Lucia Tajoli & Carlo Piccardi

are:

*To effectively compare networks, we need to move to inexact graph matching, i.e., define a real-valued distance which, as a minimal requirement, has the property of converging to zero as the networks approach isomorphism.*

*we expect that whatever distance we use, it should tend to zero when the perturbations tend to zero*

*the diameter distance, which remains zero on a broad range of perturbations for most network models, thus proving inadequate as a network distance*

*Virtually all methods demonstrated a fairly good behaviour under perturbation tests (the diameter distance being the only exception), in the sense that all distances tend to zero as the similarity of the networks increases.*

If achieving thermodynamic efficiency is the benchmark criterion for all kinds of networks, then their topologies should converge to the same model. If they all converge to the same model when optimally efficient, does that cast doubt on topology as a way to evaluate and differentiate networks?

See for example, Statistical mechanics of networks, Physical Review E 70, 066117 (2004).

The architecture and topology of networks seem analogous to graphs.

Perhaps though the significant aspect of networks is not their architecture but their thermodynamics, how energy is distributed via networks.

Perhaps network linkages are only means for optimizing energy distributions. If so, then network entropy (C log(n)) might be more fundamental (and much simpler to use) than the means by which network entropy is maximized. If that were so, then the network analogy to graphs might lead to a sub-optimal conceptual reference frame.

Dimensional capacity is arguably a better conceptual reference frame.

Your views?

This question is prompted by the book review in the September 2020 Physics Today of The Evolution of Knowledge: Rethinking Science for the Anthropocene, by Jürgen Renn.

I suspect that there is such an equation. It is related to thermodynamics and statistical mechanics, and might be characterized, partly, as network entropy.

Two articles that relate to the question are:

and also there is a book, the ideas in which preceded the two articles, above:

The Intelligence of Language.

The question is related somewhat distantly to an idea of Isaac Asimov in his science fiction, The Foundation Trilogy, psychohistory.

There are two ways to derive the Boltzmann exponential probability distribution of an ensemble:

1) Microcanonical Ensemble: We assume a system S(E,V,N)

E= internal Energy, V=volume, N=number of molecules or entities.

We have different energy states that the molecules can take, but the total energy E of the system is fixed. So whatever the distribution of molecules over the energy levels, the energy of the overall system is fixed. Then we find the maximum of the entropy of the system to find the equilibrium probability distribution of molecules over the energy levels. We introduce two Lagrange multipliers for two constraints: the total probability is unity, and the total energy is the constant E. What we get is an exponential distribution.

2) Canonical Ensemble: We have a system with N molecules. The Helmholtz energy is defined as F=F(T,V,N). So this time the energy is not fixed but the temperature is. Instead of different energy states for the molecules, we now have different energy levels for the entire system. So by minimization of F we get the equilibrium probability distribution of the system over its energy levels. This time the only constraint is that the total probability is unity. The distribution we get is an exponential one.

Now the question is:

How can the probability distribution of the canonical ensemble give the population distribution of molecules over energy states, which is rather found from the microcanonical ensemble?

In the book Molecular Driving Forces (Ken A. Dill), Chapter 10, Equation 10.11 says something similar.
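The Lagrange-multiplier construction in (1) can be checked numerically: maximizing entropy with ⟨E⟩ fixed gives p_i ∝ exp(−β·E_i), where the multiplier β is fixed by the energy constraint. A minimal sketch with made-up energy levels, solving for β by bisection (mean energy decreases monotonically with β):

```python
import numpy as np

# Energy levels of a toy system (arbitrary units; illustrative, not from the book)
E = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
U_target = 1.2    # fixed mean energy (the constraint enforced by the multiplier)

def mean_energy(beta):
    """<E> under the Boltzmann form p_i ~ exp(-beta*E_i)."""
    w = np.exp(-beta * E)
    return np.sum(E * w) / np.sum(w)

# Bisection on beta: mean energy decreases monotonically as beta increases
lo, hi = -10.0, 10.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mean_energy(mid) > U_target:
        lo = mid          # mean too high: need larger beta
    else:
        hi = mid
beta = 0.5 * (lo + hi)

p = np.exp(-beta * E)
p /= p.sum()              # second constraint: probabilities sum to one
print(beta, p, np.sum(p * E))   # <E> matches U_target; p is exponential in E
```

The same exponential form appears in the canonical route, where β is set by the bath temperature instead of a fixed U, which is the formal link behind the question.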

The third rotation allegedly leaves the molecule unchanged no matter how much it is rotated, but is it really okay to assume this? A response in mathematics is welcomed, but if you can explain it in words that would be good, too.

Suppose, chemical composition of the compound, temperature and pressure are known. Electronic structure of constituent elements from numerical solution of Quantum chemistry are also known. Then

- There can be only 230 3D crystallographic space groups. But is there any limit on the motif that can be included in the lattice without violating stoichiometry? How do ab-initio calculations find the appropriate motif to put into the lattice to generate the crystal structure? Without finding motifs, it is impossible to find the crystal structures whose Gibbs free energy needs to be minimized.
- Is there any mathematical method that finds the potential energy of an infinite 3D periodic lattice with distributed charges (say, theoretical calculation of the Madelung constant)? What are the mathematical prerequisites to understand such a formula?
- How can the electron cloud density and local potential energy of a molecule/motif/lattice point be linked to the total Gibbs free energy of the molecule/lattice, integrated over the whole structure? What are the statistical-mechanical formulas that relate the two, and what are the prerequisites to understand them?

Suppose reference point for zero gibbs free energy is conveniently provided.
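On the Madelung-constant point, here is a hedged sketch of one standard approach, the Evjen method: direct summation over expanding cubes of lattice sites, with charges on the cube boundary weighted fractionally (1/2 on faces, 1/4 on edges, 1/8 on corners) so each shell stays nearly neutral and the conditionally convergent sum converges quickly. Values below are for the NaCl structure; Ewald summation is the more general production technique.

```python
import math

def madelung_nacl(n):
    """Evjen-style direct sum for the NaCl Madelung constant.

    Sums over a (2n+1)^3 cube of lattice sites with alternating unit charges
    (-1)^(i+j+k); sites on the cube boundary get weight 1/2 per coordinate
    at |c| == n (faces 1/2, edges 1/4, corners 1/8), keeping shells neutral.
    """
    M = 0.0
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            for k in range(-n, n + 1):
                if i == j == k == 0:
                    continue
                w = 1.0
                for c in (i, j, k):
                    if abs(c) == n:
                        w *= 0.5
                # sign chosen so the reference-ion constant comes out positive
                M += w * (-1) ** (i + j + k + 1) / math.sqrt(i * i + j * j + k * k)
    return M

print(madelung_nacl(8))   # converges toward ~1.7476 (NaCl)
```

The mathematical prerequisites are modest here (conditionally convergent lattice sums); Ewald's method additionally needs Fourier series over the reciprocal lattice.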

Dear Colleagues :

Does anyone have literature referencing the diffusion process of Carbon (I mean Carbon atoms) into Bismuth Telluride (Bi2Te3) or into some similar compound? E.g. PbTe, (Sb,Se)Bi2Te3, Sb2Te3, etc.?

I'd really appreciate it if someone can help me out.

Kind Regards Sirs !

In the question, Why is entropy a concept difficult to understand? (November 2019) Franklin Uriel Parás Hernández commences his reply as follows: "The first thing we have to understand is that there are many Entropies in nature."

His entire answer is worth reading.

It leads to this related question. I suspect the answer is, yes, the common principle is degrees of freedom and dimensional capacity. Your views?

By quasi-particle I mean in the sense of particles dressed with their interactions/correlations? If yes, any references would be helpful.

Dear all:

I hope this question seems interesting to many. I believe I'm not the only one who is confused with many aspects of the so called physical property 'Entropy'.

This time I want to speak about

**Thermodynamic Entropy**; hopefully a few of us can get more understanding by trying to think a little more deeply about questions like these.

The Thermodynamic Entropy is defined as:

**Delta(S) >= Delta(Q)/(T_2 - T_1)**. This property is only properly **defined for (macroscopic) systems** which are **in Thermodynamic Equilibrium** (i.e. Thermal Eq. + Chemical Eq. + Mechanical Eq.). So

**my question is**: in terms of numerical values of S (or perhaps better said, values of Delta(S), since we know that only changes in Entropy can be computed, not an absolute Entropy of a system, with the exception of one at the Absolute Zero (0 K) point of temperature):

It is easy and straightforward to compute the changes in Entropy of, let's say, a chair, or a table, or your car, etc., since all these objects can be considered macroscopic systems in Thermodynamic Equilibrium. So, just use the

**Classical definition of Entropy** (the formula above) **and the Second Law of Thermodynamics**, and that's it.

But what about Macroscopic objects (or systems) which are **not in Thermal Equilibrium?** Maybe we are often tempted to apply the classical thermodynamic definition of Entropy to such Macroscopic systems, which from a macroscopic point of view seem to be in Thermodynamic Equilibrium but in reality still have **ongoing physical processes which keep them from complete thermal equilibrium**.

What I want to say is:

**What would be the limits of the classical Thermodynamic definition of Entropy** when used in calculations for systems that **seem to be in Thermodynamic Equilibrium but aren't** really? Perhaps this question can also be extended to the so-called regime of Near Equilibrium Thermodynamics.

Kind Regards all!

Is cancer (oncology) a field of biophysics?

Since the Gaussian is the maximal (Shannon) entropy distribution on unbounded real spaces, I was wondering whether the tendency of cumulative statistical processes with the same mean to have a Gaussian as the limiting distribution can be in some way physically related to the increase of (Boltzmann) entropy in thermodynamical processes.

In Johnson, O. (2004) Information Theory and The Central Limit Theorem, Imperial College Press, we can read:

"It is possible to view the CLT as an analogue of the Second Law of Thermodynamics, in that convergence to the normal distribution will be seen as an entropy maximisation result"

Could anyone elaborate on such relationship and perhaps point to other non-obvious ones?

I am trying to simulate a heterogeneous liquid mixture with single-site atoms (translational motion only) and multi-site rigid molecules (translational + rotational motion). The molecules also vary in mass and moment of inertia from species to species. Does anyone know how to calculate the initial magnitudes of the translational and rotational velocities in relation to the desired temperature?

I understand the widely-used MDS programs take care of this "under the hood", but I am interested to know what exactly the calculation is. I have found related texts, but they focus on uniform systems. Thank you in advance for any help.

Anne
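As a sketch of what the "under the hood" assignment usually amounts to, assuming classical equipartition (function and parameter names below are mine): each translational velocity component is drawn from a Gaussian with variance kB·T/m, and each angular-velocity component about a principal axis from a Gaussian with variance kB·T/I_k, since every quadratic degree of freedom carries kB·T/2 on average. Production codes typically then remove the net momentum and rescale to hit the target temperature exactly.

```python
import numpy as np

rng = np.random.default_rng(42)
kB = 1.380649e-23   # J/K
T = 300.0           # target temperature, K

def initial_velocities(masses, inertias):
    """Draw Maxwell-Boltzmann translational and angular velocities.

    masses:   (N,) molecular masses in kg
    inertias: (N, 3) principal moments of inertia in kg*m^2
              (use zeros for single-site atoms: no rotational DOF)
    """
    # (1/2) m <v_x^2> = kB*T/2  =>  each component has std sqrt(kB*T/m)
    v = rng.normal(size=(len(masses), 3)) * np.sqrt(kB * T / masses)[:, None]
    # (1/2) I_k <omega_k^2> = kB*T/2 about each principal axis with I_k > 0
    omega = np.zeros_like(inertias)
    mask = inertias > 0
    omega[mask] = rng.normal(size=mask.sum()) * np.sqrt(kB * T / inertias[mask])
    return v, omega
```

The per-species variation in mass and inertia is handled automatically because each row uses its own m and I_k; angular velocities are expressed in the body-fixed principal frame.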

Dear all,

I have a number of Likert items with statements which have the following answer options

- Less likely
- No effect on likelihood
- More likely

I am unable to find the answer to the following question, and therefore cannot seem to determine the overarching data analysis family, let alone the correct techniques, to analyse my data set.

**Am I able to analyse my data with any quantitative methods, either descriptive or inferential, or do I need to use purely qualitative methods in analysing the data?**

I understand that the data needs to be contextualised before the statistical mechanism is determined, for example, the parametric/non-parametric debate. However, I am struggling to determine if the data type allows for quantitative analysis when I have 25 statements, each with 700 or so answers that indicate one of the above item answer options.

Also, I have a data set that pertains to one sample at one point in time. Can any correlational statistics or inferential statistics be done? Or do those methods only apply when you have either independent or paired samples? Or am I meant to compare one statement with another (Or a compilation of statements representing a theme with another compilation representing another theme) when running correlational tests?

Looking for that thread of information that ties together all my misalignments.

Kind regards,

Jameel

This question must be accompanied by provisos. One particular proviso simplifies the task. Assume that the problem solving used throughout the development of language was of the same kind that has at all times occurred since. In other words, assume that it is valid to use averages over time, at least for the time period under consideration. In 2009 I used ideas relating to statistical mechanics to estimate, on certain assumptions (a language-like call `lexicon' of about 100 calls), that language began between about 141,000 and 154,000 years ago, in a couple of articles.

The work in those articles is over 10 years old and there have been developments since. One involves dispersion of phonemic diversity (Atkinson 2011). Are there other approaches?

Hi, for my statistical mechanics class I had to calculate all the thermochemistry data by myself and see if I get the same results as Gaussian. Everything is good except for the zero-point energy contribution. For instance, in the Gaussian thermochemistry PDF, they say that the

**''Sum of electronic and thermal Free Energies''** (which you can find easily in the output or in GaussView) is supposed to be the Gibbs free energy. But it's not! In fact the zero-point energy is missing, and there is nowhere in the output or in GaussView where you can see the correct value. And this is true also for the Enthalpy and the internal energy. In fact we have to add the zero-point energy afterwards, which I think is weird since these are the values we really need. In their PDF Gaussian emphasizes that the ZPE is added everywhere by default, but it's not true. My teacher is also suspicious about the software. Do we miss something here? What do you take as your G, H and U? And where do you find them? I think we could all be wrong if we forget to add the zero-point energy, which is what I think Gaussian does.

Thanks

I came across this question while studying Tuckerman book on Statistical Mechanics for Molecular Dynamics.

Let's just say we're looking at the classical continuous canonical ensemble of a harmonic oscillator, where:

H = p^2 / 2m + 1/2 * m * omega^2 * x^2

and the partition function (omitting the integrals over phase space here) is defined as

Z = Exp[-H / (kb * T)]

and the average energy can be calculated as proportional to the derivative of ln[Z].

The equipartition theorem says that each independent quadratic coordinate must contribute R/2 to the system's energy, so for a 3D oscillator we should get 3R. My question is: does equipartition break down if the frequency is temperature dependent?

Let's say omega = omega[T]; then when you take the derivative of Z to calculate the average energy, if omega'[T] is not zero, it will either add to or detract from the average energy and therefore disagree with equipartition. Is this correct?
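One can check this numerically. A sketch in reduced units (k = 1), with an assumed, purely illustrative linear omega(T): for the classical 1D oscillator Z ∝ kT/omega(T), so U = kT² d(lnZ)/dT = kT − kT²·omega'(T)/omega(T), i.e. the equipartition term plus exactly the correction the question anticipates.

```python
import math

k = 1.0          # Boltzmann constant, reduced units
w0, a = 1.0, 0.3

def omega(T):
    """Assumed temperature-dependent frequency (illustrative form)."""
    return w0 * (1.0 + a * T)

def lnZ(T):
    # Classical 1D oscillator: Z ~ kT/omega(T); additive constants drop out of U
    return math.log(T) - math.log(omega(T))

def U(T, h=1e-6):
    # U = k T^2 d(lnZ)/dT, by central finite difference
    return k * T * T * (lnZ(T + h) - lnZ(T - h)) / (2 * h)

T = 2.0
U_analytic = k * T - k * T * T * (w0 * a) / omega(T)  # kT minus the omega'(T) term
print(U(T), U_analytic)   # agree; both differ from the equipartition value kT
```

So the derivative route indeed departs from kT when omega'(T) ≠ 0; whether that signals a breakdown of equipartition or an inconsistency in treating omega(T) as a fixed Hamiltonian parameter is the interesting physical question.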

According to statistical mechanics, the translational energy of a system of point particles is given by 3/2 NkT. And it is known that a single particle exhibits only translational energy. So can we simply obtain the single-particle system energy just by substituting N=1? Because, as far as I remember, the first principle of statistical mechanics assumes that the number of particles of a system is extremely large, so we can't directly apply those principles to a single-particle system.

Even gases like air are assumed (for sufficiently low flow velocities) to have constant density. Is it only because the hydrodynamic equations of motion are easier to solve when incompressibility is assumed? Or can it be proven with statistical mechanics why incompressibility is frequently assumed?

In the case of sound waves, small deviations in density are respected, and the kinetic energy of many sound waves is a lot lower than that of the air flow around a car at 100 mph. Does compressibility come into account also at low speeds, and if yes, why?
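A back-of-the-envelope check of the 100 mph case, assuming air at roughly 20 °C: for steady low-speed flow the relative density variation scales as approximately M²/2, where M is the Mach number, which is the usual continuum-mechanics justification for the incompressible approximation.

```python
# Rough estimate of compressibility effects at 100 mph, assuming standard air
v = 100 * 0.44704          # 100 mph in m/s
c = 343.0                  # speed of sound in air at ~20 C, m/s
M = v / c                  # Mach number

# For steady low-speed flow, relative density variation ~ M^2 / 2
print(M, M ** 2 / 2)       # M ~ 0.13, density variation below 1%
```

So at 100 mph the density varies by well under 1%, which is why incompressibility is a good working assumption there, while sound propagation, which consists entirely of those small density deviations, cannot drop them.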

In the introduction to his text, A Student's Guide to Entropy, Don Lemons has a quote "No one really knows what entropy is, so in a debate you will always have the advantage" and writes that entropy quantifies "the irreversibility of a thermodynamic process." Bimalendu Roy in his text Fundamentals of Classical and Statistical Mechanics (2002) writes "The concept of entropy is, so to say, abstract and rather philosophical" (p. 29). In Feynman's lectures (ch. 44-6): "Actually, S is the letter usually used for entropy, and it is numerically equal to the heat (which we have called Q_S) delivered to a 1∘-reservoir (entropy is not itself a heat, it is heat divided by a temperature, hence it is measured in joules per degree)." In thermodynamics there is the Clausius definition, which is a ratio of a quantity of heat Q to a degree Kelvin, Q/T, and the Boltzmann approach, k log(n). Shannon analogized information content to entropy; 2 as the base of the logarithm gives information content in bits. Eddington in The Nature of the Physical World (p. 80) wrote: "So far as physics is concerned time's arrow is a property of entropy alone." Thomas Gold, physicist and cosmologist, suggested that entropy manifests or relates to the expansion of the universe. There are reasons to suspect that entropy and the concept of degrees of freedom are closely related. How best do we understand entropy?

Zipf's law (which is a power law) is the maximum entropy distribution of a system of P particles in N boxes where P >> N. Its derivation is based on the microcanonical ensemble, in which the entropy is calculated for an isolated system. In the canonical ensemble the system is in contact with an external bath at a fixed temperature T. The macroscopic quantities of the canonical ensemble are calculated from its partition function, in which the probabilities decay exponentially with energy.

The question is: how can Zipf's law, a power law, be obtained from an exponential partition function?

Hi everyone, can anyone help me find the entropy index to measure diversification for a company, using Σ Pi*ln(1/Pi)? I already have the total sales for each year and each segment's sales share. N is the number of industry segments; Pi is the percentage of the ith segment in total company sales.

I have recently started to study statistical mechanics, but it's really hard to understand some of the basic concepts, especially from the textbooks. I follow Statistical Physics by F. Reif, but the language is a little hard to understand. Please give me suggestions on simpler books.

I have to calculate the rate of tunnelling in a protein, for which I need the transmission coefficient. How do I calculate it? Or is there another way that does not require the transmission coefficient?

Does anyone have experience estimating the formation enthalpy or atomization energy of molecules using Gaussian? I am trying to calculate this parameter for simple molecules like H2O and NH3; however, there is a significant error even with a decent theory level. I am wondering how accurately this can be done. Is there any specific theory/method performing better than others? (Currently, I am using B3LYP/6-311++G(2d,2p).) I do not have any QM background, and any thoughts would be appreciated.

Here are some numbers I am getting from Gaussian, compared to the JANAF table values (all in kJ):

Molecule: Gaussian / JANAF

H2O: 1174 / 917

NH3: 1432 / 1158

N2: 1467 / 941

H2: 434 / 432

O2: 870 / 493

Javad

The characteristic frequency of thermal motion is around 7E12 Hz at room temperature (300 K), but from that information how can we conclude that the bonds are hard, i.e. that they don't vibrate?
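A quick numerical comparison may clarify the argument. The characteristic thermal frequency is kB·T/h; a typical covalent-bond stretch frequency of ~1e14 Hz is an order-of-magnitude assumption here, not a measured value. Since the bond frequency is far above kB·T/h, we have ħω >> kB·T, so the vibrational mode is quantum mechanically frozen out and the bond behaves as rigid ("hard").

```python
h = 6.62607015e-34   # Planck constant, J*s
kB = 1.380649e-23    # Boltzmann constant, J/K
T = 300.0            # room temperature, K

nu_thermal = kB * T / h        # characteristic thermal frequency, ~6.2e12 Hz
nu_bond = 1.0e14               # typical covalent stretch (order of magnitude)

print(nu_thermal)              # ~6.25e12 Hz, consistent with the quoted 7e12
print(nu_bond / nu_thermal)    # >> 1: hbar*omega >> kB*T, vibration frozen out
```

The Boltzmann factor exp(−hν_bond/kB·T) for exciting the first vibrational level is then vanishingly small, which is the precise sense in which the bond "doesn't vibrate" at room temperature.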

Hi, I want to ask a question about the basic theory of molecular dynamics.

In MD simulations, we can calculate the temperature using the average kinetic energy of the system. For an ideal gas (pV = NkbT), I can derive the relationship between temperature and kinetic energy: 1/2 m<v^2> = 3/2 kbT (in 3 dimensions). But if I am simulating a non-ideal gas or a fluid, how can I get the relationship between temperature and kinetic energy?

Could anyone give me some understandable explanations (I know little about quantum mechanics)? Any relevant material or link will be appreciated. Thanks!
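A sketch of the general relationship (not specific to any MD package): for any classical Hamiltonian H = Σ p²/2m + V(q), the velocity distribution is Maxwell-Boltzmann regardless of V, because the kinetic and configurational parts of the Boltzmann factor factorize. So the ideal-gas relation T = 2⟨KE⟩/(N_dof·kB) carries over unchanged to non-ideal gases and liquids; interactions only affect the positions.

```python
import numpy as np

kB = 1.380649e-23  # J/K

def instantaneous_temperature(masses, velocities, n_constraints=3):
    """T from kinetic energy via equipartition.

    Valid for any classical Hamiltonian H = sum p^2/2m + V(q): interactions
    change V(q) but not the Maxwell-Boltzmann velocity distribution.

    masses:        (N,) in kg
    velocities:    (N, 3) in m/s
    n_constraints: degrees of freedom removed (e.g. 3 for fixed total momentum)
    """
    ke = 0.5 * np.sum(masses[:, None] * velocities ** 2)
    n_dof = 3 * len(masses) - n_constraints
    return 2.0 * ke / (n_dof * kB)
```

This is what MD codes compute as the instantaneous temperature; the constraint count accounts for conserved quantities such as the total momentum.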

I want to know: does the negative-T state only conceptually catch one's eye, or is it truly significant in helping us understand thermodynamics?

- In the early days, Purcell et al., and in my university textbooks, negative T in the spin degree of freedom of an NMR system was mentioned;
- In 2013, S. Braun *et al.* performed an experiment in cold atoms and realized an inverted energy-level population for motional degrees of freedom. (http://science.sciencemag.org/content/339/6115/52)
- There are many disputes about Boltzmann entropy versus Gibbs entropy, as far as I know, especially Jörn Dunkel et al. (https://www.nature.com/articles/nphys2815); they insist that the Gibbs entropy is physical and argue that negative T is wrong.
- After that, many debates emerged; I read several papers, and they all agree with the conventional Boltzmann entropy.

Does anyone have comments about this field?

Is it *truly fascinating* or just *trivial* to realize a population-inversion state, i.e. negative temperature? Or does anyone have a clarification of the Carnot engine working between a negative-T and a positive-T substance?

Any comments and discussions are welcome.

Hi,

I apologize for this apparently silly question, but could you please point out whether there is an underlying relationship between defect-driven phase transitions and directed percolation?

Secondly, is it possible to have a system which undergoes a KT transition at T_1, generating free vortices, followed by a spatial spreading of disorder via directed percolation at T_2?

Please, if there are any relevant examples and materials, do let me know.

Many thanks.

Wang Zhe

Hi,

The vortex-unbinding Kosterlitz-Thouless physics generally applies to two-dimensional systems and occasionally to three-dimensional solids.

I was wondering if there exists a one-dimensional analogue of the vortex unbinding that occurs in two dimensions. Could anyone point me to one, please?

Thank you.

Very kind wishes,

Wang Zhe

We know the definition of ergodicity and we know ergodic mappings. But what is an ergodic process?

Why is Hamilton's equation not used for constructing the dynamical equations in liquid crystals?

Hi everyone,

I'm trying to solve this exercise (attached file) from the J. M. Yeomans book "Statistical Mechanics of Phase Transitions".

I understood how the expansion works for the same model without the field term, but here I have trouble figuring out which terms are vanishing and which ones are not, in order to answer the second question. Also, I don't get what the S_m(v,N) term represents ...

Anyone's help is welcomed ! :)

Hi!

Please, could anyone point out an intuitive way to understand the exponential divergence of the correlation length at the KT transition, in contrast to the usual algebraic (power-law) divergence in common critical phenomena?

Thank you.

Wang Zhe

Hi all,

To calculate the residence time from a potential of mean force (PMF), we use the stable-states picture. Here a reactant state and a product state are defined from the radial distribution function. The time taken to move from the reactant state to the product state is designated t, and the residence time tau is given by

1 - P(t) = e^{-t/tau},

where P(t) is the probability that the system has moved from the reactant state to the product state by time t. How do I calculate P(t)?
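One common route (a sketch under my own assumptions, with synthetic data standing in for real transition events): estimate 1 - P(t) as the empirical survival probability of the observed reactant-to-product transit times; for the assumed exponential form, the maximum-likelihood estimate of tau is simply the mean transit time.

```python
import random

def survival(times, t):
    # empirical 1 - P(t): fraction of events whose transit time exceeds t
    return sum(1 for x in times if x > t) / len(times)

def residence_time(times):
    # MLE of tau for 1 - P(t) = exp(-t/tau): the mean transit time
    return sum(times) / len(times)

# synthetic transit times with known tau = 5.0 (stand-in for real MD data)
random.seed(0)
times = [random.expovariate(1.0 / 5.0) for _ in range(20000)]
tau_hat = residence_time(times)
```

In practice one would also plot ln(1 - P(t)) against t to check that the decay really is single-exponential before trusting the fitted tau.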

Hello all,

I am confused about the following. Does minimization of the free energy always mean that the entropy is maximized? Are the two always complementary, or only under certain conditions?

I am doubtful because I came across a text which said that for isothermal systems (i.e., the temperature does not change) they are complementary, whereas for non-isothermal systems minimization of the free energy does not always mean maximization of the entropy. Can someone help?

Thanks
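A minimal numerical illustration (my own toy example: a two-level system with k_B = 1): at fixed T the Helmholtz free energy F = U - T S is minimized, and its minimizer coincides with the entropy maximum p = 1/2 only when the two levels are degenerate; otherwise the minimum of F balances energy against entropy.

```python
import math

def free_energy(p, eps, T):
    # two-level toy system: U = p*eps, S = -[p ln p + (1-p) ln(1-p)], k_B = 1
    U = p * eps
    S = -(p * math.log(p) + (1.0 - p) * math.log(1.0 - p))
    return U - T * S

def argmin_F(eps, T, n=100000):
    # brute-force minimization of F over the upper-level occupation p
    ps = [i / n for i in range(1, n)]
    return min(ps, key=lambda p: free_energy(p, eps, T))

p_star = argmin_F(eps=1.0, T=1.0)      # not 1/2: entropy max != F min
p_exact = 1.0 / (1.0 + math.exp(1.0))  # analytic minimizer, ~0.269
```

Only for eps = 0 (or T -> infinity) does minimizing F reduce to maximizing S alone.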

Dear Research-Gaters,

It might be a very trivial question to you: what does the term 'wrong dynamics' actually mean? I have heard that term often when somebody presented their results. As it seems to me, 'wrong dynamics' is an argument often brought up to suggest that a simulation result might not be very useful. But what does that argument mean in terms of physical quantities? Is it related to measures such as correlation functions, e.g. the velocity autocorrelation, H-bond autocorrelation, or radial distribution functions? Can 'wrong dynamics' be visualized as a too-fast decay in any of those correlation functions in comparison with other equilibrium simulations, or can it simply be measured by deviations of the potential energies, kinetic energies, and/or the root-mean-square deviation from the starting structure? At the same time, thermodynamic quantities such as free energies might not be affected by 'wrong dynamics'. Finally, I would like to ask what the term means if I use non-equilibrium simulations which are completely non-Markovian, i.e. history-dependent and out of equilibrium (metadynamics, hyperdynamics). Thank you for your answers. Emanuel

Is there an analytical expression for the probability of finding two consecutive parallel spins in the one-dimensional antiferromagnetic Ising model (with constant J < 0), assuming equilibrium at temperature T?

Looking for methods to model the state transitions of a multi-state process. Thanks in advance!
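Regarding the 1D Ising question above: in zero field the nearest-neighbour correlation of a periodic chain of N spins is (t + t^(N-1)) / (1 + t^N) with t = tanh(J / k_B T), which tends to tanh(J / k_B T) as N grows, so the probability of two parallel neighbours is (1 + corr)/2. A brute-force check of this closed form (my own sketch; the parameters are illustrative):

```python
import itertools, math

def corr_enum(N, K):
    # exact <s_0 s_1> for a periodic chain by enumerating all 2^N states;
    # Boltzmann weight exp(K * sum of bonds), with K = J / (k_B T)
    Z = num = 0.0
    for s in itertools.product((-1, 1), repeat=N):
        bonds = sum(s[i] * s[(i + 1) % N] for i in range(N))
        w = math.exp(K * bonds)
        Z += w
        num += s[0] * s[1] * w
    return num / Z

def corr_exact(N, K):
    # transfer-matrix result for the periodic chain
    t = math.tanh(K)
    return (t + t ** (N - 1)) / (1 + t ** N)

N, K = 6, -0.7                    # antiferromagnetic coupling: K < 0
corr = corr_exact(N, K)
p_parallel = (1.0 + corr) / 2.0   # probability of two parallel neighbours
```

For J < 0 the correlation is negative, so p_parallel < 1/2, as expected for an antiferromagnet.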

Finding the equilibrium points and the elapsed time between those points on phase paths?

Does anybody have solutions to the problems in Statistical Mechanics by Kerson Huang (in PDF format)?

Acoustics is a macroscopic description of the movement of atoms and molecules.

For magnetic systems, the Rushbrooke inequality is a direct consequence of the thermodynamic relation between C_H, C_M and the isothermal susceptibility, their positivity, and the definition of the critical exponent alpha as controlling the behavior of C_H as a function of the reduced distance from the critical temperature.

In the case of fluid systems, the usual definition of alpha refers instead to the constant-volume specific heat (C_V). However, the role played by C_V in the thermodynamic relation between C_P, C_V and the isothermal compressibility is not the same as that of C_H. Does some additional hypothesis have to be made in order to derive the Rushbrooke inequality for fluid systems, or am I missing something trivial?

To clarify my question: I am trying to construct a coarse-grained (CG) model of an fcc system using the iterative Boltzmann inversion method to compute the pair potential interactions. However, in order to start the iterative process, I first need to use V0(r) = -k_B T ln(g(r)) as an initial guess for the pair interactions among my CG beads. The g(r) values used in this relation are those computed from the all-atom system. However, as you know, the RDF of a crystalline structure is not continuous: it contains zero values between the peaks where the lattice sites lie, and I have no idea what to do with these zeros, since ln(0) is undefined.

Another problem is that I am using the LAMMPS software for my simulations and have used LAMMPS to compute these RDF values, but the values do not match those I obtained manually using the relation g(r) = V dn(r) / (4 pi r^2 N dr).

I really appreciate your help and suggestions.
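For the zeros, one widely used workaround (a sketch; the cap value and the k_B T = 1 units are my own assumptions, and extrapolating the potential smoothly into the forbidden regions is an equally common alternative) is to assign a large finite repulsive value wherever g(r) = 0, since V0 is formally infinite there:

```python
import math

KB_T = 1.0  # assume energies measured in units of k_B T

def initial_guess_potential(g, cap=50.0):
    # V0(r) = -kB*T ln g(r); where g(r) = 0 (the forbidden regions between
    # crystalline peaks) V0 is formally infinite, so cap it at a large
    # finite repulsion that subsequent IBI iterations can refine
    return [-KB_T * math.log(gi) if gi > 0.0 else cap * KB_T for gi in g]

V0 = initial_guess_potential([0.0, 0.5, 1.0, 2.0])
```

The cap only needs to be large enough that the CG beads essentially never visit those separations; the iterative updates act only where g(r) > 0 anyway.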

For Brownian motion, how might I obtain the probability density function of the first-passage time to a region of a three-dimensional state space?
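For the special case of an absorbing sphere this can be explored by simulation (my own sketch; the radius, diffusion constant, step size, and path count are arbitrary choices): a 3D Brownian particle with diffusion coefficient D started at the centre of a sphere of radius R has mean first-passage time R^2/(6D) to the surface, and the full first-passage density can be histogrammed from the sampled exit times.

```python
import math, random

def sample_exit_times(R=1.0, D=1.0, dt=1e-4, n_paths=400, seed=1):
    # Euler sampling of the first-passage time of 3D Brownian motion
    # from the origin to the sphere |x| = R (exact mean: R**2 / (6*D))
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * D * dt)  # per-coordinate increment std dev
    times = []
    for _ in range(n_paths):
        x = y = z = t = 0.0
        while x * x + y * y + z * z < R * R:
            x += rng.gauss(0.0, sigma)
            y += rng.gauss(0.0, sigma)
            z += rng.gauss(0.0, sigma)
            t += dt
        times.append(t)
    return times

times = sample_exit_times()
mean_fpt = sum(times) / len(times)  # should be close to 1/6 for R = D = 1
```

Note the discrete time step slightly overestimates the exit time (excursions across the boundary between observations are missed), so dt should be taken small relative to R^2/D.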