Science topic

Statistical Mechanics - Science topic

Statistical mechanics provides a framework for relating the microscopic properties of individual atoms and molecules to the macroscopic bulk properties of materials that can be observed in everyday life.
Questions related to Statistical Mechanics
• asked a question related to Statistical Mechanics
Question
And can you reference articles or texts giving answers to this question?
Reviewing the literature would be helpful before considering whether to update the 2015 ideas.
...Just a bit more to the answer by colleague V. V. Vedenyapin:
in the Boltzmann–Planck formula S = k*ln(W), take W = 1 + x^K with x = T/Tc, where T is the absolute (Kelvin) temperature, Tc sets the temperature scale, and K is the efficiency of the process under study.
About 100 years ago, in the Journal of the American Chemical Society, Dr. George Augustus Linhart published a formal statistical inference of the above fact.
I have tried to answer the question you posted here, systematically and in detail:
Shorter versions:
• asked a question related to Statistical Mechanics
Question
In a simple and introductory way, there is a book by Prof. F. Reif of the Berkeley course, i.e., Vol 5: "Statistical Physics" by F. Reif.
In chapter 7, section 7.4, p. 281 of the 1965 McGraw-Hill edition, he discusses in an introductory way what he calls "the basic five statements of statistical thermodynamics." These rest on statistical postulates that he presents in section 3.3, p. 111: three postulates, set inside boxes as Eqs. 17, 18, and 19, among which is the one you refer to.
I suggest you read, in that same book by Prof. Reif, what he has to say about your interesting question.
Kind Regards.
• asked a question related to Statistical Mechanics
Question
Why are the 3 random variables x, y, and z independent?
Consider a random vector R which has coordinates x, y, and z (3 random variables).
Prove that the random vector R will be isotropically distributed in the 3D space only if the 3 coordinates are independent and identically distributed random variables.
This case is generally referred to as i.i.d variables (Independent and Identically Distributed).
This seems like someone's homework problem, so I'll just give a hint. Consider that the density function is isotropic about a mean that we can take as zero without loss of generality. Therefore, the density can be expressed as a function of radius only, and therefore a function of x^2+y^2+z^2. Note that the density is an "even" function, symmetric about zero under x, y, and z. Now consider any two variables, like x and y. How do you express the expectation value of the product (xy)? What do you know about even and odd functions that tells you something about this expectation? I'll stop there.
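The symmetry argument in the hint can be checked numerically. This is a small illustrative sketch; the isotropic Gaussian density is my own choice of example, not part of the original problem:

```python
# Numerical illustration of the hint above: for an isotropic, even density,
# the expectation E[xy] vanishes by odd symmetry in each coordinate.
import numpy as np

rng = np.random.default_rng(0)

# An isotropic 3D Gaussian: the density depends only on x^2 + y^2 + z^2.
pts = rng.standard_normal((200_000, 3))
x, y, z = pts.T

# E[xy] should vanish: the integrand x*y*f(x^2+y^2+z^2) is odd in x (and in y).
print(np.mean(x * y))                    # close to 0

# The coordinates are also identically distributed: compare second moments.
print(np.var(x), np.var(y), np.var(z))   # all close to 1
```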
• asked a question related to Statistical Mechanics
Question
One might argue: Animals increase their survivability by increasing the degrees of freedom available to them in interacting with their environment and other members of their species.
Right, wrong, or in between? Your views?
Are there articles discussing this?
• asked a question related to Statistical Mechanics
Question
Let's say that if the 12-6 equation were perfect, the most favored distance between two Ar atoms would be r(min).
My understanding is that vdW radii are measured from experiments, while the 12-6 potential is more of an approximation. But if I want to link the vdW radius to a distance in the 12-6 potential (sigma or r(min)), which one should it be?
The definition of the vdW radius is the closest distance of approach of two atoms, so should it be somewhere slightly smaller than sigma?
On the website, however, the sigma value is referred to as the vdW radius.
The van der Waals radius is itself an approximation of a sort, since atoms are not hard spheres in reality. Different experimental methods of estimation yield different values for the same species. One could say the LJ sigma parameter is in the same order of magnitude as the vdW radius, but sigma should be picked to produce the desired interatomic distance or some other property in the simulation, rather than precisely equal to the vdW radius.
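As a small supplement to the answer above, here is a sketch relating sigma and r(min) for the 12-6 potential; the argon sigma value used is an illustrative, commonly quoted figure rather than an authoritative one:

```python
# For the 12-6 potential V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6), the
# minimum sits at r_min = 2**(1/6) * sigma, about 12% larger than sigma
# (sigma itself is where V crosses zero).
import numpy as np

def lj(r, eps=1.0, sigma=1.0):
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

sigma = 3.4  # Angstrom; a commonly quoted LJ sigma for argon (illustrative value)
r = np.linspace(0.8 * sigma, 3.0 * sigma, 100_000)
r_min_numeric = r[np.argmin(lj(r, sigma=sigma))]

print(r_min_numeric, 2 ** (1 / 6) * sigma)  # the two agree
print(lj(sigma, sigma=sigma))               # V(sigma) = 0 exactly
```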
• asked a question related to Statistical Mechanics
Question
Hamiltonian mechanics is a theory developed as a reformulation of classical mechanics and predicts the same outcomes as non-Hamiltonian classical mechanics. It uses a different mathematical formalism, providing a more abstract understanding of the theory. Historically, it was an important reformulation of classical mechanics, which later contributed to the formulation of statistical mechanics and quantum mechanics.
• In classical mechanics, we start from the Hamiltonian formulation, which leads us to the concept of phase space, used in the microcanonical ensemble widely studied in statistical mechanics, where energy must be conserved in order to have some classical perspective on what happens with millions of microscopic states.
• In quantum mechanics, we have a similar approach when we use elastic scattering theory, which relies on an energy conservation principle and a phase space, and which allows a classical perspective on the quantum micro-world.
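The phase-space picture in the first bullet can be illustrated with a minimal sketch (my own toy example, not from the thread): leapfrog integration of Hamilton's equations for a harmonic oscillator keeps the trajectory on a constant-energy curve in (q, p) phase space, which is the setting the microcanonical ensemble assumes.

```python
# Hamilton's equations dq/dt = p/m, dp/dt = -dV/dq for H = p^2/(2m) + k*q^2/2,
# integrated with a leapfrog (velocity-Verlet) scheme; the energy stays
# essentially constant over many steps.
m, k, dt = 1.0, 1.0, 0.01
q, p = 1.0, 0.0

def energy(q, p):
    return p * p / (2 * m) + 0.5 * k * q * q

E0 = energy(q, p)
for _ in range(100_000):
    p -= 0.5 * dt * k * q      # half kick
    q += dt * p / m            # drift
    p -= 0.5 * dt * k * q      # half kick

print(E0, energy(q, p))  # energy drift stays tiny
```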
Interesting question, Best Regards.
• asked a question related to Statistical Mechanics
Question
Statistical mechanics that takes interactions into account is attached to the second law of thermodynamics. By considering the influence of temperature on the interaction potential, statistical mechanics can prove that the second law of thermodynamics is wrong.
If you apply quantum mechanics consistently, without semi-classical approximations, you get for the partition function
Z = \sum_n \exp(-\beta E_n),
with n running over all N-particle energy eigenstates of the system. The E_n are temperature independent, and no conflict arises with the second law of thermodynamics. I assumed here that the system is confined by an external potential.
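The formula above can be checked numerically; here is a hedged sketch using harmonic-oscillator eigenvalues as an example spectrum (my choice of spectrum, not specified in the answer):

```python
# With temperature-independent eigenvalues E_n, the partition function is
# Z = sum_n exp(-beta*E_n). For a particle confined by a harmonic potential,
# E_n = (n + 1/2)*hbar*omega, and the truncated sum matches the closed form
# exp(-x/2) / (1 - exp(-x)) with x = beta*hbar*omega.
import math

beta_hw = 0.5  # beta * hbar * omega (dimensionless)

Z_sum = sum(math.exp(-beta_hw * (n + 0.5)) for n in range(200))
Z_closed = math.exp(-beta_hw / 2) / (1 - math.exp(-beta_hw))

print(Z_sum, Z_closed)  # agree to high precision
```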
• asked a question related to Statistical Mechanics
Question
A system of ideal gas can always be made to obey classical statistical mechanics by varying the temperature and the density. Now, a wave packet for each particle is known to expand with time. Therefore, after sufficient time has elapsed, the gas would become an assembly of interacting wavelets and its properties would change, since it would now require a quantum mechanical rather than classical description. The fact that a transition in properties takes place without outside interference may point to some flaw in quantum mechanics. Any comments on how to explain this?
Wigner probabilistic distributions, Prof. Sohail Khan
Best Regards.
• asked a question related to Statistical Mechanics
Question
I am asking this question on the supposition that a classical body may be broken down into particles so small that quantum mechanics applies to each of them. Here the number of particles tends to infinity (keeping number/volume constant).
Now, statistical mechanics is applicable when a practically infinite number of particles is present. So if a practically infinite number of infinitesimally small particles is present, quantum statistical mechanics may be applied to this collection. (Please correct me if I have a wrong notion.)
But this collection of infinitesimally small particles makes up the bulky body, which can be studied using classical mechanics.
There is no difference, Prof. Manish Khare: we have two windows through which to watch the physical world, the classical and the quantum approaches, and there is a bridge between them: the Wigner probability distributions.
Best Regards.
• asked a question related to Statistical Mechanics
Question
It is suggested that the Zero Point Energy (which causes measurable effects like the Casimir force and the van der Waals force) cannot be a source of energy for energy-harvesting devices, because the ZPE entropy cannot be raised, being already maximal in general, and one cannot violate the second law of thermodynamics. However, I am not aware of a good theoretical or empirical proof that ZPE entropy is always and everywhere at its highest value. So I assume that ZPE can be used as a source of energy to power all our technology. Am I wrong or right?
It isn't the "zero point energy" that is the origin of either the Casimir force or the van der Waals force. First of all, these two forces don't have anything to do with each other: the van der Waals force is the classical force between electric dipoles, while the Casimir force expresses the fluctuations of the energy about its average value in the state where that average value is equal to zero.
That's why zero-point energy is a misnomer.
The entropy of any physical system, in flat spacetime, in the vacuum state vanishes, since the vacuum state of a quantum system in flat spacetime is unique.
• asked a question related to Statistical Mechanics
Question
If MD simulations converge to the Boltzmann distribution ρ ∼ exp(−βϵ) after a sufficiently long time, why do we need MD simulations at all, since all the macroscopic quantities can be computed from the Boltzmann distribution itself? I am asking this for short peptides of a few amino acids (tripeptides, tetrapeptides, etc.).
For instance, in the paper linked above, MD is used to generate Ramachandran distributions of conformations of a pentapeptide at constant temperature. This should obey statistical mechanics and hence satisfy the Boltzmann distribution, so I should be able to write down the distribution using the Boltzmann weight as follows:
ρ({ϕi,ψi}) ∼ exp(−βV({ϕi,ψi}))
Here, the full set of Ramachandran angle coordinates of the pentapeptide is denoted {ϕi,ψi}.
Why should I run MD to get the same distributions?
As Behnam Farid pointed out, you cannot know all the relationships (the functional form V({ϕi,ψi})) between the different amino acids (steric clashes, interactions based on charge or hydrophobicity) needed to predict the energetically favorable combinations of phi and psi, and hence you need to sample them. The state distribution is affected by the amino acid sequence, may differ between force fields and simulation methods, and depends on the solvent (ions etc.) and temperature.
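The sampling point can be made concrete with a toy example (the potential below is hypothetical, not a real peptide force field): when V is known in closed form, the Boltzmann weights can be written down directly and sampling merely reproduces them; for real peptides V is not available analytically, which is exactly why MD or Monte Carlo sampling is needed.

```python
# Metropolis sampling of a single toy torsion angle with a known 3-fold
# potential V(phi) = 1 + cos(3*phi); the sampled population ratio between a
# minimum and a maximum of V matches the direct Boltzmann-weight ratio.
import math, random

random.seed(1)
beta = 1.0
V = lambda phi: 1.0 + math.cos(3.0 * phi)   # hypothetical torsional potential

phi, samples = 0.0, []
for step in range(200_000):
    # symmetric proposal, wrapped into [-pi, pi)
    trial = (phi + random.uniform(-0.5, 0.5) + math.pi) % (2 * math.pi) - math.pi
    if random.random() < math.exp(-beta * (V(trial) - V(phi))):
        phi = trial
    samples.append(phi)

# Compare sampled bin populations with the direct Boltzmann weights
in_min = sum(abs(s - math.pi / 3) < 0.1 for s in samples)   # near a minimum of V
in_max = sum(abs(s) < 0.1 for s in samples)                 # near a maximum of V
print(in_min / in_max, math.exp(-beta * (V(math.pi / 3) - V(0.0))))
```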
Have a look here to see how complex the conformational space of small peptides (13-15) can already be:
Bests
• asked a question related to Statistical Mechanics
Question
How does one calculate the mean square angular displacement? Do we need periodic boundary conditions if at each step the angle is updated as theta(t+dt) = theta(t) + eta(t), where eta(t) is a Gaussian noise? Please describe the procedure for calculating this quantity.
Souvik Sadhukhan Yes: the way to do that is by relating the angular mean square displacement, expressed through <sin²θ>, more precisely <sinθ(t)sinθ(t')>, to the 2-point function of the noise, <η(t)η(t')>. This is the relation that defines the diffusion coefficient (strictly speaking, in the approximation where sinθ(t) can be assumed to be drawn from a Gaussian distribution; otherwise it's more complicated), and it is obtained by computing the probability distribution of sinθ(t) from the known probability distribution of η(t) and the relation dθ(t)/dt = η(t).
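A minimal numerical sketch of the procedure asked about, assuming (my choice) a per-step noise variance of 2*D*dt, which fixes the diffusion coefficient D:

```python
# Update the UNWRAPPED angle with Gaussian noise and average the squared
# displacement over many trajectories. No periodic boundary is needed if theta
# is left unwrapped; wrap only when you need the orientation itself, not the
# displacement. The MSAD then grows as <(theta(t)-theta(0))^2> = 2*D*t.
import numpy as np

rng = np.random.default_rng(0)
D, dt, nsteps, ntraj = 1.0, 0.01, 1000, 4000

eta = rng.normal(0.0, np.sqrt(2 * D * dt), size=(ntraj, nsteps))
theta = np.cumsum(eta, axis=1)               # unwrapped angles, theta(0) = 0

msad = np.mean(theta ** 2, axis=0)           # average over trajectories
t = dt * np.arange(1, nsteps + 1)

print(msad[-1], 2 * D * t[-1])   # MSAD grows as 2*D*t
```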
• asked a question related to Statistical Mechanics
Question
Is there any code available to calculate the spin-spin spatial correlation function in the 1D Ising model?
Hello. I wrote a code for this; I can share it with you.
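Since the code itself was not attached in the thread, here is an independent minimal sketch (my own, not the respondent's code) for the zero-field, free-boundary 1D Ising chain, where the correlation is known exactly:

```python
# For the zero-field 1D Ising chain with free boundaries, exact samples can be
# drawn spin by spin using the Markov property, and the spin-spin correlation
# obeys <s_i s_{i+r}> = tanh(beta*J)**r.
import numpy as np

rng = np.random.default_rng(0)
beta_J, N, nsamp = 0.8, 200, 20_000

# P(s_{i+1} = s_i) = e^{beta*J} / (e^{beta*J} + e^{-beta*J})
p_same = np.exp(beta_J) / (2 * np.cosh(beta_J))

flips = rng.random((nsamp, N - 1)) > p_same          # True where the bond flips
s = np.ones((nsamp, N), dtype=int)
s[:, 0] = rng.choice([-1, 1], size=nsamp)
s[:, 1:] = s[:, [0]] * np.cumprod(np.where(flips, -1, 1), axis=1)

r = 3
corr = np.mean(s[:, :-r] * s[:, r:])
print(corr, np.tanh(beta_J) ** r)   # sampled vs exact
```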
• asked a question related to Statistical Mechanics
Question
Recently, I was selected for an ICTP program named Physics of Complex Systems, but I have a keen interest in particle physics and quantum networks. Statistical mechanics is involved in complex systems, and one of my professors said that statistical mechanics could be a helpful tool for particle physics.
Dear Lutfa Rahman,
Greetings, Sorry, I mean " System of Particles".
Regards, Saeed
• asked a question related to Statistical Mechanics
Question
I mean, in Williamson-Hall analysis, should we take theta from a set of parallel planes, or from all the positions corresponding to the most intense peaks?
Suresh Guduru The Williamson-Hall plot is used to calculate crystallite size and microstrain from complex XRD data. When both the crystallite size and the microstrain vary as a function of the Bragg angle, these parameters can only be extracted from the XRD data using a W-H plot. I have provided the practice file (Origin file) as well as the calculation file (Excel file) in the video description. Thanks.
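The calculation described above can be sketched as follows, using made-up peak data (the Bragg angles, size, and strain values below are hypothetical, chosen only to illustrate the fit):

```python
# Williamson-Hall: beta*cos(theta) = K*lambda/D + 4*eps*sin(theta).
# Plot beta*cos(theta) against 4*sin(theta) using ALL measured reflections;
# the slope gives the microstrain eps and the intercept K*lambda/D the size D.
import numpy as np

K, lam = 0.9, 1.5406e-10          # Scherrer constant; Cu K-alpha wavelength (m)
D_true, eps_true = 50e-9, 2e-3    # synthetic "true" size and strain

theta = np.radians([19.0, 22.1, 32.5, 38.4, 40.3])   # hypothetical Bragg angles
beta = K * lam / (D_true * np.cos(theta)) + 4 * eps_true * np.tan(theta)  # FWHM (rad)

slope, intercept = np.polyfit(4 * np.sin(theta), beta * np.cos(theta), 1)
print(K * lam / intercept, slope)   # recovered D (m) and microstrain
```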
• asked a question related to Statistical Mechanics
Question
Dear All;
If you are well-experienced in one of the fields Complex Networks, Human Genetics, or Statistical Mechanics and would like to collaborate with us in our project please contact me at:
Basim Mahmood
Project title: Statistical Mechanics of Human Genes Interactions
Regards
Basim Mahmood, interesting question. I am sure people and experts from your domain will look at this and have discussions with you. However, my core area is biogas; I would be happy to collaborate on anything related to biogas.
• asked a question related to Statistical Mechanics
Question
I would like to calculate the non-Gaussian parameter from the MSD. I think I am making a mistake in calculating it.
NGP: α(t) = 3⟨r(t)⁴⟩ / (5⟨r(t)²⟩²) − 1
In some articles it is written with δr(t), which confuses me a bit. Could someone help me calculate it by explaining the terms?
Thank you
Thank you
You cannot calculate the non-Gaussian parameter (NGP) from the averaged mean squared displacement alone. You can calculate $\delta r^2(t) = (\vec r(t) - \vec r(0))^2$ and $\delta r^4(t) = [\delta r^2(t)]^2$ at each time difference t for each particle. Averaged over time origins and over particles, these give the NGP as $\alpha_2(t) = 3\langle\delta r^4(t)\rangle / (5\langle\delta r^2(t)\rangle^2) - 1$. Hope this helps!
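A quick numerical check of the formula, using purely Gaussian displacements (for which the NGP should vanish) as a test case:

```python
# For Gaussian 3D displacements the non-Gaussian parameter vanishes:
# alpha_2(t) = 3*<dr^4> / (5*<dr^2>^2) - 1 = 0, which makes it a useful
# sanity check before applying the formula to real trajectory data.
import numpy as np

rng = np.random.default_rng(0)
disp = rng.standard_normal((500_000, 3))   # dr vectors at one time difference t

dr2 = np.sum(disp ** 2, axis=1)            # |r(t) - r(0)|^2 per particle
alpha2 = 3 * np.mean(dr2 ** 2) / (5 * np.mean(dr2) ** 2) - 1

print(alpha2)   # close to 0 for Gaussian displacements
```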
• asked a question related to Statistical Mechanics
Question
Hello Dear colleagues:
it seems to me this could be an interesting thread for discussion:
I would like to center the discussion around the concept of entropy, addressing the explanation-description-exemplification part of the concept.
That is: what do you think is a good, helpful explanation of the concept of entropy (at a technical level, of course)?
A way (or ways) of explaining it that settles the concept as clearly as possible. Maybe first in a more general scenario, and then (if required) in a more specific one...
Kind regards !
Dear F. Hernandes
The entropy (Greek ἐντροπία: transformation, conversion, reformation, change) establishes the direct link between the MICROscopic state (in other words, the orbital) of some (any) system and its MACROscopic state parameters (temperature, pressure, etc.).
This is the Concept (from capital letter).
Its main feature: it is the ONLY entity in the natural sciences that shows the development trend of any self-sustained natural process. It is a state function, not a transition function. That is why entropy is independent of the transition route; it depends only on the initial state A and the final state B of the system under consideration. Entropy has many senses.
In the mathematical statistics, the entropy is the measure of uncertainty of the probability distribution.
In statistical physics, it presents the probability (the so-called *statistical weight*) of the existence of some (given) microscopic state under the same macroscopic characteristics. This means that the system may carry different amounts of information while the macroscopic parameters stay the same.
In the information approach, it deals with the information capacity of the system. That is why the father of information theory, Claude Elwood Shannon, believed that the words *entropy* and *information* are synonyms. He defined entropy as the ratio of the lost information to the whole information volume.
In the quantum physics, this is the number of orbitals for the same (macro)-state parameters.
In the management theory, the entropy is the measure of uncertainty of the system behavior.
In the theory of the dynamic systems, it is the measure of the chaotic deviation of the transition routes.
In the thermodynamics, the entropy presents the measure of the irreversible energy loss. In other words, it presents system’s efficiency (capacity for work). This provides the additivity properties for two independent systems.
Gnoseologically, the entropy is the inter-disciplinary measure of the energy (information) devaluation (not the price, but rather the very devaluation).
This way, the entropy is many-sided Concept. This provides unusual features of entropy.
What is the entropy dimension? The right answer depends on the approach. It is a dimensionless figure in the information approach (Shannon defined it as the ratio of two uniform values; therefore it is dimensionless by definition). In the thermodynamic approach, on the contrary, it has a dimension (energy over temperature, J/K).
Is entropy a parameter (a fixed number), or is it a function? Once again, the proper answer depends on the approach (point of view). It is a number in mathematical statistics (the logarithm of the number of admissible (unprohibited) system states, the well-known sigma σ). At the same time, it is a function in quantum statistics. Etc., etc.
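The "measure of uncertainty" sense used in mathematical statistics and in information theory can be illustrated with a few lines of code:

```python
# Shannon entropy H = -sum p_i * log2(p_i): zero for a certain outcome,
# maximal (log2 of the number of outcomes) for a uniform distribution.
import math

def shannon_entropy(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

print(shannon_entropy([1.0, 0.0, 0.0, 0.0]))       # no uncertainty
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))   # maximal: log2(4) = 2.0
```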
So, be very cautious when you are operating with entropy.
Best wishes,
Emeritus Professor V. Dimitrov vasili@tauex.tau.ac.il
• asked a question related to Statistical Mechanics
Question
This is to understand how the concepts of statistical mechanics are applied in astrophysics.
It depends on which subject of statistical mechanics.
For neutron stars, you can follow the chapter on neutron stars in the book:
Landau, L. D., & Lifshitz, E. M. 1980, Statistical Physics (Elsevier Ltd.), the chapter entitled "Properties of Matter at Very High Density," chap. XI.
For gas dynamics and fluctuations in interstellar media you can follow:
1. Spitzer, L. 1962, Physics of Fully Ionized Gases (New York: Wiley)
2. Braginskii, S. I. 1965, RvPP, 1, 205
3. Parker, E. N. 1953, ApJ, 117, 431
Best Regards.
• asked a question related to Statistical Mechanics
Question
I am looking for some quality materials for learning the molecular dynamics theory and the use of LAMMPS. Besides the LAMMPS manual from Sandia National Laboratory, which sources can I use for learning LAMMPS?
Hello!
1) Understanding Molecular Simulation: From Algorithms to Applications, Daan Frenkel Berend Smit
• asked a question related to Statistical Mechanics
Question
This question relates to my recently posted question: What are the best proofs (derivations) of Stefan’s Law?
Stefan’s Law states that E is proportional to T^4.
The standard derivation includes use of the concepts of entropy and temperature, and use of calculus.
Suppose we consider counting numbers and, in geometry, triangles, as level 1 concepts, simple and in a sense fundamental. Entropy and temperature are concepts built up from simpler ideas which historically took time to develop. Clausius’s derivation of entropy is itself complex.
The derivation of entropy in Clausius’s text, The Mechanical Theory of Heat (1867) is in the Fourth Memoir which begins at page 111 and concludes at page 135.
Why does the power relationship E proportional to T^4 need to use the concept of entropy, let alone other level 3 concepts, which takes Clausius 24 pages to develop in his aforementioned text book?
Does this reasoning validly suggest that the standard derivation of Stefan’s Law, as in Planck’s text The Theory of Heat Radiation (Masius translation) is not a minimally complex derivation?
In principle, is the standard derivation too complicated?
Good morning.
It is really simple to deduce it if we start from the energy density in a cavity (the Planck distribution). I should specify that the Planck distribution can itself be deduced simply from Bose-Einstein statistics, knowing the value of Planck's constant.
I'm sending you this deduction, noting that the work is written in Italian. I think you can follow the deduction through the sequence of formulas.
Of course, there is also Boltzmann's deduction of it, published a year after Stefan's experimental work.
Have a good day and stay safe.
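The key step of the deduction sketched above can be checked numerically: after the substitution x = hν/kT, the Planck energy-density integral factors into T⁴ times a dimensionless integral, which equals π⁴/15; this is where the T⁴ of Stefan's law comes from.

```python
# Integral of x^3 / (e^x - 1) from 0 to infinity, via the series identity
# 1/(e^x - 1) = sum_{n>=1} e^{-n*x}, so the integral equals sum_{n>=1} 6/n^4
# = pi^4 / 15 (each term is the integral of x^3 * e^{-n*x}, which is 6/n^4).
import math

series = sum(6.0 / n ** 4 for n in range(1, 100_000))
print(series, math.pi ** 4 / 15)   # both ~ 6.4939
```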
• asked a question related to Statistical Mechanics
Question
I'm writing my dissertation on the economic dynamics of inequality, and I'm going to use econophysics as an empirical method.
Dear Mehmet,
Some formal similarities between equilibrium statistical mechanics and economics may exist, but we should be very suspicious of any direct comparisons. Of course, in some instances the mathematical solutions used in statistical mechanics may be of some practical use in economics, but I would not read too much into this. My sense is that what the two share is incomplete information about the microscopic state of the system. See, e.g., this paper by my advisor:
There is a fair bit of literature on using entropy to model systems in physics and economics.
• asked a question related to Statistical Mechanics
Question
Or is the concept inapplicable?
If it were applicable, could statistical mechanical methods apply? Does entropy?
Amino acids in proteins are, of course, not free to move independently like a molecule in solution. First, they are connected to 2 other amino acids (or one other if they are the first or last in the chain, or up to 3 others for a cysteine in a disulfide bond). Second, they are subject to a variety of forces exerted by their surroundings, such as charge-charge interactions, hydrogen bonding, van der Waals interactions, and pi stacking (in the case of aromatic amino acids). Computational methods, particularly molecular dynamics, can be used to model the movement of amino acids in proteins over short time scales.
• asked a question related to Statistical Mechanics
Question
Some excerpts from the article
Comparing methods for comparing networks Scientific Reports volume 9, Article number: 17557 (2019)
By Mattia Tantardini, Francesca Ieva, Lucia Tajoli & Carlo Piccard
are:
To effectively compare networks, we need to move to inexact graph matching, i.e., define a real-valued distance which, as a minimal requirement, has the property of converging to zero as the networks approach isomorphism.
we expect that whatever distance we use, it should tend to zero when the perturbations tend to zero
the diameter distance, which remains zero on a broad range of perturbations for most network models, thus proving inadequate as a network distance
Virtually all methods demonstrated a fairly good behaviour under perturbation tests (the diameter distance being the only exception), in the sense that all distances tend to zero as the similarity of the networks increases.
If achieving thermodynamic efficiency is the benchmark criterion for all kinds of networks, then their topologies should converge to the same model. If they all converge to the same model when optimally efficient, does that cast doubt on topology as a way to evaluate and differentiate networks?
Probably yes.
• asked a question related to Statistical Mechanics
Question
See for example, Statistical mechanics of networks, Physical Review E 70, 066117 (2004).
The architecture and topology of networks seem analogous to graphs.
Perhaps though the significant aspect of networks is not their architecture but their thermodynamics, how energy is distributed via networks.
Perhaps network linkages are only a means for optimizing energy distributions. If so, then network entropy (C log(n)) might be more fundamental (and much simpler to use) than the means by which network entropy is maximized. If that were so, then the network analogy to graphs might lead to a sub-optimal conceptual reference frame.
Dimensional capacity is arguably a better conceptual reference frame.
In my personal view, it is one of the best as far as conceptualization is concerned.
• asked a question related to Statistical Mechanics
Question
This question is prompted by the books review in the September 2020 Physics Today of The Evolution of Knowledge: Rethinking Science for the Anthropocene, by Jürgen Renn.
I suspect that there is such an equation. It is related to thermodynamics and statistical mechanics, and might be characterized, partly, as network entropy.
Two articles that relate to the question are:
and also there is a book, the ideas in which preceded the two articles, above:
The Intelligence of Language.
The question is related somewhat distantly to an idea of Isaac Asimov in his science fiction, The Foundation Trilogy, psychohistory.
Antonio Fernandez Guerrero
Thank you for kindly mentioning the articles by K. Friston on free energy. I don't recall previously running across those articles or his name. It is marvelous to learn something new, and it is a credit to ResearchGate that it affords so many opportunities for learning, even in these pandemic times. Thank you for taking the time to make your knowledge available to other people.
I read the 2010 article: The free-energy principle: a unified brain theory?
The 2010 Friston article seeks to model acquisition of knowledge by a human brain. Some of its assumptions follow (subject to my having missed understanding more than what I understood when I read the article). The brain seeks to apply inference to sensory perceptions in a way that is maximally efficient, or equivalently, uses the minimal amount of energy necessary to accomplish that purpose. The concepts of free energy and entropy from thermodynamics and statistical mechanics are adapted to apply to a model of how neurons seek to gain information. Part of the brain’s inference processes uses previously acquired data or information.
The 2009 article on A Theory of Intelligence that I mention in the question supposes that most of what an average person knows is learned from problems already collectively solved by society, forming society’s store of knowledge accumulated over (probably) hundreds of generations. The average person learns speech, how to write, counting, facts and methods of problem solving from society’s accumulated knowledge. Knowledge can be considered to consist of solutions to what were once problems that society, or some subset of society, obtained.
The 2010 Friston article inquires about the individual brain. The 2009 article the question refers to focuses on a collection of networked brains; there being a network, statistical mechanics can apply.
The 2009 article asks how much greater the problem-solving capacity of society is compared to that of an average individual. For example, 350 million modern English speakers at about 1990 had about (an estimated) 72 times more problem-solving degrees of freedom. Since 72 degrees of freedom can be expressed as an exponent, the difference between the problem-solving capacity of society and that of the individual is in effect 72 orders of magnitude (based roughly on the mean path length of a network as the base of a logarithmic function). The number 72 is obtained by developing the concept of network entropy.
The 72 orders of magnitude difference in favor of collective problem solving capacity compared to meager average individual capacity implies that the primary inference engine at work is possessed by society and moreover, possessed by the cumulative problem solving capacities of all human societies that ever existed. In fact, one may suspect that some of our collective knowledge might pre-exist speech and come from forbears pre-existing homo sapiens. While inference capacity that an individual brain has may guide that individual’s behavior, most knowledge is learned, and an individual brain as an inference engine is mostly involved in figuring out how to learn from knowledge that already exists.
The motivation to seek out a function like network entropy is based on a book called The Intelligence of Language that I mostly wrote in 2006 and published as an e-book on Kindle in 2016.
Regards.
• asked a question related to Statistical Mechanics
Question
There are two ways to derive Boltzmann exponential probability distribution of ensemble:
1) Microcanonical Ensemble: We assume a system S(E,V,N)
E= internal Energy, V=volume, N=number of molecules or entities.
We have different energy states that the molecules can occupy, but the total energy E of the system is fixed. So whatever the distribution of molecules among the energy levels, the energy of the overall system is fixed. Then we find the maximum of the entropy of the system to obtain the equilibrium probability distribution of molecules among the energy levels. We introduce two Lagrange multipliers for two constraints: the total probability is unity, and the total energy is the constant E. What we get is an exponential distribution.
2) Canonical Ensemble: We have a system with N molecules. The Helmholtz energy is defined as F = F(T,V,N). So this time the energy is not fixed but the temperature is. Instead of different energy states for the molecules, we now have different energy states of the entire system. By minimizing F we get the equilibrium probability distribution of the system over its energy levels. This time the only constraint is that the total probability is unity. The distribution we get is again exponential.
Now the question is:
How can the probability distribution of the canonical ensemble give the population distribution of molecules over different energy states, which is rather found from the microcanonical ensemble?
In the book Molecular driving forces (Ken A Dill) Chapter 10. Equation 10.11 says something similar.
Dear Prof. Rituraj Borah, in addition to all interesting answers to this thread, I would like to add that the microcanonical ensemble yields entropy as a function of energy and volume S(U, V) (at fixed particle number N).
Now for the canonical distribution, when there is an exchange of energy with the heat bath, the system is described by the average thermodynamic relations F = <U> − TS and dF = −S dT − P dV, where F is the Helmholtz free energy and <U> the average internal energy.
The entropy follows from S = −(∂F/∂T)_V, so that <U> = F + TS.
It means that in the canonical ensemble these relations yield the entropy as a function of energy and volume, S(<U>, V), just as in the microcanonical ensemble when the probability distribution is used.
For instance, for the whole derivation see: pp. 41 & 42 of L. Landau and E. Lifshitz, Vol. 6, Statistical Physics, Pergamon 1980, Part I. Chapter II.
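The consistency claimed above can be verified numerically on a toy two-level system (my own example, not one worked in Landau & Lifshitz):

```python
# In the canonical ensemble F = -kT*ln(Z), and the Gibbs entropy of the
# Boltzmann probabilities reproduces S = (<U> - F)/T, so S is indeed recovered
# from <U> and the spectrum, as in the microcanonical construction.
import math

kT = 1.0
E = [0.0, 1.5]                                  # two energy levels (arbitrary units)

Z = sum(math.exp(-e / kT) for e in E)
p = [math.exp(-e / kT) / Z for e in E]          # canonical probabilities
U_avg = sum(pi * e for pi, e in zip(p, E))
F = -kT * math.log(Z)

S_gibbs = -sum(pi * math.log(pi) for pi in p)   # Gibbs entropy, k_B = 1
S_thermo = (U_avg - F) / kT

print(S_gibbs, S_thermo)   # the two entropies agree
```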
• asked a question related to Statistical Mechanics
Question
The third rotation allegedly leaves the molecule unchanged no matter how much it is rotated, but is it really okay to assume this? A response in mathematics is welcomed, but if you can explain it in words that would be good, too.
Dear Cory Camasta, three degrees of freedom come from translational motion and two from rotation, giving E = (5/2) k_B T (the vibrational mode is typically frozen out at ordinary temperatures). The remaining rotational degree of freedom, about the molecular axis, has a very small moment of inertia, as some participants noted previously.
See the following external post for the full explanation regarding the separation of levels mentioned by Prof. Gert Van der Zwan:
• asked a question related to Statistical Mechanics
Question
Suppose, chemical composition of the compound, temperature and pressure are known. Electronic structure of constituent elements from numerical solution of Quantum chemistry are also known. Then
• There can be only 230 3D crystallographic space groups. But is there any limit on the motif that can be included in the lattice without violating stoichiometry? How do ab-initio calculations find the appropriate motif to place into the lattice to generate the crystal structure? Without finding motifs, it is impossible to find the crystal structures whose Gibbs free energy needs to be minimized.
• Is there any mathematical method that finds the potential energy in an infinite 3D periodic lattice with distributed charges (say, a theoretical calculation of the Madelung constant)? What are the mathematical prerequisites for understanding such a formula?
• How can the electron cloud density and the local potential energy of a molecule/motif/lattice point be linked to the total Gibbs free energy of the molecule/lattice, integrated over the whole structure? What are the statistical-mechanical formulas that relate the two, and what are the prerequisites for understanding them?
Suppose reference point for zero gibbs free energy is conveniently provided.
"Is there any mathematical method that finds the potential energy of an infinite 3D periodic lattice with distributed charges (say, a theoretical calculation of the Madelung constant)? What are the mathematical prerequisites for understanding such a formula?"
First, free elastic energies F give you an idea of how the potential energy in crystals is used, because the potential term U(C_ij) has to include an expression invariant under the point-group symmetry considered, and group theory does that job for the different crystallographic classes.
Second, finding the Madelung constant is a different question, I guess, because it concerns the electrostatic potential energy; it can be done in an easier way, please check:
& using the Ewald method to find the electrostatic energy :
Finally, look how it can be done for cubic crystals:
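For the Madelung constant specifically, a direct-space sketch is worth trying before the Ewald machinery: Evjen's method sums the alternating charges over neutral cubes, weighting boundary sites fractionally so each partial cube stays charge-neutral. A minimal sketch for rock-salt (NaCl), assuming unit charges and unit nearest-neighbour spacing:

```python
def madelung_nacl(n):
    """Evjen's method for the NaCl Madelung constant: sum the alternating
    charges (-1)^(i+j+k)/r over a cube |i|,|j|,|k| <= n, weighting a site
    by 1/2 for each coordinate on the cube boundary (faces 1/2, edges 1/4,
    corners 1/8) so that every partial cube is charge-neutral."""
    total = 0.0
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            for k in range(-n, n + 1):
                if i == j == k == 0:
                    continue
                sign = -1.0 if (i + j + k) % 2 else 1.0
                w = 1.0
                for c in (i, j, k):
                    if abs(c) == n:
                        w *= 0.5
                total += sign * w / (i * i + j * j + k * k) ** 0.5
    return -total  # sign convention: quote the constant as positive

print(madelung_nacl(10))  # converges toward ~1.7476
```

The Ewald method referenced above converges far faster for general lattices; this direct sum is only a sanity check of the concept.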
• asked a question related to Statistical Mechanics
Question
Dear Colleagues :
Does anyone have literature referencing the diffusion of carbon atoms into bismuth telluride (Bi2Te3), or into some similar compound, e.g. PbTe, (Sb,Se)Bi2Te3, Sb2Te3, etc.?
I'll really appreciate it if someone can help me out.
Kind regards!
• asked a question related to Statistical Mechanics
Question
In the question, Why is entropy a concept difficult to understand? (November 2019) Franklin Uriel Parás Hernández commences his reply as follows: "The first thing we have to understand is that there are many Entropies in nature."
It leads to this related question. I suspect the answer is, yes, the common principle is degrees of freedom and dimensional capacity. Your views?
Among all entropy definitions, the most difficult (I still don't understand it) but probably the most important one is the Kolmogorov-Sinai entropy.
The reason: Prof. Sinai and Acad. Kolmogorov were major architects of most of the bridges connecting the world of deterministic (dynamical) systems with the world of probabilistic (stochastic) systems.
• asked a question related to Statistical Mechanics
Question
By quasi-particle I mean in the sense of particles dressed with their interactions/correlations? If yes, any references would be helpful.
Dear Prof. Sandipan Dutta, in addition to all the interesting answers in this compelling thread, I will add a link to a book where the concept of quasiparticles is masterfully explained by two of the creators of the quasiparticle approach:
Quasiparticles, by Prof. M. I. Kaganov and Academician I. M. Lifshitz.
• asked a question related to Statistical Mechanics
Question
Dear all:
I hope this question seems interesting to many. I believe I'm not the only one who is confused with many aspects of the so called physical property 'Entropy'.
This time I want to speak about Thermodynamic Entropy, hopefully a few of us can get more understanding trying to think a little more deeply in questions like these.
The thermodynamic entropy is defined by the Clausius relation: Delta(S) >= Delta(Q)/T. This property is only properly defined for (macroscopic) systems which are in thermodynamic equilibrium (i.e. thermal + chemical + mechanical equilibrium).
So my question is:
In terms of numerical values of S (or rather of Delta(S), since only changes in entropy can be computed, not an absolute entropy, except for a system at absolute zero, 0 K):
It is easy and straightforward to compute the entropy change of, let's say, a chair, a table, or your car, since all these objects can be considered macroscopic systems in thermodynamic equilibrium. So just use the classical definition of entropy (the formula above) and the second law of thermodynamics, and that's it.
But what about macroscopic objects (or systems) which are not in thermal equilibrium? We are often tempted to apply the classical thermodynamic definition of entropy to macroscopic systems which seem, from a macroscopic point of view, to be in thermodynamic equilibrium, but which still have ongoing physical processes that keep them out of complete thermal equilibrium.
What I want to ask is: what are the limits of the classical thermodynamic definition of entropy when applied to systems that seem to be in thermodynamic equilibrium but really are not? Perhaps this question can also be extended to the so-called regime of near-equilibrium thermodynamics.
Kind Regards all !
1. At very low temperatures, entropy behaves according to Nernst's theorem
I copy the Wikipedia information, but you can also find the same material in Academicians L. Landau and E. Lifshitz, Vol. 5:
The third law of thermodynamics, or Nernst's theorem, states that the entropy of a system at zero absolute temperature is a well-defined constant, because the system then exists in its ground state. Systems with more than one state of the same lowest energy have a non-vanishing "zero-point entropy".
2. Let us try to put Delta Q = m C Delta T into the expression Delta(S) >= Delta(Q)/T. What do we obtain? Is something missing?
you see, physical chemistry and statistical physics look at entropy in a different subtle way.
3. Delta S = kB ln(W2/W1), where W is the total number of micro-states of the system; then, what are W1 and W2 in Delta S?
4. Finally, look at the following paper by Prof. Leo Kadanoff concerning the meaning of entropy in physical kinetics (out of equilibrium systems): https://jfi.uchicago.edu/~leop/SciencePapers/Entropy_is3.pdf
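As a worked instance of point 2 above: substituting delta Q = m C dT requires integrating dS = delta Q / T rather than dividing by a single temperature difference. A minimal sketch, assuming a constant specific heat:

```python
import math

def entropy_change_heating(m, c, T1, T2):
    """Reversible entropy change of a body of mass m (kg) with constant
    specific heat c (J/(kg K)) heated from T1 to T2 (in kelvin):
    dS = dQ/T with dQ = m*c*dT  =>  Delta S = m*c*ln(T2/T1)."""
    return m * c * math.log(T2 / T1)

# 1 kg of water (c ~ 4186 J/(kg K)) heated from 293 K to 353 K:
print(entropy_change_heating(1.0, 4186.0, 293.0, 353.0))  # ~780 J/K
```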
• asked a question related to Statistical Mechanics
Question
Is cancer (oncology) a field of biophysics?
Yes, there is a relationship. I cannot explain it very well, though.
Cancer cells respond to pressure on the membrane by using it as a signal to grow all the more. Normal cells interpret the signal as to stop growing. Other than cancer cells there are other cells which deal with a lot of stress. Heart cells experience a lot of pressure as blood rushes in and out of them.
The pathways that say affect cancer through the various types of pressure waves on the membrane are hard to describe because integral membrane proteins in the membrane send the signal through a kinase cascade resulting in signaling transcriptional co-activators being imported into nucleus which bind to transcription factors and result in the expression of genes that reprogram the cells.
Of course this is all rather vague. But what is interesting is that the level of nuclear expression of the transcriptional co-activator yes-associated protein (YAP) governs the level of cellular stretching. High pressure also activates YAP, resulting in much the same phenotype. The mechanism by which this happens is unknown. There is a sea of proteins involved in the Hippo pathway. It is also possible that the pressure waves are transmitted through the cell and flatten the nucleus. This has been studied only a little, because, as you can imagine, it is difficult to study the architecture of the nucleus inside a living cell with an intact external membrane.
• asked a question related to Statistical Mechanics
Question
Since the Gaussian is the maximal (Shannon's) entropy distribution in unbounded real spaces, I was wondering whether the tendency of cummulative statistical processes with the same mean having a Gaussian as the limiting distribution can be in some way physically related with the increase of (Boltzmann's) entropy in thermodynamical processes.
In Johnson, O. (2004) Information Theory and The Central Limit Theorem, Imperial College Press, we can read:
"It is possible to view the CLT as an analogue of the Second Law of Thermodynamics, in that convergence to the normal distribution will be seen as an entropy maximisation result"
Could anyone elaborate on such relationship and perhaps point to other non-obvious ones?
The central limit theorem is related to the Gaussian distribution, the main modelling equation of random-coil thermodynamics. This randomness is the main significance of entropy - another name for the second law of thermodynamics.
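The entropy-maximisation reading of the CLT is easy to see numerically: the differential entropy of a standardized sum of i.i.d. variables grows toward the Gaussian maximum ½ln(2πe) as terms are added. A rough sketch with a plug-in histogram entropy estimator (the estimator, seed, and sample sizes are arbitrary choices, not from Johnson's book):

```python
import numpy as np

rng = np.random.default_rng(0)

def diff_entropy(samples, bins=200):
    """Histogram (plug-in) estimate of differential entropy, in nats."""
    p, edges = np.histogram(samples, bins=bins, density=True)
    widths = np.diff(edges)
    nz = p > 0
    return float(-np.sum(p[nz] * np.log(p[nz]) * widths[nz]))

def entropy_of_standardized_sum(n, size=200_000):
    """Entropy of the standardized sum of n Uniform(0,1) variables."""
    s = rng.random((size, n)).sum(axis=1)
    s = (s - n / 2.0) / np.sqrt(n / 12.0)   # mean 0, variance 1
    return diff_entropy(s)

h1, h2, h8 = (entropy_of_standardized_sum(n) for n in (1, 2, 8))
gauss_max = 0.5 * np.log(2 * np.pi * np.e)  # ~1.4189, the maximum at unit variance
print(h1, h2, h8, gauss_max)
```

The entropies increase monotonically toward (and never beyond) the Gaussian value, which is exactly the "second-law" flavour of the theorem.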
• asked a question related to Statistical Mechanics
Question
I am trying to simulate a heterogeneous liquid mixture with single-site atoms (translational motion only) and multi-site rigid molecules (translational + rotational motion). The molecules also vary in mass and moment of inertia from species to species. Does anyone know how to calculate the initial magnitudes of the translational and rotational velocities in relation to the desired temperature?
I understand the widely-used MDS programs take care of this "under the hood", but I am interested to know what exactly the calculation is. I have found related texts, but they focus on uniform systems. Thank you in advance for any help.
Anne
I would make sure that you have ½kT of kinetic energy per degree of freedom on each molecule. In a rigid-body simulation you have 3 translational DoFs and 3 rotational DoFs; find the required velocity and angular velocity for each molecule and generate them randomly. If you want your centre of mass not to be moving, you have to remove the centre-of-mass velocity of the system from each molecule and then rescale the velocities to recover your required total kinetic energy. Also, rescale all the velocities to set your translational kinetic energy by (N - 3)/N, to reflect that you have 3 DoFs fewer due to the resting centre of mass.
I would not bother to sample the velocities from a distribution, though, they should get there on their own after a short simulation.
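The recipe above can be sketched as follows; this is not taken from any particular MD package, and the masses, inertias, seed, and units are made-up placeholders. Each degree of freedom is drawn from a Gaussian with variance kB*T/m (or kB*T/I_k), the centre-of-mass drift is removed, and the translational kinetic energy is rescaled to (3N-3)/2 kB T:

```python
import numpy as np

rng = np.random.default_rng(42)
kB = 1.380649e-23  # J/K

def initial_velocities(m, I, T):
    """Draw linear and angular velocities for N rigid molecules at
    temperature T. m: (N,) masses in kg; I: (N,3) principal moments of
    inertia in kg m^2 (body frame). Each DoF is Gaussian with variance
    kB*T/m (or kB*T/I_k); then the centre-of-mass drift is removed and
    the translational KE rescaled to (3N-3)/2 kB T, as suggested above."""
    N = len(m)
    v = rng.normal(0.0, 1.0, (N, 3)) * np.sqrt(kB * T / m)[:, None]
    w = rng.normal(0.0, 1.0, (N, 3)) * np.sqrt(kB * T / I)
    v -= (m[:, None] * v).sum(axis=0) / m.sum()      # rest frame
    ke = 0.5 * (m[:, None] * v**2).sum()
    v *= np.sqrt(0.5 * (3 * N - 3) * kB * T / ke)    # exact target KE
    return v, w

N, T = 1000, 300.0
m = rng.uniform(3e-26, 3e-25, N)       # hypothetical molecular masses
I = rng.uniform(1e-46, 1e-45, (N, 3))  # hypothetical inertia tensors
v, w = initial_velocities(m, I, T)
print(0.5 * (m[:, None] * v**2).sum() / (0.5 * (3 * N - 3) * kB))  # ~300 K
```

As noted above, the exact initial distribution matters little: a short equilibration run will thermalize the velocities anyway.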
• asked a question related to Statistical Mechanics
Question
Dear all,
I have a number of Likert items with statements which have the following answer options
• Less likely
• No effect on likelihood
• More likely
I am unable to find the answer to the following question, and therefore cannot seem to determine the overarching data analysis family, let alone the correct techniques, to analyse my data set.
Am I able to analyse my data with any quantitative methods, either descriptive or inferential, or do I need to use purely qualitative methods in analysing the data?
I understand that the data needs to be contextualised before the statistical mechanism is determined, for example in the parametric/non-parametric debate. However, I am struggling to determine whether the data type allows for quantitative analysis when I have 25 statements, each with 700 or so answers that indicate one of the above item answer options.
Also, I have a data set that pertains to one sample at one point in time. Can any correlational statistics or inferential statistics be done? Or do those methods only apply when you have either independent or paired samples? Or am I meant to compare one statement with another (Or a compilation of statements representing a theme with another compilation representing another theme) when running correlational tests?
I am looking for that thread of information that ties all my misalignments together.
Kind regards,
Jameel
• asked a question related to Statistical Mechanics
Question
This question must be accompanied by provisos. One particular proviso simplifies the task. Assume that the problem solving used throughout the development of language was of the same kind that has occurred at all times since. In other words, assume that it is valid to use averages over time, at least for the time period under consideration. In 2009, in a couple of articles, I used ideas relating to statistical mechanics to estimate, on certain assumptions (a language-like 'call lexicon' of about 100 calls), that language began between about 141,000 and 154,000 years ago, and
). at p. 74. The work in those articles is over 10 years old and there have been developments since. One involves dispersion of phonemic diversity (Atkinson 2011). Are there other approaches?
I read the 2011 paper by Atkinson on phonemic diversity in May 2019. It had not occurred to me in 2008 or since that there might be a way to find a rate of phonemic change. So the Atkinson paper is very interesting. Unfortunately, I could not find a way to align Atkinson's ideas with those in the 2008 lexical growth paper and in the 2009 intelligence paper, mentioned in the question above. But I did find a 2015 paper, Detecting Regular Sound Changes in Linguistics as Events of Concerted Evolution (Hruschka et al) and was pretty amazed at some of their data. I have thus added a paper on phonemic and lexical change which might be of interest.
• asked a question related to Statistical Mechanics
Question
Hi, for my statistical mechanics class I had to calculate all the thermochemistry data myself and check whether I get the same results as Gaussian. Everything is good except for the zero-point energy contribution. For instance, the Gaussian thermochemistry PDF says that the ''Sum of electronic and thermal Free Energies'' (which you can find easily in the output or in GaussView) is supposed to be the Gibbs free energy. But it's not! In fact the zero-point energy is missing, and nowhere in the output or in GaussView can you see the correct value. This is true also for the enthalpy and the internal energy. We have to add the zero-point energy afterwards, which I find strange since these are the values we really need. In their PDF Gaussian emphasizes that the ZPE is added everywhere by default, but that does not seem to be the case. My teacher is also suspicious about the software. Are we missing something here? What do you take as your G, H and U, and where do you find them? I think we could all be wrong if we forget to add the zero-point energy, which is what I think Gaussian does.
Thanks
Hello Mathieu
The Gaussian output gives you: 1. the electronic energy, 2. the zero-point correction, 3. the enthalpy correction, and 4. the entropy, which you can use to calculate the free energy.
The enthalpy is: internal energy + zero-point correction + enthalpy correction.
The free energy is: the enthalpy (calculated above) - Temp * entropy.
What you have to be careful about is making sure the units are correct in all cases.
Pansy
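The arithmetic of the recipe above can be sketched as follows. All numbers here are made-up placeholders, not real Gaussian output, and the entropy is assumed to already be in hartree/K:

```python
def enthalpy_and_free_energy(e_elec, zpe, h_corr_thermal, s, T=298.15):
    """Combine Gaussian-style components per the recipe above.
    All energies in hartree; entropy s in hartree/K.
    H = electronic energy + ZPE + thermal enthalpy correction;
    G = H - T*S."""
    H = e_elec + zpe + h_corr_thermal
    G = H - T * s
    return H, G

# hypothetical placeholder values, roughly water-sized in magnitude:
H, G = enthalpy_and_free_energy(-76.40, 0.0213, 0.0038, 7.15e-5)
print(round(H, 4), round(G, 4))
```

Whether the printed "Sum of ... Free Energies" already contains the ZPE is exactly the point under dispute in this thread, so checking the sum by hand like this is the safest course.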
• asked a question related to Statistical Mechanics
Question
I came across this question while studying Tuckerman book on Statistical Mechanics for Molecular Dynamics.
Thank you Behnam Farid, I was able to get it by following the steps you highlighted.
• asked a question related to Statistical Mechanics
Question
Let's just say we're looking at the classical continuous canonical ensemble of a harmonic oscillator, where:
H = p^2 / 2m + 1/2 * m * omega^2 * x^2
and the partition function (omitting the integrals over phase space here) is defined as
Z = Exp[-H / (kb * T)]
and the average energy can be calculated as proportional to the derivative of ln[Z].
The equipartition theorem says that each independent quadratic coordinate must contribute R/2 (per mole) to the system's energy, so for a 3D oscillator we should get 3R. My question is: does equipartition break down if the frequency is temperature dependent?
Let's say omega = omega[T]. Then, when you take the derivative of Z to calculate the average energy, if omega'[T] is not zero it will either add to or subtract from the average energy and will therefore disagree with equipartition. Is this correct?
Drew> Z = Exp[-H / (kb * T)], and the average energy can be calculated as proportional to the derivative of ln[Z].
The exact formula, easy to prove, is ⟨H⟩ = -∂ln(Z)/∂β, where β = 1/(kBT). However, as you probably already have noted, that is mathematically correct only when H is independent of β (i.e. of temperature T).
One may easily imagine situations where the parameters of the Hamiltonian actually depend on temperature, because one is dealing with a phenomenological "effective" description^*, not taking into account the physics which leads to this temperature dependence. However, if such a dependence is large enough to make any difference, the standard thermodynamic interpretation^** of ln Z breaks down, and thereby all sacred relations of thermodynamics. Which is the absolutely last thing we should consider violating in physics.
If you want to escape the usual equipartition principle, this is easily violated by non-quadratic terms in a classical Hamiltonian, or introduction of quantum mechanics (without which even Hell would freeze over, due to its infinite heat capacity).
^*) Which in practise is always the case, since we don't even know what is going on at extremely small scales, and (mostly) don't have to worry about sub-atomic scales.
^**) ln Z = -β F = -β(U-TS), where F is the Helmholtz free energy.
PS. The very first answer to this question should be viewed as an attempt to repeat the notorious Sokal hoax, https://en.wikipedia.org/wiki/Sokal_affair (often perpetrated on RG).
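For the oscillator in the question, the exact relation ⟨H⟩ = -∂ln(Z)/∂β and the equipartition value kB*T (for temperature-independent ω) can be verified by direct quadrature; a minimal sketch in reduced units:

```python
import numpy as np

def avg_energy(T, m=1.0, omega=1.0, kB=1.0):
    """<H> of the 1D classical harmonic oscillator, H = p²/2m + ½mω²x²,
    by direct quadrature of each quadratic term. Equipartition predicts
    ½kT per term, i.e. <H> = kB*T, whenever omega does not depend on T."""
    beta = 1.0 / (kB * T)

    def mean_quad(a):  # <a u²> under the Boltzmann weight exp(-beta*a*u²)
        u = np.linspace(-60.0, 60.0, 200_001)
        w = np.exp(-beta * a * u * u)
        return float((a * u * u * w).sum() / w.sum())

    return mean_quad(1.0 / (2.0 * m)) + mean_quad(0.5 * m * omega**2)

print(avg_energy(2.0))  # ~2.0 = kB*T, independent of m and omega
```

If omega were made a function of T inside the Boltzmann weight, the -∂ln(Z)/∂β shortcut would indeed pick up an extra ω'(T) term, which is the inconsistency discussed above.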
• asked a question related to Statistical Mechanics
Question
According to statistical mechanics, the translational energy of a system of point particles is given by 3/2 N k T. And it is known that a single particle exhibits only translational energy. So can we obtain the single-particle energy simply by substituting N = 1? Because, as far as I remember, the first principle of statistical mechanics assumes that the number of particles in a system is extremely large, so we can't directly apply those principles to a single-particle system.
Yes, ½kT per degree of freedom. Science should decide whether k is a fundamental property of nature or just a convenient conversion factor. Tolman treated it as an invariant conversion factor in describing relativistic thermodynamics. Others have treated it as a fundamental property of nature. Committees vote first one way and then the other.
• asked a question related to Statistical Mechanics
Question
Even gases like air are assumed (for sufficiently low flow velocities) to have constant density. Is it only because the hydrodynamic equations of motions are easier to solve when incompressibility is assumed? Or can it be proven with Statistical mechanics why incompressibility is frequently assumed?
In the case of sound waves, small density deviations are taken into account, yet the kinetic energy of many sound waves is much lower than that of the air flow around a car at 100 mph. Does compressibility come into play at low speeds as well, and if so, why?
That is a historical assumption in the fluid dynamics (and even more in aerodynamics) field. The assumption is not about a physical property of the fluid but about the characteristic velocity involved, compared to the sound velocity. Below a certain value of the ratio v/a (the Mach number), the flow (not the fluid) problem is assumed to be governed by the simplified NSE. Historically, other assumptions were associated with it, such as steady flow and inviscid flow, to get a simplified set of equations (see potential flows).
Actually, if the flow is assumed to be unsteady and viscous, the set of equations is in some ways more mathematically complicated than the corresponding set for fully compressible flows. Modern methods try to solve the compressible low-Mach equations.
Of course, incompressible flow is a mathematical model, so you cannot describe some physical phenomena, such as pressure waves travelling at finite velocity.
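The low-Mach reasoning above can be quantified: for near-isentropic flow the fractional density change scales as Δρ/ρ ≈ M²/2, which is why air at car speeds is treated as incompressible. A back-of-envelope sketch (sea-level sound speed assumed):

```python
def density_variation(U, a=343.0):
    """Fractional density change ~ M²/2 for low-Mach, near-isentropic flow.
    U: flow speed in m/s; a: sound speed in m/s (343 m/s assumed here)."""
    M = U / a
    return 0.5 * M * M

print(density_variation(44.7))  # 100 mph ~ 44.7 m/s -> under 1% density change
```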
• asked a question related to Statistical Mechanics
Question
In the introduction to his text, A Student's Guide to Entropy, Don Lemons has a quote, "No one really knows what entropy is, so in a debate you will always have the advantage," and writes that entropy quantifies "the irreversibility of a thermodynamic process." Bimalendu Roy in his text Fundamentals of Classical and Statistical Mechanics (2002) writes, "The concept of entropy is, so to say, abstract and rather philosophical" (p. 29). In Feynman's lectures (ch. 44-6): "Actually, S is the letter usually used for entropy, and it is numerically equal to the heat (which we have called Q_S) delivered to a 1∘-reservoir (entropy is not itself a heat, it is heat divided by a temperature, hence it is measured in joules per degree)." In thermodynamics there is the Clausius definition, a ratio of a quantity of heat Q to a temperature in kelvin, Q/T, and the Boltzmann approach, k log(W). Shannon analogized information content to entropy; 2 as the base of the logarithm gives information content in bits. Eddington in The Nature of the Physical World (p. 80) wrote: "So far as physics is concerned time's arrow is a property of entropy alone." Thomas Gold, physicist and cosmologist, suggested that entropy manifests or relates to the expansion of the universe. There are reasons to suspect that entropy and the concept of degrees of freedom are closely related. How best can we understand entropy?
This is a very relevant question.
First, you need to specify in which "domain" you speak of entropy.
I included a specific explanation in my following article:
I quote, page 3, 4, 5 :
"There are various equational forms of entropy, as we will now see. The first is the entropy used below, the Boltzmann entropy [6], which is written:
(look paper)
This equation defines the microcanonical entropy of a physical system at macroscopic equilibrium which is left free to evolve on a microscopic scale between Omega different micro-states (Omega is also called the number of complexions, or the number of configurations of the system). The unit is the joule per kelvin (J/K).
Entropy is the key quantity of the second law of thermodynamics, which states that "any transformation of a thermodynamic system is performed with an increase of the overall entropy, including the entropy of the system and of the external environment; we then say that there is creation of entropy," and that "the entropy of an isolated system can only increase or remain constant."
There is also the Shannon formula [7]. The Shannon entropy, due to Claude Shannon, is a mathematical function that corresponds to the amount of information contained in, or delivered by, a source of information. The more redundant the source, the less information it contains. Entropy is maximal for a source whose symbols are all equally likely. The Shannon entropy can be seen as measuring the amount of uncertainty of a random event, or more precisely of its distribution. Generally, the log is taken in base 2 (binary). Its formula is:
(look paper)
However, one can also define an entropy in quantum theory [9], particularly used in quantum cryptography (with the properties of entanglement), called the von Neumann entropy:
(look paper)
with the density matrix and an orthonormal basis:
(look paper)
The von Neumann entropy is identical to that of Shannon, except that it uses a density matrix as its variable (see paper). As written by Serge Haroche, this equation can be used to calculate the degree of entanglement of two particles: if two particles are not entangled, the entropy is zero; conversely, if the entanglement between two particles is maximal, the entropy is maximal, given that we do not have access to the subsystem. In classical mechanics zero entropy means that the outcome is certain (only one possibility), while in quantum mechanics it means that the density matrix is a pure state. But in quantum physics measurements are generally unpredictable, because the probability distribution depends on the wave function and the observable.
This is also expressed by the Heisenberg uncertainty principle: if, for example, we have more information on (and thus less entropy for) the momentum of a particle, we have less information on its position (more entropy). This implies that quantum physics is always immersed in entropy, even when the entropy is low.
Now that we know the Boltzmann entropy and the Shannon entropy, we can merge the two, giving the Boltzmann-Shannon, or statistical, entropy [8]. If we consider a thermodynamic system that can be in several microscopic states with probabilities p_i, the statistical entropy is then:
Or, equivalently, the Boltzmann-von Neumann entropy:
(look paper)
This function is paramount, and it will be used constantly in our theory of gravitational entropy. Its unit is the bit, or the joule per kelvin. Let us note some properties of this function. We know that the entropy is maximal when the numbers of molecules in each compartment are equal. Entropy is minimal if all the molecules are in one compartment; it is then 0, as the number of microscopic states is 1.
From the perspective of information theory, the thermodynamic system behaves like a source that does not send any message. Thus, the entropy measures "the missing information" at the receiver (or the uncertainty of the entire information).
If the entropy is maximal (equal numbers of molecules in each compartment), the missing information is maximal. If the entropy is minimal (all molecules in the same compartment), then the missing information is zero.
In the end, the Shannon entropy and the Boltzmann entropy are the same concept."
In conclusion, entropy is a measure of uncertainty:
- in information theory -> bit uncertainty
- in quantum physics (Von Neumann) -> Uncertainty in qubit
- In thermodynamics -> Uncertainty of the contents of a thermodynamic system
- in statistical physics -> bit uncertainty of the contents of a thermodynamic system
There is another form of entropy, the entropy of plane curves, proposed by Michel Mendès France. But that one I let you discover ;)
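The Shannon formula discussed above is a one-liner to compute; a minimal sketch:

```python
import math

def shannon_entropy(p, base=2):
    """H = -sum p_i log(p_i); base 2 gives the answer in bits."""
    return -sum(x * math.log(x, base) for x in p if x > 0)

print(shannon_entropy([0.5, 0.5]))   # 1 bit: fair coin, maximal for 2 symbols
print(shannon_entropy([1.0]))        # 0 bits: a certain outcome, no uncertainty
print(shannon_entropy([0.25] * 4))   # 2 bits: four equally likely symbols
```

This illustrates the properties listed above: the entropy is maximal for equiprobable outcomes and zero when only one state is possible.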
• asked a question related to Statistical Mechanics
Question
Zipf's law (which is a power law) is the maximum entropy distribution of a system of P particles in N boxes where P >> N. Its derivation is based on the microcanonical ensemble, in which the entropy is calculated for an isolated system. In the canonical ensemble the system is in contact with an external bath at a fixed temperature T. The macroscopic quantities of the canonical ensemble are calculated from its partition function, in which the probabilities decay exponentially with energy.
The question is: how is Zipf law a power law which can be obtained from exponential partition function?
First, everything in the world is physical. Entropy is a quantity that represents the change in the world's uncertainty when an amount of energy is transferred between two bodies. The amount of transferred energy is the heat Q.
The temperature T of the emitting body defines the "grade" of the heat: the higher the grade, the greater the amount of work that can be extracted when the heat is absorbed by a body at a given lower temperature.
Work W may be viewed as heat emerging from an infinitely high-temperature source (e.g. laser light). You can liken a heat engine to a waterfall, where the height gap of the waterfall is the analogue of the temperature gap of the heat engine. Therefore, when you replace Q by W in your Stirling engine you use two infinite temperatures. As you probably know, neither zero nor infinity is a true number, and they cannot be used in an arithmetical calculation. That is probably the reason for your erroneous conclusion.
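On the original question: a power law is still an exponential (Gibbs) weight if the "energy" of rank r is taken as E_r = ln r, since exp(-s ln r) = r^(-s). A minimal sketch with arbitrary, hypothetical parameters s and beta:

```python
import numpy as np

N = 1000
r = np.arange(1, N + 1)

s = 1.2                              # hypothetical Zipf exponent
p_zipf = np.exp(-s * np.log(r))      # "energy" E_r = ln r  ->  r**(-s)
p_zipf /= p_zipf.sum()

beta = 0.01                          # hypothetical inverse temperature
p_boltz = np.exp(-beta * r)          # "energy" E_r = r  ->  exponential decay
p_boltz /= p_boltz.sum()

print(p_zipf[0] / p_zipf[1])    # 2**1.2: ratio set by the rank *ratio* (scale-free)
print(p_boltz[0] / p_boltz[1])  # e**0.01: ratio set by the rank *difference*
```

So the exponential form of the canonical weights and the power-law form of Zipf's law are compatible: it is the choice of the constrained quantity (mean of ln r versus mean of r) that selects one or the other maximum-entropy solution.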
• asked a question related to Statistical Mechanics
Question
Hi everyone, can anyone help me find the entropy index to measure diversification for a company, using Σ Pi*ln(1/Pi)? I already have the total sales for each year and each segment's sales share. N is the number of industry segments, and Pi is the percentage of the i-th segment in total company sales.
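That index is straightforward to compute from the segment sales; a minimal sketch with hypothetical sales figures:

```python
import math

def entropy_index(segment_sales):
    """Diversification entropy index: sum of p_i * ln(1/p_i), where p_i is
    segment i's share of total sales. 0 for a single-segment firm, ln(N)
    for N equally sized segments."""
    total = sum(segment_sales)
    shares = (s / total for s in segment_sales)
    return sum(p * math.log(1.0 / p) for p in shares if p > 0)

print(entropy_index([50, 30, 20]))  # ~1.03 for a hypothetical 3-segment firm
```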
• asked a question related to Statistical Mechanics
Question
I have recently started to study statistical mechanics, but it's really hard to understand some of the basic concepts, especially from the textbooks. I follow Statistical Physics by F. Reif, but the language is a little hard to understand. Please suggest some simpler books.
As you say you are new to the subject, I propose this simulated experiment as a training exercise. It is an activity I use with my students, and I have had good results every time I proposed it to them. I apologize that it is in Italian, but I think the pictures and formulas will let you understand its main features.
• asked a question related to Statistical Mechanics
Question
I have to calculate the rate of tunnelling in a protein, for which I need the transmission coefficient. How do I calculate it? Or is there another way that does not require the transmission coefficient?
Dear Dr Saluja
I think this is a very difficult problem that at present can only be tackled by means of some rather rough quantum mechanical approximation. Look up first the Kronig-Penney model and the WKBJ quasi-classical approximation to start with, and then tunnelling in Josephson junctions, to get some feeling.
best regards
Dikeos Mario
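Following the pointer to the WKB(J) approximation above, the transmission coefficient through a barrier can be estimated as T ≈ exp(-2∫κ dx) with κ = √(2m(V-E))/ħ over the classically forbidden region. A minimal sketch for an electron and a hypothetical square barrier (real protein tunnelling barriers are of course far more complicated):

```python
import numpy as np

def wkb_transmission(V, E, x):
    """WKB tunnelling estimate T ~ exp(-2 ∫ κ dx), κ = sqrt(2m(V-E))/ħ,
    integrated over the classically forbidden region (V > E).
    SI units; an electron mass is assumed for the tunnelling particle."""
    m, hbar = 9.109e-31, 1.055e-34
    kappa = np.sqrt(np.maximum(2.0 * m * (V(x) - E), 0.0)) / hbar
    return float(np.exp(-2.0 * kappa.sum() * (x[1] - x[0])))

eV = 1.602e-19
x = np.linspace(0.0, 0.5e-9, 10_001)  # hypothetical 0.5 nm barrier width
T_wkb = wkb_transmission(lambda xs: np.full_like(xs, 1.0 * eV), 0.5 * eV, x)
print(T_wkb)  # ~0.027 for a 1 eV barrier at E = 0.5 eV
```

The tunnelling rate is then roughly the attempt frequency times this transmission coefficient; for realistic barrier shapes, replace the lambda with the actual potential profile.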
• asked a question related to Statistical Mechanics
Question
Does anyone have experience estimating the formation enthalpy or atomization energy of molecules using Gaussian? I am trying to calculate this parameter for simple molecules like H2O and NH3; however, there is a significant error even with a decent level of theory. I am wondering how accurately this can be done. Is there any specific theory/method that performs better than others? (Currently I am using B3LYP/6-311++G(2d,2p).) I do not have any QM background and any thoughts would be appreciated.
Here are some numbers I am getting from Gaussian, compared to the JANAF table values.
Molecule | Gaussian | JANAF (kJ/mol)
H2O      | 1174     | 917
NH3      | 1432     | 1158
N2       | 1467     | 941
H2       | 434      | 432
O2       | 870      | 493
It is very difficult to predict atomization energies with DFT methods. For high accuracy, coupled cluster is a requirement, and, to save time, complete basis set extrapolation is another excellent option.
Even at this level, a significant part of the error arises from the theory, and different corrections and approximations must be applied.
In practice only the W1 and W2 composite theories are capable of accuracy in the kJ/mol range.
At this point, I invite you to read a very interesting (maybe essential) book and then apply the knowledge you will find there [1].
[1] Cioslowski, J., Quantum-Mechanical Prediction of Thermochemical Data; Springer Netherlands: Dordrecht, 2001.
I hope it helps you,
Best regards,
Joaquim Rius
• asked a question related to Statistical Mechanics
Question
The characteristic frequency of thermal motion is around 7E12 Hz at room temperature (300 K), but from that information how can we conclude that the bonds are hard, i.e. that they don't vibrate?
Dear Roshan,
The bonds involve the hopping energy, and this is much higher than the thermal energy. Notice that one eV is equivalent to a thermal energy of 11604.5 K! Thus the phonons (acoustic or optical) practically do not interact with the bond electrons.
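The two conversions used in this thread are quick to check from the constants kB, h, and e:

```python
kB = 1.380649e-23    # Boltzmann constant, J/K
h = 6.62607015e-34   # Planck constant, J s
e = 1.602176634e-19  # elementary charge; 1 eV in joules

f_thermal = kB * 300 / h   # characteristic thermal frequency kT/h at 300 K
T_per_eV = e / kB          # temperature equivalent of 1 eV

print(f_thermal)  # ~6.25e12 Hz, the ~7e12 Hz quoted in the question
print(T_per_eV)   # ~11604.5 K, the figure quoted in the answer above
```

Since bond (hopping) energies are of order eV, i.e. equivalent to ~10^4 K, the ~300 K thermal quanta are far too small to excite the bond electrons, which is the point being made above.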
• asked a question related to Statistical Mechanics
Question
Hi, I want to ask a question about the basic theory of molecular dynamics.
In MD simulations we can calculate the temperature from the average kinetic energy of the system. For an ideal gas (pV = N kB T) I can derive the relationship between temperature and kinetic energy: <1/2 m v^2> = 3/2 kB T (in 3 dimensions). But if I simulate a non-ideal gas or a liquid, how can I get the relationship between temperature and kinetic energy?
Could anyone give me some understandable explanations(I know little about quantum mechanics)? Any relevant material or link will be appreciated. Thanks!
If you want to dig a little bit deeper you can check out the papers [1] and [2] (and related ones). It actually turns out that there are many more possible definitions of the thermodynamic temperature, i.e. the general expression is:
kT = ⟨ ∇H • B ⟩ / ⟨ ∇ • B ⟩
where H is the Hamiltonian and B is an arbitrary (within weak mathematical constraints) vector field that depends on the phase space variables. The expression you mentioned (called the kinetic temperature) is recovered from this by choosing B = ∇K with the kinetic energy K. But also you can make other choices for B such as B = ∇V with V the potential energy. The latter yields a temperature (called configurational temperature) that only depends on positions and is independent of momenta. There are some interesting things you can do with this, for instance facilitate rare events as e.g. in [3].
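For the kinetic-temperature choice B = ∇K mentioned above, the working formula reduces to ⟨Σ m v²⟩ = 3N kB T in 3D, which notably holds for interacting systems too, not only ideal gases. A minimal sketch in reduced units, sampling velocities from Maxwell-Boltzmann and reading the temperature back:

```python
import numpy as np

rng = np.random.default_rng(0)
kB = 1.0  # reduced units

def kinetic_temperature(m, v):
    """Kinetic temperature from equipartition: <sum_i m_i v_i^2> = 3 N kB T.
    m: (N,) masses; v: (N,3) velocities."""
    return float((m[:, None] * v**2).sum() / (3 * len(m) * kB))

N, T = 100_000, 1.5
m = rng.uniform(1.0, 5.0, N)  # arbitrary mixture of masses
v = rng.normal(0.0, 1.0, (N, 3)) * np.sqrt(kB * T / m)[:, None]
print(kinetic_temperature(m, v))  # ~1.5, the temperature we sampled at
```

The configurational temperature of [1, 2], by contrast, would be computed from forces and positions alone, with no reference to the momenta.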
• asked a question related to Statistical Mechanics
Question
I want to know: is the negative-temperature state only conceptually eye-catching, or is it truly significant in helping us understand thermodynamics?
• In the early days, Purcell et al., and later my university textbooks, mentioned negative T for the spin degree of freedom in NMR systems;
• In 2013, S. Braun et al. performed an experiment on cold atoms and realized an inverted energy-level population for the motional degrees of freedom. (http://science.sciencemag.org/content/339/6115/52)
• There are many disputes about Boltzmann entropy versus Gibbs entropy, as far as I know, especially by Jörn Dunkel et al. (https://www.nature.com/articles/nphys2815); they insist that the Gibbs entropy is the physical one and argue that negative T is wrong.
• After that, many debates emerged; I have read several papers, and they all agree with the conventional Boltzmann entropy.
Is it truly fascinating, or just trivial, to realize a population-inversion state, i.e., negative temperature?
And can anyone clarify how a Carnot engine would work between a negative-T and a positive-T substance?
Any comments and discussions are welcome.
I just read this interesting and lively discussion on negative temperatures. I must confess that I have not read the paper by Abraham and Penrose, and I promise to do so during the holidays. I am glad that discussions of the purely thermodynamic issue have arisen, independently of the entropy formulae of statistical physics (by the way, in my view there is only one "entropy", Boltzmann's, as discussed, say, in Landau and Lifshitz, but that's another discussion!). So, it is nice to hear arguments based on thermodynamics only.
Although I have not been able to follow the whole thread, I think I side with Struchtrup's point of view. I have always had trouble with extending "equilibrium" concepts (such as entropy and temperature) to non-equilibrium and/or metastable states; I believe it is dangerous. Certainly, there are many situations where there are no ambiguities, but when there are, one should refrain from simply extending the well-founded equilibrium concepts, with all their assumptions. About ten years ago, for instance, there were claims in respected journals about negative heat capacities in nanosystems.
I do not think I have much to say here now, except to recall that, indeed, in order to accommodate negative temperatures, Ramsey had to change the Kelvin-Planck statement of the Second Law. Actually, he inverted it (this I attempted to explain in the appendix of my paper). Then, of course, with the assumption of negative-temperature reservoirs, everything follows logically. However, before arguing about their stability, note that something very strange happens if equilibrium negative temperatures exist: heat can be converted into work as the sole result (with efficiency one), and the opposite is impossible! Hence there is no need for Carnot engines at all, and friction cannot happen! We should be suspicious of this. The argument that reservoirs at negative temperatures are unstable allows us to find the root of the failure, namely, the tacit assumption by Ramsey that stable negative-temperature reservoirs exist.
Certainly, the states created in the experiments by Purcell et al., and in the many later ones, do exist; they are metastable states and can have very long relaxation times to "normal" equilibrium states. But this does not suffice to say that they are in equilibrium for that time, still less that they have actually achieved negative temperatures. It may be useful to use those terms, but it may also be deeply misleading.
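Whatever one's stance on whether such metastable states deserve a temperature, the number usually quoted for them comes from inverting the Boltzmann factor for a two-level population. A minimal sketch (function and variable names are mine; kB set to 1 for simplicity) showing how a population inversion yields a negative value:

```python
import math

def spin_temperature(delta_E, n_upper, n_lower, k_B=1.0):
    """Assign a 'temperature' to a two-level population ratio by inverting
    the Boltzmann factor n_upper/n_lower = exp(-delta_E / (k_B * T))."""
    ratio = n_upper / n_lower
    return -delta_E / (k_B * math.log(ratio))

T_normal = spin_temperature(1.0, 0.3, 0.7)    # n_upper < n_lower -> T > 0
T_inverted = spin_temperature(1.0, 0.7, 0.3)  # population inversion -> T < 0
```

Swapping the two populations flips only the sign of the assigned temperature, which is exactly the sense in which the inverted state is "hotter than infinite temperature" in the Boltzmann convention.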
• asked a question related to Statistical Mechanics
Question
Hi,
I apologize for this apparently silly question, but please, could you tell me whether there is an underlying relationship between defect-driven phase transitions and directed percolation?
Secondly, is it possible to have a system which undergoes a KT transition at T1, generating free vortices, followed by a spatial spreading of disorder via directed percolation at T2?
Please, if there is any relevant examples and materials, do let me know.
Many thanks.
Wang Zhe
Dear Zhe,
Yes, there is a relation between percolation and the correlation length, where the correlation length is defined as the distance at which the probability of a site being connected to the origin falls to 1/e. In other words, the correlation function G(r) gives the average correlation between two lattice sites (one at the origin, the other at position r). Loosely speaking, it describes how much more likely it is for the site at position r to belong to the same cluster as the origin than it would be for a site chosen at random from across the whole lattice.
As you go above the critical point pc, the probability of being in a large but finite cluster gets smaller and smaller; a site is likely to be either in the infinite cluster or in a very small finite cluster. The average finite cluster size decreases until, at p = 1, it becomes zero (everything is in the infinite cluster).
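To make the cluster picture above concrete, here is a minimal site-percolation sketch (pure Python, BFS cluster labelling, 4-neighbour connectivity; all names are my own) showing the largest cluster grow from small finite clusters below the square-lattice threshold pc ≈ 0.593 to a lattice-spanning one above it:

```python
import random
from collections import deque

def percolation_clusters(L, p, seed=0):
    """Site percolation on an L x L square lattice: occupy each site with
    probability p, then label clusters of occupied sites connected by
    nearest-neighbour bonds, via breadth-first search."""
    rng = random.Random(seed)
    occ = [[rng.random() < p for _ in range(L)] for _ in range(L)]
    label = [[-1] * L for _ in range(L)]
    clusters, next_label = {}, 0
    for i in range(L):
        for j in range(L):
            if occ[i][j] and label[i][j] == -1:
                members = [(i, j)]
                label[i][j] = next_label
                queue = deque([(i, j)])
                while queue:
                    x, y = queue.popleft()
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nx, ny = x + dx, y + dy
                        if 0 <= nx < L and 0 <= ny < L and occ[nx][ny] and label[nx][ny] == -1:
                            label[nx][ny] = next_label
                            members.append((nx, ny))
                            queue.append((nx, ny))
                clusters[next_label] = members
                next_label += 1
    return clusters

# Largest cluster well below and well above p_c ~ 0.593
small = max(len(c) for c in percolation_clusters(50, 0.3).values())
large = max(len(c) for c in percolation_clusters(50, 0.8).values())
```

The correlation length can then be estimated from how the pair-connectedness probability decays with distance in the subcritical regime.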
• asked a question related to Statistical Mechanics
Question
Hi,
The vortex-unbinding Kosterlitz-Thouless physics generally applies to two-dimensional systems and occasionally to three-dimensional solids.
I was wondering whether there exists a one-dimensional analogue of the vortex unbinding that occurs in two dimensions. Could anyone point me to one, please?
Thank you.
Very kind wishes,
Wang Zhe
Dear Zhe,
If you base your research on this subject just on the following observation:
the velocity field induced by filiform vortex filaments falls off like 1/r, leading to a logarithmic divergence of the kinetic energy with system size; thus a Kosterlitz-Thouless transition may indeed take place in transitions dominated by filiform vortices, e.g., in plane Couette flow.
This analogy could be quite dangerous; for instance, the electric field also falls off as 1/r, and electric charges do not produce a KT phase transition by themselves. More ingredients are needed, such as a diverging correlation length inducing quasi-long-range order (with a non-trivial topological transformation behind it). But your analogy is not so bad if you think of a type-II superconductor, where Abrikosov-Nielsen vortices interact (at distances shorter than the screening length) to prevent coherence between the Cooper pairs, which are described by a complex scalar field with a continuous local U(1) symmetry (notice that you don't have such a symmetry to be broken, or at least I don't see it). Below the KT temperature you have superconductivity because there are no free vortices; only vortex-antivortex pairs can be present.
Thus your idea could be workable, but more ingredients are necessary than just the logarithmic dependence of the kinetic energy on relative distances: you need a correlation length and winding numbers different from zero (non-trivial topological solutions).
• asked a question related to Statistical Mechanics
Question
We know the definition of ergodicity, and we know ergodic mappings. But what is an ergodic process?
A random process is said to be ergodic if the time averages of the process tend to the appropriate ensemble averages. This definition implies that with probability 1, any ensemble average of {X(t)} can be determined from a single sample function of {X(t)}. Clearly, for a process to be ergodic, it has to necessarily be stationary. But not all stationary processes are ergodic.
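A small numerical sketch of this distinction (all names are mine): a stationary AR(1) process is ergodic, so the time average along one long sample path approaches the ensemble mean, while a "random constant" process is stationary but not ergodic, because a single path can never reveal the ensemble mean:

```python
import random

def ar1_path(n, phi=0.5, seed=0):
    """One sample path of a stationary AR(1) process X_t = phi*X_{t-1} + noise.
    Ergodic: its time average converges to the ensemble mean (zero)."""
    rng = random.Random(seed)
    x, path = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        path.append(x)
    return path

def random_level_path(n, seed=0):
    """X_t = C for all t, with C ~ N(0,1): stationary but NOT ergodic --
    the time average of one path equals C, not the ensemble mean 0."""
    c = random.Random(seed).gauss(0.0, 1.0)
    return [c] * n

n = 200000
time_avg_ergodic = sum(ar1_path(n)) / n              # -> ensemble mean 0
time_avg_nonergodic = sum(random_level_path(n)) / n  # -> C, path-dependent
```

The second process makes the last sentence of the answer concrete: it is stationary, yet its time average depends on which sample path you happened to draw.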
• asked a question related to Statistical Mechanics
Question
Why are Hamilton's equations not used for constructing the dynamical equations of liquid crystals?
Theoretically, all formulations of classical dynamics (Newton, Lagrange, Hamilton-Jacobi) are entirely equivalent. However, the Lagrangian formulation is often more direct when there are constraints to be fulfilled during the motion.
• asked a question related to Statistical Mechanics
Question
Hi everyone,
I'm trying to solve this exercise (attached file) from the J. M. Yeomans book "Statistical Mechanics of Phase Transitions".
I understood how the expansion works for the same model without the field term, but here I have trouble figuring out which terms vanish and which do not, in order to answer the second question. Also, I don't get what the S_m(v,N) term represents...
Anyone's help is welcomed ! :)
Why not ask Julia herself? You can find her coordinates on the web. She'll be delighted to illuminate you.
• asked a question related to Statistical Mechanics
Question
Hi!
Please, could anyone point out to me an intuitive way to understand the exponential divergence of the correlation length at the KT transition, in contrast to the usual algebraic divergence in the common setting of critical phenomena?
Thank you.
Wang Zhe
Dear Prof. Farid,
Regarding the high-temperature correlation length xi, please, could you kindly explain a little the physical intuition behind the exponent (-1/2) in the following expression:
xi = exp[a*t^(-1/2)], where a is a constant.
In fact, I have consulted quite a few professors over the past week, but without a satisfactory answer.
Thank you.
Very kind wishes,
Wang Zhe
• asked a question related to Statistical Mechanics
Question
Hi all,
To calculate the residence time from the potential of mean force (PMF), we use the stable-states picture. Here a reactant state and a product state are defined; this is done from the radial distribution function. The time taken to move from the reactant state to the product state is designated t, and the residence time is given by
1 - P(t) = e^{-t/tau}, where tau is the residence time and
P(t) is the probability that the system has moved from the reactant state to the product state by time t.
How do I calculate P(t)?
The way I read it, it should simply be P(t)=1-e^{-t/tau}.
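On that reading, P(t) = 1 - e^{-t/tau} is just the cumulative distribution of exponentially distributed escape times, so tau can be estimated directly as the mean of the observed escape times. A minimal sketch with synthetic data (all names are mine):

```python
import random

def estimate_residence_time(escape_times):
    """If P(t) = 1 - exp(-t/tau) is the probability of having left the
    reactant state by time t, the escape times are exponentially distributed
    and the maximum-likelihood estimate of tau is simply their sample mean."""
    return sum(escape_times) / len(escape_times)

# Synthetic check: draw escape times with a known tau and recover it.
rng = random.Random(42)
tau_true = 5.0
times = [rng.expovariate(1.0 / tau_true) for _ in range(100000)]
tau_est = estimate_residence_time(times)
```

With real simulation data one would instead histogram the escape times, build the empirical survival function S(t) = 1 - P(t), and check that it is actually exponential before quoting a single tau.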
• asked a question related to Statistical Mechanics
Question
Hello all,
I am confused about the following: does minimization of free energy always mean that entropy is maximized? Are they always complementary to each other, or only under certain conditions?
I am doubtful because I came across a text which said that for isothermal systems (i.e., where the temperature doesn't change) these are complementary, whereas for non-isothermal systems minimization of free energy does not always mean maximization of entropy. Can someone help?
Thanks
In my reading, exceptions occur when ions are formed in solution. The standard Gibbs energy of formation of Zn2+ (aq) is -147.06 kJ/mol, while the change in standard entropy is -112.1 J/(mol·K): order is formed as a result of the hydration of the ions.
• asked a question related to Statistical Mechanics
Question
Dear Research-Gaters,
It might be a very trivial question to you: what does the term 'wrong dynamics' actually mean? I have heard that term often when somebody presented their results. As it seems to me, 'wrong dynamics' is an argument often brought up to suggest that a simulation result might not be very useful. But what does that argument mean in terms of physical quantities? Is it related to measures such as correlation functions, e.g., the velocity autocorrelation, H-bond autocorrelation, or radial distribution functions? Can 'wrong dynamics' be visualized as a too-fast decay in any of those correlation functions in comparison with other equilibrium simulations, or can it simply be measured by deviations of the potential energies, kinetic energies, and/or the root-mean-square deviation from the starting structure? At the same time, thermodynamic quantities such as free energies might not be affected by 'wrong dynamics'. Finally, I would like to ask what the term means if I use non-equilibrium simulations which are actually non-Markovian, i.e., history-dependent, and out of equilibrium (metadynamics, hyperdynamics). Thank you for your answers. Emanuel
Start with an obvious remark: the fact that two given Markov processes tend to the same (given) equilibrium does not mean that they do so in the same way. In particular, their dynamical behaviour may be quite distinct.
Now, Newtonian mechanics will generate some kind of equilibrium. It is often easier to reach that same equilibrium by a stochastic process (Monte Carlo). Then we are guaranteed that the equilibrium properties will in fact be the same: static correlation functions, for example, will be correct. The dynamical properties, however, need not be. Thus the velocity autocorrelation function (the product of v at time t with v at time t + tau, averaged over t) need not have any clear connection between the two dynamics. Indeed, in some MC models there are no velocities! To the extent that the system is classical, the Newtonian dynamics is the "correct" one and the MC dynamics is fake. The results that should be trusted are thus the Newtonian ones.
However, in many cases, MC will give dynamics that are, in some sense, qualitatively close to what Newtonian dynamics gives. Nevertheless, such issues must be treated with considerable care.
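Since the velocity autocorrelation function is the canonical example here, a minimal sketch of computing it from a trajectory array may help (NumPy; the Ornstein-Uhlenbeck toy trajectory and all names are my own illustration, not from the discussion):

```python
import numpy as np

def vacf(v, max_lag):
    """Velocity autocorrelation C(tau) = <v(t) . v(t+tau)>, averaged over
    time origins t and particles. v has shape (n_steps, n_particles, 3)."""
    n_steps = v.shape[0]
    return np.array([
        np.mean(np.sum(v[:n_steps - lag] * v[lag:], axis=-1))
        for lag in range(max_lag)
    ])

# Toy trajectory: an Ornstein-Uhlenbeck velocity process, whose VACF
# decays roughly as C(tau) ~ C(0) * exp(-gamma * tau * dt).
rng = np.random.default_rng(1)
n_steps, n_part, dt, gamma = 5000, 50, 0.01, 2.0
v = np.zeros((n_steps, n_part, 3))
for t in range(1, n_steps):
    v[t] = v[t - 1] - gamma * v[t - 1] * dt + np.sqrt(dt) * rng.normal(size=(n_part, 3))
c = vacf(v, 200)
```

Comparing such a curve between the "true" (Newtonian) and the fake (MC or overdamped) dynamics is one concrete way in which "wrong dynamics" shows up even when static averages agree.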
• asked a question related to Statistical Mechanics
Question
Is there an analytical expression for the probability of finding two consecutive parallel spins in the one-dimensional antiferromagnetic Ising model (with constant J < 0), assuming equilibrium at temperature T?
If you do not have a magnetic field, this can be neatly mapped onto a free-particle problem: indeed, any spin configuration
s1, s2, s3, ...
can be mapped onto
s1, w1, w2, w3, ...
where s1 is the same, and w_i is 1 if there is a 'wall', that is, if s_{i+1} is different from s_i, and zero otherwise. Since s_i s_{i+1} = 1 - 2 w_i, the total energy of the Ising antiferromagnet, E = -J (sum over i of s_i s_{i+1}), becomes, up to an additive constant,
2J (sum over all i of w_i),
so the walls are independent, and the probability that a given w_i is zero, viz. that two neighbouring spins are parallel, is now easily computed.
For a magnetic field, you do need the transfer matrix formalism, which also gives you interesting effects at finite N, such as the corrections induced by periodic boundary conditions when the total number of spins is odd. But the free particle picture, with its limitations, I find very instructive.
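The free-particle (domain-wall) answer can be checked directly: on an open chain with E = -J Σ s_i s_{i+1}, each bond is an independent two-state variable, so P(parallel) = e^{βJ}/(2 cosh βJ), which is below 1/2 for J < 0. A small brute-force verification (names are mine):

```python
import math
from itertools import product

def p_parallel_exact(N, J, beta):
    """Probability that a given adjacent pair (here the first one) is
    parallel, by brute-force enumeration of the open Ising chain with
    E = -J * sum_i s_i * s_{i+1}."""
    Z = num = 0.0
    for spins in product((-1, 1), repeat=N):
        E = -J * sum(spins[i] * spins[i + 1] for i in range(N - 1))
        w = math.exp(-beta * E)
        Z += w
        if spins[0] == spins[1]:
            num += w
    return num / Z

def p_parallel_formula(J, beta):
    """Independent-bond (domain-wall) result for the open chain:
    P(parallel) = e^{beta*J} / (2 cosh(beta*J))."""
    return math.exp(beta * J) / (2.0 * math.cosh(beta * J))

p1 = p_parallel_exact(8, -1.0, 0.7)   # antiferromagnetic J < 0
p2 = p_parallel_formula(-1.0, 0.7)
```

The two numbers agree to machine precision, because on an open chain the change of variables to walls really does factorize the Boltzmann weight bond by bond.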
• asked a question related to Statistical Mechanics
Question
Looking for methods to model the state transitions of a multi-state process. Thanks in advance!
It depends on how many elements/states of the system enter the modelling, what prior information about the mutual influence of the elements and their states is to be considered, how external conditions influence the evolution of the states of individual elements, etc. In general, any book on random (stochastic) processes might be useful; however, the question suggests that the theory of semi-Markov processes would probably be appropriate, if the evolution is non-Markovian and the number of states is not too high. A broad treatment of such processes with applications is given in the monograph
Semi-Markov Processes: Applications in System Reliability and Maintenance, by F. Grabski;
for an introduction, one can read the freely available material.
Other works can also be found via Google under the keywords:
semi-Markov maintenance processes / semi-Markov social sciences / semi-Markov economy markets
e.g. Semi-Markov Risk Models for Finance, Insurance and Reliability
Authors: Jacques Janssen,Raimondo Manca
for an introduction to some of the problems, one can consult the freely available RG works by these authors et al.:
Regards
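As a minimal starting point before any semi-Markov machinery, a plain discrete-time Markov chain for a multi-state process can be simulated in a few lines (the three-state, reliability-style transition matrix below is an invented example, and all names are mine):

```python
import random

def simulate_markov_chain(P, start, n_steps, seed=0):
    """Simulate a discrete-time Markov chain with transition matrix P
    (rows sum to 1). Returns the visited state sequence."""
    rng = random.Random(seed)
    state, path = start, [start]
    for _ in range(n_steps):
        r, acc = rng.random(), 0.0
        for nxt, p in enumerate(P[state]):
            acc += p
            if r < acc:
                state = nxt
                break
        path.append(state)
    return path

# Three hypothetical states: 0 = working, 1 = degraded, 2 = under repair
P = [[0.90, 0.08, 0.02],
     [0.10, 0.80, 0.10],
     [0.50, 0.00, 0.50]]
path = simulate_markov_chain(P, 0, 100000)
occupancy = [path.count(s) / len(path) for s in range(3)]
```

A semi-Markov model would additionally draw a random holding time from a state-dependent distribution at each visit, which is the extra ingredient the Grabski monograph develops.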
• asked a question related to Statistical Mechanics
Question
How does one find the equilibrium points, and the elapsed time between those points, on the phase paths?
Maybe you can tell us more about your system; I guess that x is a phase-space variable that is a function of time t, and that ''x means d^2x/dt^2 (the second derivative with respect to time). Moreover, is your system closed or open?
From mathematical point of view, you can replace your system by this equivalent system of differential equations of first order:
dx/dt = v
dv/dt = x^2-x
where the solution x(t) will depend on the initial conditions:
x(t0) and v(t0)
You can also use any numerical integrator to solve the above equations (for example, the Runge-Kutta method).
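For instance, a hand-rolled classical Runge-Kutta (RK4) integration of this system looks as follows (a sketch; the step size and initial condition are arbitrary choices of mine). The equilibrium points are where v = 0 and x² - x = 0, i.e. (x, v) = (0, 0) and (1, 0):

```python
def rk4_step(f, y, t, h):
    """One classical Runge-Kutta step for y' = f(t, y), with y a list."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h,     [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def f(t, y):                 # y = (x, v):  x' = v,  v' = x^2 - x
    x, v = y
    return [v, x * x - x]

# Start between the two fixed points (0,0) and (1,0); the orbit
# oscillates around the centre at x = 0.
y = [0.5, 0.0]
h = 0.01
for step in range(1000):
    y = rk4_step(f, y, step * h, h)
```

Since the system is conservative with energy E = v²/2 + x²/2 - x³/3, checking that E stays constant along the integrated orbit is a quick sanity test of the integrator.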
• asked a question related to Statistical Mechanics
Question
Does anybody have the solutions to the problems in Statistical Mechanics by Kerson Huang (in PDF format)?
When the book was published, the Solutions Manual/Instructor's Guide was made available "free of charge to lecturers who adopt this book for their courses". It will be a collector's item ...
• asked a question related to Statistical Mechanics
Question
Acoustics is a macroscopic description of the movement of atoms and molecules.
Well, I guess you should frame your question more precisely. However, I will try to answer it to the best of my understanding. The temperature T corresponds to an energy of the order of kT, where k is Boltzmann's constant.
This energy should be of the same order as the vibration energy (VE) state of the air molecules. Only then, the heat energy would be able to trigger a vibrational rearrangement of the air molecules. And then using simple Maths you could obtain T = VE / k.
For example, typical frequencies of molecular vibrations range from less than 10^13 to approximately 10^14 Hz, corresponding to wavenumbers of approximately 300 to 3000 cm^-1.
The VE mode of C2H4 is at 826 cm^-1, that of N2 at 2331 cm^-1, and that of an air molecule, i.e., oxygen O2, at 1556 cm^-1.
I assume you know how to convert these numbers into units of energy, Joules?
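For reference, the conversion goes through E = h·c·ν̃ (with c in cm/s so that wavenumbers stay in cm⁻¹) and T = E/k; a minimal sketch (names are mine):

```python
# Convert a vibrational wavenumber (cm^-1) to an energy and an
# equivalent temperature, using E = h*c*nu_tilde and T = E / k_B.
h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e10     # speed of light in cm/s, so wavenumbers stay in cm^-1
k_B = 1.380649e-23    # Boltzmann constant, J/K

def wavenumber_to_energy(nu_cm):       # returns Joules
    return h * c * nu_cm

def wavenumber_to_temperature(nu_cm):  # returns Kelvin
    return wavenumber_to_energy(nu_cm) / k_B

T_O2 = wavenumber_to_temperature(1556.0)  # O2 vibrational mode
```

The combination hc/k ≈ 1.44 cm·K, so the 1556 cm⁻¹ O2 mode corresponds to a vibrational temperature of roughly 2.2 × 10³ K, far above room temperature, which is the point of the argument above.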
Some excellent literature on these topics are:
1) Bhatia, Ultrasonic Absorption: An Introduction to the theory of sound absorption and dispersion in gases, liquids and solids, Dover Publications, New York, 1967.
2) Markham, “Absorption of sounds in fluids”, Rev. Mod. Phys., 1951.
3) Baueriot, “Influences of transport mechanisms on sound propagation in gases”, Adv. Mol. Relax. Proc., 1972.
Hope that helps.
• asked a question related to Statistical Mechanics
Question
For magnetic systems, the Rushbrooke inequality is a direct consequence of the thermodynamic relation between CH, CM and the isothermal susceptibility, their positivity, and the definition of the critical exponent alpha as controlling the behavior of CH as a function of the reduced distance from the critical temperature.
In the case of fluid system, the usual definition of alpha refers to the constant volume specific heat (CV).
However, the role played by CV in the thermodynamic relation between CP, CV and the isothermal compressibility is not the same as that of CH. Does some additional hypothesis have to be made in order to derive the Rushbrooke inequality for fluid systems, or am I missing something trivial?
Wasn't this issue addressed for PVT systems in an open access article published by Elsner (2014) in Engineering 6, 789-826?
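For reference, the magnetic derivation the question alludes to can be written out in a few lines (a sketch using the standard critical-exponent definitions, not taken from the Elsner article):

```latex
\begin{aligned}
C_H - C_M &= \frac{T\,(\partial M/\partial T)_H^{2}}{\chi_T},
\qquad C_M \ge 0
\;\Longrightarrow\;
C_H \ \ge\ \frac{T\,(\partial M/\partial T)_H^{2}}{\chi_T},\\[4pt]
C_H \sim t^{-\alpha},\quad
\partial M/\partial T \sim t^{\beta-1},\quad
\chi_T \sim t^{-\gamma}
\;&\Longrightarrow\;
t^{-\alpha} \gtrsim t^{2(\beta-1)+\gamma}
\;\Longrightarrow\;
\alpha + 2\beta + \gamma \ge 2.
\end{aligned}
```

For a fluid, the analogous identity $C_P - C_V = T V \alpha_P^{2}/\kappa_T$ bounds $C_P$ rather than $C_V$, which is precisely why defining $\alpha$ through $C_V$ seems to require the extra hypothesis the question asks about.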
• asked a question related to Statistical Mechanics
Question
To clarify my question: I am trying to construct a coarse-grained model of an fcc system using the iterative Boltzmann inversion method to compute the pair-potential interactions. However, in order to start the iterative process, I first need to use V0(r) = -kBT ln(g(r)) as an initial guess for the pair interactions among my CG beads. The g(r) values used in this relation are those computed from the all-atom system. However, as you know, the RDF of a crystalline structure is not continuous: it contains zero values between the peaks where the crystal lattice sites lie, and I have no idea what to do with these zeros, since ln(0) is undefined.
Another problem is that I am using the LAMMPS software for my simulations and have used it to compute these RDF values, but the values do not match those I obtained manually using the relation g(r) = V dn(r) / (4π r² N dr).
I really appreciate your help and suggestions.
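On the first problem, a common practical workaround (not a unique prescription) is to apply the inversion only where g(r) > 0 and to assign a large finite repulsive value to the forbidden regions; a sketch (NumPy; the function name, the toy data, and the capping rule are my own choices):

```python
import numpy as np

def initial_ibi_potential(r, g, kBT, cap=None):
    """Initial guess V0(r) = -kBT * ln g(r) for iterative Boltzmann
    inversion. Where g(r) = 0 (forbidden regions of a crystalline RDF)
    the logarithm diverges, so those points are set to a finite cap --
    a common practical workaround, not a unique prescription."""
    g = np.asarray(g, dtype=float)
    V = np.full_like(g, np.nan)
    mask = g > 0
    V[mask] = -kBT * np.log(g[mask])
    if cap is None:
        cap = V[mask].max() + 10.0 * kBT  # arbitrary finite ceiling
    V[~mask] = cap
    return V

# Toy RDF with zeros between peaks, as in a crystal
r = np.linspace(0.5, 5.0, 10)
g = np.array([0.0, 0.0, 2.5, 1.2, 0.0, 0.8, 1.1, 1.0, 1.0, 1.0])
V0 = initial_ibi_potential(r, g, kBT=1.0)
```

Alternatives in the coarse-graining literature include smoothing g(r) before inversion or linearly extrapolating the potential into the zero-g regions; which choice works best depends on the system, and the subsequent IBI iterations correct much of the initial-guess arbitrariness anyway.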