Science topic
Statistical Mechanics - Science topic
Statistical mechanics provides a framework for relating the microscopic properties of individual atoms and molecules to the macroscopic bulk properties of materials that can be observed in everyday life.
Questions related to Statistical Mechanics
How do the non-trivial zeros of the Riemann zeta function relate to the quantum chaotic behavior of high-dimensional systems, and what implications might this have for the study of quantum eigenstate thermalization hypothesis (ETH)?
The Riemann zeta function is a fundamental object in number theory, known for its deep connection to the distribution of prime numbers. One of the most intriguing aspects of this function is its non-trivial zeros, which lie along the "critical line" in the complex plane. The Riemann Hypothesis posits that all these zeros have a real part of 1/2, though this remains unproven. Interestingly, the behavior of these zeros has been found to share striking similarities with the statistical properties of eigenvalues in quantum systems, particularly in systems exhibiting quantum chaos. Quantum chaotic systems are those that, despite being governed by deterministic laws, display unpredictable behavior akin to classical chaotic systems when viewed in the quantum regime.
The quantum eigenstate thermalization hypothesis (ETH) is a concept in statistical mechanics that seeks to explain how isolated quantum systems can exhibit thermal equilibrium behavior, despite being in a pure quantum state. According to ETH, the individual eigenstates of a quantum system should mimic the properties of a thermal ensemble in the appropriate limit. The relevance of the question about the connection between the non-trivial zeros of the Riemann zeta function and quantum chaotic behavior arises from the possibility that insights from number theory might provide new perspectives on the statistical mechanics of quantum systems. If the distribution of these zeros is related to quantum chaotic systems, it could offer a novel approach to understanding the emergence of thermal behavior in quantum systems and even further our understanding of quantum-to-classical transitions.
The short answer is yes.
Scientific education in the West throughout the 20th century was based on the assumption that Schrödinger's PDE is the only unified theory of energy fields (microscopic and macroscopic), which is false.
Schrödinger's PDE is fundamentally incomplete because it lives and operates in 3D geometric space plus real time as an external controller.
Our mother nature lives and functions in a unitary 4D x-t space with time as a dimensionless integer woven into geometric space.
We assume that a serious improvement in scientific teaching and research can be achieved by describing the 4D x-t unit space via a 4D statistical mechanics unit space or any other adequate representation.
The short answer is yes, it is absolutely true.
We first affirm that the matrix mechanics of W. Heisenberg, Max Born and P. Jordan (H-B-J) was stillborn and was displaced by the Schrödinger equation within three years.
This is not surprising, since the H-B-J matrix was designed to resolve the energy levels of the hydrogen atom and is therefore only a subset of the SE PDE.
On the other hand, modern statistical mechanics (B matrix chains and Cairo techniques in 2020) has established itself as a giant capable of solving almost all physics and mathematics problems.
Here we can say that it is a unified field theory and Schrödinger's PDE is one of its subsets and not vice versa.
We predict that future scientific research over the next ten years will increasingly focus on the area of modern matrix mechanics, at the cost of eliminating the Schrödinger equation.
Schrödinger's PDE is too old (+100 years) and fundamentally incomplete but it has proven itself in almost all scientific fields.
It cannot die or be discarded all at once, at least not within the next ten years.
However, in the long term, it is very likely that it will gradually disappear and be replaced by the elegant and brilliant theories of modern statistical mechanics.
Contrary to the opinion of the iron guards of the SE, who believe that it is the only unified field theory, the truth is that the modern statistical mechanics of the Cairo techniques and its B-matrix chain products is the unified field theory, which means it can be the source of all solutions of the SE or of any other time-dependent PDE.
There are many examples of validation, but they are rather difficult to find.
We assume this form is inappropriate and misleading in many situations.
Numerical statistical mechanics that works efficiently to solve the heat diffusion equation as well as Schrödinger's PDE predicts a more appropriate eigenmatrix form,
( [B] + Constant. V[I] )= λ ( [B] + Constant. V[I] )
with the principal eigenvalue λ = 1 which is equivalent to the principle of least action.
It is clear that the vector V replaces Ψ^2.
Elementary explanations of the second law of thermodynamics refer to probabilities of system states and seem convincing. But not when considering time-reversals, because the same statistical arguments should also apply there but they produce contradictions regarding entropy increases with time. (I think the difficulty is in whether or not the assumption of statistical randomness is appropriate because it depends on what is given and maybe also on the direction of time but I'm not an expert and this doesn't answer my question anyway.) While reading some literature about the direction of time I learned that the direction of time and the second law of thermodynamics all come from a very low entropy immediately after the big bang, with increasing entropy produced by things that include gravitational clumping (e.g., the formation of black holes and the merging of black holes to produce larger black holes). I learned that this is responsible for the second law of thermodynamics but it seems to me that this is an incredibly large-scale thing. Given this explanation it seems amazing to me that we can randomly select a tiny piece of matter (large enough to be macroscopic but tiny from the point of view of human perception) and find that it obeys the laws of thermodynamics. Is there an explanation of how such large influences on entropy (e.g., objects produced by gravity clumping) can produce a second law that is so incredibly homogeneous that we find the law obeyed by all of the tiny specs of material?
Does anybody have solutions to the problems in Statistical Mechanics of Phase Transitions by J. M. Yeomans?
A similarity transformation that makes a Hamiltonian diagonal looks like U^{\dagger}HU. What about an observable and a density matrix? How do they transform?
Attaching mathematical expressions here is problematic. I am attaching the link to the question here.
And can you reference articles or texts giving answers to this question?
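Regarding how observables and density matrices transform: a minimal numpy sketch, assuming a finite-dimensional Hilbert space (all matrices and names below are illustrative). If U collects the eigenvectors of H, then H, any observable A, and the density matrix ρ all transform the same way, A → U†AU and ρ → U†ρU, so that expectation values Tr(ρA) are unchanged.

```python
# Minimal sketch (illustrative example): how observables and density matrices
# transform under the unitary that diagonalizes H, in a finite basis.
import numpy as np

rng = np.random.default_rng(0)

def random_hermitian(n):
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (m + m.conj().T) / 2

n = 4
H = random_hermitian(n)
A = random_hermitian(n)                      # some observable
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
rho = np.outer(psi, psi.conj())
rho /= np.trace(rho).real                    # normalized density matrix

# U diagonalizes H: its columns are eigenvectors, so U^dagger H U is diagonal.
eigvals, U = np.linalg.eigh(H)
H_diag = U.conj().T @ H @ U

# Observables and the density matrix transform the same (unitary) way:
A_new = U.conj().T @ A @ U
rho_new = U.conj().T @ rho @ U

# Expectation values are basis independent: Tr(rho A) = Tr(rho' A').
print(np.allclose(np.diag(eigvals), H_diag, atol=1e-10))
print(np.isclose(np.trace(rho @ A), np.trace(rho_new @ A_new)))
```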
This is briefly considered in https://arxiv.org/abs/0804.1924 which is on RG as https://www.researchgate.net/publication/314079736_Entropy_and_its_relationship_to_allometry_v17.
Reviewing the literature would be helpful before considering whether to update the 2015 ideas.
When studying statistical mechanics for the first time (about 5 decades ago) I learned an interesting postulate of equilibrium statistical mechanics which is: "The probability of a system being in a given state is the same for all states having the same energy." But I ask: "Why energy instead of some other quantity". When I was learning this topic I was under the impression that the postulates of equilibrium statistical mechanics should be derivable from more fundamental laws of physics (that I supposedly had already learned before studying this topic) but the problem is that nobody has figured out how to do that derivation yet. If somebody figures out how to derive the postulates from more fundamental laws, we will have an answer to the question "Why energy instead of some other quantity." Until somebody figures out how to do that, we have to accept the postulate as a postulate instead of a derived conclusion. The question that I am asking 5 decades later is, has somebody figured it out yet? I'm not an expert on statistical mechanics so I hope that answers can be simple enough to be understood by people that are not experts.
One might argue: Animals increase their survivability by increasing the degrees of freedom available to them in interacting with their environment and other members of their species.
Right, wrong, or in between? Your views?
Are there articles discussing this?
Let's say the 12-6 equation is perfect; then the most favored distance between two Ar atoms is r(min).
My understanding is that vdW radii are measured from experiments, while the 12-6 potential is more of an approximation. But if I want to link the vdW radius to a distance in the 12-6 potential (sigma, r, or r(min)), which one should it be?
The definition of the vdW radius is based on the closest approach distance of two atoms, so should it be somewhere slightly smaller than sigma?
On the website, however, the sigma value is referred to as the vdW radius.
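For reference, a small sketch of how sigma and r_min are related in the 12-6 form (the numbers below are illustrative placeholders, not recommended Ar parameters). The potential crosses zero at r = sigma and has its minimum at r_min = 2^(1/6) sigma, so r_min is about 12% larger than sigma; whether a tabulated vdW radius should be matched to sigma/2 or to r_min/2 depends on the convention of the source being compared.

```python
# Sketch relating the 12-6 Lennard-Jones parameters sigma and r_min.
# Values below are illustrative placeholders, not recommended Ar parameters.
import numpy as np

sigma = 3.40      # Angstrom (illustrative, Ar-like)
eps = 0.238       # kcal/mol (illustrative)

def lj(r, sigma, eps):
    """12-6 Lennard-Jones potential."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

# The 12-6 minimum sits at r_min = 2^(1/6) * sigma, while U(sigma) = 0,
# so r_min exceeds sigma by about 12%.
r_min = 2.0 ** (1.0 / 6.0) * sigma
r = np.linspace(0.9 * sigma, 2.5 * sigma, 2000)
print("r_min =", r_min, "numerical check:", r[np.argmin(lj(r, sigma, eps))])
```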
No way to find out unless you do the actual proposed experiment:
Research Proposal Galton Board Double Slit Experiment
This guy here:
claims he made an interference Galton board experiment (https://en.wikipedia.org/wiki/Galton_board) and got an interference pattern. Would this suggest that quantum randomness originates from determinism and results from local hidden variables in the photon's environment, in contrast to the Bell-inequality EPR experiments?
Note:
As a reminder, this is nothing extremely new; a deterministic explanation of the quantum double-slit single-photon experiment was previously demonstrated by an experimental application of the pilot-wave theory using bouncing droplets:
Photons, which are epicenters of electromagnetic distortions, distort the EM mass field of the environment they move through when translating in space. These distortions of the mass-field environment are fed back to the photon as alterations of its motion trajectory. Photons, as massless particles, may pass through each other without being affected, but the dynamic EM flux coming from the mass field of the environment they interact with can affect their trajectory.
Hamiltonian mechanics is a theory developed as a reformulation of classical mechanics and predicts the same outcomes as non-Hamiltonian classical mechanics. It uses a different mathematical formalism, providing a more abstract understanding of the theory. Historically, it was an important reformulation of classical mechanics, which later contributed to the formulation of statistical mechanics and quantum mechanics.
Statistical mechanics that takes interactions into account is tied to the second law of thermodynamics. Considering the influence of temperature on the interaction potential, statistical mechanics can prove that the second law of thermodynamics is wrong.
A system of ideal gas can always be made to obey classical statistical mechanics by varying the temperature and the density. Now, a wave packet for each particle is known to expand with time. Therefore, after sufficient time has elapsed, the gas would become an assembly of interacting wavelets and hence its properties would change, since it would now require a quantum mechanical rather than classical description. The fact that a transition in properties takes place without outside interference may point to some flaw in quantum mechanics. Any comments on how to explain this?
I am asking this question on the supposition that a classical body may be broken down into particles which are so small that quantum mechanics is applicable to each of them. Here the number of particles tends to be uncountable (keeping number/volume constant).
Now, statistical mechanics is applicable if a practically infinite number of particles is present. So if a practically infinite number of infinitesimally small particles is present, quantum statistical mechanics may be applied to this collection. (Please correct me if I have the wrong notion.)
But this collection of infinitesimally small particles makes up the bulky body, which can be studied using classical mechanics.
It is suggested that the Zero Point Energy (that causes measurable effects like the Casimir force and Van der Waals force) cannot be a source of energy for energy harvesting devices, because the ZPE entropy cannot be raised, as it is already maximal in general, and one cannot violate the second law of statistical mechanics. However, I am not aware of a good theoretical or empirical proof that ZPE entropy is at its highest value always and everywhere. So I assume that ZPE can be used as a source of energy in order to power all our technology. Am I wrong or right?
If MD simulations converge to the Boltzmann distribution ρ∼exp(−βϵ) after a sufficiently long time, why do we need MD simulations at all, since all the macroscopic quantities can be computed from the Boltzmann distribution itself? I am asking this question for short peptides of a few amino acids (tripeptides, tetrapeptides, etc.).
For instance, in the paper linked above, they use MD to generate Ramachandran distributions of conformations of a pentapeptide at constant temperature. So this should obey statistical mechanics, and if so, it should satisfy the Boltzmann distribution. So I should be able to write down the distribution using the Boltzmann weight as follows,
ρ({ϕi,ψi})∼exp(−βV({ϕi,ψi}))
Here, the set of Ramachandran angle coordinates of the pentapeptide is denoted {ϕi,ψi}.
Why should I run MD to get the same distributions?
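One way to see why sampling is still needed: the Boltzmann weight gives the functional form of the distribution, but the normalization (the configurational partition function) and any average over it require integrating over a very high-dimensional, strongly coupled angle space, which is intractable analytically; MD (or Monte Carlo) is simply a way of drawing samples from that distribution without ever knowing the normalization. A minimal Metropolis sketch on a made-up one-dimensional torsional potential (the potential and all numbers are illustrative, not a real peptide force field) shows the idea.

```python
# Minimal Metropolis sketch (toy model): sampling a Boltzmann distribution
# exp(-beta*V) when the normalization is unknown. V is a made-up 1-D torsional
# potential standing in for a peptide's phi/psi energy surface.
import numpy as np

rng = np.random.default_rng(1)
beta = 1.0 / 2.5                      # 1/(k_B T) in kJ/mol units (illustrative)

def V(phi):
    """Toy torsional potential (kJ/mol), phi in radians."""
    return 8.0 * (1.0 + np.cos(3.0 * phi)) + 3.0 * np.cos(phi)

phi = 0.0
samples = []
for step in range(200000):
    trial = phi + rng.normal(scale=0.3)
    # Metropolis acceptance: only energy *differences* are needed,
    # so the unknown partition function never appears.
    if rng.random() < np.exp(-beta * (V(trial) - V(phi))):
        phi = trial
    samples.append(((phi + np.pi) % (2 * np.pi)) - np.pi)   # wrap to [-pi, pi)

hist, edges = np.histogram(samples, bins=72, range=(-np.pi, np.pi), density=True)
imax = np.argmax(hist)
print("most populated bin (rad):", 0.5 * (edges[imax] + edges[imax + 1]))
```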
How do I calculate the mean square angular displacement? Do we need any periodic boundary conditions if at each step the angle is updated as theta(t+dt) = theta(t) + eta(t), where eta(t) is Gaussian noise? Please describe the procedure to calculate this quantity.
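A common recipe (a sketch under the stated update rule, not the only convention): keep the angle unwrapped, i.e. accumulate theta(t+dt) = theta(t) + eta(t) without ever folding it back into [0, 2π); then no periodic boundary treatment is needed, and MSAD(τ) = ⟨[θ(t+τ) − θ(t)]²⟩ averaged over time origins (and particles, if there are several).

```python
# Sketch: mean square angular displacement (MSAD) from an *unwrapped* angle
# trajectory theta(t+dt) = theta(t) + eta(t). Because theta is accumulated and
# never wrapped back into [0, 2*pi), no periodic boundary correction is needed.
import numpy as np

rng = np.random.default_rng(2)
dt = 1e-3
n_steps = 20000
D_r = 0.5                                   # rotational diffusion coefficient (illustrative)

eta = rng.normal(0.0, np.sqrt(2.0 * D_r * dt), size=n_steps)
theta = np.cumsum(eta)                      # unwrapped angle trajectory

def msad(theta, max_lag):
    """MSAD(lag) = < (theta(t+lag) - theta(t))^2 >, averaged over time origins."""
    out = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        d = theta[lag:] - theta[:-lag]
        out[lag - 1] = np.mean(d * d)
    return out

m = msad(theta, 200)
# For free rotational diffusion about one axis, MSAD ~ 2 * D_r * t.
print(m[99] / (2.0 * D_r * 100 * dt))        # should be close to 1
```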
Is there any code available to calculate the spin-spin spatial correlation function in the 1D Ising model?
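I am not aware of a canonical package for this, but it is short to write. A minimal Monte Carlo sketch (single-spin-flip Metropolis, zero field, periodic boundaries; all parameters are illustrative) that estimates ⟨s_i s_{i+r}⟩; for a long 1D Ising chain the exact result ⟨s_i s_{i+r}⟩ ≈ tanh(βJ)^r provides a check.

```python
# Minimal sketch: spin-spin spatial correlation C(r) = <s_i s_{i+r}> for a 1-D
# Ising chain (zero field, periodic boundaries) via Metropolis Monte Carlo.
# Exact check for a long chain: <s_i s_{i+r}> -> tanh(beta*J)**r.
import numpy as np

rng = np.random.default_rng(3)
N, J, beta = 200, 1.0, 0.7
spins = rng.choice([-1, 1], size=N)

def sweep(spins):
    for _ in range(N):
        i = rng.integers(N)
        # Energy change for flipping spin i in E = -J * sum_i s_i s_{i+1}:
        dE = 2.0 * J * spins[i] * (spins[(i - 1) % N] + spins[(i + 1) % N])
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i] *= -1

for _ in range(2000):            # equilibration
    sweep(spins)

max_r, n_meas = 10, 4000
corr = np.zeros(max_r + 1)
for _ in range(n_meas):
    sweep(spins)
    for r in range(max_r + 1):
        corr[r] += np.mean(spins * np.roll(spins, -r))
corr /= n_meas

for r in range(max_r + 1):
    print(r, corr[r], np.tanh(beta * J) ** r)
```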
Recently, I've been selected for an ICTP program named Physics of Complex Systems, but I have a keen interest in particle physics and quantum networks. Statistical mechanics is involved in complex systems, and one of my professors said that statistical mechanics could be a helpful tool for particle physics.
I mean, in Williamson-Hall analysis, should we take the theta of a set of parallel planes, or all the peak positions corresponding to the most intense peaks?
Dear All;
If you are well-experienced in one of the fields Complex Networks, Human Genetics, or Statistical Mechanics and would like to collaborate with us in our project please contact me at:
Basim Mahmood
Project title: Statistical Mechanics of Human Genes Interactions
Regards
I would like to calculate the non-Gaussian parameter from the MSD. I think I am making a mistake in calculating it.
NGP: alpha_2(t) = 3 <Δr(t)^4> / ( 5 <Δr(t)^2>^2 ) - 1
In some articles it is written with Δr(t). I am a bit confused. Could someone please help me calculate it by explaining the terms in it?
Thank you
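For what it's worth, the Δr in the formula is the magnitude of the displacement over the lag time t (not the position itself), averaged over particles and time origins, and the 3/5 prefactor is the 3-D convention. A minimal sketch, assuming an unwrapped trajectory array of shape (frames, particles, 3):

```python
# Sketch: non-Gaussian parameter alpha_2(t) = 3<dr^4>/(5<dr^2>^2) - 1 in 3-D,
# where dr is the displacement magnitude over a lag t, averaged over particles
# and time origins. 'pos' is assumed to hold *unwrapped* coordinates.
import numpy as np

def ngp(pos, max_lag):
    """pos: (n_frames, n_particles, 3) unwrapped trajectory."""
    alpha2 = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        disp = pos[lag:] - pos[:-lag]             # displacement vectors over this lag
        r2 = np.sum(disp * disp, axis=-1)         # |dr|^2 per particle, per time origin
        m2 = np.mean(r2)                          # <dr^2>
        m4 = np.mean(r2 * r2)                     # <dr^4>
        alpha2[lag - 1] = 3.0 * m4 / (5.0 * m2 * m2) - 1.0
    return alpha2

# Self-test on pure Gaussian (Brownian) displacements: alpha_2 should be ~0.
rng = np.random.default_rng(4)
steps = rng.normal(size=(2000, 100, 3))
pos = np.cumsum(steps, axis=0)
print(ngp(pos, 5))
```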
Hello Dear colleagues:
it seems to me this could be an interesting thread for discussion:
I would like to center the discussion around the concept of entropy, focusing on the explanation-description-exemplification side of the concept.
i.e., what do you think is a good, helpful explanation of the concept of entropy (at a technical level, of course)?
A way (or ways) of explaining it that settles the concept as clearly as possible, perhaps first in a more general scenario and then (if required) in a more specific one...
Kind regards !
This is to understand how the concepts of statistical mechanics are applied in astrophysics.
I am looking for some quality materials for learning the molecular dynamics theory and the use of LAMMPS. Besides the LAMMPS manual from Sandia National Laboratory, which sources can I use for learning LAMMPS?
This question relates to my recently posted question: What are the best proofs (derivations) of Stefan’s Law?
Stefan’s Law is that E is proportional to T^4.
The standard derivation includes use of the concepts of entropy and temperature, and use of calculus.
Suppose we consider counting numbers and, in geometry, triangles, as level 1 concepts, simple and in a sense fundamental. Entropy and temperature are concepts built up from simpler ideas which historically took time to develop. Clausius’s derivation of entropy is itself complex.
The derivation of entropy in Clausius’s text, The Mechanical Theory of Heat (1867) is in the Fourth Memoir which begins at page 111 and concludes at page 135.
Why does the power relationship E proportional to T^4 need to use the concept of entropy, let alone other level 3 concepts, which takes Clausius 24 pages to develop in his aforementioned text book?
Does this reasoning validly suggest that the standard derivation of Stefan’s Law, as in Planck’s text The Theory of Heat Radiation (Masius translation) is not a minimally complex derivation?
In principle, is the standard derivation too complicated?
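For comparison, here is a compressed sketch of the standard thermodynamic route (one way of stating it, not a new derivation): granting that the energy density u depends on T alone and that radiation pressure is p = u/3, the T^4 law follows in a few lines, but the key identity still comes from entropy via a Maxwell relation, which is exactly where the conceptual overhead the question asks about enters.

```latex
% Sketch of the short thermodynamic route, assuming only that u = U/V depends on
% T alone and that the radiation pressure is p = u/3:
\begin{align*}
\left(\frac{\partial U}{\partial V}\right)_T
  &= T\left(\frac{\partial p}{\partial T}\right)_V - p
  && \text{(from } dU = T\,dS - p\,dV \text{ plus a Maxwell relation)}\\
u &= \frac{T}{3}\frac{du}{dT} - \frac{u}{3}
  && \text{(substitute } U = uV,\; p = u/3)\\
\frac{du}{u} &= 4\,\frac{dT}{T}
  \;\;\Longrightarrow\;\; u = a\,T^{4}.
\end{align*}
```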
I'm writing my dissertation on the economic dynamics of inequality, and I'm going to use econophysics as an empirical method.
Or is the concept inapplicable?
If it were applicable, could statistical mechanical methods apply? Does entropy?
Some excerpts from the article
Comparing methods for comparing networks Scientific Reports volume 9, Article number: 17557 (2019)
By Mattia Tantardini, Francesca Ieva, Lucia Tajoli & Carlo Piccardi
are:
To effectively compare networks, we need to move to inexact graph matching, i.e., define a real-valued distance which, as a minimal requirement, has the property of converging to zero as the networks approach isomorphism.
we expect that whatever distance we use, it should tend to zero when the perturbations tend to zero
the diameter distance, which remains zero on a broad range of perturbations for most network models, thus proving inadequate as a network distance
Virtually all methods demonstrated a fairly good behaviour under perturbation tests (the diameter distance being the only exception), in the sense that all distances tend to zero as the similarity of the networks increases.
If achieving thermodynamic efficiency is the benchmark criterion for all kinds of networks, then their topologies should converge to the same model. If they all converge to the same model when optimally efficient, does that cast doubt on topology as a way to evaluate and differentiate networks?
See for example, Statistical mechanics of networks, Physical Review E 70, 066117 (2004).
The architecture and topology of networks seem analogous to graphs.
Perhaps though the significant aspect of networks is not their architecture but their thermodynamics, how energy is distributed via networks.
Perhaps network linkages are only a means for optimizing energy distributions. If so, then network entropy (C log(n)) might be more fundamental (and much simpler to use) than the means by which network entropy is maximized. If that were so, then the network analogy to graphs might lead to a sub-optimal conceptual reference frame.
Dimensional capacity is arguably a better conceptual reference frame.
Your views?
This question is prompted by the books review in the September 2020 Physics Today of The Evolution of Knowledge: Rethinking Science for the Anthropocene, by Jürgen Renn.
I suspect that there is such an equation. It is related to thermodynamics and statistical mechanics, and might be characterized, partly, as network entropy.
Two articles that relate to the question are:
There is also a book whose ideas preceded the two articles above:
The Intelligence of Language.
The question is related somewhat distantly to an idea of Isaac Asimov in his science fiction, The Foundation Trilogy, psychohistory.
There are two ways to derive Boltzmann exponential probability distribution of ensemble:
1) Microcanonical Ensemble: We assume a system S(E,V,N)
E= internal Energy, V=volume, N=number of molecules or entities.
We have different energy states that the molecules can occupy, but the total energy E of the system is fixed. So whatever the distribution of molecules over the energy levels, the energy of the overall system is fixed. Then we find the maximum of the entropy of the system to obtain the equilibrium probability distribution of molecules over the energy levels. We introduce two Lagrange multipliers for two constraints: the total probability is unity and the total energy is the constant E. What we get is an exponential distribution.
2) Canonical Ensemble: We have a system with N molecules. The Helmholtz energy is defined as F = F(T,V,N). So this time the energy is not fixed but the temperature is. Instead of different energy states for the molecules, we now consider different energy states of the entire system. So by minimizing F we get the equilibrium probability distribution for the system to be in its different energy states. This time the only constraint is that the total probability is unity. The distribution we get is again exponential.
Now the question is:
How can the probability distribution of the canonical ensemble give the population distribution of molecules over different energy states, which is instead obtained from the microcanonical ensemble?
In the book Molecular Driving Forces (Ken A. Dill), Chapter 10, Equation 10.11 says something similar.
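A sketch of why the two routes agree for (nearly) independent molecules: if one molecule is taken as the "system" and the remaining N−1 molecules as the bath, the canonical distribution over molecular states reproduces the most probable occupation numbers found by the microcanonical maximization, with the same β. The notation below follows the question; q and Q_N denote the molecular and N-molecule partition functions.

```latex
% Sketch: the two routes agree for (nearly) independent molecules.
\begin{align*}
\text{Microcanonical, maximize } S:\quad
  & n_j^{*} \propto e^{-\beta \varepsilon_j},
    \qquad \beta \text{ fixed by } \textstyle\sum_j n_j^{*}\varepsilon_j = E,\\[4pt]
\text{Canonical, one molecule as the system:}\quad
  & p_j = \frac{e^{-\beta \varepsilon_j}}{q},
    \qquad q = \sum_j e^{-\beta \varepsilon_j},\\[4pt]
\text{so for } N \text{ independent molecules:}\quad
  & Q_N = \frac{q^{N}}{N!},
    \qquad \langle n_j \rangle = N p_j \propto e^{-\beta \varepsilon_j}.
\end{align*}
```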
The third rotation allegedly leaves the molecule unchanged no matter how much it is rotated, but is it really okay to assume this? A response in mathematics is welcomed, but if you can explain it in words that would be good, too.
Suppose the chemical composition of the compound, the temperature, and the pressure are known. The electronic structures of the constituent elements, from numerical solutions of quantum chemistry, are also known. Then:
- There are only 230 3-D crystallographic space groups. But is there any limit on the motif that can be included in the lattice without violating stoichiometry? How do ab-initio calculations find the appropriate motif to put into the lattice to generate the crystal structure? Without finding motifs, it is impossible to find the crystal structures whose Gibbs free energy needs to be minimized.
- Is there any mathematical method that finds the potential energy of an infinite 3-D periodic lattice with distributed charges (say, a theoretical calculation of the Madelung constant)? What are the mathematical prerequisites needed to understand such a formula?
- How can the electron cloud density and the local potential energy of a molecule/motif/lattice point be linked to the total Gibbs free energy of the molecule/lattice, integrated over the whole structure? What is the statistical-mechanical formula that relates the two, and what are the prerequisites needed to understand it?
Suppose a reference point for zero Gibbs free energy is conveniently provided.
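On the second point: the standard tool for the electrostatic energy of an infinite periodic arrangement of point charges is the Ewald sum, which splits the conditionally convergent 1/r lattice sum into a rapidly converging real-space part and a smooth reciprocal-space part; Madelung constants follow by evaluating it for a specific lattice. A sketch of the splitting (Gaussian units, charge-neutral cell assumed) is below; the prerequisites are essentially Fourier series, the error function, and some care with conditionally convergent sums.

```latex
% Ewald splitting of the electrostatic energy per unit cell (Gaussian units,
% charge-neutral cell); a sketch of the standard route to Madelung constants.
\begin{align*}
E \;=\;& \frac{1}{2}\sum_{\mathbf{n}}{\vphantom{\sum}}'\sum_{i,j}
      q_i q_j\,
      \frac{\operatorname{erfc}\!\big(\alpha\,|\mathbf{r}_{ij}+\mathbf{n}|\big)}
           {|\mathbf{r}_{ij}+\mathbf{n}|}
  + \frac{2\pi}{V}\sum_{\mathbf{k}\neq 0}
      \frac{e^{-k^{2}/4\alpha^{2}}}{k^{2}}
      \Big|\sum_j q_j\, e^{\,i\mathbf{k}\cdot\mathbf{r}_j}\Big|^{2}
  - \frac{\alpha}{\sqrt{\pi}}\sum_i q_i^{2}.
\end{align*}
% n runs over lattice translation vectors (the prime excludes i = j in the n = 0
% cell), k over reciprocal lattice vectors, V is the cell volume, and alpha is a
% splitting parameter on which the final result does not depend.
```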
Dear Colleagues :
Does anyone have literature referencing the diffusion of carbon atoms into bismuth telluride (Bi2Te3) or into some other similar compound, e.g. PbTe, (Sb,Se)Bi2Te3, Sb2Te3, etc.?
I'd really appreciate it if someone could help me out.
Kind Regards Sirs !
In the question, Why is entropy a concept difficult to understand? (November 2019) Franklin Uriel Parás Hernández commences his reply as follows: "The first thing we have to understand is that there are many Entropies in nature."
His entire answer is worth reading.
It leads to this related question. I suspect the answer is, yes, the common principle is degrees of freedom and dimensional capacity. Your views?
By quasi-particle I mean in the sense of particles dressed with their interactions/correlations? If yes, any references would be helpful.
Dear all:
I hope this question seems interesting to many. I believe I'm not the only one who is confused with many aspects of the so called physical property 'Entropy'.
This time I want to speak about Thermodynamic Entropy, hopefully a few of us can get more understanding trying to think a little more deeply in questions like these.
Thermodynamic entropy is defined through Delta(S) >= Delta(Q)/T (with equality for a reversible process). This property is only properly defined for (macroscopic) systems which are in thermodynamic equilibrium (i.e., thermal eq. + chemical eq. + mechanical eq.).
So my question is:
In terms of numerical values of S (or, perhaps better said, values of Delta(S), since we know that only changes in entropy are computable, not the absolute entropy of a system, with the exception of one at the absolute zero (0 K) point of temperature):
It is easy and straightforward to compute the change in entropy of, let's say, a chair, a table, or your car, since all these objects can be considered macroscopic systems in thermodynamic equilibrium. So just use the classical definition of entropy (the formula above) and the second law of thermodynamics, and that's it.
But what about macroscopic objects (or systems) which are not in thermal equilibrium? We are often tempted to apply the classical thermodynamic definition of entropy to macroscopic systems which, from a macroscopic point of view, seem to be in thermodynamic equilibrium, but which in reality still have ongoing physical processes that keep them from complete thermal equilibrium.
What I want to say is: what are the limits of the classical thermodynamic definition of entropy when used in calculations for systems that seem to be in thermodynamic equilibrium but really are not? Perhaps this question can also be extended to the regime of near-equilibrium thermodynamics.
Kind Regards all !
Is cancer (oncology) a field of biophysics?
Since the Gaussian is the maximum (Shannon) entropy distribution on unbounded real spaces (for a given variance), I was wondering whether the tendency of cumulative statistical processes with the same mean to have a Gaussian as the limiting distribution can be in some way physically related to the increase of (Boltzmann) entropy in thermodynamic processes.
In Johnson, O. (2004) Information Theory and The Central Limit Theorem, Imperial College Press, we can read:
"It is possible to view the CLT as an anlogue of the Second Law of Thermodynamics, in that convergence to the normal distribution will be seen as an entropy maximisation result"
Could anyone elaborate on such relationship and perhaps point to other non-obvious ones?
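One concrete numerical way to see the connection, sketched below under simple assumptions: the differential entropy of the standardized sum of n i.i.d. variables increases with n toward the Gaussian bound 0.5·ln(2πe) for unit variance, which is the entropy-maximisation reading of the CLT mentioned in Johnson (2004). The histogram entropy estimator used here is crude (bias and noise of order 10^-3), so only the trend is meaningful.

```python
# Numerical illustration (a sketch, not a proof): the differential entropy of
# the standardized sum of n i.i.d. uniform variables increases with n toward
# the Gaussian maximum 0.5*ln(2*pi*e) for unit variance.
import numpy as np

rng = np.random.default_rng(5)
gauss_bound = 0.5 * np.log(2.0 * np.pi * np.e)

def entropy_estimate(x, bins=400):
    """Crude histogram estimate of differential entropy (nats)."""
    p, edges = np.histogram(x, bins=bins, density=True)
    w = np.diff(edges)
    mask = p > 0
    return -np.sum(p[mask] * np.log(p[mask]) * w[mask])

n_samples = 1_000_000
for n in (1, 2, 4, 8):
    u = rng.uniform(-0.5, 0.5, size=(n_samples, n))      # each term has variance 1/12
    s = u.sum(axis=1) / np.sqrt(n / 12.0)                 # standardized to unit variance
    print(n, entropy_estimate(s), "Gaussian bound:", gauss_bound)
```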
I am trying to simulate a heterogeneous liquid mixture with single-site atoms (translational motion only) and multi-site rigid molecules (translational + rotational motion). The molecules also vary in mass and moment of inertia from species to species. Does anyone know how to calculate the initial magnitudes of the translational and rotational velocities in relation to the desired temperature?
I understand the widely-used MDS programs take care of this "under the hood", but I am interested to know what exactly the calculation is. I have found related texts, but they focus on uniform systems. Thank you in advance for any help.
Anne
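In case it helps, here is a sketch of what the equilibrium initialization amounts to (not any particular package's algorithm, and the species numbers below are placeholders): each translational velocity component is drawn from a Gaussian of variance k_B T/m, and each body-frame angular velocity component about the k-th principal axis from a Gaussian of variance k_B T/I_k; packages then typically remove the net momentum and rescale to hit the target temperature exactly.

```python
# Sketch (not any particular MD package's algorithm): Maxwell-Boltzmann initial
# velocities for a mixture of point particles and rigid molecules.
# Translational: each Cartesian component ~ N(0, sqrt(kB*T/m)).
# Rotational (body frame): each principal-axis component ~ N(0, sqrt(kB*T/I_k)).
import numpy as np

rng = np.random.default_rng(6)
kB = 1.380649e-23          # J/K
T = 300.0                  # target temperature, K

def trans_velocities(masses):
    """masses: (N,) kg -> (N, 3) m/s"""
    sigma = np.sqrt(kB * T / masses)[:, None]
    return rng.normal(0.0, 1.0, size=(masses.size, 3)) * sigma

def angular_velocities(principal_inertias):
    """principal_inertias: (M, 3) kg*m^2 (body frame) -> (M, 3) rad/s"""
    sigma = np.sqrt(kB * T / principal_inertias)
    return rng.normal(0.0, 1.0, size=principal_inertias.shape) * sigma

# Illustrative placeholder numbers, not real species parameters:
m = np.full(1000, 6.6e-26)                       # Ar-like mass, kg
I = np.tile([1.0e-46, 2.0e-46, 3.0e-46], (500, 1))
v = trans_velocities(m)
w = angular_velocities(I)

# Check the kinetic temperatures: <(1/2) m v^2> ~ (3/2) kB T per molecule, etc.
T_trans = np.mean(np.sum(m[:, None] * v**2, axis=1)) / (3.0 * kB)
T_rot = np.mean(np.sum(I * w**2, axis=1)) / (3.0 * kB)
print(T_trans, T_rot)
```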
Dear all,
I have a number of Likert items with statements which have the following answer options
- Less likely
- No effect on likelihood
- More likely
I am unable to find the answer to the following question, and therefore cannot seem to determine the overarching data analysis family, let alone the correct techniques, to analyse my data set.
Am I able to analyse my data with any quantitative methods, either descriptive or inferential, or do I need to use purely qualitative methods in analysing the data?
I understand that the data needs to be contextualised before the statistical mechanism is determined, for example, the parametric/non-parametric debate. However, I am struggling to determine if the data type allows for quantitative analysis when I have 25 statements, each with 700 or so answers that indicate one of the above item answer options.
Also, I have a data set that pertains to one sample at one point in time. Can any correlational statistics or inferential statistics be done? Or do those methods only apply when you have either independent or paired samples? Or am I meant to compare one statement with another (Or a compilation of statements representing a theme with another compilation representing another theme) when running correlational tests?
I am looking for the thread of information that ties together all my misalignments.
Kind regards,
Jameel
This question must be accompanied by provisos. One particular proviso simplifies the task. Assume that the problem solving used throughout the development of language was of the same kind that has occurred at all times since. In other words, assume that it is valid to use averages over time, at least for the time period under consideration. In 2009 I used ideas relating to statistical mechanics to estimate, on certain assumptions (a language-like call 'lexicon' of about 100 calls), that language began between about 141,000 and 154,000 years ago, in a couple of articles (at p. 74).
The work in those articles is over 10 years old and there have been developments since. One involves dispersion of phonemic diversity (Atkinson 2011). Are there other approaches?
Hi, for my statistical mechanics class, I had to calculate all the thermochemistry data by myself and see if I get the same results as Gaussian. Everything is good except for the zero-point energy contribution. For instance, in the Gaussian thermochemistry PDF, they say that the ''Sum of electronic and thermal Free Energies'' (which you can find easily in the output or in GaussView) is supposed to be the Gibbs free energy. But it's not! In fact, the zero-point energy is missing, and there is nowhere in the output or in GaussView where you can see the correct value. And this is also true for the enthalpy and the internal energy. In fact, we have to add the zero-point energy afterwards, which I think is weird since these are the values we really need. In their PDF, Gaussian emphasizes that the ZPE is added everywhere by default, but it's not true. My teacher is also suspicious about the software. Are we missing something here? What do you take as your G, H and U, and where do you find them? I think we could all be wrong if we forget to add the zero-point energy, which is what I think Gaussian does.
Thanks
I came across this question while studying Tuckerman book on Statistical Mechanics for Molecular Dynamics.
Let's just say we're looking at the classical continuous canonical ensemble of a harmonic oscillator, where:
H = p^2 / 2m + 1/2 * m * omega^2 * x^2
and the partition function (omitting the integrals over phase space here) is defined as
Z = Exp[-H / (kb * T)]
and the average energy can be calculated as proportional to the derivative of ln[Z].
The equipartition theorem says that each independent quadratic term must contribute R/2 per mole to the system's energy, so for a 3D harmonic system we should get 3R. My question is: does equipartition break down if the frequency is temperature dependent?
Let's say omega = omega(T). Then, when you take the derivative of Z to calculate the average energy, if omega'(T) is not zero it will either add to or subtract from the average energy and will therefore disagree with equipartition. Is this correct?
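A sketch of where the discrepancy sits, for the classical 1-D oscillator (treating a temperature-dependent ω as an effective-Hamiltonian device): the Gaussian phase-space integrals give ⟨H⟩ = k_B T regardless of ω, so equipartition as a statement about the ensemble average survives; what breaks is the identity ⟨H⟩ = −∂ ln Z/∂β, which acquires exactly the extra ω'(T) term noticed in the question.

```latex
% Classical 1-D oscillator with a temperature-dependent frequency. The Gaussian
% phase-space integrals do not care what omega is, so the ensemble average of H
% still satisfies equipartition:
\begin{align*}
Z &= \frac{1}{h}\int dp\,dx\; e^{-\beta H} = \frac{2\pi}{h\,\beta\,\omega},
\qquad
\langle H \rangle
  = \frac{\int dp\,dx\; H\, e^{-\beta H}}{\int dp\,dx\; e^{-\beta H}}
  = k_B T \quad\text{for any } \omega,\\[4pt]
\text{but if } \omega = \omega(T):\quad
-\frac{\partial \ln Z}{\partial \beta}
  &= k_B T - k_B T^{2}\,\frac{\omega'(T)}{\omega(T)} \;\neq\; \langle H \rangle .
\end{align*}
% The extra term signals that -d(ln Z)/d(beta) no longer equals <H> when the
% Hamiltonian's parameters depend on T; equipartition itself does not fail.
```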
According to statistical mechanics, the translational energy of a system of point particles is 3/2 NkT, and a single particle exhibits only translational energy. So can we simply say that the energy of a single-particle system is obtained by substituting N = 1? As far as I remember, the first principles of statistical mechanics assume that the number of particles in a system is extremely large, so we can't directly apply those principles to a single-particle system.
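For what it's worth, a short canonical-ensemble sketch: for one particle in contact with a heat bath, the Maxwell-Boltzmann average (or a long-time average, if the dynamics is ergodic) still gives 3/2 k_B T; what is lost at N = 1 is not the mean but the sharpness, since the relative fluctuation is of order one instead of 1/√N.

```latex
% Canonical-ensemble average kinetic energy of a single particle, and the size
% of its fluctuations (a sketch):
\begin{align*}
\left\langle \tfrac{1}{2} m v^{2} \right\rangle
  = \frac{\int \tfrac{1}{2} m v^{2}\, e^{-\beta m v^{2}/2}\, d^{3}v}
         {\int e^{-\beta m v^{2}/2}\, d^{3}v}
  = \frac{3}{2}\,k_B T ,
\qquad
\frac{\sqrt{\langle E_{\mathrm{kin}}^{2}\rangle - \langle E_{\mathrm{kin}}\rangle^{2}}}
     {\langle E_{\mathrm{kin}}\rangle}
  = \sqrt{\tfrac{2}{3}} \approx 0.8 .
\end{align*}
```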
Even gases like air are assumed (for sufficiently low flow velocities) to have constant density. Is it only because the hydrodynamic equations of motion are easier to solve when incompressibility is assumed? Or can it be shown with statistical mechanics why incompressibility is frequently assumed?
In the case of sound waves, small deviations in density are taken into account, and the kinetic energy of many sound waves is much lower than that of the air flow around a car at 100 mph. Does compressibility also come into play at low speeds, and if yes, why?
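The usual fluid-dynamics estimate (a sketch, not a statistical-mechanical proof): combining the Bernoulli pressure scale with the isentropic relation δρ ≈ δp/c² gives density variations of order Ma², so treating air as incompressible at low speed is a modeling approximation for the bulk flow, and sound is precisely the small compressible remainder that the approximation filters out.

```latex
% Order-of-magnitude estimate for density variations in steady low-speed flow:
\begin{align*}
\delta p \sim \tfrac{1}{2}\rho u^{2},
\qquad
\frac{\delta\rho}{\rho} \simeq \frac{\delta p}{\rho c^{2}}
  \sim \frac{u^{2}}{2c^{2}} = \frac{\mathrm{Ma}^{2}}{2}.
\end{align*}
% Example: u = 45 m/s (about 100 mph) and c ~ 343 m/s give Ma ~ 0.13 and
% delta(rho)/rho ~ 0.9%, which is why the bulk flow is treated as incompressible,
% while sound waves are exactly the small compressible correction.
```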
In the introduction to his text, A Student's Guide to Entropy, Don Lemons has a quote "No one really knows what entropy is, so in a debate you will always have the advantage" and writes that entropy quantifies "the irreversibility of a thermodynamic process." Bimalendu Roy in his text Fundamentals of Classical and Statistical Mechanics (2002) writes "The concept of entropy is, so to say, abstract and rather philosophical" (p. 29). In Feynman's lectures (ch. 44-6): "Actually, S is the letter usually used for entropy, and it is numerically equal to the heat which we have called Q_S delivered to a 1°-reservoir (entropy is not itself a heat, it is heat divided by a temperature, hence it is measured in joules per degree)." In thermodynamics there is the Clausius definition, which is a ratio of a quantity of heat Q to a degree Kelvin, Q/T, and the Boltzmann approach, k log(n). Shannon analogized information content to entropy; 2 as the base of the logarithm gives information content in bits. Eddington in The Nature of the Physical World (p. 80) wrote: "So far as physics is concerned time's arrow is a property of entropy alone." Thomas Gold, physicist and cosmologist, suggested that entropy manifests or relates to the expansion of the universe. There are reasons to suspect that entropy and the concept of degrees of freedom are closely related. How best do we understand entropy?
Zipf's law (which is a power law) is the maximum entropy distribution of a system of P particles in N boxes where P >> N. Its derivation is based on the microcanonical ensemble, in which the entropy is calculated for an isolated system. In the canonical ensemble the system is in contact with an external bath at a fixed temperature T. The macroscopic quantities of the canonical ensemble are calculated from its partition function, in which the probabilities decay exponentially with energy.
The question is: how is Zipf's law a power law when it can be obtained from an exponential partition function?
Hi everyone, can anyone help me find the entropy index to measure diversification for a company using Σ Pi*ln(1/Pi)? I already have the total sales for each year and each segment's sales share. N is the number of industry segments, and Pi is the percentage of the ith segment in total company sales.
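A minimal sketch of the calculation as described in the question (the segment names and sales figures below are made-up placeholders): compute each segment's share Pi of total sales for the year, then sum Pi·ln(1/Pi); the index is 0 for a single-segment firm and ln(N) when all N segments have equal sales.

```python
# Sketch: entropy index of diversification, E = sum_i P_i * ln(1/P_i),
# where P_i is segment i's share of total company sales for a given year.
import math

def entropy_index(segment_sales):
    total = sum(segment_sales)
    shares = [s / total for s in segment_sales if s > 0]   # zero-sales segments contribute nothing
    return sum(p * math.log(1.0 / p) for p in shares)

sales_2023 = [500.0, 300.0, 200.0]        # three segments (placeholder numbers)
print(entropy_index(sales_2023))           # 0 for one segment, ln(N) for N equal segments
```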
I have recently started studying statistical mechanics, but it's really hard to understand some of the basic concepts, especially from the textbooks. I follow Statistical Physics by F. Reif, but the language is a little hard to understand. Please give me suggestions on simpler books.
I have to calculate the rate of tunnelling in a protein, for which I need the transmission coefficient. How do I calculate it? Or is there another way that does not require the transmission coefficient?
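One standard estimate, in case it is useful (a sketch, not a protein-specific recipe): the WKB (semiclassical) transmission coefficient through a one-dimensional barrier, which is often combined with an attempt frequency to get an approximate tunnelling rate; more sophisticated treatments (e.g. instanton or path-integral rate theories) exist but need more machinery.

```latex
% WKB (semiclassical) transmission coefficient through a 1-D barrier V(x) at
% energy E, with classical turning points x_1, x_2 where V(x) = E:
\begin{align*}
T(E) \;\approx\; \exp\!\left[-\frac{2}{\hbar}\int_{x_1}^{x_2}
      \sqrt{2m\,\big(V(x)-E\big)}\;dx\right],
\qquad
k_{\mathrm{tunnel}} \;\sim\; \nu\, T(E),
\end{align*}
% with nu an attempt frequency; for a rectangular barrier of height V_0 and
% width L this reduces to T ~ exp(-2L*sqrt(2m(V_0 - E))/hbar).
```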
Does anyone have experience estimating formation enthalpy or atomization energy of molecules using Gaussian? I am trying to calculate this parameter for simple molecules like H2O and NH3; however, there is a significant error even with a decent level of theory. I am wondering how accurately this can be done. Is there any specific theory/method that performs better than others? (Currently I am using B3LYP/6-311++G(2d,2p).) I do not have any QM background, and any thoughts would be appreciated.
Here are some numbers I am getting from Gaussian, compared to the JANAF table values.
Molecule    Gaussian    JANAF
H2O         1174 kJ     917 kJ
NH3         1432        1158
N2          1467        941
H2          434         432
O2          870         493
Javad
The characteristic frequency of thermal motion is around 7E12 Hz at room temperature (300 K), but from that information how can we conclude that the bonds are stiff and don't vibrate?
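The comparison usually intended (a sketch; the bond frequency below is an illustrative C-H-stretch-like value, not a quoted datum): the thermal frequency k_B T/h at 300 K is about 6×10^12 Hz, while stiff covalent stretches sit around 10^13-10^14 Hz, so hν >> k_B T and the Boltzmann factor for exciting such a vibration is tiny; in that sense the bonds are "hard" and effectively do not vibrate thermally.

```python
# Sketch of the standard comparison: thermal frequency kB*T/h at 300 K versus a
# typical bond-stretch frequency (9e13 Hz below is an illustrative C-H-like
# stretch, ~3000 cm^-1).
import math

h = 6.62607015e-34      # J*s
kB = 1.380649e-23       # J/K
T = 300.0

nu_thermal = kB * T / h
nu_bond = 9.0e13        # Hz, illustrative stiff covalent stretch

print("thermal frequency:", nu_thermal, "Hz")                       # ~6.2e12 Hz
print("excitation factor exp(-h*nu/(kB*T)):",
      math.exp(-h * nu_bond / (kB * T)))                             # ~6e-7
```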
Hi, I want to ask a question about the basic theory of molecular dynamics.
In MD simulations, we can calculate the temperature using the average kinetic energy of the system. For an ideal gas (pV = NkbT), I can derive the relationship between temperature and kinetic energy: 1/2 m v^2 = 3/2 kb T (in 3 dimensions). But if I simulate a non-ideal gas or a fluid, how can I get the relationship between temperature and kinetic energy?
Could anyone give me some understandable explanations(I know little about quantum mechanics)? Any relevant material or link will be appreciated. Thanks!
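The usual answer (a sketch): equipartition applied to the momenta alone does not care about the potential, because the kinetic energy is quadratic in the momenta for ideal and non-ideal systems alike; so MD codes define the instantaneous (kinetic) temperature as T = 2·KE/(N_dof·k_B), with N_dof = 3N minus constraints and any removed center-of-mass momentum.

```python
# Sketch: the kinetic ("instantaneous") temperature used in MD. Equipartition
# over momenta holds for interacting systems too, because the kinetic energy is
# quadratic in the momenta whatever the potential is.
import numpy as np

kB = 1.380649e-23   # J/K

def kinetic_temperature(masses, velocities, n_constraints=0, remove_com=True):
    """masses: (N,) kg, velocities: (N, 3) m/s."""
    ke = 0.5 * np.sum(masses[:, None] * velocities**2)
    n_dof = 3 * masses.size - n_constraints - (3 if remove_com else 0)
    return 2.0 * ke / (n_dof * kB)

# Quick self-check with Maxwell-Boltzmann velocities at 300 K (placeholder mass):
rng = np.random.default_rng(7)
m = np.full(10000, 3.0e-26)
v = rng.normal(0.0, np.sqrt(kB * 300.0 / m)[:, None], size=(10000, 3))
print(kinetic_temperature(m, v, remove_com=False))   # ~300 K
```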
I want to know: does the negative-temperature state only catch the eye conceptually, or is it truly significant in helping us understand thermodynamics?
- In the early days, Purcell et al., and also my university textbooks, mentioned negative T for the spin degrees of freedom of an NMR system;
- In 2013, S. Braun et al. performed an experiment with cold atoms and realized an inverted energy-level population for the motional degrees of freedom (http://science.sciencemag.org/content/339/6115/52);
- There are many disputes about Boltzmann entropy versus Gibbs entropy; as far as I know, Jörn Dunkel et al. (https://www.nature.com/articles/nphys2815) in particular insist that the Gibbs entropy is the physical one and argue that negative T is wrong;
- After that, many debates emerged; I have read several papers, and they all agree with the conventional Boltzmann entropy.
Does anyone have comments about this field?
Is it truly fascinating, or just trivial, to realize a population-inversion state, i.e. negative temperature?
Or does anyone have a clarification of a Carnot engine working between a negative-T and a positive-T substance?
Any comments and discussions are welcome.
Hi,
I apologize for this apparently silly question, but could you please tell me whether there is an underlying relationship between defect-driven phase transitions and directed percolation?
Secondly, is it possible to have a system which undergoes a KT transition at T1, generating free vortices, followed by a spatial spreading of disorder via directed percolation at T2?
If there are any relevant examples and materials, please do let me know.
Many thanks.
Wang Zhe
Hi,
The vortex-unbinding Kosterlitz-Thouless physics generally applies to two-dimensional systems and occasionally to three-dimensional solids.
I was wondering if there exists a one-dimensional analogue of the vortex unbinding that occurs in two dimensions. Could anyone point me to one, please?
Thank you.
Very kind wishes,
Wang Zhe
We know the definition of ergodicity and we know ergodic mappings. But what is an ergodic process?
Why are Hamilton's equations not used to construct the dynamical equations of liquid crystals?
Hi everyone,
I'm trying to solve this exercise (attached file) from the J. M. Yeomans book "Statistical Mechanics of Phase Transitions".
I understood how the expansion works for the same model without the field term, but here I have trouble figuring out which terms vanish and which ones do not, in order to answer the second question. Also, I don't get what the S_m(v,N) term represents...
Anyone's help is welcomed ! :)
Hi!
Please, could anyone point me to an intuitive way to understand the exponential divergence of the correlation length at the KT transition, in contrast to the usual algebraic divergence in conventional critical phenomena?
Thank you.
Wang Zhe
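For reference, the standard statement behind the contrast asked about above (a sketch of the result, not a derivation): the renormalization-group flow near the Kosterlitz-Thouless temperature produces an essential singularity rather than a power law, because the vortex interaction is marginal (logarithmic), so the usual power-law exponents are in this sense degenerate.

```latex
% Correlation length above the Kosterlitz-Thouless temperature (standard result):
\begin{align*}
\xi(T) \;\sim\; \exp\!\left(\frac{b}{\sqrt{T - T_{\mathrm{KT}}}}\right),
\qquad T \to T_{\mathrm{KT}}^{+},
\end{align*}
% an essential singularity, versus the algebraic divergence
% xi ~ |T - T_c|^{-nu} of an ordinary continuous transition; b is non-universal.
```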