Question

# Are the old postulates of statistical mechanics still postulates, or are they now derived conclusions?

Maybe there is a direct relation between the measurement problem and the relationship between microscopic and macroscopic phenomena. I recently learned of a topic called the "collapse model" of QM. Some people claim that this is the resolution of the measurement problem, and also the explanation of the relationship between microscopic and macroscopic phenomena. I'm not an expert but I did find a paper that seems to be something of a tutorial of the collapse model. You can find it at this location:
1 Recommendation

For a simple, introductory treatment, there is a book by Prof. F. Reif in the Berkeley Physics Course, i.e., Vol. 5: "Statistical Physics" by F. Reif.
In chapter 7, section 7.4, p. 281 of the 1965 McGraw-Hill edition, he discusses in an introductory way what he calls "the basic five statements of statistical thermodynamics," which are based on some statistical postulates that he also discusses in section 3.3, p. 111. There are three postulates, set inside boxes as Eqs. 17, 18, and 19, and among those is the one you refer to.
I suggest you read what Prof. Reif has to say about your interesting question.
Kind Regards.
14 Recommendations

I am not an expert on the fundamentals of statistical physics, but let me try (and maybe some of the actual experts can chime in).
What you describe is the microcanonical ensemble where one assumes the system to be completely isolated. This implies that it has a well defined energy at all times. As this value of the energy is compatible with many combinations of positions and momenta (velocities) of particles, the specific state of the system at a given time is drawn from a probability distribution. And this distribution is parametrized by the energy. That explains why the energy is the important quantity here. Fixing other observables leads to other ensembles.
The postulate that all states with the same energy are equally probable is still a postulate, as far as I am aware. And it seems to me that the general suspicion is that it is, in fact, not true for most systems. However, it seems to lead to the right conclusions. In some sense, the fact that the true probability distribution is more complicated is apparently of little consequence for the average quantities that are calculated with the probability distribution.
Bayesian probability theory likely has something to say to this end, but here I am really not qualified to make any statements. However, my understanding is that it is not fully understood why the microcanonical postulate works.
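As a toy illustration of the microcanonical counting described above (my own sketch; the spin system and the convention that energy is minus the total magnetization are made up for concreteness): the postulate weights every configuration with the prescribed energy equally, and ensemble averages then follow by pure enumeration.

```python
from itertools import product
from fractions import Fraction

def microcanonical_average(n_spins, energy):
    """Average magnetization over all configurations of n_spins two-level
    spins (each +1 or -1, with energy taken as -sum of spins) that have the
    prescribed total energy, giving every compatible microstate equal weight
    (the equal a priori probability postulate)."""
    states = [s for s in product((1, -1), repeat=n_spins) if -sum(s) == energy]
    if not states:
        raise ValueError("no microstate has that energy")
    avg = Fraction(sum(sum(s) for s in states), len(states))
    return avg, len(states)

# 4 spins at energy -2 means exactly 3 spins up and 1 down: C(4,3) = 4
# equally probable microstates, each with magnetization +2.
avg, count = microcanonical_average(4, -2)
```

The point of the toy is only that, once the postulate is granted, everything else is bookkeeping; the hard question in this thread is why the equal weighting is justified in the first place.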
1 Recommendation
That all states with the same energy are equally probable assumes non-degeneracy and isn't true if the states are degenerate. It just amounts to the statement that, for compact phase spaces, the invariant distribution is the uniform distribution.
So what really matters is what are the transformations that leave the phase space invariant.
That energy is the relevant quantity is, also, a simplification, since what really matters is the volume in phase space, which has the dimensions of an action, not an energy.
It isn't possible to prove postulates; what is possible is to adopt different frameworks, and with different frameworks, statements that are postulates in one framework are theorems in another. It then becomes a question of taste which framework is used.
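The remark above that phase-space volume is what really matters can be checked numerically (a sketch with conventions of my own choosing, not tied to any particular system): a leapfrog step for H = p²/2 + V(q) is a composition of shear maps, so its Jacobian determinant is exactly 1 for any potential, i.e. it preserves phase-space area, which is why the uniform distribution on a compact energy shell is invariant.

```python
def leapfrog_step(q, p, dt, dVdq):
    """One leapfrog (Stormer-Verlet) step for H = p^2/2 + V(q)."""
    p = p - 0.5 * dt * dVdq(q)
    q = q + dt * p
    p = p - 0.5 * dt * dVdq(q)
    return q, p

def jacobian_det(q, p, dt, dVdq, h=1e-6):
    """Numerical Jacobian determinant of the map (q, p) -> leapfrog_step(q, p),
    via central differences."""
    q1, p1 = leapfrog_step(q + h, p, dt, dVdq)
    q0, p0 = leapfrog_step(q - h, p, dt, dVdq)
    dq_dq, dp_dq = (q1 - q0) / (2 * h), (p1 - p0) / (2 * h)
    q1, p1 = leapfrog_step(q, p + h, dt, dVdq)
    q0, p0 = leapfrog_step(q, p - h, dt, dVdq)
    dq_dp, dp_dp = (q1 - q0) / (2 * h), (p1 - p0) / (2 * h)
    return dq_dq * dp_dp - dq_dp * dp_dq

# Even for a nonlinear force (quartic potential, dV/dq = q^3) the determinant
# is 1: phase-space volume is preserved (Liouville's theorem).
det = jacobian_det(0.3, -0.7, 0.05, lambda q: q ** 3)
```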
I can add short comments. A more general principle is that of maximum entropy in a state of equilibrium. The principle has been proven theoretically only for a small number of systems.
Energy, as well as momentum and volume, is exceptional because these are extensive conserved quantities. But in reality the extensive property is satisfied only approximately, for very large systems with short-range interactions.
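The maximum entropy principle mentioned above can be made concrete with a small sketch (the energy levels and target mean energy are arbitrary choices of mine): maximizing the entropy -Σ p_i ln p_i at fixed mean energy yields the Gibbs form p_i ∝ exp(-βE_i), with β fixed by the constraint; since the mean energy is monotonically decreasing in β, bisection finds it.

```python
import math

def gibbs_distribution(levels, mean_energy, lo=-50.0, hi=50.0):
    """Maximum-entropy distribution over the given energy levels with the
    prescribed mean energy: p_i proportional to exp(-beta * E_i), with beta
    fixed by bisection so that the average energy matches."""
    def avg_energy(beta):
        weights = [math.exp(-beta * e) for e in levels]
        z = sum(weights)
        return sum(w * e for w, e in zip(weights, levels)) / z
    # avg_energy(beta) is strictly decreasing in beta, so bisect on it.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if avg_energy(mid) > mean_energy:
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    weights = [math.exp(-beta * e) for e in levels]
    z = sum(weights)
    return beta, [w / z for w in weights]

# Three levels and a mean energy below the uniform average, so beta > 0.
beta, probs = gibbs_distribution([0.0, 1.0, 2.0], mean_energy=0.5)
```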
2 Recommendations
My question is:
When the energy of thermal motion of a given system is converted into potential energy, such as gravitational potential energy, how can one explain those postulates?
1 Recommendation
A more modern confirmation of the postulate is that the Hamiltonian enters the Schrödinger equation. From it we obtain the von Neumann equation, and the Liouville equation in the classical limit. The postulate under discussion does not contradict them.
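A minimal numerical check of this compatibility (a 2×2 toy of my own; the numbers are arbitrary and the state is not normalized, which does not affect the commutator): under the von Neumann equation dρ/dt = -i[H, ρ], any density matrix that is a function of the Hamiltonian alone, such as ρ = aI + bH, commutes with H and is therefore stationary, just as a phase-space distribution depending only on energy is stationary under the Liouville equation.

```python
def matmul(a, b):
    """2x2 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def commutator(a, b):
    """[a, b] = a b - b a for 2x2 matrices."""
    ab, ba = matmul(a, b), matmul(b, a)
    return [[ab[i][j] - ba[i][j] for j in range(2)] for i in range(2)]

# A Hermitian "Hamiltonian" and a state that is a function of H only:
# rho = 0.5*I + 0.1*H, i.e. the populations depend on the energy alone.
H = [[1.0, 0.5], [0.5, 2.0]]
rho = [[(0.5 if i == j else 0.0) + 0.1 * H[i][j] for j in range(2)]
       for i in range(2)]

# [H, rho] = 0, so d(rho)/dt = -i[H, rho] vanishes: such a state is
# stationary, consistent with the equal a priori probability postulate.
C = commutator(H, rho)
max_entry = max(abs(C[i][j]) for i in range(2) for j in range(2))
```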
2 Recommendations
The postulate of equal a priori probabilities can only hold when thermal motion is stronger than the interactions.
For example, we cannot describe the elastic potential energy of a spring with the aid of the postulate of equal a priori probabilities.
The postulate of equal a priori probabilities is, I think, more a tool allowing for convenient calculations than an absolute principle. First of all, in systems with additional globally conserved quantities, such as momentum or angular momentum, one has to restrict the states considered to those having the prescribed values of all of these. In addition, in systems going through a symmetry-breaking phase transition, e.g. on lowering temperature, the set has to be restricted further: for a crystalline state, both the orientation and the lattice positions around which the particles oscillate have to be fixed. The postulate is based on the belief that local properties are the same for all states satisfying all requirements imposed by the known conservation laws and symmetry breakings. And whenever this is found not to be the case, an additional restriction must be added, usually described as an additional symmetry-breaking rule.
To my taste statistical physics is a highly pragmatic, yet highly successful branch of physics.
The problem is the effective range of statistical physics; as we know, thermodynamics does not need to consider those postulates.
When the energy of thermal motion within a gas system is converted into the elastic potential energy of a spring, we confront the following problem: for the energy of thermal motion of the gas system, the postulate of equal a priori probabilities is applicable; for the elastic potential energy of the spring, it is not.
Therefore, the postulate of equal a priori probabilities is not applicable to the complete process.
1 Recommendation
I remember reading that statistical mechanics applies only to systems in thermodynamic equilibrium. I guess a compressed spring can be thought of as being in thermodynamic equilibrium as long as it is held in the compressed state, but that is not a very interesting situation (the questions that statistical mechanics would answer are questions that nobody asks). A more interesting situation is what happens when the spring is released, and is no longer in thermodynamic equilibrium. As already pointed out by Tang Suye, statistical mechanics does not apply.
Although statistical mechanics has limited applicability, the original question that started this thread can still be asked regarding those situations that statistical mechanics does apply to, so I am still interested in suggested answers.
@Edmonds is too restrictive about statistical mechanics. There is equilibrium statistical mechanics, which can be used for deriving thermodynamics and for obtaining explicit forms for functions like equations of state (e.g. expressing pressure as a function of temperature and density, or melting temperature as a function of pressure), for which thermodynamics derives general relations. Many people ask questions related to these.
In addition there is non-equilibrium statistical mechanics, which has a lot to say about the oscillations of springs and their damping, as well as about many more non-equilibrium phenomena. Perhaps the oldest example is Boltzmann's equation, describing, among other things, the decay to equilibrium of a dilute gas.
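A caricature of this decay to equilibrium (a BGK-style relaxation toy of my own, not the actual Boltzmann collision operator): a discrete velocity distribution relaxing as df/dt = -(f - f_eq)/τ, with the H-function Σ f ln f decreasing monotonically along the way, as the H-theorem requires.

```python
import math

def relax(f, f_eq, tau, dt, steps):
    """Explicit Euler for the BGK-type relaxation df/dt = -(f - f_eq)/tau,
    recording the H-function H = sum f ln f at every step."""
    history = []
    for _ in range(steps):
        history.append(sum(x * math.log(x) for x in f))
        f = [x + dt * (e - x) / tau for x, e in zip(f, f_eq)]
    history.append(sum(x * math.log(x) for x in f))
    return f, history

f0 = [0.7, 0.2, 0.1]      # initial normalized distribution over 3 velocities
f_eq = [1/3, 1/3, 1/3]    # equilibrium: uniform over the 3 velocities
f_final, H = relax(f0, f_eq, tau=1.0, dt=0.1, steps=100)
```

The real Boltzmann equation replaces the single relaxation time with a bilinear collision integral, but the qualitative picture (monotone decrease of H toward its equilibrium value) is the same.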
1 Recommendation
Thank you Henk van Beijeren for the information. I never studied non-equilibrium statistical mechanics and did not know there is such a thing. I know of Boltzmann's equation, a topic in kinetic theory, but was not aware that this also belongs to the category of non-equilibrium statistical mechanics. If it does, then non-equilibrium statistical mechanics does not use the same postulates (e.g., equal a priori probabilities) as equilibrium statistical mechanics. The question that started this thread is asking about those postulates so I edited the question by replacing "statistical mechanics" with "equilibrium statistical mechanics".
1 Recommendation
You are totally right, and thank you for that clarifying answer:
Non-equilibrium statistical mechanics has a great deal to do with the damping of springs, and yes, the Boltzmann equation for the decay of a dilute gas is one of the examples (even, say, in the case where the temperature gradient can be neglected).
But when there are temperature gradients, the exercise becomes more interesting as a numerical problem.
It took me 28 years from my first course on non-equilibrium statistical mechanics to understand your remarkable statement.
Kind Regards.
14 Recommendations
Up to now, the postulate of equal a priori probabilities cannot be deduced from the more fundamental laws of physics; it is thought to be unprovable.
The postulate applies only to the energy of thermal motion, one particular form of the internal energy, not to all forms of energy.
The equilibrium states of a given thermodynamic system include thermal equilibrium, mechanical equilibrium, and chemical equilibrium; the postulate of equal a priori probabilities cannot be applied to mechanical or chemical equilibrium.
The effective ranges of statistical mechanics and thermodynamics are different; statistical mechanics by itself can hardly establish relationships between the thermodynamic functions similar to the fundamental equations of thermodynamics.
3 Recommendations
Tang Suye said "Up to now, the postulate of equal a priori probabilities cannot be deduced from the more fundamental laws of physics, it is thought to be unprovable. "
That is the answer to my question. My question has been answered. But more discussions might follow on this thread. I heard of a theory stating that macro-physics (large and complicated systems) has additional postulates independent of and not derivable from the more fundamental laws of physics. The postulate of equal a priori probabilities appears to be an example. So what is missing from the fundamental laws? Initial conditions of the universe? Any thoughts on that?
1 Recommendation
I forgot to mention that the theory I heard was from a person being interviewed on an episode of a TV series called "Closer to Truth". Unfortunately, I don't remember which episode or the name of the person being interviewed. I only remember that the person said that very large and complicated systems (including but not limited to biological systems) are subject to postulates not derivable from the accepted laws governing smaller systems. The only thing that I can think of that is missing from the accepted fundamental laws are initial conditions. Perhaps something else is missing that I am not aware of?
1 Recommendation
Dear Prof. L.D. Edmonds
This is a formal definition of physical kinetics, and it includes the differences from the thermodynamics of non-equilibrium processes, although it does not say anything about springs, which is one of the most wonderful differences, in addition to the physical concepts of the scattering lifetime and the mean free path:
It is taken, adapted, and translated from a Russian source; I do not know why the LaTeX processor does not work on the cited website:
Physical kinetics (from the ancient Greek κίνησις, movement) is the microscopic theory of processes in non-equilibrium media. It uses the methods of quantum or classical statistical physics to study the transfer of energy, momentum, charge, and matter in various physical systems (gases, plasmas, liquids, and solids) and the influence of external fields on them.
Unlike the thermodynamics of non-equilibrium processes and the electrodynamics of continuous media, kinetics proceeds from the idea of the molecular structure of the media under consideration, which makes it possible to calculate from first principles the kinetic coefficients, dielectric and magnetic permeabilities, and other characteristics of continuous media.
Physical kinetics includes:
• The kinetic theory of gases from neutral atoms or molecules.
• The statistical theory of nonequilibrium processes in plasma.
• The theory of transport phenomena in solids (dielectrics, metals, and semiconductors) and liquids.
• The kinetics of magnetic processes.
• The theory of kinetic phenomena associated with the passage of fast particles through matter.
• The theory of transport processes in quantum liquids and superconductors.
• The kinetics of phase transitions.
Kind Regards.
12 Recommendations
Perhaps the best example of a postulate that is not derivable from the properties of formal systems is Boltzmann's Stosszahlansatz: the absence of velocity correlations between pairs of particles about to collide. In a way this is indeed connected to initial conditions; Loschmidt's paradox argues that after time reversal, a single system or an ensemble of systems will obey an anti-Boltzmann equation, with zero velocity correlations just after collisions instead of before. But this holds only until the system returns to the initial time. If the resulting state shows no velocity correlations, the kinetic equation describing the system subsequently will be the ordinary Boltzmann equation. To be noted: there is no mathematical proof that an ensemble satisfying the Stosszahlansatz initially will keep satisfying it forever, with the exception of Lanford's proof for extremely dilute hard-sphere systems. But the latter remains limited to times up to a finite but fairly small fraction of the mean free time between subsequent collisions of the same particle.
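The Kac ring model is a standard toy for exactly this tension, and a short simulation (my own sketch; ring size and marker density are arbitrary) shows both faces at once: a molecular-chaos argument, the analogue of the Stosszahlansatz, predicts the color imbalance decays like (1-2μ)^t, while the exact dynamics is reversible, so running the map backwards restores the initial state exactly, as in Loschmidt's objection.

```python
import random

def kac_step(colors, marked, forward=True):
    """One step of the Kac ring model: every ball moves one site around the
    ring (clockwise if forward, counterclockwise if not) and flips its color
    whenever it crosses a marked edge; marked[i] is the edge between sites
    i and i+1. The map is exactly invertible."""
    n = len(colors)
    new = [0] * n
    for i in range(n):
        if forward:
            new[(i + 1) % n] = -colors[i] if marked[i] else colors[i]
        else:
            new[i] = -colors[(i + 1) % n] if marked[i] else colors[(i + 1) % n]
    return new

random.seed(0)
n, mu = 2000, 0.1                       # ring size, fraction of marked edges
marked = [random.random() < mu for _ in range(n)]
colors = [1] * n                        # all "white": color imbalance = 1

state, imbalance = colors[:], []
for t in range(50):                     # forward run: imbalance decays
    state = kac_step(state, marked)     # roughly like (1 - 2*mu)**t, the
    imbalance.append(sum(state) / n)    # molecular-chaos prediction

for t in range(50):                     # Loschmidt-style reversal: run the
    state = kac_step(state, marked, forward=False)  # map backwards 50 steps
recovered = (state == colors)           # ...and the initial state returns
```

Under the molecular-chaos assumption the imbalance would decay forever; the exact ring instead returns to its initial state on reversal (and is periodic with period 2n), which is precisely the sense in which the Stosszahlansatz is a statistical assumption rather than a mechanical theorem.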
The postulate of equal a priori probabilities to my impression remains just a convenient tool for calculating or at least defining equilibrium averages. It is sufficient (under proper conditions) but by no means necessary.
Oscar E. Lanford, "Time evolution of large classical systems," in Dynamical Systems, Theory and Applications, pages 1–111, Springer, 1975.
2 Recommendations
The postulate of equal a priori probabilities cannot be applied to complex systems. The main topic concerning complex systems is evolution; statistical mechanics does not contain an internal causal mechanism of evolution, and can only describe some simple systems, or systems of simple systems.
This is not only a problem about the postulates or initial conditions. One key point is: are the theoretical structures of the micro-theory and the macro-theory similar?
For example:
1. The master equations of dynamics (including Newton's, Maxwell's, and Hamilton's) are established on a fundamental principle, the law of conservation of energy. Present-day dynamics has no theoretical structure similar to the second law of thermodynamics, so it makes no sense to discuss irreversibility within present-day dynamics, and the second law of thermodynamics cannot be derived from it.
2. The second law of thermodynamics itself already contains an internal mechanism of evolution, which cannot be described by statistical mechanics. The theoretical postulates of the two are different, and so are their theoretical structures.
The fundamental laws of dynamics and statistical mechanics describe "being"; complexity theory describes "becoming"; the latter cannot be derived from the former.
2 Recommendations
Thank you Tang Suye . But I have a question. Isn't it true that the second law of thermodynamics (or some equivalent description of an approach to thermodynamic equilibrium) has been derived for ideal gases characterized by binary collisions? Or am I wrong about that? If I am correct then we might ask if a similar derivation for more complicated systems is not available only because nobody figured out how to do it yet. This is different than intrinsically not do-able, meaning that it can't be done no matter how smart we are because the information is not there. I think that you are telling us that the master equations of dynamics (Newton and so on) don't contain enough information to derive the second law of thermodynamics no matter how smart we are. But if this derivation can be done for ideal gases, can we be sure that it can't be done for a more general case?
The master equations of dynamics (Newton's and so on) are equations of conservation of energy, corresponding to the first law of thermodynamics. As we know, the first law and the second law are two different laws; the equation of the second law is not an equation of conservation of energy.
An ideal gas characterized by binary collisions involves only thermal motion; in the more general case there are other forms of states and processes, such as mechanical potential energy and Gibbs free energy, and besides gases there are solids and structures.
Mechanical potential energy and Gibbs free energy are different from the energy of thermal motion; in equilibrium states, neither is a most probable distribution in the sense of statistical mechanics, so the postulate of equal a priori probabilities is not applicable. Mechanical potential energy, Gibbs free energy, and the energy of thermal motion can be converted into each other; statistical mechanics cannot discuss such states and processes.
In a non-equilibrium state of an ideal gas there are two thermodynamic forces, the gradient of the temperature and the gradient of the density of the gas; these two gradients are the driving forces of the diffusive processes.
However, there are other thermodynamic driving forces which come from interactions rather than thermal motion, different from the gradients of temperature and density, and these drive processes other than diffusion, such as condensation, crystallization, energy conversion, chemical reaction, and phase transformation.
The second law of thermodynamics describes all of the macro-processes of a thermodynamic system; thermal motion, or an ideal gas characterized by binary collisions, is only one of them.
“Heat always flows from a body at a high temperature (hence is cooled) to a body at a lower temperature (which is heated up)”, the driving force of the process is the gradient of the temperature;
“Two gases spontaneously mix”, the driving force is the gradients of the density of the two gases;
“Water always flows downhill”, the driving force of the process is the gradient of the gravitational potential;
The driving force of a chemical reaction or phase transformation is the gradient of the chemical potentials.
There are some different statements of the second law; you may compare these with my statement:
The second law of thermodynamics is derived from a fundamental principle: all of the gradients of the thermodynamic forces spontaneously tend to zero, and in an irreversible process, the dissipation of the gradients of the thermodynamic forces cannot be spontaneously recovered.
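The statement that gradients spontaneously decay can be illustrated with a minimal diffusion sketch (the discretization and numbers are my own choices, not part of the argument above): explicit finite differences for heat flow in a rod with insulated ends drive the temperature profile toward uniformity while conserving total energy, i.e. the driving gradient tends to zero and cannot spontaneously re-form.

```python
def diffuse(temps, alpha, steps):
    """Explicit finite-difference heat diffusion dT/dt ~ d2T/dx2 on a 1D rod
    with insulated (no-flux) ends; alpha plays the role of dt*D/dx^2 and
    must be <= 0.5 for numerical stability."""
    t = list(temps)
    n = len(t)
    for _ in range(steps):
        new = t[:]
        for i in range(n):
            left = t[i - 1] if i > 0 else t[0]        # mirrored boundary:
            right = t[i + 1] if i < n - 1 else t[-1]  # no flux through ends
            new[i] = t[i] + alpha * (left - 2 * t[i] + right)
        t = new
    return t

initial = [100.0] * 5 + [0.0] * 5   # hot half, cold half
final = diffuse(initial, alpha=0.25, steps=2000)
spread = max(final) - min(final)    # the driving gradient decays toward 0
```

The total energy (mean temperature) is conserved throughout, which is the first-law side of the story; the irreversible decay of the gradient is the second-law side.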
Dear L.D. Edmonds, you are referring to the Boltzmann equation, which holds for very dilute gases of interacting particles, not really ideal gases, which are non-interacting. The Boltzmann equation has been derived only under the essential assumption known as the Stosszahlansatz: that the velocities of two particles about to collide are completely uncorrelated. In the limit of almost vanishing density this looks very plausible, as the particles obtained their velocities in far remote areas. But it remains an assumption, which, to start with, requires the absence of velocity correlations in the initial state. And there exists no mathematical proof that initial absence of velocity correlations will persist for long times (see a previous remark for a reference to Lanford's work for hard spheres).
Experimentally the validity of the Boltzmann equation has been confirmed very well. Some generalizations to higher densities are available (see e.g.  and references therein). It is a complicated matter, partly because a systematic expansion in powers of the density is not possible and partly because simultaneous interactions between more than two particles are very hard to deal with (see again ).
Master equations, like the Boltzmann equation, require additional assumptions beyond the microscopic equations of motion; these assumptions are responsible for the irreversibility of the macroscopic dynamics. The second law, implying increase of entropy with time, cannot be derived from reversible microscopic equations alone. Most plausibly, to me, some property has to be required of the state just after the big bang.
Another remark, which I learnt from Nico van Kampen, is that gravity, making systems unstable at large scales, and cosmic expansion may perhaps lead to violations of the second law at cosmic scales. In fact, whether this is true may be a good question.
Contemporary Kinetic Theory of Matter, by J. R. Dorfman (University of Maryland, College Park), Henk van Beijeren (Universiteit Utrecht, The Netherlands), and T. R. Kirkpatrick (University of Maryland, College Park).
3 Recommendations
Henk van Beijeren said: "The Boltzmann equation has been derived only under the essential assumption known as the Stosszahlansatz, that the velocities of two particles about to collide are completely uncorrelated. In the limit of almost vanishing density this looks very plausible, as the particles obtained their velocities in far remote areas. But it remains an assumption, which, to start with requires the absence of velocity correlations in the initial state. But there exists no mathematical proof that initial absence of velocity correlations will persist for long times (see a previous remark for a reference to Lanford's work for hard spheres)."
That answers the question in my latest post very nicely and in a way that I can understand. Thank you.
Also, the statement "which, to start with requires the absence of velocity correlations in the initial state" gets me to thinking. It seems amazing that the universe could be so homogeneous that this statement applies to every little macroscopic piece of material. It seems less amazing to think that, instead of such extremely homogeneous initial conditions, some (not yet discovered) microscopic law is forcing this condition. If so, then I guess a new microscopic law needs to be discovered if we are to derive macroscopic behaviors from microscopic equations.
Dear professor Henk van Beijeren,
You wrote:
“Boltzmann equation has been derived only under the essential assumption known as the Stosszahlansatz, that the velocities of two particles about to collide are completely uncorrelated.”
For a system of Newtonian particles, even in the nearly non-interacting case, a collision of two particles changes the relative motion of these two particles with respect to all the others; the relative motions of all particles are correlated. This is a correlation, and in Boltzmann statistics the correlations between the particles are ignored.
Can there be a connection with the “measurement problem” of quantum mechanics (QM)? The experts on this thread made me aware that the physical laws (Newton’s and others since then) for small systems are missing something, in the sense that some macroscopic behaviors are not derivable from those laws. It also seems (maybe I am wrong here) that what is missing from the accepted fundamental laws is more than just initial conditions; some kinds of physics are missing. Maybe the answer to the question of what is missing from the microscopic fundamental laws is also the answer to the measurement problem of QM. Conventional QM theory deals with two kinds of system evolution with time. One (a Schrödinger type) applies between times at which measurements are made and is a deterministic evolution of a system state from a given initial state. Predicting future measurements from this determined state still requires probability interpretations, but the state up to the end of that evolution is determined. The other type of evolution is produced by a measurement. This is where evolution is not deterministic, and the theory becomes statistical. The “measurement problem” is the absence of a clear definition of what a “measurement” is. We get by in practice by being able to recognize a measurement (at least sometimes) when we see it, even if we don’t know how to define it. However, even without a definition, a property that seems to characterize a measurement is that some signal has been amplified to a macroscopic level. For example, consider a click of a Geiger counter. It starts with a single particle (clearly described by QM) interacting with an atom in a medium, but the next step is a cascade of electrons moving through the medium, followed by electrical circuitry that activates a speaker to produce an audible sound. This is an example of signal amplification from a single particle to the macroscopic level.
When this happens, system evolution is measurement-type instead of Schrödinger-type. The measurement-type is on a macroscopic level while the Schrödinger-type is on a microscopic level. I wonder if the inability of microscopic laws to derive macroscopic behavior might be resolved if and when somebody figures out the answer to the QM measurement problem.
1 Recommendation
The title of the first successful paper is:
Quantum entanglement and interference from classical statistics
Abstract
"Quantum mechanics for a four-state-system is derived from classical statistics. Entanglement, interference, the difference between identical fermions or bosons and the unitary time evolution find an interpretation within a classical statistical ensemble. Quantum systems are subsystems of larger classical statistical systems, which include the environment or the vacuum. They are characterized by incomplete statistics in the sense that the classical correlation function cannot be used for sequences of measurements in the subsystem."
It's on ResearchGate: "Quantum entanglement and interference from classical statistics".
I go down this path by being Boltzmann-like about quantum gravity, like the kinetic theory of gases before atoms were confirmed. "If you can heat it up, then it's made out of atoms," said Boltzmann. In my case the atoms are atoms of spacetime plus matter at the Planck scale in aikyon theory. Nobody can measure energy from wavefunction collapse because the energy vanishes into a heat sink. How convenient is that?
National Agricultural Research Center - NARC
The postulate you described is known as the "equal a priori probability postulate" in statistical mechanics. It is a fundamental principle that is used to derive many of the key results of statistical mechanics, including the laws of thermodynamics.
The equal a priori probability postulate is based on the idea that, at equilibrium, the probability of a system being in a particular state is determined solely by the energy of that state. This postulate is often motivated by the idea that, at equilibrium, the system has had sufficient time to explore all of the available states and therefore the probability of being in any particular state should be equal.
There are several reasons why the equal a priori probability postulate is typically expressed in terms of energy rather than some other quantity. One reason is that energy is a conserved quantity: for an isolated system it is preserved during any physical process. This makes energy a natural choice for characterizing the states of a system.
Another reason is that energy is a scalar quantity, meaning that it does not have any direction associated with it. This makes it easier to work with mathematically, as it allows for the use of simple probability distributions and statistical methods.
Overall, the equal a priori probability postulate is an important principle in statistical mechanics that has been very successful in explaining the statistical behavior of macroscopic systems in equilibrium. While it is not derived from more fundamental laws, it is a useful postulate that has proven to be very effective in describing the behavior of many different types of physical systems.
Thank you, Ahmad Al Khraisat. Your statement "While it is not derived from more fundamental laws, ..." (referring to the equal a priori probability postulate) is an answer to this thread's question. It is an answer that appears to have universal agreement among the people participating in this thread. You also said something that sounds profound, but I'm not sure I have a good understanding of it. This is the statement: "This postulate is often motivated by the idea that, at equilibrium, the system has had sufficient time to explore all of the available states and therefore the probability of being in any particular state should be equal." That sounds profound, but I don't quite understand why the implication follows. Why does exploring all states imply equal probability? Can that be explained in layman's terms?
L.D. Edmonds The equal a priori probability postulate is a fundamental assumption in statistical mechanics, which is a branch of physics that deals with the behavior of large systems composed of many particles. This postulate states that, at equilibrium, the probability of a system being in any particular state is the same, regardless of the state.
The idea behind this postulate is that, at equilibrium, the system has had sufficient time to explore all of the available states, and therefore the probability of being in any particular state should be equal. This is because, over time, the system will have visited each state an equal number of times, on average.
To understand this idea in more detail, it's helpful to consider a simple example. Imagine a box containing a large number of balls, each with a different color. If the box is well-mixed, then the probability of any particular ball being drawn out of the box is the same as the probability of any other ball being drawn out. This is because, over time, the balls will be mixed together and each ball will have an equal chance of being drawn.
In the same way, the equal a priori probability postulate assumes that, at equilibrium, a system will have had sufficient time to explore all of the available states, and therefore the probability of being in any particular state should be equal. This is a fundamental assumption in statistical mechanics, and it is important for making predictions about the behavior of large systems.
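A small simulation of this "well-mixed" intuition (a toy of my own; the dynamics is a symmetric random walk on a ring of states, not any particular physical system): because the transition rule is doubly stochastic, the uniform distribution is invariant, and the long-run fraction of time spent in each state approaches 1/n, which is the time-average reading of "each state is visited equally often".

```python
import random

def occupation_fractions(n_states, n_steps, seed=1):
    """Symmetric random walk on a ring of n_states states; returns the
    fraction of time spent in each state. The transition rule is doubly
    stochastic, so the uniform distribution is invariant and the time
    averages approach 1/n_states."""
    rng = random.Random(seed)
    counts = [0] * n_states
    state = 0
    for _ in range(n_steps):
        state = (state + rng.choice((-1, 1))) % n_states
        counts[state] += 1
    return [c / n_steps for c in counts]

# An odd-length ring makes the walk aperiodic, so the time averages
# converge; each of the 5 states ends up occupied about 1/5 of the time.
fractions = occupation_fractions(5, 200_000)
```

The caveat raised earlier in the thread still applies: this convergence to equal occupation is a property of the chosen dynamics (ergodicity), which is exactly what the postulate assumes rather than proves for real Hamiltonian systems.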
1 Recommendation
Thank you again, Ahmad Al Khraisat.
I am repeating a question from a previous post in the hopes that it will get more attention. The question is as follows:
Can there be a connection with the “measurement problem” of quantum mechanics (QM)? The experts on this thread made me aware that the physical laws (Newton’s and others since then) for small systems are missing something in the sense that some macroscopic behaviors are not derivable from those laws. It also seems (maybe I am wrong here) that what is missing from the accepted fundamental laws is more than just initial conditions. Some kinds of physics are missing. Maybe the answer to the question of what is missing from the microscopic fundamental laws is also the answer to the measurement problem of QM. Conventional QM theory deals with two kinds of system evolution with time. One (a Schrodinger type) applies between times at which measurements are made and is a deterministic evolution of a system state from a given initial state. Predicting future measurements from this determined state still requires probability interpretations but the state up to the end of that evolution is determined. The other type of evolution is produced by a measurement. This is where evolution is not deterministic, and the theory becomes statistical. The “measurement problem” is the absence of a clear definition of what a “measurement” is. We get by in practice by being able to recognize a measurement (at least sometimes) when we see it, even if we don’t know how to define it. However, even without a definition, a property that seems to characterize a measurement is that some signal has been amplified to a macroscopic level. For example, consider a click on a Geiger counter. It starts with a single particle (clearly described by QM) interacting with an atom in a medium, but the next step is a cascade of electrons moving through the medium, followed by electrical circuitry that activates a speaker to produce an audible sound. This is an example of signal amplification from a single particle to the macroscopic level. 
When this happens, the system evolution is measurement-type instead of Schrödinger-type. The measurement-type occurs on a macroscopic level, while the Schrödinger-type occurs on a microscopic level. I wonder whether the inability of microscopic laws to derive macroscopic behavior might be resolved if and when somebody figures out the answer to the QM measurement problem.
National Agricultural Research Center - NARC
L.D. Edmonds It is certainly possible that resolving the measurement problem in quantum mechanics could shed light on the behavior of macroscopic systems and the relationship between microscopic and macroscopic phenomena. The measurement problem in quantum mechanics refers to the fact that the theory does not provide a clear definition of what constitutes a measurement or explain how a measurement causes the wave function of a system to collapse. There are various approaches to addressing this issue, such as the many-worlds interpretation and spontaneous collapse theories, but a complete and satisfactory resolution of the measurement problem has yet to be achieved. It is worth noting that while the measurement problem is a significant open problem in quantum mechanics, it is not clear that it is directly related to the inability of microscopic laws to fully describe macroscopic behavior, which may be due to a variety of factors.
As I mentioned earlier, it is not clear that the inability of microscopic laws to fully describe macroscopic behavior is directly related to the measurement problem in quantum mechanics. There are many factors that can contribute to the breakdown of microscopic laws at macroscopic scales, such as the complexity of the system, the presence of external influences, and the emergence of new properties at larger scales. It is possible that a better understanding of the measurement problem in quantum mechanics could help to improve our understanding of these phenomena and the relationship between microscopic and macroscopic behavior, but it is not necessarily the key to resolving the general issue of deriving macroscopic behavior from microscopic laws.
1 Recommendation
Thank you again Ahmad Al Khraisat . It appears that there are two topics that are not yet understood. One is the measurement problem and the other is the relationship between microscopic and macroscopic phenomena. I think that you are saying that an understanding of one does not necessarily lead to an understanding of the other, but they might have something in common. I am hoping that an understanding of the second topic (if and when that day ever happens) might lead to an answer to the measurement problem. But I admit that this is just wishful thinking.
Hi L.D. Edmonds The answer to your questions, one with measurable physical consequences, is in Tejinder P. Singh's paper "Proposal for a New Quantum Theory of Gravity." Just search the paper for the words “quantum measurement”: https://doi.org/10.1515/zna-2019-0079
L.D. Edmonds The measurement problem and the relationship between microscopic and macroscopic phenomena are two separate topics in physics that are not directly related to each other.
The measurement problem is a problem in quantum mechanics that arises when trying to understand how measurements of quantum systems can produce definite outcomes. The problem is that, according to the principles of quantum mechanics, the state of a quantum system is described by a wave function, which is a mathematical object that encodes the probabilities of different outcomes of measurements. However, when a measurement is actually performed, the wave function "collapses" and a definite outcome is obtained. It is not clear how this collapse occurs or why it leads to definite outcomes.
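The Born-rule behavior described above can be made concrete with a small numerical sketch (not from this thread; the two-state system and its amplitudes are illustrative assumptions): sampling repeated measurements of a normalized wave function reproduces the outcome probabilities |ψ_k|², with each measurement yielding one definite, collapsed result.

```python
import numpy as np

# Hypothetical two-state system (e.g., spin-1/2). The wave function is a
# normalized complex amplitude vector; the Born rule says outcome k occurs
# with probability |psi_k|^2, after which the state "collapses" to basis
# state k.
rng = np.random.default_rng(0)

psi = np.array([3 + 0j, 4 + 0j]) / 5.0  # probabilities 0.36 and 0.64

def measure(psi, rng):
    """Sample one measurement outcome and return (outcome, collapsed state)."""
    probs = np.abs(psi) ** 2
    k = rng.choice(len(psi), p=probs)
    collapsed = np.zeros_like(psi)
    collapsed[k] = 1.0
    return k, collapsed

# Each call gives a definite outcome; frequencies approach the Born weights.
outcomes = [measure(psi, rng)[0] for _ in range(100_000)]
print(sum(outcomes) / len(outcomes))  # close to |psi_1|^2 = 0.64
```

The simulation shows the statistical side of the theory: individual outcomes are definite but unpredictable, and only the distribution over many runs is determined by the wave function.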
The relationship between microscopic and macroscopic phenomena refers to the fact that the behavior of macroscopic systems (such as gases, liquids, and solids) can be described in terms of the behavior of their microscopic constituents (such as atoms and molecules). This relationship is described by statistical mechanics, which is a branch of physics that deals with the behavior of large ensembles of particles using statistical methods.
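The statistical-mechanics side can also be illustrated with a toy sketch (the model is an assumption for illustration, not from the thread): a microcanonical ensemble of two-level "spins" in which every microstate compatible with a fixed total energy is weighted equally, following the equal a priori probability postulate discussed earlier in this thread.

```python
from itertools import combinations

# Toy microcanonical ensemble: N two-level "spins", where each up spin
# costs one unit of energy. Fix the total energy E and, per the equal a
# priori probability postulate, weight every compatible microstate equally.
N, E = 6, 2

# Each microstate is the set of spins that are up.
microstates = list(combinations(range(N), E))

# Ensemble average of the observable "spin 0 is up" under equal weights.
p_up0 = sum(1 for m in microstates if 0 in m) / len(microstates)
print(p_up0)  # equals E/N = 1/3 by symmetry
```

Even in this tiny example, a macroscopic-style statement (the average occupation of a site) emerges from equal-weight counting over microstates, which is the core move of statistical mechanics.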
While these two topics are not directly related, they both involve fundamental questions about the nature of reality and how it can be understood through mathematical models and scientific theories. It is possible that an understanding of one topic might lead to insights into the other, but this would depend on the specific connections between the two topics.
