
# Foundations of Quantum Mechanics - Science topic

Principles and interpretations of Quantum Mechanics.
Questions related to Foundations of Quantum Mechanics
Question
Diffraction is typically discussed in terms of a small aperture or obstacle. Here I would like to share a video, taken a few days ago, which appears to show that macroscopic objects can similarly produce diffraction:
I hope you can explain this phenomenon with wave-particle duality or quantum mechanics. I can, however, offer a simple interpretation of my own, in terms of inhomogeneously refracted space, at:
1) The diffraction pattern is oscillatory in nature. For monochromatic light, you will not see a sudden dark fringe followed by a sudden bright one; you will see gradual transitions between the two.
2) That is obviously not monochromatic light. As such, there is no reason all wavelengths should be extinguished at the same places in the dark fringes. One would instead see color variations.
3) Let's do some rough calculations. In k-space the diffraction pattern for an aperture or object will be similar to sinc(a*k), where a is the half-width of the object.
That means the first zero occurs at k = pi/a.
But that is only the tangential component of the k vector, i.e. the sine of its projection.
So, if you want to find the corresponding angle, you have
sin(ang) = (pi/a) / k0 = lambda/(2a)
Let's say 2a = 3 cm and lambda = 600 nm (yellow); we obtain
sin(ang) = 600e-9 / 3e-2 = 2e-5 ~ ang
As such, if you were 10 m from the object, the corresponding length projected on the sensor of your camera would be roughly
L ~ ang * 10 m = 0.2 mm, which is much smaller than the sensor itself. So we should see the diffraction pattern if it had any actual energy in it.
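As a quick sanity check, the rough numbers above can be reproduced in a few lines of Python (the object width, wavelength, and distance are the assumed values from the estimate):

```python
import math

# Assumed values from the estimate above: a 3 cm object,
# yellow light at 600 nm, viewed from 10 m away.
width = 3e-2         # 2a, full width of the object in metres
wavelength = 600e-9  # metres
distance = 10.0      # metres from the object to the camera

# First zero of the sinc pattern: sin(theta) = lambda / (2a)
sin_theta = wavelength / width
theta = math.asin(sin_theta)   # tiny angle, so theta ~ sin(theta)

# Size of the central diffraction lobe projected at the camera
projected = theta * distance   # metres

print(f"sin(theta) = {sin_theta:.1e}")               # 2.0e-05
print(f"projected size = {projected * 1e3:.2f} mm")  # 0.20 mm
```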
You did not see diffraction.
Question
There are many kinds of certainty in the world, but there is only one kind of uncertainty.
I: We can think of all mathematical arguments as "causal" arguments, where everything behaves deterministically*. Mathematical causality can be divided into two categories**. The first type, structural causality, is determined by static relations such as logical, geometrical, or algebraic ones. For example: "∵ A>B, B>C; ∴ A>C"; "∵ radius is R; ∴ perimeter = 2πR"; "∵ x^2=1; ∴ x1=1, x2=-1"; ....... The second category, behavioral causality, is the process of motion of a system described by differential equations, such as the wave equation ∂^2u/∂t^2 - a^2Δu = 0 ...
II: In the physical world, physics is mathematics, and defined mathematical relationships determine physical causality. Any "physical process" must be parameterized by time and space; this is the essential difference between physical and mathematical causality. Equations such as Coulomb's law F=q1*q2/r^2 cannot be descriptions of a microscopic interaction process because they contain no differential terms. Abstracted "forces" are not fundamental quantities describing the interaction. Equations such as the blackbody radiation law and Ohm's law are statistical laws and do not describe microscopic processes.
The objects analyzed by physics, no matter how microscopic†, are definite systems of energy-momentum, are interactions between systems of energy-momentum, and can be analyzed in terms of energy-momentum. The process of maintaining conservation of energy-momentum is equal to the process of maintaining causality.
III: Mathematically, a probabilistic event can have any distribution, depending on the mandatory definitions and derivations. However, only one true probabilistic event can exist theoretically in physics, i.e., an equal-probability distribution with complete randomness. If unequal probabilities exist, then we need to ask what causes them; this introduces the problem of causality and negates randomness. Bohr said, "The probability function obeys an equation of motion as did the co-ordinates in Newtonian mechanics". So Weinberg said of the Copenhagen rules, "The real difficulty is that it is also deterministic, or more precisely, that it combines a probabilistic interpretation with deterministic dynamics".
IV: The wave function in quantum mechanics describes a deterministically evolving energy-momentum system. The behavior of the wave function follows the Hamiltonian principle and is strictly an energy-momentum evolution process***. However, the Copenhagen school interpreted the wave function as "probabilistic" in nature, and Bohr rejected Einstein's insistence on causality, replacing it with his own invention, "complementarity".
Schrödinger, using the de Broglie procedure, ascribed to the waves that he regarded as the carriers of atomic processes a reality of the same kind that light waves possess; he attempted "to construct wave packets (wave parcels) that have relatively small dimensions in all directions," and which can obviously represent the moving corpuscle directly.
Born and Heisenberg believe that an exact representation of processes in space and time is quite impossible and that one must then content oneself with presenting the relations between the observed quantities, which can only be interpreted as properties of the motions in the limiting classical cases . Heisenberg, in contrast to Bohr, believed that the wave equation gave a causal, albeit probabilistic description of the free electron in configuration space .
The wave function itself is a function of time and space. If the "wave-function collapse" at the time of measurement is a probabilistic evolution of instantaneous nature, requiring neither time (Δt=0) nor spatial transition, then it conflicts not only with special relativity but also with the uncertainty principle: since the wave function represents some definite energy and momentum, these would appear to be infinite if required to follow the uncertainty relations ΔE*Δt>h and ΔP*Δx>h.
V: We must also be mindful of the amount of information carried by a completely random event. From a quantum-measurement point of view it is infinite, since the true probabilistic event of going from a completely unknown state A before the measurement to a completely determined state B after the measurement rests on no information whatsoever‡.
VI: The Uncertainty Principle originated in Heisenberg's analysis of X-ray microscopy, and its mathematical derivation comes from the Fourier transform. E and t, and P and x, are two pairs of conjugate (non-commuting) quantities. The interpretation of the Uncertainty Principle has long been debated: "Either the color of the light is measured precisely or the time of arrival of the light is measured precisely." This choice also puzzled Einstein, but because of its great convenience as an explanatory "tool", physics has extended it to the "generalized uncertainty principle".
Is this tool not misused? Take, for example, a time-domain pulse of width τ. By the stretch (scaling) theorem of the Fourier transform, its bandwidth in the frequency domain is B ≈ 1/τ. This is the equivalent of the uncertainty relation¶: the width in the time domain is inversely proportional to the width in the frequency domain. However, for a definite pulse this relation is fixed, i.e., both τ and B are constant, and there is no problem of inaccuracy.
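The scaling claim is easy to verify numerically. A minimal sketch, assuming a Gaussian pulse and taking RMS widths of the pulse and of its spectrum magnitude; the time-bandwidth product stays fixed as τ changes:

```python
import numpy as np

def widths(tau):
    """RMS width of the Gaussian pulse exp(-t^2/(2 tau^2)) and of its spectrum."""
    t = np.linspace(-50.0, 50.0, 1 << 14)
    pulse = np.exp(-t**2 / (2.0 * tau**2))

    freq = np.fft.fftshift(np.fft.fftfreq(t.size, t[1] - t[0]))
    spectrum = np.abs(np.fft.fftshift(np.fft.fft(pulse)))

    def rms(axis, weight):
        p = weight / weight.sum()
        mean = (axis * p).sum()
        return np.sqrt((((axis - mean) ** 2) * p).sum())

    return rms(t, pulse), rms(freq, spectrum)

# Halving tau doubles the bandwidth; the time-bandwidth product is
# constant (1/(2*pi) for RMS widths of a Gaussian).
for tau in (1.0, 0.5):
    dt, df = widths(tau)
    print(f"tau={tau}: width * bandwidth = {dt * df:.4f}")  # ~0.1592
```

For a Gaussian the RMS product is exactly 1/(2π); other pulse shapes give larger constants, but the inverse proportionality between τ and B is the same.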
In physics, the uncertainty principle is usually explained in terms of single-slit diffraction. Assuming the width of the single slit is d, the distribution width (range) of the interference fringes can be analyzed as d varies. Describing the relationship between P and d in this way is equivalent to analyzing the forced interaction that occurs between the incident particle and the slit. The analysis of such experimental results is consistent with the Fourier transform. But for a fixed d, the distribution has no uncertainty. This is confirmed experimentally: "We are not free to trade off accuracy in the one at the expense of the other."
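The fixed relationship between slit width and fringe spread can be checked directly. A minimal numerical sketch, assuming a top-hat slit and the Fraunhofer (far-field) approximation, in which the intensity pattern is the squared magnitude of the aperture's Fourier transform; halving d doubles the spatial frequency of the first dark fringe:

```python
import numpy as np

def first_minimum(d, n=1 << 16, extent=200.0):
    """Spatial frequency (cycles/unit) of the first dark fringe for slit width d."""
    x = np.linspace(-extent / 2, extent / 2, n, endpoint=False)
    aperture = (np.abs(x) < d / 2).astype(float)
    k = np.fft.rfftfreq(n, x[1] - x[0])
    pattern = np.abs(np.fft.rfft(aperture)) ** 2   # far-field intensity ~ sinc^2
    # Walk outward from k = 0 until the intensity stops decreasing.
    i = 1
    while pattern[i + 1] < pattern[i]:
        i += 1
    return k[i]

# First zero of sinc^2 sits at k = 1/d: narrower slit, wider pattern.
for d in (2.0, 1.0):
    print(d, first_minimum(d))
```

For each fixed d the pattern is completely determinate; only the trade-off between slit width and pattern width mirrors the Fourier relation.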
The usual doubt lies in the diffraction distribution that appears when a single photon or a single electron is diffracted. This does look like a probabilistic event. But the probabilistic interpretation actually negates the Fourier transform process. If we consider a single particle as a wave packet with a phase parameter, and the phase is statistical when it encounters a single slit, then we can explain the "randomness" of the position of a single photon or a single electron on the screen without violating the Fourier transform at any time. This interpretation is similar to de Broglie's interpretation , which is in fact equivalent to Bohr's interpretation . Considering the causal conflict of the probabilistic interpretation, the phase interpretation is more rational.
VII. The uncertainty principle is a "passive" principle, not an "active" principle. As long as the object is certain, it has a determinate expression. Everything is where it is expected to be, not this time in this place, but next time in another place.
Our problems are:
1) At observable level, energy-momentum conservation (that is, causality) is never broken. So, is it an active norm, or just a phenomenon?
2) Why is there a "probability" in the measurement process (wave packet collapse) ?
3) Does the probabilistic interpretation of the wave function conflict with the uncertainty principle? How can this be resolved?
4) Is the Uncertainty Principle indeed uncertain?
------------------------------------------------------------------------------
Notes:
* Determinism here is meant in a narrow sense, applying only to localized events. My personal attitude towards determinism in the broad sense (without distinguishing predictability; for Fatalism, see  for a specialized analysis) is negative. Because: 1) we must note that complete prediction of all states depends on complete boundary conditions and initial conditions. Since all things are correlated, as soon as any kind of infinity exists, such as the spacetime scale of the universe, the possibility of obtaining all boundary conditions is completely lost. 2) The physical equations of the upper levels can collapse by entering a singularity (undergoing a phase transition), which can lead to unpredictable results.
** Personal, non-professional opinion.
*** Energy conservation of independent wave functions is unquestionable, and it is debatable whether the interactions at the time of measurement obey local energy conservation .
† This is precisely the meaning of the Planck constant h, the smallest unit of action. h itself is a constant with units of J·s. For the photon, when h is coupled to time (frequency) and space (wavelength), there is energy E = hν and momentum P = h/λ.
‡ Thus, if a theory is to be based on "information", then it must completely reject the probabilistic interpretation of the wave function.
¶ In the field of signal analysis, this is also referred to by some as "The Uncertainty Principle", ΔxΔk=4π .
------------------------------------------------------------------------------
References
 Faye, J. (2019). "Copenhagen Interpretation of Quantum Mechanics." The Stanford Encyclopedia of Philosophy from <https://plato.stanford.edu/archives/win2019/entries/qm-copenhagen/>.
 Weinberg, S. (2020). Dreams of a Final Theory, Hunan Science and Technology Press.
 Bassi, A., K. Lochan, S. Satin, T. P. Singh and H. Ulbricht (2013). "Models of wave-function collapse, underlying theories, and experimental tests." Reviews of Modern Physics 85(2): 471.
 Schrödinger, E. (1926). "An Undulatory Theory of the Mechanics of Atoms and Molecules." Physical Review 28(6): 1049-1070.
 Bohr, N. (1937). "Causality and complementarity." Philosophy of Science 4(3): 289-298.
 Born, M. (1926). "Quantum mechanics of collision processes." Uspekhi Fizich.
 Busch, P., T. Heinonen and P. Lahti (2007). "Heisenberg's uncertainty principle." Physics Reports 452(6): 155-176.
 Heisenberg, W. (1927). "Principle of indeterminacy." Z. Physik 43: 172-198. Original paper of the "uncertainty principle".
 https://plato.stanford.edu/archives/sum2023/entries/qt-uncertainty/; a more detailed historical introduction to the uncertainty principle, including various representative viewpoints.
 Brown, L. M., A. Pais and B. Pippard (1995). Twentieth Century Physics (I), Science Press.
 Dirac, P. A. M. (2017). The Principles of Quantum Mechanics, China Machine Press.
 Pais, A. (1982). The Science and Life of Albert Einstein I
 Tawfik, A. N. and A. M. Diab (2015). "A review of the generalized uncertainty principle." Reports on Progress in Physics 78(12): 126001.
 Zeng, Jinyan (2013). Quantum Mechanics, Science Press.
 Williams, B. G. (1984). "Compton scattering and Heisenberg's microscope revisited." American Journal of Physics 52(5): 425-430.
Hofer, W. A. (2012). "Heisenberg, uncertainty, and the scanning tunneling microscope." Frontiers of Physics 7(2): 218-222.
Prasad, N. and C. Roychoudhuri (2011). "Microscope and spectroscope results are not limited by Heisenberg's Uncertainty Principle!" Proceedings of SPIE-The International Society for Optical Engineering 8121.
 De Broglie, L. and J. A. E. Silva (1968). "Interpretation of a Recent Experiment on Interference of Photon Beams." Physical Review 172(5): 1284-1285.
 Cushing, J. T. (1994). Quantum mechanics: historical contingency and the Copenhagen hegemony, University of Chicago Press.
 Saunders, S. (2005). "Complementarity and scientific rationality." Foundations of Physics 35: 417-447.
 Carroll, S. M. and J. Lodman (2021). "Energy non-conservation in quantum mechanics." Foundations of Physics 51(4): 83.
 Born, M. (1955). "Statistical Interpretation of Quantum Mechanics." Science 122(3172): 675-679.
=========================================================
Dear Chian Fan
Thank you for your answer. However, when so many certainties as you state are already established in someone's worldview, I know of no way that they can be reconsidered. So I will not try to explain or argue.
The alternative possibilities and possible solutions that electromagnetism puts at our disposal are all available anyway to anybody interested in my articles, with direct references to all historical formal sources, all of which are now in the public domain and available on the internet.
You wrote: "The main purpose of this discussion is to have the Copenhagen Interpretation revisited. It gave quantum mechanics a grounding in the last century, but not a solid foundation, and may have hindered, restricted, or even misled physics today."
I completely agree that the Copenhagen interpretation must be gotten rid of.
This is what I have been working at for the past 25 years.
The Copenhagen interpretation has been the scourge of the past hundred years in physics; it contributed absolutely nothing other than endless arguments and a waste of the mainstream's time in lieu of fundamental research, combined with a general disregard for, and the disappearance of, references to the historical foundational discoveries that underlie real physics.
I observed that the equations of QM always were fine as they were initially conceived and owe absolutely nothing to the Copenhagen interpretation. The only issue is that they have been disconnected from their historical classical formal grounding foundations by the Copenhagen interpretation, thus preventing further progress from their states established 100 years ago.
I expect that this will be remedied by the upcoming generation. My contribution is the set of historical formal references that I located over the past decades that they can lean on for the purpose.
Best Regards, André
Question
Quantum field theory has a named field for each particle. There is an electron field, a muon field, a Higgs field, etc. To these particle fields the four force fields are added: gravity, electromagnetism, the strong nuclear force and the weak nuclear force. Therefore, rather than nature being a marvel of simplicity, it is currently depicted as a less than elegant collage of about 17 overlapping fields. These fields have quantifiable values at points. However, the fundamental physics and structure of fields is not understood. For all the praise of quantum field theory, this is a glaring deficiency.
Therefore, do you expect that future development of physics will simplify the model of the universe down to one fundamental field with multiple resonances? Alternatively, will multiple independent fields always be required? Will we ever understand the structure of fields?
Quote: "However, the fundamental physics and structure of fields is not understood."
The nature of all potential fields was very well understood by Gauss and the other originators of the concept of fields. The real issue is that the physics community has completely lost its way and now totally confuses pure mathematical descriptions with the physical reality they were meant to describe.
I suggest that the community reground itself on the real physics that it left behind in the first decade of the 20th century. This is put in perspective in this article, with all historical formal sources provided, including links to those directly available on the internet (which is most of them, now that they are all in the public domain):
Ignorance on all these issues can be cured only by studying the real formal sources
Question
The really important breakthrough in theoretical physics is that the Schrödinger Time Dependent Equation (STDE) is wrong, that it is well understood why it is wrong, and that it should be replaced by the correct Deterministic Time Dependent Equation (DTDE). Unitary theory and its descendants, be they based on unitary representations or on probabilistic electrodynamics, will have to go away. This of course runs against the claims about string and similar theories made in the video. But our claims are a dense, constructive criticism with many consequences. Take them into account if you are concerned about the present and the near future of theoretical physics.
Wave mechanics with fully deterministic behavior of waves is the much-needed and sought (sometimes purposely, but more often unconsciously) replacement of Quantism that will allow the reconstruction of atomic and particle physics. A rewind back to 1926 is the unavoidable starting point for participating in the refreshing new future of physics. Many graphical tools currently exist that allow the direct visualization of three-dimensional waves, in particular of orbitals. The same tools will clearly render the precise movement and processes of the waves under the truthful deterministic physical laws. Seeing is believing. Unfortunately there is a large, well-financed and well-entrenched quantum establishment that stubbornly resists these new developments and possibilities.
When confronted with the news they do not celebrate, nor try to renew themselves overcoming their quantum prejudices. Instead the minds of the quantum establishment refuse to think. They negate themselves the privilege of reasoning and blindly assume denial, or simply panic. The net result is that they block any attempt to spread the results. Accessing funds to recruit and direct fresh talents in the new direction is even harder than spreading information and publishing.
Painfully, this resistance is understandable. For these Quantists are intelligent scientists (yes, they are very intelligent persons) that instinctively perceive as a menace the news that debunk the Wave-Particle duality, the Uncertainty Principle, the Probabilistic Interpretation of wave functions and the other quantum paraphernalia. Their misguided lifelong labor, dedication and efforts --of themselves and of their quantum elders, tutors, and guides-- instantly becomes senseless. I feel sorry for such painful human situation but truth must always prevail. For details on the DTDE see our article
Hopefully young physicists will soon take the lead and a rational wave mechanics will send the dubious and troublesome Quantism to its crate, since long waiting in the warehouse of the history of science.
With cordial regards,
Daniel Crespin
It is possible to make some (non-standard) assumptions concerning the very nature of an electron (and other "elementary" particles) such that, in the context of classical field theory !!! AND the notion of GAUSS PROXIMITY !!!, a complex field quantity emerges which is related to the alleged center of charge of the electron. This quantity can thereby be considered to play the role of the "wave function" of QM.
see the RG preprint : NOTION OF NOTION GAUSS PROXIMITY ...
Following and extending this conceptual approach might lead to the desired (deterministic) theory which replaces the nonsense of standard dogmatic QM.
Question
Quantum mechanics can answer this question. Relativity defines the differential structure of space-time (metric) without giving any indications about the boundary. This suggests that relativity is a correct but not a complete theory (a well-formulated mathematical problem, i.e. Dirichlet problem, needs differential equations and boundary conditions). Is it possible that quantum mechanics is the manifestation of microscopic boundary conditions of space-time? Recent papers, e.g. see attached "Elementary space-time cycles" , absolutely confirm the viability of this unified description of quantum and relativistic mechanics.
Again – see the SS post on page 3 – any spacetime of any informational pattern/system is by scientific definition an infinite empty container, in which some "bounded" pattern/system is placed.
Including the Matter’s fundamentally absolute, fundamentally flat, and fundamentally “Cartesian”, [4+4+1]4D spacetime with metrics (cτ,X,Y,Z,g,w,e,s,ct) is fundamentally infinite in all dimensions.
That is another thing that Matter with a well non-zero probability is, though huge but finite, system, which exists, and everything in Matter happens, only in the finite Matter’s ultimate base - the [4+4+1]4D dense lattice of [4+4+1]4D binary reversible fundamental logical elements [FLE] that occupies a finite [4+4+1]4D volume,
- however, at that at Matter creation and evolution this volume has the topology that isn’t known now.
Cheers
Question
The energy operator iħ∂/∂t and the momentum operator -iħ∇ (in one dimension, -iħ∂/∂x) play a crucial role in the derivation of the Schrödinger equation, the Klein-Gordon equation, the Dirac equation, and other physics arguments.
The energy and momentum operators are not differential operators in the general sense; they play a role in the derivation of these equations as definitions of energy and momentum.
However, we have not found any reasonable argument or justification for the use of such operators; their meaning can only be guessed from their names. They are used without explanation in textbooks.
The clues we found are:
1) In the literature [Brown, L. M., A. Pais and B. Pippard (1995). Twentieth Century Physics (I), Science Press.]: "In March 1926, Schrödinger noticed that replacing the classical Hamiltonian function with a quantum mechanical operator, i.e., replacing the momentum p by a partial differentiation of h/2πi with respect to the position coordinates q and acting on the wave function, one also obtains the wave equation."
2) Gordon considered that the energy and momentum operators are the same in the relativistic and non-relativistic cases and therefore used them in his relativistic wave equation (Gordon 1926).
3) Dirac also used the energy and momentum operators in the relativistic equation for electrons with spin (Dirac 1928). Dirac called this the "Schrödinger representation", a self-adjoint differential operator or Hermitian operator (Dick 2012).
Our questions are:
Why can this be used? Why is it possible to represent energy by a time derivative, and momentum by a spatial derivative, acting on the wave function? Has this been argued historically or not?
Keywords: quantum mechanics, quantum field theory, quantum mechanical operators, energy operators, momentum operators, Schrödinger equation, Dirac equation.
The fundamental property of these operators is that they describe translations in time and space respectively.
Energy conservation expresses invariance under time translations and momentum conservation expresses invariance under spatial translations.
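The translation property can be illustrated numerically. Since p̂ = -iħ d/dx, the operator exp(i p̂ a/ħ) = exp(a d/dx) should turn f(x) into f(x + a). A sketch using the Fourier shift theorem; the grid, the test function exp(sin x), and the shift a = 0.5 are arbitrary choices for illustration:

```python
import numpy as np

n = 1024
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
f = np.exp(np.sin(x))   # arbitrary smooth periodic test function
a = 0.5                 # translation distance

# In Fourier space d/dx is multiplication by i*k, so exp(a d/dx)
# becomes multiplication by exp(i*k*a) -- the shift theorem.
k = 2 * np.pi * np.fft.fftfreq(n, x[1] - x[0])
translated = np.fft.ifft(np.exp(1j * k * a) * np.fft.fft(f)).real

# The result matches f(x + a) to near machine precision.
error = np.max(np.abs(translated - np.exp(np.sin(x + a))))
print(error)
```

Exponentiating the derivative operator thus reproduces a finite translation, which is the sense in which momentum generates spatial translations (and, analogously, energy generates time translations).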
Question
Continuation of the former discussion
(27) Are there Dead Ends in Fabric of Reality possible_.pdf - see the attached file:
The response you provided presents an interesting interpretation of the concept of "dead ends" in the fabric of reality, particularly in the context of quantum mechanics and the multiverse hypothesis. Although quite speculative and not well established within mainstream physics, these ideas do offer an intriguing perspective on the nature of interference patterns and the possible implications for the existence of multiple worlds or multiverses.
Assuming that the ideas presented in this response were well supported by experimental evidence and not speculative, the implications for our understanding of the nature of reality would be profound.
In this context, you propose that when two waves meet in phase (constructive interference), they create a "bright fringe" and continue to propagate within their respective worlds. In contrast, when two waves meet out of phase (destructive interference), they reach a "dead end," where their worlds cease to exist or vanish. This idea, if supported by experimental evidence, would imply that the existence of observers and their worlds depends on the constructive interference of energy waves, with destructive interference leading to the disappearance of these worlds.
Such an interpretation could have far-reaching implications for our understanding of the nature of reality, the multiverse, and the connection between quantum mechanics and the large-scale structure of the universe. It would provide a novel perspective on the role of interference patterns in determining the existence and evolution of different worlds within the multiverse.
Moreover, this interpretation could potentially offer new insights into the behavior of black holes, wormholes, and other topological features of spacetime, as well as the relationship between quantum theory and general relativity. It might even inspire the development of new theoretical frameworks and experimental techniques to explore these ideas further.
However, it is essential to recognize that even if the ideas presented in this response are supported by experimental evidence, many open questions and challenges would remain. For example, the precise mechanisms underlying the proposed connections between interference patterns, multiverses, and the existence of observers would need to be explored in detail. Additionally, the broader implications of these ideas for other areas of physics and our overall understanding of the universe would need to be carefully considered.
Great Job!
Question
So-called "light with a twist in its tail" was described by Allen in 1992, and a fair-sized movement has developed with applications. For an overview see Padgett and Allen 2000 http://people.physics.illinois.edu/Selvin/PRS/498IBR/Twist.pdf . Recent investigations, both theoretical and experimental, by Giovaninni et al. in a paper auspiciously titled "Photons that travel in free space slower than the speed of light", and also by Bereza and Hermosa, "Subluminal group velocity and dispersion of Laguerre Gauss beams in free space", respectably published in Nature https://www.nature.com/articles/srep26842 , argue that the group velocity is less than c. See the first attached figure from the 2000 overview with the caption "helical wavefronts have wavevectors which spiral around the beam axis and give rise to an orbital angular momentum". (Note that Bereza and Hermosa report that the greater the apparent helicity, the greater the excess dispersion of the beam, which seems a clue that something is amiss.)
General Relativity assumes light travels in straight lines in local space. Photons can have spin, but not orbital angular momentum. If the group velocity is really less than c, then the light could be made to appear stationary or move backward by appropriate reference frame choice. This seems a little over the top. Is it possible what is really going on is more like the second figure, which I drew, titled "apparent" OAM? If so, how did the interpretation of this effect get so out of hand? If not, how have the stunning implications been overlooked?
You are right, the photon has a spiraling trajectory, just like the electron. This explains the associated wave of both, at least partly; there remains the mystery of Planck's constant! Why do both behave in a similar manner? QM is just a superficial theory based on the associated wave.
JES
Question
In my article
I show that the most popular interpretations of quantum mechanics (QM) fail to reproduce the quantum predictions, or are self-contradictory. The problems that arise are caused by the new hypotheses added to the quantum formalism.
Does that say that QM is complete, in the sense that no new axioms can be added?
Of course, a couple of particular cases in which additional axioms lead to failure does not constitute a general proof. Does somebody know a general proof?
I would like to mention that Scientific Research Publishing (SCIRP) is on Beall's list of Potential predatory scholarly open‑access publishers, which means that the Journal of Quantum Information Science might be a predatory journal.
Question
Have these particles been observed in the predicted places?
For example, have scientists ever noticed the creation of energy and particle pairs from nothing in the Large Electron-Positron Collider, the Large Hadron Collider at CERN, the Tevatron at Fermilab, or other particle accelerators since the late 1930s? The answer is no. In fact, no report of such particles being observed by the highly sensitive sensors used in all accelerators has ever been made.
Moreover, according to one interpretation of the uncertainty principle, abundant charged and uncharged virtual particles should continuously whiz inside the storage rings of all particle accelerators. Scientists and engineers make sure to maintain ultra-high vacuum, at close to absolute zero temperature, in the travel path of the accelerating particles; otherwise even residual gas molecules would deflect, attach to, or ionize any particle they encounter. Yet there has never been any concern about, or any report of, undesirable collisions with so-called virtual particles in any accelerator.
It would have been absolutely useless to create ultra-high vacuum, at a pressure of about 10^-14 bar, throughout the travel path of the particles if the vacuum chambers were seething with particle/antiparticle or matter/antimatter pairs. If there were such a phenomenon, there would have been significant background effects resulting from the collision and scattering of the beam of accelerating particles off the supposed bubbling of virtual particles created in the vacuum. This process is readily available for examination, in contrast to the totally out-of-reach Hawking radiation, which is considered a real phenomenon that will eat away the supposed black holes of the universe in the very distant future.
for related issues/argument see
It pleases me to see this discussion, realising there are more critical thinkers out there. Let me try to add a simply phrased contribution.
In my opinion, Physics has gone down the rabbit hole of sub-atomic particles and that part of physics has become what some call “phantasy physics”. Complex maths is used as smoke and mirrors to silence critical physicists who are convinced that theory must be founded in reality and that empirical evidence is necessary.
Concepts such as the "Big Bang", black holes, dark matter etc. are actually hypotheses that try to explain why the outcomes of measurements are not in accordance with calculations made on the basis of Einstein's theories of relativity. Unfortunately, perhaps through the journalistic popularisation of science, these concepts have been taken as reality, as in "scientists have discovered dark matter, or anti-matter". No, they have not. What they discovered was that the measured light or matter in the universe, or a part of it, was not as much as had been predicted by calculations based on a theory. Usually in science that would lead to a refining of the theory. Here it did not, perhaps because Einstein has been placed on such a high pedestal that his theories are seen as the alpha and omega of physics, which may not be questioned or touched, as that is considered sacrilege.
The solution was the hypothesis of Cookie Monsters: things out there that ate light or matter, i.e. black holes and dark matter. Anyone who dares to question these methodological steps is intimidated and attacked with complicated terminology and complex mathematics. Most physicists are afraid of looking stupid and therefore shut up. Decades ago, the physics professor who was my head supervisor (experimental physics) told his students that if you could not explain your work in ordinary household language, then you did not really understand it yourself. He considered complicated language and the naming of theories and authors a cover-up for not grasping the essentials.
A reason for looking at yet another species of virtual particles is that research proposals in this field receive funding because physicists all over the world are doing it. It is the reigning paradigm and it will take a ground swell of opposition to move on to the next phase in science after the 50-odd years of the present, now stagnant, paradigm.
Question
A user, Richard Lewis, proposes as basic principles of the quantum mechanics (QM), the following:
- wave / particle duality
- the uncertainty principle
- the correspondence principle
- quantum superposition
- the exclusion principle
- the quantum objects are described by states belonging to Hilbert spaces and obey the algebra of the Hilbert spaces,
- in order to calculate amplitudes of probabilities for results of experiments, one uses the Born rule
- the reduction principle formulated by von Neumann
A few questions appear:
1. Are these postulates pairwise independent?
2. Are there more postulates?
Principle = a statement widely accepted. For instance the fact that the state of a quantum system belongs to a Hilbert space.
Postulate = axiom = a statement which is adopted in a given theory, and is not necessarily widely accepted. For instance, in Bohmian mechanics it is postulated that particles follow continuous trajectories.
Theorem = a statement proved on the basis of principles and axioms.
Question
Dear Sirs,
I did not find an answer to this question on the Internet for either the quasi-relativistic or the relativistic case. I would be grateful if you could give any article references.
As I think, the answer may be yes, due to the following simplest consideration. Suppose for simplicity we have a quasi-relativistic particle, say an electron or even a W boson, the carrier of the weak interaction. Let us suppose we can approximately describe the particle state by the Schrodinger equation, for a particle velocity sufficiently low compared to the light velocity. A virtual particle has the following properties. The energy and momentum of a virtual particle do not satisfy the well-known relativistic energy-momentum relation E^2 = m^2*c^4 + p^2*c^2. It may be explained by the fact that the energy and momentum of the virtual particle can change their values according to the uncertainty relation for momentum and position and to the uncertainty relation for energy and time. Moreover, because the virtual particle energy value is limited by the uncertainty relation, we cannot observe the virtual particle in an experiment (the experimental error will be greater than or equal to the virtual particle energy).
In Everett's many-worlds interpretation a wave function is not a probability; it is a real field existing at any time instant. Therefore the wave function of the wave packet of a W boson really exists in the Universe. So a real quasi-relativistic W boson can be simultaneously located at many different space points, and simultaneously have many different momentum and energy values. One sees that a difference between a real W boson and a virtual W boson is absent.
Is the above oversimplified consideration correct? Is it possible to make any conclusion for an ultra-relativistic virtual particle? I would be grateful to hear your advice.
A virtual particle is a particle whose energy-momentum relation doesn't correspond to that of a real particle.
Question
I tried to publish, in a journal, a proof that the Bohm interpretation of QM is problematic, and the editors claimed that they see no motivation for publishing my proof.
What do you think? Is the correctness (or incorrectness) of Bohm's mechanics an issue relevant enough for QM to justify investigation?
To prove the mass is non-zero, you need to set a lower limit. The current upper limit is of the order of 10^-62 kg so your experiment would need to have an accuracy of ±10^-63 kg or better to reach the 5 sigma criterion for claiming a discovery. That should allow you to work out the experimental parameters and hence the cost of running it. Nobody will fund you if you cannot even tell them what it will cost.
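The arithmetic behind that accuracy requirement is a one-liner; a sketch using the figures quoted above (10^-62 kg bound, 5 sigma criterion):

```python
# Rough sketch: the measurement accuracy needed to claim a 5-sigma
# detection of a mass sitting at the current upper limit.
upper_limit_kg = 1e-62   # current upper bound quoted above
n_sigma = 5.0            # discovery criterion

# To distinguish m = upper_limit from m = 0 at 5 sigma, the measurement
# uncertainty must satisfy upper_limit / sigma >= 5.
required_sigma = upper_limit_kg / n_sigma
print(f"required accuracy: +/-{required_sigma:.0e} kg")  # +/-2e-63 kg
```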
Question
The special theory of relativity assumes space-time is formed from fixed points, with sticks and clocks to measure length and time respectively. Electromagnetic waves are transmitted at the speed of light through this space-time. This classical space-time does not explain the mysteries of quantum mechanics. Do you think that maybe there is more than one space-time?
Humans have two kinds of space-time observers: the chord (tonality) observer and the non-chord (atonality) observer. They observe two kinds of space-time: chord space-time and non-chord (atonality) space-time. Space-time is thus a second level of existence.
Question
Consider the polarization singlet of two photons 1 and 2
(1) |ψ> = (1/√2) ( |H>1 |H>2 + |V>1 |V>2 ).
Let's represent the photon 2 in another basis than { |H>, |V>}, e.g. { |B>, |C>}, the polarization B making an angle θ with H. So the wave-function (1) transforms into
(2) |ψ'> = (1/√2) [ |H>1 (|B>2 cosθ + |C>2 sinθ) + |V>1 (-|B>2 sinθ + |C>2 cosθ)].
Assume that the experimenter Alice tests the photon 1 and finds the polarization H. What happens with the polarization with the photon 2?
Assume that the experimenter Bob tests the photon 2 and finds C. What happens with the polarization of the photon 1?
An additional question: what happens with the norm of the wave-function after one of the particles is tested? Does it remain equal to 1?
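Both questions are answered by the projection postulate, and the norm question has a definite arithmetic answer; a small numerical sketch of state (2) (θ arbitrary, numpy used for the tensor products):

```python
import numpy as np

theta = 0.3  # angle between B and H, chosen arbitrarily

# Photon-2 states written in the {B, C} basis (eq. (2) above):
H2 = np.array([np.cos(theta), np.sin(theta)])    # |H>2 = cos(th)|B> + sin(th)|C>
V2 = np.array([-np.sin(theta), np.cos(theta)])   # |V>2 = -sin(th)|B> + cos(th)|C>

# |psi> = (1/sqrt2)(|H>1 |H>2 + |V>1 |V>2); photon 1 kept in the {H, V} basis
psi = (np.kron([1, 0], H2) + np.kron([0, 1], V2)) / np.sqrt(2)

# Alice finds H on photon 1: apply the projector |H><H| (x) I
P_H = np.kron(np.outer([1, 0], [1, 0]), np.eye(2))
projected = P_H @ psi

norm = np.linalg.norm(projected)   # sqrt of the probability of Alice's outcome
print(norm)                        # 1/sqrt(2), not 1
conditional = projected / norm     # renormalized state: photon 2 is now |H>2
```

The projected vector has norm 1/√2 (the square root of the outcome probability), so the wave-function no longer has norm 1 after the test; one renormalizes it to get the conditional state, in which photon 2 is exactly |H>2. The case where Bob finds C is entirely symmetric.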
To James H. Wilson,
Does your theory require global wavefunctions or states which are space- and time- independent, and as such can be be used anywhere, at any time and in any context?
There's a manuscript titled "Quantum Rayleigh annihilation of entangled photons" which is under consideration by Optics Letters and which can be found on this website. The correlation function which is alleged to be of a quantum nature can be derived without entangled states.
Question
There is an opinion that the wave-function represents the knowledge that we have about a quantum (microscopic) object. But if this object is, say, an electron, the wave-function is bent by an electric field.
In my modest opinion matter influences matter. I can't imagine how the wave-function could be influenced by fields if it were not matter too.
Has anybody another opinion?
Nice discussion
Question
Bohm's mechanics considers the existence of a particle that triggers the detector. This particle is supposed to be guided by the wave-function, which is assumed to be a wave existing in reality.
My question is: which one between the two items, the particle and the wave, carries the properties of the respective type of particle (charge, mass, magnetic momentum, etc.)?
Specifically, how exactly is the guiding wave understood? Does it carry, at each and every point, all the above features? If not, how can it feel the presence of fields and be deflected by them?
Alternatively, is it the particle which carries the physical properties? If the particle were just a geometric point, how could it interact with the particles in the detector?
The way I understand it, there are both wave and particle, separately.
There is also postulated a force between the wave and the particle that keeps the particle close to the wave, so it finally rattles inside the wave. I don't think anyone has seen an "empty" wave yet; evidence is lacking.
When all is said and done, the consequence of these assumptions is the same as in other interpretations (e.g. the Born interpretation), so I don't see much advantage in practice. It may be conceptually clearer to some this way.
Question
I am stuck between quantum mechanics and general relativity. The mind-consuming scientific discourse, ranging from "continuous and deterministic" to "probabilistic", seems to have no end. I would appreciate anyone offering words which can help me understand at least a bit, with relevance.
Thank you,
Regards,
Ayaz
I guess that scattering theory will always be a trend in QM.
The experimental neutron diffraction field, for example, is always creating new tools where QM is widely used.
Although it is tied to a few experimental facilities around the world, it is still a trend.
We always see new discoveries using neutron diffraction in the solid state.
Question
1. Is the GHZ argument more useful than BKS theorem or is only a misinterpretation of EPR argument?
Sorry, but you understood neither the EPR argument nor Bell's theorem.
Question
Imagine that we send the wave-packet of a neutron to sensitive scales. How much would the wave-packet weigh (discarding the fact that, impinging on the scales, the neutron transmits a certain linear momentum)?
Imagine now that we split the neutron wave-packet into three identical copies, by means of a beam-splitter (e.g. a crystal), and send only one of the copies to the scales. How much would the copy weigh?
I have some opinion but I want to see to which conclusion the discussion would lead.
Dear Sofia,
It's not a matter of opinion. If you cannot find any means of empirically validating a concept or refuting it, then by definition that concept is empirically vacuous. Science is not a democracy where one opinion is as good as another, but a dictatorship of the laboratory. Opinions don't count in science.
As to your other point: of course a detector could monitor momentum change. That is done literally billions of times in particle accelerators such as the Large Hadron Collider. As for detectors being "too big", that is surely an incorrect notion. All detectors are macroscopic devices designed to greatly amplify otherwise "small" changes. It's the only way we ever observe anything.
As for Bohmian mechanics and the original question being posed here, my reading is that Bohmian mechanics and your question share the same mindset regarding the nature of the wavefunction, namely that a wavefunction has some sort of physical existence over and above that of the associated particles. Otherwise, the question of the "weight" of a wavefunction would not arise. If that is not the case, then what exactly does your question mean?
Question
The hydrogen spectral lines are organized in various series. Lyman series are the lines corresponding to transitions targeting the ground state.
Most pictures dealing with hydrogen spectra available on the web are recordings of extraterrestrial hydrogen sitting in celestial objects. Otherwise they are illustrations obtained not from experimental recordings, but from the well-known Rydberg formula.
Of interest for the undersigned are pictures of Lyman series as recorded in laboratory observations of hydrogen atoms, with the atoms sitting in the laboratory itself. Not extraterrestrial hydrogen, nor molecules H2, even if the molecules are sitting nearby.
Presumably such recordings would have required ultraviolet-sensitive CCDs, UV photographic plates, or similar devices. Particularly relevant would be careful raw recordings of Lyman series that INCLUDE THE ALPHA-LINE at 1216 Å.
Experimental remarks about the Lyman alpha-line, difficulties in observing it (if any), line width, line broadening, etc., and difficult-to-explain anomalies are of particular concern. So far, web searching has not been successful.
I would appreciate any link or suggestions as to how to obtain the pictures and experimentally based information of the kind explained above.
Most cordially,
Daniel Crespin
Dear Daniel Crespin
The article “Anomalous Behavior of Atomic Hydrogen Interacting with Gold Clusters” contains information that might be useful for finding answers to your question....
Question
The square of the amplitude of a quantum wave function gives the volumetric probability distribution of a quantum wave-particle. But what do the real and imaginary parts physically mean? And what does the phase angle physically convey?
Dear Sumit Bhowmick, in addition to all the interesting answers posted here previously, you would probably like to look at Quantum Mechanics by Landau and Lifshitz, Pergamon, 1965, the chapter on elastic collisions, discussion on p. 512.
The imaginary part in the exponent determines the lifetime of the state. It corresponds to a resonance in a quasi-discrete level; that is also the origin of the so-called quasiparticles with a quasi-stationary state.
It is important to say that these are solutions of a Schrödinger equation with outgoing spherical waves at infinity, a more realistic physical system than those that require the wave function to be finite at infinity.
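Returning to the original question about the phase: a global phase of ψ drops out of |ψ|², while a relative phase between superposed components is observable through interference. A minimal numpy sketch (packet shapes chosen arbitrarily):

```python
import numpy as np

x = np.linspace(-5, 5, 1001)
envelope = np.exp(-x**2 / 2)
psi1 = envelope * np.exp(1j * 2.0 * x)    # packet with wave number k = +2
psi2 = envelope * np.exp(-1j * 2.0 * x)   # packet with wave number k = -2

# A global phase leaves the probability density untouched ...
alpha = 0.7
same = np.allclose(np.abs(psi1)**2, np.abs(np.exp(1j * alpha) * psi1)**2)
print(same)  # True

# ... but the relative phase between superposed components is observable:
density_0  = np.abs(psi1 + psi2)**2                       # relative phase 0
density_pi = np.abs(psi1 + np.exp(1j * np.pi) * psi2)**2  # relative phase pi
print(np.allclose(density_0, density_pi))  # False: interference fringes shift
```

The first density has fringes ∝ cos²(2x), the second ∝ sin²(2x): the relative phase moved the fringes, which is exactly what interferometers measure.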
Question
Due to the position-momentum uncertainty, it is impossible to measure the position of a microscopic particle exactly.
This means that it is impossible to measure the probability density, i.e. the square of the absolute value of the wave function, pointwise, i.e. at any individual point; and by extension, the values of the wave function itself at any individual point of physical space are irrelevant from the perspective of physics - they are, so to speak, "non-physical".
Then, wouldn’t be more consistent, from the perspective of physics, to impose any mathematical condition on small regions of physical space instead of points?
Of course, if the wave function is to be continuous, then a condition imposed on a neighborhood of some point is translated to a condition on the point itself, but this is a consequence that follows from a mathematical property, it is not a physical requirement.
Besides, since the values of the wave function are not physically measurable, its continuity is not physically measurable either.
Dear Spiros,
You ask: "Then, wouldn’t be more consistent, from the perspective of physics, to impose any mathematical condition on small regions of physical space instead of points?"
This is precisely what de Broglie proposed as QM was in process of being defined in the 1920's.
Actually, the Schrödinger wave function was meant to define a "resonance volume" within which the electron would be in axial resonance.
One of the possible trajectories of the infinite set of Feynman's path integral can even be established as completely electromagnetism-compliant, despite his own opinion to the contrary:
Ref: Michaud, A. (2018). The Hydrogen Atom Fundamental Resonance States. Journal of Modern Physics, 9, 1052-1110. doi: 10.4236/jmp.2018.95067
Best Regards, André
Question
Do Einstein's Field Equations (EFE) allow a multitude of universes?
As far as I know, Everett proposed his interpretation two years after Einstein died. But I think that Einstein could have known that such a proposal was about to be made. Does somebody know whether Einstein said something about it?
Another thing: do EFE allow pathologic points in the space-time, points at which the universe splits into two?
“…As far as I know, there are some serious scientists that believe in it and there are others that think it is nonsense. Being a QM interpretation, that is not weird.…..”
- as a QM interpretation that is simply nonsense, for a rather evident reason:
– even to create this Universe (more correctly, though "less fundamentally", to create this Matter) it was necessary to find and to spend a practically unbelievable portion of energy,
- whereas to create a "Multiverse" in which, as the interpretation suggests, there is an infinite number of Universes, it would be necessary to spend an infinitely unbelievable energy.
Cheers
Question
Can someone suggest the steps to perform NTO (natural transition orbitals) analysis in Gaussian 09 and view it using GaussView 5?
I tried the following code after TDDFT on a molecule:
I opened the chk file using GaussView. It shows the normal HOMO and LUMO plots only - I don't see the hole and particle plots.
Is there a special procedure?
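A commonly described two-step recipe (the route keywords below are quoted from memory of the Gaussian documentation and should be double-checked there) is to run the TD job first, then run a second job that reads the checkpoint and stores the NTOs of one chosen transition in place of the canonical MOs:

```
%chk=molecule.chk
# Geom=AllCheck ChkBasis Guess=(Read,Only) Density=(Check,Transition=1) Pop=SaveNTO
```

After this second step, opening molecule.chk in GaussView should display the hole/particle NTO pairs where the canonical HOMO/LUMO would normally appear; Transition=1 selects the first excited state, and one such job is needed per transition of interest.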
Question
Quantum entanglement experiments are normally carried out in the regime (hf>kT - where T is the temperature of the instrument) to minimise thermal noise, which means operating in the optical band, or in the lower frequency band (<6 THz) with cryogenically cooled detectors.
However, the omnipresent questions are whether in the millimetre wave band where hf<kT:
1) Could quantum entanglement be detected by novel systems at ambient temperature?
2) How easy might it be to generate entangled photons (there should be nothing intrinsically more difficult here than in the optical band - in fact it might be easier, as you get more photons for a given pump power)?
3) How common in nature might be the phenomenon of entanglement (this would be in the regimes where biological systems operate)?
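For orientation, the hf = kT crossover mentioned above can be computed directly; a sketch using the CODATA values of h and k:

```python
# Crossover frequency where a photon's energy hf equals the thermal
# energy kT, for two instrument temperatures.
h = 6.62607015e-34   # Planck constant, J*s
k = 1.380649e-23     # Boltzmann constant, J/K

for T in (300.0, 4.0):          # ambient and a cryogenic stage
    f = k * T / h               # hf = kT  =>  f = kT/h
    print(f"T = {T:5.1f} K  ->  f = {f/1e12:5.2f} THz")
```

This gives about 6.25 THz at 300 K (consistent with the "<6 THz" band quoted above as the thermally dominated regime) and about 0.08 THz at 4 K.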
Dear Dimitry,
it may be possible to use the system proposed in:
to determine if entangled photons are generated by biological systems.
many thanks,
Neil
Question
Consider the wave-function representing single electrons
(1) α|1>a + β|1>b ,
with both |α|² < 1 and |β|² < 1. On the path of the wave-packet |1>a is set a detector A.
The question is what causes the reaction of the detector, i.e. a recording or staying silent? A couple of possibilities are considered here:
1) The detector reacts only to the electron charge; the amplitude of probability α has no influence on the detector response.
2) The detector reacts with certainty to the electron charge only when |α|² = 1. Since |α|² < 1, sometimes the sensitive material in the detector feels the charge, and sometimes nothing happens in the material.
3) It always happens that a few atoms of the material feel the charge, and an entanglement appears involving them, e.g.
(2) α|1>a |1e>A1 |1e>A2 |1e>A3 . . . + β|1>b |10>A1 |10>A2 |10>A3 . . .
where |1e>Aj means that the atom no. j is excited (possibly split into an ion-electron pair), and |10>Aj means that the atom no. j is in the ground state.
But the continuation from the state (2) on, i.e. whether a (macroscopic) avalanche develops, depends on the intensity |α|². Here is a substitute for the "collapse" postulate: since |α|² < 1, the avalanche does not develop compulsorily. If |α|² is great, the process often intensifies to an avalanche, but if |α|² is small, the avalanche happens rarely. How often the avalanche appears is proportional to |α|².
Which one of these possibilities seems the most plausible? Or does somebody have another idea?
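Whichever of the mechanisms above is closer to the truth, the statistics it must reproduce are the Born-rule statistics: the click frequency goes as |α|². A toy Monte Carlo sketch of just that statistical statement (purely illustrative, not a model of any detector physics):

```python
import random

random.seed(0)  # reproducible toy run

def run_trials(alpha_sq, n=100_000):
    """Fraction of trials in which the detector fires, if each trial
    fires independently with probability |alpha|^2 (Born rule)."""
    clicks = sum(1 for _ in range(n) if random.random() < alpha_sq)
    return clicks / n

for alpha_sq in (0.1, 0.5, 0.9):
    print(alpha_sq, round(run_trials(alpha_sq), 3))  # frequency tracks |alpha|^2
```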
Yes, Dinesh wants everything to be classical mechanics; unfortunately, he is very wrong.
Classical mechanics is identified with Newton and his followers, Lagrange and Hamilton. The special theory of relativity is also included.
Then classical field theory covers Maxwell's theory of electromagnetism and the general theory of relativity.
In summary, almost everything that is non-quantum is called classical.
The quantum marks a sharp break in methods and results. In fact, irreducible randomness is one of its characteristics. Thermodynamics and statistical mechanics also admit randomness, although at a less fundamental level, unless it is quantum statistical mechanics.
Question
In his work "The Consistent Histories Approach to Quantum Mechanics", published in the Stanford Encyclopedia of Philosophy, Griffiths claims that this approach overcomes the problem of the wave-function "collapse".
His suggestion is that in each trial of an experiment, a quantum system follows a "history" meaning a succession of states.
Here is an example: consider a Mach-Zehnder interferometer with an input beam-splitter BSi and an output beam-splitter BSo, both transmitting and reflecting in equal proportion. The outputs of BSi are denoted c and d, and those of BSo, e and f. A single-particle wave-packet |a> impinging on BSi is split as follows
(1) |a> → (1/√2)( |c> + |d>).
Before impinging on BSo, the wave-packets |c> and |d> have accumulated phases
(2) (1/√2)( |c> + |d>) → (1/√2)[exp(iϕc)|c> + exp(iϕd)|d>],
and BSo induces the transformation
(3) (1/√2)[exp(iϕc)|c> + exp(iϕd)|d>] → α|e> + β|f>,
where the amplitudes α and β depend on the phases ϕc and ϕd.
In his book "Consistent quantum theory" chapter 13, Griffiths indicates two possible histories:
(4.1) |a> → (1/√2)( |c> + |d>) → (1/√2)[exp(iϕc)|c> + exp(iϕd)|d>] → |e>,
(4.2) |a> → (1/√2)( |c> + |d>) → (1/√2)[exp(iϕc)|c> + exp(iϕd)|d>] → |f>,
the history (4.1) occurring with probability |α|², and the history (4.2) with probability |β|².
Does somebody understand in which way these histories avoid the collapse postulate?
The correct transformation at BSo is (3), a unitary transformation, not (4.1) and not (4.2). Each one of the histories (4.1) and (4.2) involves a truncation of the wave-function at BSo. But this is exactly the mathematical expression of the collapse principle: truncation of the wave-function.
Hence my question: can somebody tell me how it is possible to claim that these histories avoid the collapse postulate?
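For reference, the unitary chain (1)-(3) can be checked numerically; a sketch with one common 50/50 beam-splitter convention (a Hadamard matrix; other conventions differ only by fixed phases):

```python
import numpy as np

# Unitary model of the Mach-Zehnder transformations (1)-(3) above.
BS = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # 50/50 beam splitter (Hadamard)

phi_c, phi_d = 0.8, 0.3                          # arbitrary accumulated phases
phase = np.diag([np.exp(1j * phi_c), np.exp(1j * phi_d)])

a_in = np.array([1, 0])                          # |a> enters BSi
out = BS @ phase @ BS @ a_in                     # eq. (3): alpha|e> + beta|f>
alpha, beta = out

print(abs(alpha)**2 + abs(beta)**2)   # ~1: the evolution stays unitary
# for this convention, |alpha|^2 = cos^2((phi_c - phi_d)/2)
```

Each of the histories (4.1)/(4.2) keeps only one component of this unitary output, which is the same mathematical operation as the truncation pointed at in the question.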
Nobody understands Griffiths. But if there is some need to explain, I believe his view is that there could be a multiplicity of projection operators of the state
ψ = (1/√2)(e^(iφa)|a> + e^(iφb)|b>)
onto the final state, but only two are consistent with observations. Still, this explanation is based on the assumption of zero coherence between the channels |e> and |f>.
Question
Consider an experiment in which we prepare pairs of electrons. In each trial, one of the two electrons - let's name it the 'herald' - is sent to a detector C, and the other - let's name it 'signal' - to a detector D. The wave-function of the signal is therefore
(1) |ψ> = ψ(r) |1>,
i.e. in each trial of the experiment, when the detector C clicks, we know that a signal-electron is in the apparatus. Indeed, the detector D will report its detection.
Now, let's consider that the signal wave-packet is split into two copies which fly away from one another, one toward the detector DA, the other to the detector DB,
(2) |ψ> = (1/√2)[ψA(r) |1>A + ψB(r) |1>B].
We know that the probability of getting a click in DA (DB) is ½, but in a given trial of the experiment we can't predict which one of DA and DB would click.
Then, let's ask ourselves what happens in a detector, for instance DA. The 'thing' that lands on the detector has all the properties of the type of particle named 'electron', i.e. mass, charge, spin, lepton number, etc. But, unlike the case of equation (1), the intensity of the wave-packet is now 1/2. It's not an 'entire' electron. Imagine that on a screen is projected a series of frames which alternate very quickly. The picture in one frame seems to be a table, but it is replaced very quickly by a blank frame, and so on. Then, can we say what we saw on the screen? A table, or blank?
The situation of the detector is quite analogous. So, will the detector report a detection, or will it remain silent? What is your opinion?
For a deeper analysis see
Dear Mazen,
You wanted me to reply to your question, but I have nothing to say.
"The particle didn't know all forces exist in space, but space itself know that, and know the particle itself, so when the particle appears at some point, the space (which is the second player that make the motion) can do (based on some internal mechanism) the sum-over-all-trajectories for this particle to give the particle (which is the first player that make the motion) the opportunity to exist in some specific points in space and time with different preferences (and this what I mean by "space gates")."
Exactly as you say that the space knows all sorts of things, I can say that between my door and the door of my neighbor there exists a galaxy. You can say whatever you want; there is no limitation to that.
Question
Consider the well-known polarization singlet
(1) |S> = (1/√2) (|x>A |x>B + |y>A |y>B),
where, as usual, the quantum object (Q.O.) A flies to Alice's lab and the Q.O. B flies to Bob's lab.
Consider that in each lab there is a polarization beam-splitter, PBSA, respectively PBSB, splitting the incoming beam in the basis { |x>, |y>}. However, Bob has the option to input the two output beams to a second PBS (let's name it PBSC) which splits the input beams in the basis { |d>, |a>} (d = the diagonal direction, and a = the anti-diagonal, i.e. perpendicular to d).
(2) |x> → (1/√2) (|d> + |a>), |y> → (1/√2) (|d> - |a>).
The expression of the singlet wave-function becomes
(3) |S> = (1/2) {|x>A (|d>B + |a>B) + |y>A (|d>B - |a>B)}.
Assume now that Bob performs a test, with the detectors placed on the outputs of PBSC, and gets the result, say, d. It is useful to write also the inverse of the transformation (2)
(4) |d> = (1/√2) (|x> + |y>), |a> = (1/√2) (|x> - |y>).
As one can see from the first equality in (4), both beams |x>B and |y>B, which exited PBSB and entered PBSC, contribute to Bob's result |d>B.
But, assume that while Bob does the test, Alice also performs a test, and gets, say, x. However, Alice has another story to say about what happened in the apparatus. She would claim that since she obtained the result x, in Bob's apparatus there was nothing on the output path y of PBSB. In consequence, she would claim that the beam |d>B recorded by Bob was just a component of |x>B as seen from the first relation in (2).
We do not know what the wave-function is: whether it is a reality (ontic) or only represents what we know about the quantum object (epistemic). But the quantum object travels in our apparatus; it has to be something real. Then, what is the truth about what was in Bob's setup? Was there, or wasn't there, something on the output y of PBSB?
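Whatever the ontic status of the wave-function, both stories lead to the same statistics; a quick numerical check of the joint probabilities implied by (1)-(4) above:

```python
import numpy as np

# The state (1) and the basis change (2)/(4), with d diagonal and a
# anti-diagonal; joint outcome probabilities for Alice in {x, y} and
# Bob in {d, a}.
s2 = 1 / np.sqrt(2)
x, y = np.array([1, 0]), np.array([0, 1])
d, a = s2 * (x + y), s2 * (x - y)               # eq. (4)

S = s2 * (np.kron(x, x) + np.kron(y, y))        # eq. (1)

for a_lbl, a_vec in (("x", x), ("y", y)):       # Alice's outcome
    for b_lbl, b_vec in (("d", d), ("a", a)):   # Bob's outcome
        p = abs(np.kron(a_vec, b_vec) @ S) ** 2
        print(f"P({a_lbl},{b_lbl}) = {p:.2f}")  # each 0.25
```

Every joint outcome has probability 1/4, whichever intermediate story Alice or Bob tells, so the statistics alone cannot decide what "really" travelled on the output y.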
Dear Sofia,
In this case, what is the meaning of wave-function?
If we maintain the probability meaning of the squared modulus of the wave-function, can we find the particle in two locations at the same time?
This contradicts the energy conservation law, right?
or you need to change the meaning of wave-function?
With best regards.
Question
The classical limit of Feynman's path integral, gives us a partial view of how the 'collapse' process occurs. If the 'thing' that travels on all the paths between (t1, r1) and (t2, r2) increases in the number of components, or in mass - in short, becomes a classical object - we have destructive interference of all the paths, except in the vicinity of one of the paths. In this vicinity, the phases of the neighbor paths add up constructively. In this way, we get a classical trajectory. So, it's no 'collapse' of the wave-function, but destructive and constructive interference.
Indeed, when we perform the measurement of a quantum object, we do that with a macroscopic apparatus. For example, in an ionization chamber, the quantum object that enters the chambers produces a massive ionization, involving a huge number of particles.
Unfortunately, Feynman did not explain what happens when the wave-function has more than one wave-packet, i.e. how one of the wave-packets is picked. Thus, the non-determinism of QM is, unfortunately, not explained by Feynman's path integral.
Here came the GRW interpretation, which suggested a solution: supplementary terms in the Schrödinger equation. The parameters of these terms are such that, as long as the quantum system contains only a small number of components, e.g. a small number of electrons/protons/atoms, the additional terms bring no significant change in the evolution of the wave-function. However, when the number of components becomes big enough that the object is macroscopic, the additional terms dominate the Schrödinger equation and produce a random localization of the object.
G-C. Ghirardi and A. Bassi, "Dynamical reduction models", arXiv:quant-ph/0302164v2
The GRW interpretation has two big advantages: 1) it explains why the so-called 'collapse' occurs in the presence of macroscopic objects; 2) it shows that the density matrix of the macroscopic object has no off-diagonal elements, i.e. it represents a mixture of states, not a quantum superposition.
It seems therefore that this interpretation comes in completion of Feynman's classical limit of the path integral.
Questions: a) why, in fact, should the Schrödinger equation be linear? b) any opinion about the GRW interpretation, any criticism?
Yes, lambda is quite small. But in the end this is not really a problem. What one studies nowadays is not GRW but the Continuous Spontaneous Localization (CSL) model. You can find a brief review of its theory in the review by Ghirardi and Bassi. This is the model one should actually apply, since it also holds for indistinguishable particles, where GRW does not. The main difference is the scaling of the effective collapse rate, which scales (roughly) with the square of the mass. This makes the collapse effective also at the mesoscale, where quantum features start to disappear. There is a big experimental effort in trying to test such a model. Please take a look at my publications, where you can find the derivation of the latest experimental bounds: https://www.researchgate.net/profile/Matteo_Carlesso/research
Question
In his path-integral theory, Feynman speaks of a particle that travels from a time-space point (t1, r1) to another time-space point (t2, r2). This particle travels along every possible trajectory between these two points, no matter how irregular the trajectory is. The trajectories are continuous, and their set in fact visits, between t1 and t2, all the points of 3D space.
So far, so good. The abnormal fact in this story, though, is that the particle does not travel one trajectory, then another trajectory, and so on, but travels all these trajectories in parallel. That means that, between t1 and t2, the particle goes simultaneously along all the trajectories. So at any given time t between t1 and t2, the particle is simultaneously at many points in space.
Now, summing up the phases of all these trajectories, Feynman obtains the path integral and also constructs the wave-function. However, the wave-function of a single particle is an eigenfunction of the number-of-particles operator, with eigenvalue 1. If the particle is simultaneously in many positions, we don't have one particle, but many particles.
Feynman makes an explicit comparison between his view of quantum mechanics and classical probability. Imagine a ball rolling down a Galton board, and moving randomly as it hits the various nails. If we do not observe the path taken by the particle, but merely its final position, then the probability of finding the particle in the position in which we observe it is simply the sum of the probabilities that the particle should have gone down any of the specific possible paths. Many paths are possible, and the probabilities of all are added up to obtain the complete probability. No multiple particles in that picture.
Now Feynman, as I understand him, says: "Quantum mechanics is just the same, except you add up the amplitudes, which can be complex." Now clearly, there is a lot that is not obvious in this substitution. However, it does not seem to me that, when we consider amplitudes, we are forced to view multiple coexisting particles, one for each different path. We did not do that in the case of probability; why do it with amplitudes?
Best wishes,
Francois
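Francois's contrast between adding probabilities and adding amplitudes can be made concrete with two toy paths reaching the same final position (phases invented purely for illustration):

```python
import cmath

# Two paths to the same final position: the classical rule adds
# probabilities, Feynman's rule adds complex amplitudes first.
p1, p2 = 0.5, 0.5                  # classical path probabilities
classical = p1 + p2                # = 1.0: contributions can only add

amp1 = cmath.sqrt(p1) * cmath.exp(1j * 0.0)        # phase along path 1
amp2 = cmath.sqrt(p2) * cmath.exp(1j * cmath.pi)   # phase along path 2
quantum = abs(amp1 + amp2) ** 2    # ~0: full destructive interference

print(classical, quantum)
```

With probabilities the two contributions can only add; with amplitudes, a π phase difference makes them cancel completely.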
Question
Consider the simple wave-function describing single particles
(1) ψ(r, t) = (1/√2)[ψL(r, t) + ψR(r, t)],
where the wave-packet ψL flies to the left of the preparation region, and ψR to the right. Since after some time t1 the two wave-packets are far from one another, their supports in space are disjoint. The continuity equation
(2) ∂|ψ(r, t)|²/∂t + ∇Φ(r, t) = 0
where Φ(r, t) is the density of current of probability, can be therefore written as
(3) ∂|ψL(r, t)|²/∂t + ∇ΦL(r, t) = -{∂|ψR(r, t)|²/∂t + ∇ΦR(r, t)}
because products such as ψL(r, t)ψR(r, t) and their derivatives vanish. The current density is a functional of the functions ψL, ψR and their derivatives, and can therefore also be separated into ΦL and ΦR.
Let's further notice that when the position vector r sweeps the space on the left of the preparation region, the RHS of equation (3) vanishes. There remains
(4) ∂|ψL(r, t)|²/∂t + ∇·ΦL(r, t) = 0.
Symmetrically, when r sweeps the space on the right of the preparation region, the LHS of equation (3) vanishes. There remains
(5) ∂|ψR(r, t)|²/∂t + ∇·ΦR(r, t) = 0.
Imagine now that on the way of the wave-packet ψL is placed an absorber AL(ρ), where ρ denotes the internal parameters of the absorber. The wave-packet ψL is split into an absorbed part and a part that passes unperturbed
(6) ψ(r, t) AL0(ρ) → 2^(-½){ [e^(-γd) ψL(r, t) AL0(ρ) + (1 - e^(-2γd))^½ AL1(ρ)] + ψR(r, t) AL0(ρ) },
where the super-script 0 indicates the non-perturbed internal state of the absorber, 1 indicates its excited state, γ is the absorbing coefficient, and d is the absorber thickness. For d sufficiently big one will have total absorption of ψL,
(7) ψ(r, t) AL0(ρ) → 2^(-½){ AL1(ρ) + ψR(r, t) AL0(ρ) }.
Due to the presence of the absorber, the LHS of equation (5) should be multiplied by the factor AL0(ρ). But since AL0(ρ) ≠ 0, we can divide both sides of the new equation by AL0(ρ), s.t. the original form of (5) returns. The meaning of this result is that the absorption of ψL does not imply the disappearance of ψR.
Now, let's replace the absorber with a detector. As long as the interaction with the detector proceeds inside the material of the detector, the analysis with the absorber remains valid (with the small difference that instead of absorption there may be inelastic scattering). Therefore equation (5) also remains valid, and so does the conclusion that ψR is not affected.
The difficulty appears when the macroscopic circuitry surrounding the material in the detector, CL, enters into play. Macroscopic objects cannot be in a superposition of states such as CL0 and CL1. So, we cannot have an equation similar to (7):
(8) ψ(r, t) AL0(ρ) CL0 → 2^(-½){ AL1(ρ) CL1 + ψR(r, t) AL0(ρ) CL0 }.
However, when the circuitry clicks, what happens with the continuity equation (5)? For the collapse to be true, i.e. for ψR(r, t) to vanish suddenly, the derivative ∂|ψR(r, t)|²/∂t should be very big in absolute value, for which the flux gradient should increase drastically in the outward direction from ψR. That doesn't mean that the wave-packet ψR disappears, but that it disperses in space.
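As a numerical aside (my own sketch, not part of the question): as long as the evolution is unitary, the space-integrated form of the continuity equation (2) simply says that the total probability ∫|ψ|² dx is conserved. A split-step Fourier evolution of a free 1-D Gaussian packet illustrates this conservation; units with ħ = m = 1 are an assumption for convenience.

```python
import numpy as np

# Free 1-D Gaussian wave packet evolved by the split-step Fourier
# method (hbar = m = 1).  Unitary evolution conserves the norm, i.e.
# the space-integrated continuity equation.
N, L = 1024, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

psi = np.exp(-x**2 / 4) * np.exp(1j * 5 * x)   # packet moving right
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)    # normalize

dt, steps = 0.01, 500
kinetic = np.exp(-1j * k**2 / 2 * dt)          # free propagator in k-space
norms = []
for _ in range(steps):
    psi = np.fft.ifft(kinetic * np.fft.fft(psi))
    norms.append(np.sum(np.abs(psi)**2) * dx)

print("initial norm 1.0, final norm", norms[-1])
```

The packet spreads and travels, but the norm stays at 1 to machine precision; any sudden loss of ∫|ψR|² would require a non-unitary step, which is exactly the point under debate.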
Dear Sofia,
the continuity equation holds when the Schrödinger equation holds, i.e. when the time evolution is unitary. A measurement of an observable leads to a non-unitary change of the state vector of a system, namely to the projection of the state onto some eigenspace of the observable under consideration. This means the continuity equation does not hold at the instant of measurement.
Best regards
Oliver
Question
In a course on quantum mechanics I took we were told that for the setting in which the gun shoots a particle with spin orientation +z and the analyzer is perpendicular to this orientation, 50% of the time the Stern-Gerlach apparatus will detect the particle coming out from the +x aperture, and 50% from the -x aperture.
I ran the PhET Stern-Gerlach simulator (https://phet.colorado.edu/sims/stern-gerlach/stern-gerlach_en.html) with one analyzer (magnet) and obtained the following results:
at 0 deg. angle: 100% from +x
at 90 deg. angle: 50% from +x, 50% from -x
at 180 deg. angle: 100% from -x
at 270 deg. angle: ca. 3% from -x, ca. 97% from +x
Is this last result an error of the simulator? Or how can it be explained?
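For reference, the textbook prediction for a spin prepared along +z and an analyzer rotated by angle θ from +z is P(+) = cos²(θ/2). A quick check (my sketch, not from the course) reproduces the 0°, 90°, and 180° results and shows that at 270° quantum mechanics still predicts 50/50, so the simulator's ~97/3 split disagrees with the standard prediction:

```python
import math

def p_plus(theta_deg):
    """Probability that a +z-prepared spin-1/2 exits the '+' port of an
    analyzer rotated by theta (degrees) from +z: P = cos^2(theta/2)."""
    return math.cos(math.radians(theta_deg) / 2) ** 2

for theta in (0, 90, 180, 270):
    print(theta, "deg:", round(p_plus(theta), 3))
```

Since cos²(135°) = 1/2, the 270° case should give the same 50/50 statistics as 90°; the asymmetric simulator result looks like an artifact.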
The e and m bosons make the 'field'. Thus, the field is limited to v << c; it therefore cannot propagate at v = c. The idea of an underlying non-local [QFT] field requires it to be infinite. If it is finite, it is therefore localized. The big bang is a lower limit and, by the most fundamental theorem of limits at infinity, defines our domain as finite. As such, no underlying non-local field is possible. It is founded upon explaining clear observables by way of ungovernable (b-day and compactified) dimensionalities.
Question
In quantum mechanics, the state space is a separable complex Hilbert space.
By definition, a Hilbert space is a complete inner product space.
The term complete means that any Cauchy sequence of elements (vectors) belonging to the Hilbert space converges to an element which also belongs to the space. In other words, completeness means that the limits of convergent sequences of elements belonging to the space are also elements of the space. Intuitively, we can say that Hilbert spaces have no “holes”.
If the state space is infinite-dimensional, we implicitly invoke its completeness every time we expand a state in terms of a complete set of eigenstates, such as the energy eigenstates or the eigenstates of another observable, since an infinite series of eigenstates is meant as the limit of the sequence of the respective partial sums when the number of terms tends to infinity. The sequence of partial sums is then a Cauchy sequence converging to the initial state, which must belong to the space.
Qualitatively, considering a convergent sequence of physical states, we expect that it converges to a physical state too, because it would be unphysical, by means of such a sequence, to end up at an unphysical state. For instance, assume that we perform a series of small changes to the state of a quantum system and suddenly we reach an unphysical state. This would be physically unacceptable. Thus, from a physical perspective, the completeness of the state space seems unavoidable.
However, looking in some of the so-called standard textbooks of quantum mechanics, particularly in Sakurai’s, Merzbacher’s, Gasiorowicz’s, and Griffiths’s, this essential property is either overlooked or just mentioned, and it is not highlighted properly.
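The role of completeness in eigenfunction expansions can be illustrated numerically. This is my own sketch, using a finite discrete space as a stand-in for the state space: the partial sums of an expansion in an orthonormal basis form a Cauchy sequence whose norm-residual goes to zero, i.e. the sequence converges to the expanded state itself.

```python
import numpy as np

# Expand a random normalized vector in the orthonormal DFT basis and
# watch the norm of the residual of the partial sums go to zero.
rng = np.random.default_rng(0)
n = 256
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)

# Unitary DFT matrix: its columns form an orthonormal basis.
F = np.fft.fft(np.eye(n), norm="ortho")
coeffs = F.conj().T @ psi                 # expansion coefficients

# add terms in order of decreasing |coefficient|
order = np.argsort(-np.abs(coeffs))
partial = np.zeros(n, dtype=complex)
residuals = []
for idx in order:
    partial += coeffs[idx] * F[:, idx]
    residuals.append(np.linalg.norm(psi - partial))

print("residual after all terms:", residuals[-1])
```

In an infinite-dimensional space the same construction produces a genuine Cauchy sequence of partial sums, and completeness is exactly the guarantee that its limit is again an element of the space.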
Mathematical subtleties that are of relevance in quantum theory are indeed discussed in many books, though they are not usually suggested as references to undergrads. For instance, see the two volumes by Galindo and Pascual. They discuss finer points such as the deficiency index, self-adjoint extension, etc, in addition to a neat introduction to Hilbert spaces. Another accessible introduction is the book by Capri. There are, of course, the well known mathematical treatises by Reed-Simon, Prugovečki, etc.
Question
In quantum mechanics, the state space is a separable complex Hilbert space.
A Hilbert space is separable if and only if it has a countable orthonormal basis [1, 2].
Why the quantum mechanical state space must be separable?
In , we read that separability is a mathematically convenient hypothesis, with the physical interpretation that countably many observations are enough to uniquely determine the state of a quantum system.
In Merzbacher’s quantum mechanics (3d ed.), page 185, we read that “The infinite-dimensional vector spaces that are important in quantum mechanics are analogous to finite-dimensional vector spaces and can be spanned by a countable basis. They are called separable Hilbert spaces.”
From a historical point of view, the two descriptions (or versions) of quantum mechanics that were initially developed in the 1920s, namely Schrödinger's wave mechanics and Heisenberg's matrix mechanics, were respectively based on the Hilbert spaces of square-integrable functions and of square-summable sequences of complex numbers, which are both separable and physically equivalent (mathematically isomorphic). Thus, the invariant (or representation-free) description of quantum mechanics through the abstract Hilbert space of Dirac kets, which followed, had to be based on a separable Hilbert space too, otherwise it would not be equivalent to the two existing descriptions.
As it happens with the property of completeness , the property of separability of the quantum mechanical state space is also overlooked or mentioned very briefly in standard textbooks and the reader, especially the physics-oriented one, is left with the impression that it is rather a mathematical “decoration” of minor physical importance that can be forgotten.
From my own experience, it is also worth noting that the expression "a Hilbert space is separable if and only if it has a countable basis", which is often given as the definition of separability, is tricky and, to some extent, misleading. A reader with some background in functional analysis will rather easily understand that, here, "it has a countable basis" actually means "ALL bases are countable": two basis sets are related by a one-to-one and onto mapping, thus they have the same cardinality, and then if one is countable, the other is countable too. But a physics student may be confused and left with the impression that separable Hilbert spaces also have uncountable bases, which is the wrong picture, especially in connection with the uncountable (continuous) sets of the position and momentum eigenstates: although they span the state space, they are not actually bases, because they do not belong to the state space, and this point is not highlighted in the literature either.
Of course, it is not correct to claim that this hasn't been taken into account.
Question
In standard quantum mechanics textbooks, the form of the momentum operator, in position space, is either given as definition, i.e. they write that the momentum operator is –id/dx (times the reduced Planck constant), or, in more advanced textbooks, like Landau & Lifshitz’s, it is derived as the generator of spatial translations.
I wonder if the form of the momentum operator, i.e. that it is a first-order differential operator, can be derived qualitatively, by means of physical arguments. In other words, does the slope of an arbitrary, i.e. non-stationary, wave function have a physical meaning?
For people who insist on some definite definition, it is defined as the operator dual to x, that is, whatever operator p whose property is [x, p] = iħ.
In the previous message I showed how to recover m dx/dt.
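Whichever route one prefers, the defining relation can be checked symbolically. A sketch of mine: taking p = -iħ d/dx in position space (the textbook definition quoted in the question), the canonical commutator [x, p]ψ = iħψ holds for an arbitrary ψ, and plane waves are eigenfunctions of p.

```python
import sympy as sp

x, hbar, k = sp.symbols("x hbar k", real=True, positive=True)
psi = sp.Function("psi")(x)

def p(f):
    """Momentum operator in position space: -i*hbar*d/dx."""
    return -sp.I * hbar * sp.diff(f, x)

# canonical commutator acting on an arbitrary wave function
commutator = x * p(psi) - p(x * psi)
print(sp.simplify(commutator))          # I*hbar*psi(x)

# a plane wave is an eigenfunction with eigenvalue hbar*k
plane = sp.exp(sp.I * k * x)
print(sp.simplify(p(plane) / plane))    # hbar*k
```

The plane-wave check also answers the physical question in part: the local slope of the phase of ψ is what carries the momentum information.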
Question
I wasn't able to get von Neumann's book "Mathematical Foundations of Quantum Mechanics". But I saw many descriptions of his scheme of measurement, all of them saying the same things.
What I was interested in, was to see whether von Neumann claimed somewhere that after measuring a quantum system (with a macroscopic apparatus) and obtaining a result, say a, the rest of the wave-function disappears. As a simplest example, let the wave-function be α|a> + β|b>, and in one particular trial of the experiment one gets the result a. I saw nowhere a claim that von Neumann said that the part |b> of the wave-function disappears.
What I saw was the following claim: if we collect in a separate set A all the trials which produced the result a, the wave-function characterizing the quantum object in the set A is |a>.
I never saw a word about what happens with the part |b> of the wave-function in these trials. No assumption whether it disappears, or, alternatively, no opinion that we can say nothing about it. The fact that we collect the systems that responded a in a separate subset, does NOTHING to the part |b> if it survives in some way.
Did somebody see in von Neumann's work any opinion about the fate of the part |b>?
Technical texts aren't scripture! Nor does the fact that anyone appeals to von Neumann (or any other "famous" name) imply anything about whether the statements made by those people, or by von Neumann or whatever authority is appealed to, are correct or not. That's why it's utterly futile to focus on the text and not the meaning in scientific issues.
Question
Please, I need only the number, not references to books or general words.
Please consider the photon as the quantum of the electromagnetic field, and thus as a carrier of the magnetic field. Thank you in advance.
Is the photon motionless? Obviously not; it is moving at light speed. Then where might 'its magnetic field' be observed? One mile behind the photon, two miles? A stationary observer should experience a time-varying field (electric and/or magnetic), or an energy flux in other words. If so, then the photon's energy should steadily decrease, and thus its frequency should tend to zero. Nothing like that is observed, although there are some hypotheses about 'tired light'. In conclusion: the amplitude of a single photon's magnetic field is exactly zero.
Or, put another way: the photon does not carry any electric charge and therefore produces no (time-varying!) electric field, so why would it be the source of a magnetic field, which is uniquely related to the electric field?
Question
This question is a reaction to the fact that some authors hold that the interaction between a microscopic object with a macroscopic object, leads to an entanglement between the states of the microscopic object and states of the macroscopic object. My opinion is that such an entanglement is impossible.
I recommend as auxiliary material the discussion
https://www.researchgate.net/post/What_is_the_quantum_structure_of_a_particle_detector_containing_a_gas_obeying_Maxwell-Boltzman_statistics
THE EXPERIMENT: From a pair of down-conversion photons, the signal photon illuminates the non-balanced beam-splitter BS1 - see the attached figure. The idler photon is sent to a detector E (not shown) for heralding the presence of the signal photon in the apparatus. The signal photon exits BS1 in the superposition
(1) |1>s → t|1>a |0>b + ir|0>a |1>b ,    t² + r² = 1.
On each one of the paths is placed an absorbing detector, respectively A and B. The figure shows that the wave-packet |1>a reaches the detector A before |1>b reaches the detector B. Let |A0> ( |B0> ) be the non excited state of the detector A (B), and |Ae> ( |Be> ) the excited state after absorbing a photon.
Some physicists claim that the evolution of the signal photon through the detector A can be written as
(2) |A0> |1>s → (t|Ae> |0>b + ir|A0> |1>b) |0>a .
I claim that this expression is impossible, for a couple of reasons.
1) Are the states |A0> and |Ae> pure quantum states, or mixtures? I claim that a macroscopic object cannot have a pure quantum state, it can be in a mixture of pure states, all compatible with the macroscopic parameters. As supporting material see the discussion recommended above, and also the Feynman theory of path integral - the macroscopic limit.
2) In continuation, when the wave-packet |1>b meets the detector B, the state (2) should evolve into
(3) |A0> |B0> |1>s → (t |Ae> |B0> + ir |A0> |Be>) |0>a |0>b
= (t |Ae> |B0> + irt |Ae> |Be> - irt |Ae> |Be> + ir |A0> |Be>) |0>a |0>b
= [ t |Ae>( |B0> + ir |Be>) + ir|Be> ( |A0> - t |Ae>)].
That is similar to the following situation: if cat A says "miaow", cat B remains in the superposition ( |cat B dead> + ir |cat B alive>); and if cat B says "miaow", cat A remains in the superposition ( |cat A dead> - t |cat A alive>).
Did somebody see cats in such situations?
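Whatever one thinks of the physics, the algebraic step from the first to the last line of (3) can be verified mechanically. A sketch of mine: I treat the kets as commuting formal symbols, which is enough for checking that the regrouping is an identity.

```python
import sympy as sp

t, r = sp.symbols("t r", real=True)
A0, Ae, B0, Be = sp.symbols("A0 Ae B0 Be")   # formal stand-ins for the kets

# first line of (3), after the arrow (the |0>a |0>b factor dropped)
lhs = t * Ae * B0 + sp.I * r * A0 * Be
# last line of (3): the regrouped form
rhs = t * Ae * (B0 + sp.I * r * Be) + sp.I * r * Be * (A0 - t * Ae)

print(sp.expand(rhs - lhs))   # 0
```

The cross terms ±irt·Ae·Be cancel, so the regrouping is exact; the physical question is only whether the entangled state on the first line can exist for macroscopic detectors at all.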
The problem with this question is it depends on what you think the wave function represents. Mathematics simply relates symbols, but in physics, strictly speaking the symbols have to represent something in the world. In quantum mechanics the issue depends on what you think ψ represents. If all you do is consider it a mathematical process, then you write your formalism, and in the case of Sofia's question, your answer depends on what your formalism gives you.
I am sorry, Sofia, that I have not tried to give an answer because when push comes to shove, this problem has some similarity to the delayed quantum eraser experiment, and I have argued that there is an alternative possibility, and if the experiment were done properly, you would be able to distinguish them. What should happen depends very much on exactly what the down converter does with a polarised photon, and exactly what happens in the beam splitter. This can be approached by assertion, or one could do the experiment I want done. Most will wave their arms and say there is no need because they "know" what will happen, but if they were that confident, they should simply do it. (In the first step it involves getting the experiment to work as published, then blocking one of the streams of idlers going to the mixer and seeing the effect on the signal photons. The concept is you have to change the nature of what you KNOW gives the effect before you make your choice.)
Question
The Schrödinger self adjoint Hamiltonian operator H correctly predicts the stationary energies and stationary states of the bound electron in a hydrogen atom. To obtain such states and energies it suffices to calculate the eigenvalues and eigenfunctions of H. Since 1926 up to now, and for the foreseeable future of Physics, any theoretical description of the hydrogen atom has to assume this fact.
On the other hand, the Schrödinger time-dependent unitary evolution equation $\partial \Psi / \partial t = -(i/\hbar) H\Psi$ is obviously mistaken. So much so that in order to explain transitions between stationary states the unitary law of movement has to be (momentarily?) suspended, and then certain "intrinsically probabilistic quantum jumps" are supposed to rule over the process.
Transitions are physical phenomena that consist in the electron passing from an initial stationary state with an initial stationary energy, to another stationary state having a different stationary energy. Physically transitions always involve the respective emission/absorption of a photon. Whenever transitions occur the theoretical unitary evolution is violated.
It is absurd to accept as a law of nature an evolution equation that does not correspond with the physical phenomena being considered. Electron transitions are not predicted by, described by, nor deducible from the Schrödinger evolution equation. In fact the Schrödinger evolution equation is physically useless. This is the reason for Schrödinger's "Diese verdammte Quantenspringerei". Decades of belief in unitary evolution originated countless speculation, contradiction and confusion, with enormous waste of human talent and time.
Assume then that physicists accept the mistaken nature of unitary evolution and propose its replacement with a novel equation that a) is consistent with the predictive virtues of H, and b) deterministically describes transitions. In principle a probability-free, common sense, rational, deterministic, well constructed replacement of Quantism should be a welcome relief for physicists and chemists, and for philosophers of science as well. Then, among equations and theories currently accepted by mainstream Physics, which ones would be affected by the eventual replacement of unitary evolution? Here is a short list of prospective candidates that the reader can extend and refine: quantum chemistry, the Dirac equation, quantum field theories, quantum gravity, the Standard Model. Lists of physical theories are available at
For more on the inconsistencies of Quantism and details on a theory that could replace it see our Researchgate Contributions page
With most cordial regards, Daniel Crespin
Dear Christian,
You wrote: "There seems nothing wrong with Schrödinger's equations. Only Bohr and Heisenberg did not like it. They preferred the mystery."
And so did most in the community almost from the start.
De Broglie and Schrödinger already considered transitions as "not instantaneous" even as Schrödinger introduced the wave equation, and this was practically 100 years ago, not a few decades ago, and this is what they were attempting to address, but could interest nobody else.
At the beginning of the 1950's, Schrödinger himself very publicly and futilely protested again about the very idea of instantaneous quantum jumps. Unfortunately, nobody paid attention.
If you had even glanced at the paper I referred Daniel to in the first answer in this thread, you would have found the direct quotes from him and the actual references where you can find them.
It is to this 100 years disconnect that I have been trying to draw attention.
At long last, some research seems to be resuming in the right direction.
Best Regards
André
Question
Personally I don't find objective-collapse theory (QMSL) very appealing: even though problems were resolved in the 1990s, there are still inconsistencies in terms of diverging particle densities, etc.
However, the Penrose interpretation, which is considered to be a QMSL variant, is a different story altogether: Penrose suggests that the collapse of the wave function occurs when the energy difference between quantum states reaches a certain threshold, the limit being the Planck mass of the system/object at hand. In effect this still means that matter can exist in more than one place at a time. Nonetheless, a macroscopic system, like a human being, cannot exist in multiple places at once, as the corresponding energy difference is too large to begin with. A microscopic system, on the other hand, like an electron, can exist in more than one location until its space-time curvature separation reaches the collapse threshold, which could be thousands of years from the emergence of its superposition.
What are your thoughts and opinions on this?
• Note: A macroscopic system, like a human being, could theoretically exist in a superimposed state for a very, very short period of time, at scales of Planck time or less [arXiv:1401.0176] ... So therefore it's not considered significant (if at all possible).
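To put rough numbers on Penrose's timescale: his heuristic is τ ≈ ħ/E_G, with E_G the gravitational self-energy associated with the mass displacement. The sketch below is mine and order-of-magnitude only; the stand-in E_G ≈ Gm²/R is a crude assumption (the actual proposal uses the self-energy of the difference between the two mass distributions), but it already shows why electrons stay superposed essentially forever while even a dust grain collapses almost instantly.

```python
# Order-of-magnitude Penrose collapse time tau ~ hbar / E_G,
# with the crude stand-in E_G ~ G * m^2 / R (assumption for illustration).
G    = 6.674e-11   # m^3 kg^-1 s^-2
hbar = 1.055e-34   # J s

def collapse_time(m, R):
    """Estimated collapse time (s) for mass m (kg) displaced by ~R (m)."""
    e_grav = G * m**2 / R
    return hbar / e_grav

# electron (~9.1e-31 kg) displaced by an atomic-scale ~1e-10 m
print("electron:  ", collapse_time(9.1e-31, 1e-10), "s")
# a 1-microgram dust grain displaced by ~1e-6 m
print("dust grain:", collapse_time(1e-9, 1e-6), "s")
```

The electron estimate comes out vastly longer than the age of the universe, consistent with the "thousands of years" (and more) in the question, while the microgram grain collapses in well under a nanosecond.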
Dear Daniel,
I think you may have forgotten about the Einstein–Smoluchowski Diffusion Equation, which is a differential equation for a classical probability wave in a diffusing system. Classically, I could label one of the particles, then throw it into a room and allow it to be pushed around randomly. I could then calculate the probability distribution at a subsequent time, then find that particle. The probability distribution immediately collapses. The probability distribution has no objective physical existence per se. Same thing in quantum theory, unless you have Bohmian-Vigier tendencies. Probability is a manifestation of context and ignorance, not a property of an object.
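The classical analogy in the answer above can be simulated directly. A sketch of mine: evolve many random walkers, so that our knowledge of any one particle is a spreading probability distribution; then "observe" one walker, and the distribution describing it collapses to a point. Nothing physical happens at that moment; only the observer's information changes.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1-D random walks: our *knowledge* of a particle's position is a
# spreading probability distribution over positions.
steps = 200
walkers = rng.choice([-1, 1], size=(10_000, steps)).cumsum(axis=1)
final = walkers[:, -1]

# Before looking: a broad distribution with std ~ sqrt(steps).
print("std of position before observation:", final.std())

# Now 'observe' walker 0: the distribution describing *that* walker
# collapses to a point mass at its actual position.  Only our
# information changed; the particle itself did nothing special.
observed = final[0]
print("after observation, position is known exactly:", observed)
```

This is exactly the sense in which a classical probability distribution "collapses" upon observation, with no physical process involved; the open question is whether the quantum ψ admits the same reading.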
Question
The "collapse" postulate says that if part of the wave-function produces a click in a detector, the rest of the wave-function disappears. In the experiment described here, it is shown that no part of the wave-function disappears, namely, given a superposition of two wave-packets, while one wave-packet produces a click in a detector, the other wave-packet produces observable interference effects.
A quantum system is prepared in a state with maximum one particle, a photon, A;
(1) |ψ> = q{ |0>A + p( |1;a>A |0;b>A + e^(iθ) |0;a>A |1;b>A ) },
see figure.
It is shown below that while the wave-packet |1;a>A produces a click in the detector U, the wave-packet |1;b>A produces observable interference effects at the beam-splitter BS.
The wave-packet |1;b>A illuminates one side of the 50-50% beam-splitter BS, and on the other side lands a coherent beam
(2) |α> = N( |0>B + pe^(iα)|1>B + . . . ),
where N is the normalization factor. Thus, we have the total wave-function
(3) Φ = |α>|ψ> = Nq( |0>B + pe^(iα)|1>B + . . . ){ |0>A + p( |1;a>A |0;b>A + e^(iθ)|0;a>A |1;b>A ) }.
At the beam-splitter the following transformations take place
(4) |1>B → (1/√2) ( |1;c> |0;d> + i|0;c> |1;d>);
(5) |1;b>A → (1/√2) (i|1;c> |0;d> + |0;c> |1;d>).
Introducing them in (3) one gets the following IMPLICATIONS:
(6) For θ = α - π/2, every click in the detector D is preceded by the detection of the wave-packet |1>a in the detector U.
(7) For θ = α + π/2, every click in the detector C is preceded by the detection of the wave-packet |1>a in the detector U.
Thus, one can see that by changing the phase θ, carried by the wave-packet |1;b>A , one can switch between a joint click in D and U, and a joint click in C and U.
CONTRADICTION: one can see in the figure that BS is more distant from the preparation region than the detector U. So, if the collapse hypothesis were correct, the tuning of θ would have no effect, since when the detector U clicks, the wave-packet |1;b>A would disappear instead of reaching the beam-splitter BS.
CONCLUSION: No part of the wave-function disappears - no collapse.
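The switching behind (6) and (7) reduces to single-photon interference at BS between the |1>B term of (2) and the e^(iθ)|1;b>A term of (1), combined through the transformations (4) and (5). A numerical sketch of mine (phases only; the common factors p, q, N and 1/√2 are dropped) shows the single-click amplitude at one output vanishing at θ = α + π/2 and at the other output at θ = α - π/2, which is what forces the corresponding clicks to come only in coincidence with U:

```python
import cmath, math

def singles(theta, alpha):
    """Interfering single-photon amplitudes at the two BS outputs,
    per transformations (4)-(5); common prefactors dropped."""
    c = cmath.exp(1j * alpha) + 1j * cmath.exp(1j * theta)   # output c
    d = 1j * cmath.exp(1j * alpha) + cmath.exp(1j * theta)   # output d
    return abs(c) ** 2, abs(d) ** 2

alpha = 0.3                      # arbitrary reference phase
for name, theta in (("theta = alpha - pi/2", alpha - math.pi / 2),
                    ("theta = alpha + pi/2", alpha + math.pi / 2)):
    c2, d2 = singles(theta, alpha)
    print(f"{name}: |C|^2 = {c2:.3f}, |D|^2 = {d2:.3f}")
```

At θ = α - π/2 the D single-click amplitude is extinguished, and at θ = α + π/2 the C one is, matching the coincidence pattern claimed in (6)-(7); the interference is visibly controlled by a phase carried by the wave-packet |1;b>A.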
Dear Tina Lindhard,
my viewpoint on Quantum Mechanics is basically expressed in this work:
You are not a physicist, but if you raise questions I will try to give an answer.
Regards
Daniele Sasso
Question
Dear Syed
According to my mathematics, called self-field theory (SFT), there may be two different methods of storing memories: short-term memories are based on electromagnetic (EM) fields, while long-term memories are stored as strong nuclear (SN) fields, probably within DNA. The conversion between short- and long-term memories presumably happens during 'deep' sleep, when the two-dimensional EM fields are somehow converted into three-dimensional gluon-encoded data within quarks. This is implied by the structure of the mathematics and its connections to particle physics.
The mathematics
If this hypothesis is correct the question is what happens to 'sound' in long term memories?
Could this be useful in your research into consciousness?
With a SIM card and a code we have access to a huge amount of information which is not stored in the cell phone but in the cloud, i.e. in big servers at another place in space. I claim the same for our memory. A code in the brain is enough; the memory of an event is not stored in the brain. It does not have to be stored at all, as all events past, present and future exist in spacetime, which is at least ontologically 4D but possible to extend to 6D; see http://www.drpilotti.info/eng/sixdimensioinal-relativity.html
Question
- A moment is the smallest difference between two states of the same matter in space.
- Time is the continuous flow of multiple consecutive moments.
I don't know if you could comment or not, but above is my proposition.
How can we define time? Today I propose a simple definition of time. Tell me what you think!
Time and space are perceived as a whole by a person. We live in and feel the chronotope; we remember chronotopes in connection with one event or another.
Question
What lies outside of the boundaries of space? I find the problem in this question is our knowledge: we have a prior belief that we exist inside space, a belief that is not based on any evidence. So I think that before asking "what lies outside of space?" we must first ask: do objects lie inside space or outside of it?
I think that if objects exist outside of space, they will suffer from superposition or uncertainty in position, depending on their distance from space, i.e. how far they are from it.
Can I ask: what is space?
Question
The standard QM offers no explanation for the collapse postulate.
According to Bohmian mechanics (BM) there is no collapse: there exists a particle following some trajectory, and a detector fires if hit by that particle. However, BM has big problems concerning the photon, for which no particle and no trajectory is predicted. Thus, in the case of photons, it is not clear which deterministic mechanism BM suggests instead of the collapse.
The Ghirardi-Rimini-Weber (GRW) theory says that the collapse occurs due to the localization of the wave-function at some point, decided upon by a stochastic potential added to the Schrodinger equation. The probability of localization is very high inside a detector, where the studied particle interacts with the molecules of the material and gathers around itself a bigger and bigger number of molecules. Thus, at some step there grows a macroscopic body which is "felt" by the detector circuitry.
Personally, I have a problem with the idea that the collapse occurs at the interaction of the quantum system with a classical detector. If the quantum superposition is broken at this step, how does it happen that the quantum correlations are not broken?
For instance, in the spin singlet ( |↑>|↓> - |↓>|↑>), in a Stern-Gerlach measurement with the two magnetic fields identically oriented, one gets either |↑>|↓> or |↓>|↑>. The quantum superposition is broken, but the quantum correlation is preserved: one never obtains, if the magnetic fields have the same orientation, |↑>|↑> or |↓>|↓>.
WHY SO? Practically, what connection may be maintained between the macroscopic body appearing in one detector and the macroscopic body appearing in another detector, far away from the former? Why is the quantum correlation not broken, when the quantum superposition is?
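The preserved correlation can be stated numerically. A sketch of mine, for the singlet with both Stern-Gerlach fields along z: the joint probabilities are P(↑↓) = P(↓↑) = 1/2 and P(↑↑) = P(↓↓) = 0, so breaking the superposition into one definite outcome never produces the forbidden same-spin pairs.

```python
import numpy as np

# Singlet state (|up,down> - |down,up>)/sqrt(2) in the product basis
# {uu, ud, du, dd}; both analyzers oriented along z.
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)

labels = ["up,up", "up,down", "down,up", "down,down"]
probs = np.abs(singlet) ** 2          # Born-rule joint probabilities
for lbl, pr in zip(labels, probs):
    print(lbl, pr)
```

The zeros for the up,up and down,down outcomes are exact, which is precisely the correlation that survives whatever mechanism breaks the superposition.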
Dear Sofia,
observation”. For example, your friend takes away one die, without looking, from a box with one white die and one black die. You know, before observing the remaining die, that your friend will see the white die with probability 0.5. The probability becomes 1 when you see the black die, and 0 when you see the white die, regardless of the distance between you and your friend. Thus, the wave function describes first of all the state of the mind of the observer, according to Born’s interpretation. We can think that only the observer’s knowledge changes because of observation in the case of the dice. But the wave-particle duality observed, for example, in the double-slit interference experiment cannot be described if only the observer’s knowledge changes. Therefore Dirac postulated in 1930 ”that a measurement always causes the system to jump into an eigenstate of the dynamical variable that is being measured”. The wave function describes the states of both the mind of the observer and the quantum system, according to the Dirac jump, or the collapse of the wave function. This renouncement of the distinction between reality and our knowledge of reality creates the illusion that quantum mechanics can describe the wave-particle duality and other paradoxical quantum phenomena. But it is very strange that only a few scientists have understood that this renouncement of realism leads to logical absurdity.
Question
In fact, I'm working on a thesis project on Quantum Information, precisely on quantum error-correcting codes. I started my research on the subject not long ago, looking specifically at how one can go from a classical signal to a quantum signal in order to describe the algorithms of error-correction codes in physical channels.
There is a lot of work going on here; it is a very active field of research. Also, your question does not seem entirely clear: do you mean the uploading of classical information with a quantum oracle (like quantum RAM)?
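Since the question concerns moving from classical to quantum error correction, the standard entry point is the three-qubit bit-flip code, the quantum analogue of the classical repetition code. The statevector sketch below is my own minimal illustration (not from the thread): encode a|0> + b|1> as a|000> + b|111>, read the error syndrome from the stabilizer observables Z0Z1 and Z1Z2 (deterministic for a single bit-flip error, since the error moves the state into an orthogonal sector), then correct without ever learning a or b.

```python
import numpy as np

# single-qubit gates
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def on(qubit, gate, n=3):
    """Embed a 1-qubit gate on `qubit` (0 = leftmost) of an n-qubit register."""
    ops = [I2] * n
    ops[qubit] = gate
    out = ops[0]
    for g in ops[1:]:
        out = np.kron(out, g)
    return out

# encode a|0> + b|1>  ->  a|000> + b|111>
a, b = 0.6, 0.8
logical = np.zeros(8)
logical[0b000] = a
logical[0b111] = b

# a bit-flip error on qubit 1
corrupted = on(1, X) @ logical

# syndrome: expectation values of Z0Z1 and Z1Z2 (exactly +/-1 here)
s01 = corrupted @ (on(0, Z) @ on(1, Z)) @ corrupted
s12 = corrupted @ (on(1, Z) @ on(2, Z)) @ corrupted
flipped = {(1, 1): None, (-1, 1): 0, (-1, -1): 1, (1, -1): 2}[(round(s01), round(s12))]

# correct and verify
recovered = on(flipped, X) @ corrupted if flipped is not None else corrupted
print("diagnosed flip on qubit:", flipped)
print("recovered equals encoded:", np.allclose(recovered, logical))
```

The classical intuition (majority vote of three copies) survives, but here the "vote" is taken by measuring parities only, which is what lets the code protect an unknown superposition.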
Question
What experimental evidence (or any other) contradicts the use of non-unitary, non-Hermitian mathematics to represent pure quantum states? This question relates to pure states, not mixed states. Note that rational matrices have rational (real) eigenvalues.
The mathematics used to describe quantum mechanics was chosen to correspond to the observed physical realities of experiment. The reason we use Hermitian operators is because they have real eigenvalues, corresponding to the fact that the numbers produced by measurements are real. While you could certainly claim to "measure" a complex or other sort of number, ultimately that could be seen as measuring a pair of real numbers (the real and imaginary parts), or four real numbers for a quaternion, so we don't lose any possibilities by requiring real numbers.
As a further example of this, measurements DID force us to use spinors rather than just the single complex wave function; again the predictions are collections of real numbers.
The need for unitarity is a consequence of our interpretation of quantum mechanics as probabilistic and evolving via the Schrodinger equation. The wave function is unchanged by an overall constant multiple. By normalizing the wave function to unit magnitude, we normalize probabilities in the usual way. The phase that remains is arbitrary (*relative* phases are not, just an overall one).
If we were to transform a wave function by a non-unitary transformation, it is easy to show (try it!) that the Schrodinger equation does not preserve total probability. This, with the standard interpretation, means that there is not probability one that the particle is *somewhere*. The experimental observation you are asking for is that we do not see particles spontaneously vanish or come into being.
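The "try it!" in the last paragraph is easy to carry out numerically. A sketch of mine: evolve a random state with a Hermitian Hamiltonian and the norm stays exactly 1; add a non-Hermitian piece (here 0.3i times the identity, chosen for simplicity) and the total probability drifts away from 1.

```python
import numpy as np

def expm(A):
    """Matrix exponential via eigendecomposition (adequate for these
    small diagonalizable examples)."""
    w, V = np.linalg.eig(A)
    return V @ np.diag(np.exp(w)) @ np.linalg.inv(V)

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H_herm = (H + H.conj().T) / 2            # Hermitian part
K = 0.3j * np.eye(4)                     # non-Hermitian addition

psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

t = 1.0
unitary = expm(-1j * H_herm * t) @ psi           # norm preserved
nonunitary = expm(-1j * (H_herm + K) * t) @ psi  # norm grows by e^0.3

print("norm after Hermitian evolution:    ", np.linalg.norm(unitary))
print("norm after non-Hermitian evolution:", np.linalg.norm(nonunitary))
```

With the standard probabilistic interpretation, the second result would mean the particle is no longer guaranteed to be somewhere, which is the experimental objection stated above.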
Question
The original question was wrong and has therefore been completely rewritten.
Context:
Suppose I have a 3D particle placed in a spherical box (an infinite well of radius r); its position expectation value is then 0.
The surface area of the ball is $4 \pi r^2$.
From quantum gravity, (existence) https://arxiv.org/pdf/gr-qc/9403008.pdf and (a fairly good numerical approximation) https://en.wikipedia.org/wiki/Planck_length we know that space and time are quantized. Suppose the minimum length is ds; then the maximum partition of the surface of the ball is $N = 4 \pi r^2/ds^2$.
This means that as r increases, the number of possible segments where our probe could be placed increases.
In analogy, suppose I have a particle of spin 1/2, placed at the center of the ball, and I measure its spin. A ball with larger r then has more "segment" area available for observation.
Question 1
Is this analysis correct? If not, why? Further, what are its implications?
Question 2
Suppose I create a pair of such spin-1/2 particles entangled together, one placed in a ball of radius $r_a$, the other in a ball of radius $r_b$. If $r_b > r_a$, then our measurement could be more "precise" for ball b than for ball a.
In an imaginary extreme case where $N = 4 \pi r_a^2/ds^2 = 2$, a measurement on ball a could only yield up or down.
What happens to the information here? Is it still consistent?
Clarification:
1. In question 2, since it is an infinite well (although not possible in reality), the particle does not have to be exactly at the "center" $(0,0,0)$. Because the particle is not at the boundary and, in spherical coordinates, by symmetry, the position expectation value is at the center. In fact, it does not even have to be at the center position; a wave is fine. The encoding is based on the tunneling probability $T$ being chosen equal to 0 or $\ll 1$, so that it can be ignored in numerical calculation.
2. The imaginary extreme is based on the fact that the classical electron radius is of order $10^{-15}$ m while the Planck length is of order $10^{-35}$ m, so $N = 2$ cannot actually happen; it is just to demonstrate the idea.
When quantum gravity is assumed, the continuity of space-time is essentially negated.
In particular, when taking lengths comparable to $ds$ (in this case, the Planck length), to quote my professor: the superposition of states is destroyed. In the extreme case of a volume $ds^3$, the wave function becomes a Dirac delta function.
Thus, by taking lengths comparable to $ds$, we essentially alter the particles, and the information is no longer valid; in particular, the entanglement is altered.
Question
Is this meant as solitons? I think I read that the EM field does not have soliton-like solutions in most cases. But if they are solitons, how is their ability accounted for to 'feel' the entire space in almost zero time (as is evident from the Feynman trajectories approach)?
In a box the excitations emerge immediately and comprise the whole length of the box. So there is a probability to detect a photon far from the source. How is this consistent with the constancy of the speed of light c?
Dear Ilian,
Transient processes are not considered for quantum transitions. Photon or phonon creation, it does not matter.
Quantum theory is a theory of metamorphosis. Each quantum process is a metamorphosis of an initial state into a final state. Even if the initial and final states are the same, the process is considered a sort of metamorphosis.
If one considers the field at each point as an oscillator, these oscillators have to be coupled, because one CANNOT initiate independent oscillations of each oscillator. If one considers each standing wave as an oscillator, these oscillators ARE independent.
Question
I am clear that measurements of momentum, spin, and polarization performed on entangled particles are found to be correlated,
but I do not understand:
in what way are position measurements performed on entangled particles found to be correlated?
Yes. x is a dummy variable. If it represents the polarisation, it is exactly the same as the correlation of polarisation. But in this latter case, the polarisation is represented by only two eigenstates |0> and |1> instead of a continuum |x>. That doesn't change the principle.
Question
In relativistic quantum mechanics (quantum field theory) we take coordinate time as a time observable to bring space and time on an equal footing. Why can't we take proper time as a time observable? By this approach we might overcome the problem called "renormalization."
s h s> why can't we take proper time as a time observable?
The proper time of which objects, distributed throughout the universe? Given at space-time geometry defined by the line element
c22 = gμν(x) dxμdxν
it may be possible to choose a time coordinate such that g00 = 1, so that the proper time for objects at rest in these coordinates equals coordinate time. However, this would in most cases be a very bad idea, because then the other components of gμν become horribly ugly.
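The position dependence of proper time can be made concrete with a standard example. The sketch below is my own illustration (the Schwarzschild static-observer rate dτ/dt = √(1 − rs/r) and the Earth values are assumptions of this example, not from the answer): it shows that clocks at rest at different radii tick at different rates, so no single proper time can serve as a global time coordinate.

```python
import math

def proper_time_rate(r, rs):
    """dtau/dt for a static observer at areal radius r, Schwarzschild radius rs."""
    return math.sqrt(1.0 - rs / r)

# Illustrative numbers (assumed): Earth's Schwarzschild radius and surface radius
rs_earth = 8.87e-3   # metres
r_surface = 6.371e6  # metres

rate = proper_time_rate(r_surface, rs_earth)
print(rate)  # slightly below 1: a surface clock ticks slower than a distant one
```

Since the rate depends on r, identifying proper time with coordinate time (g00 = 1) can only be arranged for one family of observers at a time, which is the objection raised above.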
Question
Energy transitions are classified according to spectral series such as Balmer, Lyman, Paschen, etc., which assume a single transition between two non-contiguous levels (except for the first transition). What the question really asks is: can cascading transitions from one level to another, emitting a photon at each contiguous level, occur, or have they been observed?
Question
As we all know, the majority of the software packages used in condensed matter physics are based on DFT (such as VASP, CASTEP, ...). But they can only handle problems at zero temperature, so is there some ab-initio software that can solve the Schrodinger equation at finite temperature?
Structural relaxation in DMFT has been implemented and is freely available here:
Question
A non-absorbing detector is supposed, at least in theory, to report that a particle passes through it, while the particle is allowed to exit the detector, s.t. we can do additional tests on it. No doubt, the non-absorbing detector "collapses" the wave-function, but additional detectors placed downstream may tell us in which state the 1st detector left the particle, which we could only guess if we used absorbing detectors.
Now, does somebody know, do we have such detectors "on shelf", i.e. in practice?
Dear @MUNISH KUMAR,
I see on your profile page that you are an expert in radiation detection. I am no experimenter, so, I'd like to ask you some questions.
Please see, you speak in your comment of gamma detection. Well, if the efficiency is poor with gammas, what about alpha detection? A strongly charged particle may leave part of its energy in a medium and exit the detector through a back window. Then, can we have non-absorbing alpha detectors with not-so-low efficiency?
I am aware that inside the detector medium the alpha may be scattered, but since it is a charged particle, can we place near the back window collimators consisting of electric fields?
I am also aware that the alpha may attach one or two electrons, s.t. a further detection becomes less probable. Finally, as far as I know about cross sections, the faster the alpha, the smaller its cross section of interaction with the medium. But my question is not whether we can achieve very high efficiency; it is about non-negligible efficiency.
With best regards,
Sofia
Question
For understanding my question I invite everybody to read the example.
In Bohmian mechanics (BM) the velocity formula gives an infinite value for the velocity of the Bohmian particle at points where the wave-function vanishes but its gradient does not.
Do we have any proof that this is wrong, i.e. that attributing superluminal velocity to a particle is wrong? Could it be that this feature of the BM is a flaw, and implies that BM is wrong?
EXAMPLE:
In a Hanbury Brown and Twiss (HB&T) type experiment with a pair of identical photons, one passing through a slit A and the other through a slit B - see the attachment - the wave-function of the pair looks as follows:
Ψ(r1, r2) = exp(ik|r1 - rA|) exp(ik|r2 - rB|) + exp(ik|r1 - rB|) exp(ik|r2 - rA|)
= 2 exp[iθ(r1, r2)] cos[ϕ(r1, r2)]
where r1, r2 denote the positions of the photons, rA, rB the positions of the slits, and
ϕ(r1, r2) = (k/2) { (|r1 - rA| + |r2 - rB|) - (|r1 - rB| + |r2 - rA|) }.
If ϕ is an odd integer multiple of π/2, the wave-function vanishes. Assume that so happens for the pair of points P'1 and P'2. If one places detectors at these two points and, say, the detector at P'1 makes a detection, the detector at P'2 remains silent. By the BM, the particle moving towards P'2 jumps over this point with an infinite velocity, and this is why it cannot be detected.
The problem is that the presence of the detector at P'2 reduces the probability Prob(P'1, Q2) of joint detection at P'1 and Q2, where Q2 is any point below P'2. This probability no longer shows an interference effect; it is given only by the crossed waves from A to Q2 and from B to P'1.
Obviously, the detector at P'2, although it cannot detect the particle passing through it, nevertheless stops something. The probability of joint detection at P'1 and Q2 decreases due to the detector at P'2 not only through the disappearance of the interference, but also below the sum of the isolated probabilities of detection from the crossed rays and detection from the direct rays.
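The two-photon wave-function in the question can be checked numerically. The sketch below is my own illustration (the 2-D geometry, slit positions, and wavenumber are arbitrary assumptions): it verifies the identity |Ψ| = 2|cos ϕ|, which is what makes Ψ vanish wherever ϕ is an odd multiple of π/2.

```python
import cmath
import math

def dist(p, q):
    """Euclidean distance between 2-D points p and q."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def psi(r1, r2, rA, rB, k):
    """Sum of the two crossed path amplitudes, as in the question."""
    return (cmath.exp(1j * k * dist(r1, rA)) * cmath.exp(1j * k * dist(r2, rB))
            + cmath.exp(1j * k * dist(r1, rB)) * cmath.exp(1j * k * dist(r2, rA)))

def phi(r1, r2, rA, rB, k):
    """Half the phase difference; Psi vanishes when phi is an odd multiple of pi/2."""
    return (k / 2) * ((dist(r1, rA) + dist(r2, rB))
                      - (dist(r1, rB) + dist(r2, rA)))

# Arbitrary illustrative geometry (assumed): slits on the x-axis, two field points
rA, rB = (-1.0, 0.0), (1.0, 0.0)
k = 7.3
r1, r2 = (0.4, 5.0), (-2.1, 6.3)

lhs = abs(psi(r1, r2, rA, rB, k))
rhs = 2 * abs(math.cos(phi(r1, r2, rA, rB, k)))
print(abs(lhs - rhs))  # ~0: the factorization holds at any pair of points
```

Scanning r2 along a detection screen and locating the zeros of cos ϕ reproduces the pairs of points (P'1, P'2) discussed above.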
Dear Daniel,
Thanks for trying to help. Please see, my question relates to an article. The example there is of the Hanbury Brown and Twiss type, as I wrote in the question. So semiconductors, or the Aharonov-Bohm effect, are not connected with it. But I wrote you a message with details; please look at it.
Question
How are you progressing ?
How have you selected your respondents?
All the best from Copenhagen, Bo
I don't take part in medical projects! S.L.
Question
In the case of diffraction of photons by an edge it can be shown, simply by means of classical relativistic mechanics, that the assumption of quantized angular momentum leads to a diffraction pattern which looks very similar to what is observed in experiments.
Are there any hints that in the very early days of QM Sommerfeld or Bohr or somebody else ever considered the idea that the observed "interference" patterns in diffraction experiments with electrons and photons might result from the quantization of angular momentum and from an interchange of quantized portions of angular momentum during the interaction of an electron or photon with the atoms of an edge, slit, double-slit, etc.?
Herb: A copy of Lande's textbook "New Foundations of Quantum Mechanics" (1965) has been ordered. I am going to reorganize, complete, and translate my unpublished paper "ZUR INTERPRETATION VON TEILCHENBEUGUNGSPHAENOMENEN" (On the Interpretation of Particle Diffraction Phenomena), which I was working on in 2013. The revised English version will be available soon at RG.
Question
Weak measurements are a relatively new trend in quantum experiments. They are meant to determine the so-called Bohmian velocity, but in fact they measure the average linear momentum while disturbing the wave-function very little in each trial of the experiment. Thus, in each trial, after such a measurement one can perform additional measurements on the negligibly disturbed wave-function.
For a very clear example see:
Sacha Kocsis, Boris Braverman, Sylvain Ravets, Martin J. Stevens, Richard P. Mirin, L. Krister Shalm, Aephraim M. Steinberg, Observing the Average Trajectories of Single Photons in a Two-Slit Interferometer, Science 332, 1170 (2011); DOI: 10.1126/science.1202218
Dear Sofia,
I will read your paper with great interest. It sounds good that you give an experimental criterion to judge which theory is correct.
Best wishes
Question