Science topic
Foundations of Quantum Mechanics - Science topic
Principles and interpretations of Quantum Mechanics.
Questions related to Foundations of Quantum Mechanics
A number of Python modules exist for modelling the quantum outputs of quantum optical systems. With only one or two optical components and simple quantum states, system outputs can be calculated by hand. However, when the complexity increases, the benefit of having a Python module to check results, or simply to save time, is obvious. With quantum communications, computers and sensors being investigated seriously, the complexity is already high.
The availability of symbolic algebra programs in Python and Octave is certainly valuable for checking algebra, so you could start from scratch and build something yourself. However, in the case of quantum optics there are additional rules for how creation operators, annihilation operators, Hamiltonians, etc. act on states, so building from scratch is far from trivial.
Given that a number of Python modules exist for performing this symbolic algebra, is there any consensus as to which one is the best and most versatile to use, with the greatest number of users?
many thanks,
Neil
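For concreteness, here is a minimal sketch of the kind of bookkeeping such a module automates, using SymPy's symbolic quantum subpackage (one candidate; QuTiP is the usual numerical alternative). The single mode 'a' is purely illustrative:

```python
# Minimal sketch using SymPy's symbolic boson operators (one candidate
# module; QuTiP is the common numerical alternative).
from sympy.physics.quantum import Dagger, Commutator, qapply
from sympy.physics.quantum.boson import BosonOp, BosonFockKet

a = BosonOp('a')  # annihilation operator for a single optical mode

# The canonical commutation relation [a, a+] = 1 is built in:
print(Commutator(a, Dagger(a)).doit())      # -> 1

# Ladder action on Fock states: a|2> = sqrt(2)|1>, a+|2> = sqrt(3)|3>
print(qapply(a * BosonFockKet(2)))          # -> sqrt(2)*|1>
print(qapply(Dagger(a) * BosonFockKet(2)))  # -> sqrt(3)*|3>
```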
The energy operator iħ∂/∂t and the momentum operator −iħ∇ (in one dimension, −iħ∂/∂x) play a crucial role in the derivation of the Schrödinger equation, the Klein-Gordon equation, the Dirac equation, and other physics arguments.
The energy and momentum operators are not differential operators in the general sense; they play a role in the derivation of these equations as the definitions of energy and momentum.
However, we do not find any reasonable argument or justification for the use of such operators, and even their meaning can only be guessed from their names. They are used without explanation in textbooks.
The clues we found are:
1) In the literature [Brown, L. M., A. Pais and B. Pippard (1995). Twentieth Century Physics (I), Science Press.], "In March 1926, Schrödinger noticed that replacing the classical Hamiltonian function with a quantum mechanical operator, i.e., replacing the momentum p by a partial differentiation of h/2πi with position coordinates q and acting on the wave function, one also obtains the wave equation."
2) Gordon considered that the energy and momentum operators are the same in relativity as in non-relativistic mechanics, and therefore used them in his relativistic wave equation (Gordon 1926).
3) Dirac also used the energy and momentum operators in his relativistic equation for the electron with spin (Dirac 1928). Dirac called this the "Schrödinger representation", a self-adjoint differential operator or Hermitian operator (Dick 2012).
Our questions are:
Why can these operators be used? Why is it possible to represent energy by a time derivative acting on the wave function, and momentum by a spatial derivative? Has this been argued historically or not?
Keywords: quantum mechanics, quantum field theory, quantum mechanical operators, energy operators, momentum operators, Schrödinger equation, Dirac equation.
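A standard heuristic, though not a first-principles derivation, is to apply the operators to a free plane wave ψ(x, t) = e^(i(kx − ωt)): then iħ ∂ψ/∂t = ħω ψ = Eψ by the Planck-Einstein relation E = ħω, and −iħ ∂ψ/∂x = ħk ψ = pψ by the de Broglie relation p = ħk. Postulating that the operators act the same way on superpositions of plane waves is what turns the classical relation E = p²/2m + V into the Schrödinger equation; whether anything deeper than this plane-wave matching was ever argued is exactly the historical question above.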
According to one popular interpretation of quantum mechanics, if you are experiencing a challenging life, there exists a universe where you are thriving. Meanwhile, in another universe, you might be a mighty king or queen who is unmatched in power and fame. This idea, known as the "Many-Worlds Interpretation," suggests that every decision and action leads to the creation of a new universe.
For example, when you come to a junction and decide to turn left, an entirely new universe is created where you turned right. In this view, billions of new universes are created in the blink of an eye, constantly branching out from every possible event or choice.
This raises an interesting question: When was our universe created? Was it 40 years ago when a young bachelor proposed to his girlfriend, and she rejected him, causing the creation of our universe where she joyfully accepted the ring, leading to their happy life together? Or was it just three years ago when a toddler fell ill and died in his mother's arms, causing the creation of our universe where he survived?
Given this perspective, can we really determine the age of our universe? Traditional physics suggests that our universe began about 13.8 billion years ago with the Big Bang, but quantum mechanics introduces the possibility of infinitely branching timelines. While we may be able to measure the age of our observable universe, the idea of multiple universes complicates the notion of a singular timeline or fixed creation point. Thus, determining the age of our universe in this context may ultimately be impossible.
Please comment.
The theme of diffraction typically refers to a small aperture or obstacle. Here I would like to share a video that I took a few days ago, showing that diffraction can similarly be produced by macroscopic items:
I hope you can explain this phenomenon with wave-particle duality or quantum mechanics. However, I can simply interpret it with my own idea of Inhomogeneously refracted space at:
Noether's theorem is a fundamental result in physics stating that every symmetry of the dynamics implies a conservation law. It is, however, deficient in several respects: for one, it is not applicable to dynamics wherein the system interacts with an environment; furthermore, even in the case where the system is isolated, if the quantum state is mixed then the Noether conservation laws do not capture all of the consequences of the symmetries[1].
In SR, force-free motion in an inertial frame of reference takes place along a straight-line path with constant velocity. Viewed from a non-inertial frame, on the other hand, this path of motion will be a geodesic curve in a flat spacetime. Einstein made the plausible assumption that this geodesic motion also holds in the non-flat case, i.e. in a spacetime region for which it is impossible to find a coordinate system that leads to the Minkowski metric in SR[2].
All spacetime models can be expressed in terms of the 4×4 matrix gμν, differing only in the distribution of matrix elements. The gμν of Minkowski spacetime is the unit diagonal matrix diag(1, −1, −1, −1); the gμν of Riemann spacetime is a general matrix {X}. Suppose a new spacetime model is introduced, gμν = diag(a0, −a1, −a2, −a3), a non-unit diagonal matrix, for which (ds)^2 = (a0)^2 + (a1)^2 + (a2)^2 + (a3)^2 always holds; we interpret it as a non-uniformly flat spacetime, a generalised Minkowski spacetime, and no longer a curved spacetime. Should Noether's theorem maintain its validity in this case?
----------------------------------
References
[1] Marvian, I., & Spekkens, R. W. (2014). Extending Noether's theorem by quantifying the asymmetry of quantum states. Nature Communications, 5(1), 3821. https://doi.org/10.1038/ncomms4821
[2] Rowe, D. E. (2019). Emmy Noether on energy conservation in general relativity. arXiv preprint arXiv:1912.03269.
There are many kinds of certainty in the world, but there is only one kind of uncertainty.
I: We can think of all mathematical arguments as "causal" arguments, where everything behaves deterministically*. Mathematical causality can be divided into two categories**. The first type, structural causality, is determined by static types of relations: logical, geometrical, algebraic, etc. For example, "∵ A>B, B>C; ∴ A>C"; "∵ radius is R; ∴ perimeter = 2πR"; "∵ x^2 = 1; ∴ x1 = 1, x2 = −1"; ... The second category, behavioral causality, is the process of motion of a system described by differential equations, such as the wave equation ∂²u/∂t² − a²Δu = 0 ...
II: In the physical world, physics is mathematics, and defined mathematical relationships determine physical causality. Any "physical process" must be parameterized by time and space; this is the essential difference between physical and mathematical causality. Equations such as Coulomb's law F = q1q2/r^2 cannot be a description of a microscopic interaction process, because they do not contain differential terms. Abstracted "forces" are not fundamental quantities describing the interaction. Equations such as the blackbody radiation law and Ohm's law are statistical laws and do not describe microscopic processes.
The objects analyzed by physics, no matter how microscopic†, are definite systems of energy-momentum; interactions are interactions between systems of energy-momentum, and can be analyzed in terms of energy-momentum. The process of maintaining conservation of energy-momentum is equal to the process of maintaining causality.
III: Mathematically, a probabilistic event can have any distribution, depending on the stipulated definitions and derivations. However, only one true probabilistic event can exist theoretically in physics, i.e., an equal-probability distribution with complete randomness. If unequal probabilities exist, then we need to ask what causes them; this introduces the problem of causality and negates randomness. Heisenberg said, "The probability function obeys an equation of motion as did the co-ordinates in Newtonian mechanics" [1]. And Weinberg said of the Copenhagen rules, "The real difficulty is that it is also deterministic, or more precisely, that it combines a probabilistic interpretation with deterministic dynamics" [2].
IV: The wave function in quantum mechanics describes a deterministically evolving energy-momentum system [3]. The behavior of the wave function follows the Hamiltonian principle [4] and is strictly an energy-momentum evolution process***. However, the Copenhagen school interpreted the wave function as probabilistic in nature [23]. Bohr rejected Einstein's insistence on causality, replacing it with his own invention, "complementarity" [5].
Using the de Broglie procedure, Schrödinger ascribed to the waves that he regarded as the carriers of atomic processes a reality of the same kind that light waves possess; he attempts "to construct wave packets (wave parcels) that have relatively small dimensions in all directions," and which can obviously represent the moving corpuscle directly [4][6].
Born and Heisenberg believed that an exact representation of processes in space and time is quite impossible and that one must then content oneself with presenting the relations between the observed quantities, which can only be interpreted as properties of the motions in the limiting classical cases [6]. Heisenberg, in contrast to Bohr, believed that the wave equation gave a causal, albeit probabilistic, description of the free electron in configuration space [1].
The wave function itself is a function of time and space. If the "wave-function collapse" at the time of measurement is a probabilistic, instantaneous evolution [3], requiring neither time (Δt = 0) nor a spatial transition, then it is in conflict not only with special relativity, but also with the uncertainty principle: the wave function represents some definite energy and momentum, which would have to be infinite if required to follow the uncertainty relations [7], ΔE·Δt > h and ΔP·Δx > h.
V: We must also be mindful of the amount of information carried by a completely random event. From a quantum measurement point of view, it is infinite, since the true probability event of going from a completely unknown state A before the measurement to a completely determined state B after the measurement is based on no information whatsoever‡.
VI: The Uncertainty Principle originated in Heisenberg's analysis of X-ray microscopy [8], and its mathematical derivation comes from the Fourier transform [8][10]. E and t, P and x, are two pairs of conjugate quantities [11]. The interpretation of the Uncertainty Principle has been long debated [7][9]: "Either the color of the light is measured precisely or the time of arrival of the light is measured precisely." This choice also puzzled Einstein [12], but because of its great convenience as an explanatory "tool", physics has extended it to the "generalized uncertainty principle" [13].
Is this tool not misused? Take for example a time-domain pulsed signal of width τ: by the stretch (scaling) theorem of the Fourier transform [14], its bandwidth in the frequency domain is B ≈ 1/τ. This is the equivalent of the uncertainty relation¶, where the width in the time domain is inversely proportional to the width in the frequency domain. However, this relation is fixed for a definite pulse object, i.e., both τ and B are constant, and there is no problem of inaccuracy.
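To make the scaling claim concrete, here is a small numerical sketch (plain NumPy; the Gaussian pulse shape and grid are illustrative assumptions): the RMS time width and RMS bandwidth vary inversely, their product staying constant.

```python
# Numerical check of the time-bandwidth (scaling) relation for Gaussian
# pulses: doubling the duration halves the bandwidth, so the product of
# RMS widths is constant (about 1/(4*pi) for Gaussians).
import numpy as np

def rms_width(axis, density):
    """RMS width of a non-negative density defined on the given axis."""
    p = density / np.trapz(density, axis)
    mean = np.trapz(axis * p, axis)
    return np.sqrt(np.trapz((axis - mean) ** 2 * p, axis))

t = np.linspace(-50.0, 50.0, 1 << 14)
for tau in (1.0, 2.0, 4.0):                  # pulse durations (arbitrary units)
    pulse = np.exp(-t**2 / (2 * tau**2))     # Gaussian envelope of width tau
    spec = np.fft.fftshift(np.fft.fft(pulse))
    f = np.fft.fftshift(np.fft.fftfreq(t.size, d=t[1] - t[0]))
    dt = rms_width(t, np.abs(pulse) ** 2)
    df = rms_width(f, np.abs(spec) ** 2)
    print(f"tau={tau}: Dt={dt:.3f}, Df={df:.4f}, product={dt*df:.4f}")
```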
In physics, the uncertainty principle is usually explained in terms of single-slit diffraction [15]. Assuming that the width of the single slit is d, the distribution width (range) of the diffraction fringes can be analyzed for different values of d. Describing the relationship between P and d in this way is equivalent to analyzing the forced interaction that occurs between the incident particle and the slit. The analysis of such experimental results is consistent with the Fourier transform. But for a fixed d, the distribution does not have any uncertainty. This is confirmed experimentally: "We are not free to trade off accuracy in the one at the expense of the other." [16]
The usual doubt lies in the diffraction distribution that appears when a single photon or a single electron is diffracted. This does look like a probabilistic event. But the probabilistic interpretation actually negates the Fourier transform process. If we consider a single particle as a wave packet with a phase parameter, and the phase is statistical when it encounters a single slit, then we can explain the "randomness" of the position of a single photon or a single electron on the screen without violating the Fourier transform at any time. This interpretation is similar to de Broglie's interpretation [17], which is in fact equivalent to Bohr's interpretation [18][19]. Considering the causal conflict of the probabilistic interpretation, the phase interpretation is more rational.
VII. The uncertainty principle is a "passive" principle, not an "active" principle. As long as the object is determinate, it has a determinate expression. Everything is where it is expected to be: not this time in this place, but next time in another place.
Our problems are:
1) At the observable level, energy-momentum conservation (that is, causality) is never broken. So, is it an active norm, or just a phenomenon?
2) Why is there a "probability" in the measurement process (wave packet collapse) [3]?
3) Does the probabilistic interpretation of the wave function conflict with the uncertainty principle? How can this be resolved?
4) Is the Uncertainty Principle indeed uncertain?
------------------------------------------------------------------------------
Notes:
* Determinism here is a narrow sense of determinism, applying only to localized events. My personal attitude towards determinism in the broad sense (without distinguishing predictability from Fatalism; see [20] for a specialized analysis) is negative. Because: 1) we must note that complete prediction of all states depends on complete boundary conditions and initial conditions; since all things are correlated, as soon as any kind of infinity exists, such as the spacetime scale of the universe, the possibility of obtaining all boundary conditions is completely lost. 2) The physical equations of the upper levels can collapse by entering a singularity (undergoing a phase transition), which can lead to unpredictable results.
** Personal, non-professional opinion.
*** Energy conservation of independent wave functions is unquestionable, and it is debatable whether the interactions at the time of measurement obey local energy conservation [21].
† This is precisely the meaning of the Planck constant h, the smallest unit of action. h itself is a constant with units of J·s. For the photon, when h is coupled to time (frequency) and space (wavelength), there is energy E = hν and momentum P = h/λ.
‡ Thus, if a theory is to be based on "information", then it must completely reject the probabilistic interpretation of the wave function.
¶ In the field of signal analysis, this is also referred to by some as "The Uncertainty Principle", ΔxΔk=4π [22].
------------------------------------------------------------------------------
References:
[1] Faye, J. (2019). "Copenhagen Interpretation of Quantum Mechanics." The Stanford Encyclopedia of Philosophy from <https://plato.stanford.edu/archives/win2019/entries/qm-copenhagen/>.
[2] Weinberg, S. (2020). Dreams of a Final Theory, Hunan Science and Technology Press.
[3] Bassi, A., K. Lochan, S. Satin, T. P. Singh and H. Ulbricht (2013). "Models of wave-function collapse, underlying theories, and experimental tests." Reviews of Modern Physics 85(2): 471.
[4] Schrödinger, E. (1926). "An Undulatory Theory of the Mechanics of Atoms and Molecules." Physical Review 28(6): 1049-1070.
[5] Bohr, N. (1937). "Causality and complementarity." Philosophy of Science 4(3): 289-298.
[6] Born, M. (1926). "Quantum mechanics of collision processes." Uspekhi Fizich.
[7] Busch, P., T. Heinonen and P. Lahti (2007). "Heisenberg's uncertainty principle." Physics Reports 452(6): 155-176.
[8] Heisenberg, W. (1927). "Principle of indeterminacy." Z. Physik 43: 172-198. (The original paper on the "uncertainty principle".)
[9] https://plato.stanford.edu/archives/sum2023/entries/qt-uncertainty/ (A more detailed historical introduction to the uncertainty principle, including the various representative viewpoints.)
[10] Brown, L. M., A. Pais and B. Pippard (1995). Twentieth Century Physics (I), Science Press.
[11] Dirac, P. A. M. (2017). The Principles of Quantum Mechanics, China Machine Press.
[12] Pais, A. (1982). Subtle is the Lord: The Science and the Life of Albert Einstein. Oxford University Press.
[13] Tawfik, A. N. and A. M. Diab (2015). "A review of the generalized uncertainty principle." Reports on Progress in Physics 78(12): 126001.
[15] Zeng, J.-Y. (2013). Quantum Mechanics (in Chinese), Science Press.
[16] Williams, B. G. (1984). "Compton scattering and Heisenberg's microscope revisited." American Journal of Physics 52(5): 425-430.
Hofer, W. A. (2012). "Heisenberg, uncertainty, and the scanning tunneling microscope." Frontiers of Physics 7(2): 218-222.
Prasad, N. and C. Roychoudhuri (2011). "Microscope and spectroscope results are not limited by Heisenberg's Uncertainty Principle!" Proceedings of SPIE-The International Society for Optical Engineering 8121.
[17] De Broglie, L. and J. A. E. Silva (1968). "Interpretation of a Recent Experiment on Interference of Photon Beams." Physical Review 172(5): 1284-1285.
[18] Cushing, J. T. (1994). Quantum mechanics: historical contingency and the Copenhagen hegemony, University of Chicago Press.
[19] Saunders, S. (2005). "Complementarity and scientific rationality." Foundations of Physics 35: 417-447.
[21] Carroll, S. M. and J. Lodman (2021). "Energy non-conservation in quantum mechanics." Foundations of Physics 51(4): 83.
[23] Born, M. (1955). "Statistical Interpretation of Quantum Mechanics." Science 122(3172): 675-679.
=========================================================
Quantum field theory has a named field for each particle. There is an electron field, a muon field, a Higgs field, etc. To these particle fields the four force fields are added: gravity, electromagnetism, the strong nuclear force and the weak nuclear force. Therefore, rather than nature being a marvel of simplicity, it is currently depicted as a less than elegant collage of about 17 overlapping fields. These fields have quantifiable values at points. However, the fundamental physics and structure of fields is not understood. For all the praise of quantum field theory, this is a glaring deficiency.
Therefore, do you expect that future development of physics will simplify the model of the universe down to one fundamental field with multiple resonances? Alternatively, will multiple independent fields always be required? Will we ever understand the structure of fields?
The really important breakthrough in theoretical physics is that the Schrödinger Time Dependent Equation (STDE) is wrong, that it is well understood why it is wrong, and that it should be replaced by the correct Deterministic Time Dependent Equation (DTDE). Unitary theory and its descendants, be they based on unitary representations or on probabilistic electrodynamics, will have to go away. This of course runs against the claims about string and similar theories made in the video. But our claims are a dense, constructive criticism with many consequences. Take them into account if you are concerned about the present and the near future of theoretical physics.
Wave mechanics with a fully deterministic behavior of waves is the much-needed and sought --sometimes purposely but more often unconsciously-- replacement of Quantism that will allow the reconstruction of atomic and particle physics. A rewind back to 1926 is the unavoidable starting point for participating in the refreshing new future of physics. Many graphical tools currently exist that allow the direct visualization of three-dimensional waves, in particular of orbitals. The same tools will clearly render the precise movement and processes of the waves under the truthful deterministic physical laws. Seeing is believing. Unfortunately there is a large, well-financed and well-entrenched quantum establishment that stubbornly resists these new developments and possibilities.
When confronted with the news they do not celebrate, nor try to renew themselves by overcoming their quantum prejudices. Instead the minds of the quantum establishment refuse to think. They deny themselves the privilege of reasoning and blindly assume denial, or simply panic. The net result is that they block any attempt to spread the results. Accessing funds to recruit and direct fresh talents in the new direction is even harder than spreading information and publishing.
Painfully, this resistance is understandable. For these Quantists are intelligent scientists (yes, they are very intelligent persons) who instinctively perceive as a menace the news that debunks the Wave-Particle duality, the Uncertainty Principle, the Probabilistic Interpretation of wave functions and the other quantum paraphernalia. Their misguided lifelong labor, dedication and efforts --their own and those of their quantum elders, tutors, and guides-- instantly become senseless. I feel sorry for such a painful human situation, but truth must always prevail. For details on the DTDE see our article
Hopefully young physicists will soon take the lead and a rational wave mechanics will send the dubious and troublesome Quantism to its crate, since long waiting in the warehouse of the history of science.
With cordial regards,
Daniel Crespin
Quantum mechanics can answer this question. Relativity defines the differential structure of space-time (metric) without giving any indications about the boundary. This suggests that relativity is a correct but not a complete theory (a well-formulated mathematical problem, i.e. Dirichlet problem, needs differential equations and boundary conditions). Is it possible that quantum mechanics is the manifestation of microscopic boundary conditions of space-time? Recent papers, e.g. see attached "Elementary space-time cycles" , absolutely confirm the viability of this unified description of quantum and relativistic mechanics.
Article Elementary spacetime cycles
Continuation of the former discussion
(27) Are there Dead Ends in Fabric of Reality possible_.pdf - see the attached file:
So-called "Light with a twist in its tail" was described by Allen in 1992, and a fair sized movement has developed with applications. For an overview see Padgett and Allen 2000 http://people.physics.illinois.edu/Selvin/PRS/498IBR/Twist.pdf . Recent investigation both theoretical and experimental by Giovaninni et. al. in a paper auspiciously titled "Photons that travel in free space slower than the speed of light" and also Bereza and Hermosa "Subluminal group velocity and dispersion of Laguerre Gauss beams in free space" respectably published in Nature https://www.nature.com/articles/srep26842 argue the group velocity is less than c. See first attached figure from the 2000 overview with caption "helical wavefronts have wavevectors which spiral around the beam axis and give rise to an orbital angular momentum". (Note that Bereza and Hermosa report that the greater the apparent helicity, the greater the excess dispersion of the beam, which seems a clue that something is amiss.)
General Relativity assumes light travels in straight lines in local space. Photons can have spin, but not orbital angular momentum. If the group velocity is really less than c, then the light could be made to appear stationary or move backward by appropriate reference frame choice. This seems a little over the top. Is it possible what is really going on is more like the second figure, which I drew, titled "apparent" OAM? If so, how did the interpretation of this effect get so out of hand? If not, how have the stunning implications been overlooked?
In my article
I show that the most popular interpretations of quantum mechanics (QM) fail to reproduce the quantum predictions, or are self-contradictory. The problems that arise are caused by the new hypotheses added to the quantum formalism.
Does that mean that QM is complete, in the sense that no new axioms can be added?
Of course, a couple of particular cases in which additional axioms lead to failure does not represent a general proof. Does somebody know a general proof?
Have these particles been observed in predicted places?
For example, have scientists ever noticed the creation of energy and pair particles from nothing in the Large Electron-Positron Collider, the Large Hadron Collider at CERN, the Tevatron at Fermilab, or other particle accelerators since the late 1930s? The answer is no. In fact, no report of observing such particles by the highly sensitive sensors used in all accelerators has been mentioned.
Moreover, according to one interpretation of the uncertainty principle, abundant charged and uncharged virtual particles should continuously whiz inside the storage rings of all particle accelerators. Scientists and engineers make sure that they maintain ultra-high vacuum, at close to absolute zero temperature, in the travelling path of the accelerating particles, because otherwise even residual gas molecules deflect, attach to, or ionize any particle they encounter; yet there has not been any concern or any report of undesirable collisions with so-called virtual particles in any accelerator.
It would have been absolutely useless to create ultra-high vacuum, at a pressure of about 10^-14 bar, throughout the travel path of the particles if the vacuum chambers were seething with particle/antiparticle or matter/antimatter pairs. If there were such a phenomenon, there would have been significant background effects as a result of the collision and scattering of the beam of accelerating particles off the supposed bubbling of virtual particles created in vacuum. This process is readily available for examination, in comparison to the totally out-of-reach Hawking radiation, which is considered to be a real phenomenon that will be eating away the supposed black holes of the universe in a very long future.
For related issues/arguments see
A user, Richard Lewis, proposes as basic principles of the quantum mechanics (QM), the following:
- wave / particle duality
- the uncertainty principle
- the correspondence principle
- quantum superposition?
- the exclusion principle
I suggest additional principles:
- the quantum objects are described by states belonging to Hilbert spaces and obey the algebra of the Hilbert spaces,
- in order to calculate amplitudes of probabilities for results of experiments, one uses the Born rule
- the reduction principle formulated by von Neumann
A few questions appear:
1. Are these postulates pairwise mutually independent?
2. Are there more postulates?
Dear Sirs,
I did not find an answer to this question on the Internet for either the quasi-relativistic or the relativistic case. I would be grateful if you could give any article references.
As I think, the answer may be yes, due to the following simplest consideration. Suppose for simplicity we have a quasi-relativistic particle, say an electron or even a W boson, the carrier of the weak interaction. Let us suppose we can approximately describe the particle state by the Schrödinger equation, for a particle velocity sufficiently low compared to the light velocity. A virtual particle has the following properties. The energy and momentum of a virtual particle do not satisfy the well-known relativistic energy-momentum relation E^2 = m^2c^4 + p^2c^2. This may be explained by the fact that the energy and momentum of the virtual particle can change their values according to the uncertainty relation for momentum and position and to the uncertainty relation for energy and time. Moreover, because the virtual particle energy value is limited by the uncertainty relation, we cannot observe the virtual particle in an experiment (the experimental error will be greater than or equal to the virtual particle energy).
In Everett's many-worlds interpretation a wave function is not a probability; it is a real field existing at any time instant. Therefore the wave function of a wave packet of a W boson really exists in the Universe. So a real quasi-relativistic W boson can be simultaneously located at many different space points, and can simultaneously have many different momentum and energy values. One sees that a difference between a real W boson and a virtual W boson is absent.
Is the above oversimplified consideration correct? Is it possible to make any conclusion for an ultra-relativistic virtual particle? I would be grateful to hear your advice.
I tried to publish, in a journal, a proof that the Bohm interpretation of QM is problematic, and the editors claimed that they don't see a motivation for publishing my proof.
What do you think? Is the correctness (or incorrectness) of Bohm's mechanics an issue relevant enough for QM to justify investigation?
The special theory of relativity assumes spacetime is formed from fixed points, with sticks and clocks to measure length and time respectively. Electromagnetic waves are transmitted at the speed of light through this spacetime. This classical spacetime does not explain the mysteries of quantum mechanics. Do you think that maybe there is more than one spacetime?
Consider the polarization singlet of two photons 1 and 2
(1) |ψ> = (1/√2) ( |H>1 |H>2 + |V>1 |V>2 ).
Let's represent the photon 2 in another basis than { |H>, |V> }, e.g. { |B>, |C> }, the polarization B making an angle θ with H. So the wave-function (1) transforms into
(2) |ψ'> = (1/√2) [ |H>1 (|B>2 cosθ + |C>2 sinθ) + |V>1 (−|B>2 sinθ + |C>2 cosθ) ].
Assume that the experimenter Alice tests the photon 1 and finds the polarization H. What happens with the polarization of the photon 2?
Assume that the experimenter Bob tests the photon 2 and finds C. What happens with the polarization of the photon 1?
An additional question: what happens with the norm of the wave-function after one of the particles is tested? Does it remain equal to 1?
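As an illustration of the bookkeeping, here is a small NumPy sketch of state (1); the 30° angle is arbitrary and the sign convention for |C> is an assumption chosen to match relation (2). It also bears on the norm question: the bare projection leaves the norm at 1/√2, and the usual textbook rule renormalizes the conditional state back to 1.

```python
# Sketch of state (1) and Alice finding H. The angle and the sign
# convention for |C> are illustrative, chosen to match relation (2).
import numpy as np

H = np.array([1.0, 0.0]); V = np.array([0.0, 1.0])
theta = np.deg2rad(30)
B = np.cos(theta) * H - np.sin(theta) * V
C = np.sin(theta) * H + np.cos(theta) * V

psi = (np.kron(H, H) + np.kron(V, V)) / np.sqrt(2)   # state (1)

# Alice finds H: apply the projector |H><H| (x) I, WITHOUT renormalizing.
P_H = np.kron(np.outer(H, H), np.eye(2))
cond = P_H @ psi
print("norm after projection:", np.linalg.norm(cond))   # -> 1/sqrt(2), not 1

# Photon-2 state given Alice's result, renormalized (textbook rule):
photon2 = cond.reshape(2, 2)[0] * np.sqrt(2)
print("Bob amplitudes on B, C:", photon2 @ B, photon2 @ C)  # cos, sin of theta
```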
There is an opinion that the wave-function represents the knowledge that we have about a quantum (microscopic) object. But if this object is, say, an electron, the wave-function is bent by an electric field.
In my modest opinion matter influences matter. I can't imagine how the wave-function could be influenced by fields if it were not matter too.
Has anybody another opinion?
Bohm's mechanics considers the existence of a particle that triggers the detector. This particle is supposed to be guided by the wave-function, which is assumed to be a wave existing in reality.
My question is: which one between the two items, the particle and the wave, carries the properties of the respective type of particle (charge, mass, magnetic momentum, etc.)?
Specifically, how exactly is the guiding wave understood? Does it carry, at each and every point, all the above features? If not, how can it feel the presence of fields and be deflected by them?
Alternatively, is the particle the one which carries the physical properties? If the particle were just a geometric point, how could it interact with the particles in the detector?
I am stuck between quantum mechanics and general relativity. The mind-consuming scientific humor, ranging from the continuous and deterministic to the probabilistic, seems to have no end. I would appreciate anyone offering words which can help me understand at least a bit, with relevance.
Thank you,
Regards,
Ayaz
Imagine that we send the wave-packet of a neutron to sensitive scales. How much would the wave-packet weigh (after discarding the effect that, impinging on the scales, the neutron transmits a certain linear momentum)?
Imagine now that we split the neutron wave-packet into three identical copies by means of a beam-splitter (e.g. a crystal), and send only one of the copies to the scales. How much would the copy weigh?
I have some opinion but I want to see to which conclusion the discussion would lead.
The hydrogen spectral lines are organized in various series. Lyman series are the lines corresponding to transitions targeting the ground state.
Most pictures dealing with hydrogen spectra and available on the web are recordings of extraterrestrial hydrogen sitting in celestial entities. Otherwise they are illustrations obtained not from experimental recordings, but from the well-known Rydberg formula.
Of interest for the undersigned are pictures of the Lyman series as recorded in laboratory observations of hydrogen atoms, with the atoms sitting in the laboratory itself. Not extraterrestrial hydrogen, nor H2 molecules, even if the molecules are sitting nearby.
Presumably such recordings would have required ultraviolet-sensitive CCDs, UV photographic plates, or similar. Particularly relevant would be careful raw recordings of the Lyman series that INCLUDE THE ALPHA-LINE at 1216 Å.
Experimental remarks about the Lyman alpha-line, difficulties in observing it (if any), line width, line broadening, etc., and difficult-to-explain anomalies are of particular concern. So far Web searching has not been successful.
I would appreciate any link or suggestions as to how to obtain the pictures and experimentally based information of the kind explained above.
Most cordially,
Daniel Crespin
The square of the amplitude of the quantum wave function refers to the volumetric probability distribution of a quantum wave-particle. But what do the real and imaginary parts physically mean? Or what does the phase angle physically signify?
Due to the position-momentum uncertainty, it is impossible to measure the position of a microscopic particle exactly.
This means that it is impossible to measure the probability density, i.e. the square of the absolute value of the wave function, pointwise, i.e. at any individual point; by extension, the values of the wave function itself at any individual point of physical space are irrelevant from the perspective of physics. They are, so to speak, "non-physical".
Then, wouldn't it be more consistent, from the perspective of physics, to impose any mathematical condition on small regions of physical space instead of on points?
Of course, if the wave function is to be continuous, then a condition imposed on a neighborhood of some point is translated to a condition on the point itself, but this is a consequence that follows from a mathematical property, it is not a physical requirement.
Besides, since the values of the wave function are not physically measurable, its continuity is not physically measurable either.
Do Einstein's Field Equations (EFE) allow a multitude of universes?
As far as I know, Everett proposed his interpretation two years after Einstein's death. But I think that Einstein should have known that such a proposal was about to be made. Does somebody know whether Einstein said something about it?
Another thing: do the EFE allow pathological points in spacetime, points at which the universe splits into two?
Quantum entanglement experiments are normally carried out in the regime hf > kT (where T is the temperature of the instrument) to minimise thermal noise, which means operating in the optical band, or in a lower-frequency band (< 6 THz) with cryogenically cooled detectors.
However, the omnipresent questions are, for the millimetre-wave band where hf < kT:
1) Could quantum entanglement be detected by novel systems at ambient temperature?
2) How easy might it be to generate entangled photons (there should be nothing intrinsically more difficult here than in the optical band - in fact it might be easier, as you get more photons for a given pump power)?
3) How common in nature might be the phenomenon of entanglement (this would be in the regimes where biological systems operate)?
Answers to 1) may lead to routes to answering 2) and 3).
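As a quick sanity check of the quoted crossover: the frequency at which hf = kT for an instrument at ambient temperature can be computed directly (T = 300 K is an assumed round value):

```python
# Crossover frequency where hf = kT: below this, thermal photons dominate.
h = 6.62607015e-34   # Planck constant, J*s
k = 1.380649e-23     # Boltzmann constant, J/K
T = 300.0            # assumed ambient temperature, K
f = k * T / h
print(f"kT/h at {T:.0f} K = {f / 1e12:.2f} THz")   # -> about 6.25 THz
```

This is where the ~6 THz figure above comes from.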
Consider the wave-function representing single electrons
(1) α|1>a + β|1>b ,
with both |α|² < 1 and |β|² < 1. On the path of the wave-packet |a>, a detector A is placed.
The question is what causes the reaction of the detector, i.e. a recording or staying silent? A couple of possibilities are considered here:
1) The detector reacts only to the electron charge; the amplitude of probability α has no influence on the detector response.
2) The detector reacts with certainty to the electron charge only when |α|² = 1. Since |α|² < 1, sometimes the sensitive material in the detector feels the charge, and sometimes nothing happens in the material.
3) It always happens that a few atoms of the material feel the charge, and an entanglement appears involving them, e.g.
(2) α|1>a |1e>A1 |1e>A2 |1e>A3 . . . + β|1>b |10>A1 |10>A2 |10>A3 . . .
where |1e>Aj means that the atom no. j is excited (possibly split into an ion-electron pair), and |10>Aj means that the atom no. j is in the ground state.
But the continuation from the state (2) on, i.e. whether a (macroscopic) avalanche would develop, depends on the intensity |α|². Here is a substitute for the "collapse" postulate: since |α|² < 1, the avalanche does not develop compulsorily. If |α|² is great, the process often intensifies into an avalanche, but if |α|² is small, the avalanche happens rarely. How often the avalanche appears is proportional to |α|².
Which one of these possibilities seem the most plausible? Or, does somebody have another idea?
In his work "The Consistent Histories Approach to Quantum Mechanics", published in the Stanford Encyclopedia of Philosophy, Griffiths claims that this approach overcomes the problem of the wave-function "collapse".
His suggestion is that in each trial of an experiment, a quantum system follows a "history" meaning a succession of states.
Here is an example: consider a Mach-Zehnder interferometer with an input beam-splitter BSi and an output beam-splitter BSo, both transmitting and reflecting in equal proportion. The outputs of BSi are denoted c and d, and those of BSo, e and f. A single-particle wave-packet |a> impinging on BSi is split as follows
(1) |a> → (1/√2)( |c> + |d>).
Before impinging on BSo, the wave-packets |c> and |d> have accumulated phases
(2) (1/√2)( |c> + |d>) → (1/√2)[exp(iϕc)|c> + exp(iϕd)|d>],
and BSo induces the transformation
(3) (1/√2)[exp(iϕc)|c> + exp(iϕd)|d>] → α|e> + β|f>,
where the amplitudes α and β depend on the phases ϕc and ϕd.
In his book "Consistent quantum theory" chapter 13, Griffiths indicates two possible histories:
(4.1) |a> → (1/√2)( |c> + |d>) → (1/√2)[exp(iϕc)|c> + exp(iϕd)|d>] → |e>,
(4.2) |a> → (1/√2)( |c> + |d>) → (1/√2)[exp(iϕc)|c> + exp(iϕd)|d>] → |f>,
the history (4.1) occurring with probability |α|², and the history (4.2) with probability |β|².
Does somebody understand in which way these histories avoid the collapse postulate?
The correct transformation at BSo is (3), a unitary transformation, not (4.1) and not (4.2). Each one of the histories (4.1) and (4.2) involves a truncation of the wave-function at BSo. But this is exactly the mathematical expression of the collapse principle: truncation of the wave-function.
Hence my question: can somebody tell me how it is possible to claim that these histories avoid the collapse postulate?
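For reference, the unitary step (3) can be made fully explicit under one common beam-splitter convention (transmitted amplitude 1/√2, reflected amplitude i/√2; the convention is an assumption, and only the phase difference ϕc − ϕd matters physically). A short NumPy sketch:

```python
# Sketch of the unitary Mach-Zehnder transformation (1)-(3) under one
# common 50/50 beam-splitter convention (reflection picks up a factor i).
# The convention is an assumption; only phase differences are physical.
import numpy as np

BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50/50 beam splitter

def mz_output(phi_c, phi_d):
    """Output amplitudes (alpha, beta) on e and f for the input |a>."""
    inside = BS @ np.array([1.0, 0.0])                        # after BSi
    inside = inside * np.exp(1j * np.array([phi_c, phi_d]))   # phases on c, d
    return BS @ inside                                        # after BSo

alpha, beta = mz_output(0.0, 0.0)
print(abs(alpha)**2, abs(beta)**2)   # -> 0.0, 1.0 (all output in f)
alpha, beta = mz_output(0.0, np.pi)
print(abs(alpha)**2, abs(beta)**2)   # -> 1.0, 0.0 (all output in e)
```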
Consider an experiment in which we prepare pairs of electrons. In each trial, one of the two electrons - let's name it the 'herald' - is sent to a detector C, and the other - let's name it 'signal' - to a detector D. The wave-function of the signal is therefore
(1) |ψ> = ψ(r) |1>,
i.e. in each trial of the experiment, when the detector C clicks, we know that a signal-electron is in the apparatus. Indeed, the detector D will report its detection.
Now, let's consider that the signal wave-packet is split into two copies which fly away from one another, one toward the detector DA, the other to the detector DB,
(2) |ψ> = (1/√2) ψA(r) |1>A + (1/√2) ψB(r) |1>B.
We know that the probability of getting a click in DA (DB) is ½, but in a given trial of the experiment we can't predict which one of DA and DB would click.
Then, let's ask ourselves what happens in a detector, for instance DA. The 'thing' that lands on the detector has all the properties of the type of particle named 'electron', i.e. mass, charge, spin, lepton number, etc. But, in contrast to the case in equation (1), the intensity of the wave-packet is now 1/2. It's not an 'entire' electron. Imagine that on a screen is projected a series of frames which interchange very quickly. The picture in one frame seems to be a table, but it is replaced very quickly by a blank frame, and so on. Then, can we say what we saw on the screen? A table, or blank?
The situation of the detector is quite analogous. So, will the detector report a detection, or will it remain silent? What is your opinion?
For a deeper analysis see
NOTE: Please answer the question ONLY if you read what is the question.
Consider the well-known polarization singlet
(1) |S> = (1/√2) (|x>A |x>B + |y>A |y>B),
where as usually, the quantum object (Q.O.) A flies to Alice's lab and the Q.O. B flies to Bob's lab.
Consider that in each lab there is a polarization beam-splitter, PBSA, respectively PBSB, splitting the incoming beam in the basis { |x>, |y> }. However, Bob has the option to input the two output beams to a second PBS - let's name it PBSC - which splits the input beams in the basis { |d>, |a> } (d = the diagonal direction, and a = the anti-diagonal, i.e. perpendicular to d).
(2) |x> → (1/√2) (|d> + |a>), |y> → (1/√2) (|d> - |a>).
The expression of the singlet wave-function becomes
(3) |S> = (1/2) { |x>A (|d>B + |a>B) + |y>A (|d>B − |a>B) }.
Assume now that Bob performs a test, with the detectors placed on the outputs of PBSC, and gets the result, say, d. It is useful to write also the inverse of the transformation (2):
(4) |d> = (1/√2) (|x> + |y>), |a> = (1/√2) (|x> - |y>).
As one can see from the first equality in (4), both beams |x>B and |y>B, which exited PBSB and entered PBSC, contribute to Bob's result |d>B.
But assume that while Bob does the test, Alice also performs a test and gets, say, x. However, Alice has another story to tell about what happened in the apparatus. She would claim that since she obtained the result x, in Bob's apparatus there was nothing on the output path y of PBSB. In consequence, she would claim that the beam |d>B recorded by Bob was just a component of |x>B, as seen from the first relation in (2).
We do not know what the wave-function is, whether it is a reality (ontic) or only represents what we know about the quantum object (epistemic). But the quantum object travels in our apparatus; it has to be something real. Then, what is the truth about what was in Bob's setup? Was there, or wasn't there, something on the output y of PBSB?
The classical limit of Feynman's path integral gives us a partial view of how the 'collapse' process occurs. If the 'thing' that travels on all the paths between (t1, r1) and (t2, r2) increases in the number of components, or in mass - in short, becomes a classical object - we have destructive interference of all the paths, except in the vicinity of one of the paths. In this vicinity, the phases of the neighboring paths add up constructively. In this way, we get a classical trajectory. So, there is no 'collapse' of the wave-function, but destructive and constructive interference.
Indeed, when we perform the measurement of a quantum object, we do that with a macroscopic apparatus. For example, in an ionization chamber, the quantum object that enters the chamber produces a massive ionization, involving a huge number of particles.
Unfortunately, Feynman did not explain what happens when the wave-function has more than one wave-packet, i.e. how one of the wave-packets is picked. Thus, the non-determinism of QM is, unfortunately, not explained by Feynman's path integral.
Here came the GRW interpretation, which suggested a solution: supplementary terms in the Schrödinger equation. The parameters of these terms are such that, as long as the quantum system contains only a small number of components, e.g. a small number of electrons/protons/atoms, the additional terms bring no significant change in the evolution of the wave-function. However, when the number of components becomes big enough that the object is macroscopic, the additional terms dominate the Schrödinger equation and produce a random localization of the object.
G-C. Ghirardi and A. Bassi, "Dynamical reduction models", arXiv:quant-ph/0302164v2
The GRW interpretation has two big advantages: 1) it explains why the so-called 'collapse' occurs in the presence of macroscopic objects; 2) it shows that the density matrix of the macroscopic object has no off-diagonal elements, i.e. it represents a mixture of states, not a quantum superposition.
It seems therefore that this interpretation comes in completion of Feynman's classical limit of the path integral.
Questions: a) why, in fact, should the Schrödinger equation be linear? b) any opinion about the GRW interpretation, any criticism?
In his path-integral theory Feynman speaks of a particle that travels from a time-space point (t1, r1) to another time-space point (t2, r2). This particle travels on every possible trajectory between these two points, no matter how irregular the trajectory is. The trajectories are continuous, and their set visits, in fact, between t1 and t2, all the points in 3D space.
So far, so good. The abnormal fact in this story, though, is that the particle does not travel one trajectory, after that another trajectory, and so on, but travels all these trajectories in parallel. That means that, between t1 and t2, the particle goes simultaneously along all the trajectories. So at any given time t between t1 and t2, the particle is simultaneously at many points in space.
Now, summing up the phases of all these trajectories Feynman obtains the path integral and also constructs the wave-function. However, the wave-function of a single particle is an eigenfunction of the operator number-of-particles, with eigenvalue 1. If the particle is simultaneously in many positions, we don't have a particle, but many particles.
How to explain this contradiction?
Consider the simple wave-function describing single particles
(1) ψ(r, t) = (1/√2)[ψL(r, t) + ψR(r, t)],
where the wave-packet ψL flies to the left of the preparation region, and ψR to the right. Since after some time t1 the two wave-packets are far from one another, their supports in space are disjoint. The continuity equation
(2) ∂|ψ(r, t)|²/∂t + ∇·Φ(r, t) = 0,
where Φ(r, t) is the probability current density, can therefore be written as
(3) ∂|ψL(r, t)|²/∂t + ∇·ΦL(r, t) = −{∂|ψR(r, t)|²/∂t + ∇·ΦR(r, t)},
because products such as ψL(r, t)ψR(r, t) and their derivatives vanish. The current density is a functional of the functions ψL, ψR, and their derivatives, and can therefore also be separated into ΦL and ΦR.
Let's further notice that when the position vector r sweeps the space on the left of the preparation region, the RHS of equation (3) vanishes. There remains
(4) ∂|ψL(r, t)|²/∂t + ∇·ΦL(r, t) = 0.
Symmetrically, when r sweeps the space on the right of the preparation region, the LHS of equation (3) vanishes. There remains
(5) ∂|ψR(r, t)|²/∂t + ∇·ΦR(r, t) = 0.
Imagine now that on the way of the wave-packet ψL is placed an absorber AL(ρ), where ρ denotes the internal parameters of the absorber. The wave-packet ψL is split into an absorbed part and a part that passes unperturbed:
(6) ψ(r, t) AL0(ρ) → (1/√2){ [e^(-γd) ψL(r, t) AL0(ρ) + (1 - e^(-2γd))^(1/2) AL1(ρ)] + ψR(r, t) AL0(ρ) },
where the superscript 0 indicates the non-perturbed internal state of the absorber, 1 indicates its excited state, γ is the absorption coefficient, and d is the absorber thickness. For d sufficiently big one will have total absorption of ψL,
(7) ψ(r, t) AL0(ρ) → { AL1(ρ) + ψR(r, t) AL0(ρ) }.
Due to the presence of the absorber, the LHS of equation (5) should be multiplied by the factor AL0(ρ). But since AL0(ρ) ≠ 0, we can divide both sides of the new equation by AL0(ρ), s.t. the original form of (5) returns. The meaning of this result is that the absorption of ψL does not imply the disappearance of ψR.
Now, let's replace the absorber with a detector. As long as the interaction with the detector proceeds inside the material of the detector, the analysis with the absorber remains valid (with the small difference that instead of absorption there may be inelastic scattering). Therefore equation (5) also remains valid, and with it the conclusion that ψR is not affected.
The difficulty appears when the macroscopic circuitry surrounding the material in the detector, CL, enters into play. Macroscopic objects cannot be in a superposition of states such as CL0 and CL1. So, we cannot have an equation similar to (7):
(8) ψ(r, t) AL0(ρ) CL0 → { AL1(ρ) CL1 + ψR(r, t) AL0(ρ) CL0 }.
However, when the circuitry clicks, what happens with the continuity equation (5)? For the collapse to be true, i.e. for ψR(r, t) to vanish suddenly, the derivative ∂|ψR(r, t)|²/∂t should be very big in absolute value, for which the flux gradient should increase drastically in the outward direction from ψR. That doesn't mean that the wave-packet ψR disappears, but that it disperses in space.
What is your opinion?
In a course on quantum mechanics I took we were told that for the setting in which the gun shoots a particle with spin orientation +z and the analyzer is perpendicular to this orientation, 50% of the time the Stern-Gerlach apparatus will detect the particle coming out from the +x aperture, and 50% from the -x aperture.
I ran the PhET Stern-Gerlach simulator (https://phet.colorado.edu/sims/stern-gerlach/stern-gerlach_en.html) with one analyzer (magnet) and obtained the following results:
at 0 deg. angle: 100% from +x
at 90 deg. angle: 50% from +x, 50% from -x
at 180 deg. angle: 100% from -x
at 270 deg. angle: ca. 3% from -x, ca. 97% from +x
Is this last result an error of the simulator? Or how can it be explained?
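For comparison, the textbook prediction for a spin prepared along +z and an analyzer rotated by an angle θ from that axis is P(+) = cos²(θ/2); a quick check at the four simulator angles (plain NumPy):

```python
# Textbook spin-1/2 prediction: an analyzer at angle theta to the prepared
# spin direction detects the + output with probability cos^2(theta/2).
import numpy as np

for deg in (0, 90, 180, 270):
    theta = np.deg2rad(deg)
    p_plus = np.cos(theta / 2.0) ** 2
    print(f"{deg:3d} deg: P(+) = {p_plus:.2f}, P(-) = {1.0 - p_plus:.2f}")
```

By this rule, 270° should give 50/50 just like 90°, so the reported 3%/97% is not the standard quantum prediction; it looks like a simulator or sampling artifact rather than physics.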
In quantum mechanics, the state space is a separable complex Hilbert space.
By definition, a Hilbert space is a complete inner product space.
The term complete means that any Cauchy sequence of elements (vectors) belonging to the Hilbert space converges to an element which also belongs to the space. In other words, completeness means that the limits of convergent sequences of elements belonging to the space are also elements of the space. Intuitively, we can say that Hilbert spaces have no “holes”.
If the state space is infinite-dimensional, we implicitly invoke its completeness every time we expand a state in terms of a complete set of eigenstates, such as the energy eigenstates or the eigenstates of another observable, since an infinite series of eigenstates is meant as the limit of the sequence of the respective partial sums when the number of terms tends to infinity. The sequence of partial sums is then a Cauchy sequence converging to the initial state, which must belong to the space.
Qualitatively, considering a convergent sequence of physical states, we expect that it converges to a physical state too, because it would be unphysical, by means of such a sequence, to end up at an unphysical state. For instance, assume that we perform a series of small changes to the state of a quantum system and suddenly we reach an unphysical state. This would be physically unacceptable. Thus, from a physical perspective, the completeness of the state space seems unavoidable.
However, looking in some of the so-called standard textbooks of quantum mechanics, particularly in Sakurai’s, Merzbacher’s, Gasiorowicz’s, and Griffiths’s, this essential property is either overlooked or just mentioned, and it is not highlighted properly.
In quantum mechanics, the state space is a separable complex Hilbert space.
A Hilbert space is separable if and only if it has a countable orthonormal basis [1, 2].
Why must the quantum mechanical state space be separable?
In [3], we read that separability is a mathematically convenient hypothesis, with the physical interpretation that countably many observations are enough to uniquely determine the state of a quantum system.
In Merzbacher’s quantum mechanics (3d ed.), page 185, we read that “The infinite-dimensional vector spaces that are important in quantum mechanics are analogous to finite-dimensional vector spaces and can be spanned by a countable basis. They are called separable Hilbert spaces.”
From a historical point of view, the two descriptions (or versions) of quantum mechanics that were initially developed in the 1920s, namely Schrödinger's wave mechanics and Heisenberg's matrix mechanics, were respectively based on the Hilbert spaces of square-integrable functions and of square-summable sequences of complex numbers, which are both separable, and physically equivalent (mathematically isomorphic). Thus, the invariant (or representation-free) description of quantum mechanics, through the abstract Hilbert space of Dirac kets, that followed, had to be based on a separable Hilbert space too, otherwise it would not be equivalent to the two existing descriptions.
As it happens with the property of completeness [4], the property of separability of the quantum mechanical state space is also overlooked or mentioned very briefly in standard textbooks and the reader, especially the physics-oriented one, is left with the impression that it is rather a mathematical “decoration” of minor physical importance that can be forgotten.
From my own experience, it is also worth noting that the expression "a Hilbert space is separable if and only if it has a countable basis", which is often given as the definition of separability, is tricky and, to some extent, misleading. A reader with some background in functional analysis will rather easily understand that, here, "it has a countable basis" actually means "ALL bases are countable", as two basis sets are related by a one-to-one and onto mapping, thus they have the same cardinality, and if one is countable, the other is countable too. But a physics student may be confused and left with the impression that separable Hilbert spaces also have uncountable bases, which is the wrong picture, especially in connection with the uncountable (continuous) sets of position and momentum eigenstates: although they span the state space, they are not actually bases, because they do not belong to the state space, and this point is not highlighted in the literature either.
In standard quantum mechanics textbooks, the form of the momentum operator, in position space, is either given as definition, i.e. they write that the momentum operator is –id/dx (times the reduced Planck constant), or, in more advanced textbooks, like Landau & Lifshitz’s, it is derived as the generator of spatial translations.
I wonder if the form of the momentum operator, i.e. that it is a first-order differential operator, can be derived qualitatively, by means of physical arguments. In other words, does the slope of an arbitrary, i.e. non-stationary, wave function have a physical meaning?
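Not an answer to the qualitative question, but the plane-wave case can at least be made concrete with a few lines of SymPy (the plane-wave form and symbols are the standard textbook assumptions):

```python
# Sketch: -i*hbar*d/dx acting on a plane wave exp(i*k*x) returns hbar*k
# times the wave, i.e. the de Broglie momentum p = hbar*k. For a general
# wave function the local slope has no such single-valued meaning.
import sympy as sp

x = sp.symbols('x', real=True)
k, hbar = sp.symbols('k hbar', positive=True)

psi = sp.exp(sp.I * k * x)              # plane wave with wave number k
p_psi = -sp.I * hbar * sp.diff(psi, x)  # momentum operator applied to psi
print(sp.simplify(p_psi / psi))         # -> hbar*k
```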
I wasn't able to get von Neumann's book "Mathematical Foundations of Quantum Mechanics". But I saw many descriptions of his scheme of measurement, all of them saying the same things.
What I was interested in was to see whether von Neumann claimed somewhere that after measuring a quantum system (with a macroscopic apparatus) and obtaining a result, say a, the rest of the wave-function disappears. As the simplest example, let the wave-function be α|a> + β|b>, and in one particular trial of the experiment one gets the result a. I saw nowhere a claim that von Neumann said that the part |b> of the wave-function disappears.
What I saw was the following claim: if we collect in a separate set A all the trials which produced the result a, the wave-function characterizing the quantum object in the set A is |a>.
I never saw a word about what happens with the part |b> of the wave-function in these trials: no assumption of whether it disappears, nor, alternatively, an opinion that we can say nothing about it. The fact that we collect the systems that responded a in a separate subset does NOTHING to the part |b>, if it survives in some way.
Did somebody see in von Neumann's work any opinion about the fate of the part |b>?
Please, I need only the number, not references to books or general remarks.
Please consider the photon as the quantum of the electromagnetic field, and thus as a carrier of magnetic field. Thank you in advance.
This question is a reaction to the fact that some authors hold that the interaction between a microscopic object and a macroscopic object leads to an entanglement between the states of the microscopic object and the states of the macroscopic object. My opinion is that such an entanglement is impossible.
I recommend as auxiliary material the discussion
https://www.researchgate.net/post/What_is_the_quantum_structure_of_a_particle_detector_containing_a_gas_obeying_Maxwell-Boltzman_statistics
THE EXPERIMENT: From a pair of down-conversion photons, the signal photon illuminates the non-balanced beam-splitter BS1 - see the attached figure. The idler photon is sent to a detector E (not shown) for heralding the presence of the signal photon in the apparatus. The signal photon exits BS1 as a superposition
(1) |1>s → t|1>a |0>b + ir|0>a |1>b ,   t² + r² = 1.
On each one of the paths is placed an absorbing detector, respectively A and B. The figure shows that the wave-packet |1>a reaches the detector A before |1>b reaches the detector B. Let |A0> ( |B0> ) be the non excited state of the detector A (B), and |Ae> ( |Be> ) the excited state after absorbing a photon.
Some physicists claim that the evolution of the signal photon through the detector A can be written as
(2) |A0> |1>s → (t|Ae> |0>b + ir|A0> |1>b) |0>a .
I claim that this expression is impossible, for a couple of reasons.
1) Are the states |A0> and |Ae> pure quantum states, or mixtures? I claim that a macroscopic object cannot have a pure quantum state; it can be in a mixture of pure states, all compatible with the macroscopic parameters. As supporting material, see the discussion recommended above, and also Feynman's path-integral theory in the macroscopic limit.
2) Subsequently, when the wave-packet |1>b meets the detector B, the state (2) should evolve into
(3) |A0> |B0> |1>s → (t |Ae> |B0> + ir |A0> |Be>) |0>a |0>b
= (t |Ae> |B0> + irt |Ae> |Be> - irt |Ae> |Be> + ir |A0> |Be>) |0>a |0>b
= [ t |Ae>( |B0> + ir |Be>) + ir |Be>( |A0> - t |Ae>) ] |0>a |0>b .
That is similar to the following situation: if cat A says "meow", cat B remains in the superposition ( |cat B dead> + ir |cat B alive>), and if cat B says "meow", cat A remains in the superposition ( |cat A dead> - t |cat A alive>).
Has anybody ever seen cats in such situations?
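Whatever one decides about objection 1), the algebraic regrouping in (3) can at least be checked mechanically. Here is a minimal NumPy sketch (my own illustration, treating |A0>, |Ae>, |B0>, |Be> as orthonormal two-level states, which is precisely the assumption under dispute):

import numpy as np

# Orthonormal two-level states for detectors A and B; that macroscopic
# detectors admit such pure states is exactly the point being contested
A0, Ae = np.array([1., 0.]), np.array([0., 1.])
B0, Be = np.array([1., 0.]), np.array([0., 1.])

t, r = 0.6, 0.8                      # example amplitudes, t**2 + r**2 = 1

# State on the right-hand side of (3)
state = t * np.kron(Ae, B0) + 1j * r * np.kron(A0, Be)

# Regrouped form: t|Ae>(|B0> + ir|Be>) + ir(|A0> - t|Ae>)|Be>
regrouped = (t * np.kron(Ae, B0 + 1j * r * Be)
             + 1j * r * np.kron(A0 - t * Ae, Be))

print(np.allclose(state, regrouped))   # True: the rewriting is exact

So the "two cats" form is an exact identity given the premises; the physical question is whether the premises (pure detector states) hold.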
- A moment is the smallest difference between two states of the same matter in space.
- Time is the continuous flowing of multiple consecutive moments.
I don't know if you could comment or not, but above is my proposition.
How can time be defined? Today I propose this simple definition of time, as stated above. Tell me what you think!
The Schrödinger self adjoint Hamiltonian operator H correctly predicts the stationary energies and stationary states of the bound electron in a hydrogen atom. To obtain such states and energies it suffices to calculate the eigenvalues and eigenfunctions of H. Since 1926 up to now, and for the foreseeable future of Physics, any theoretical description of the hydrogen atom has to assume this fact.
On the other hand, the Schrödinger time-dependent unitary evolution equation $\partial \Psi / \partial t = -iH\Psi$ (in units with $\hbar = 1$) is obviously mistaken. So much so that, in order to explain transitions between stationary states, the unitary law of motion has to be (momentarily?) suspended, and certain "intrinsically probabilistic quantum jumps" are supposed to rule the process.
Transitions are physical phenomena that consist in the electron passing from an initial stationary state, with an initial stationary energy, to another stationary state having a different stationary energy. Physically, transitions always involve the emission or absorption of a photon. Whenever transitions occur, the theoretical unitary evolution is violated.
It is absurd to accept as a law of nature an evolution equation that does not correspond to the physical phenomena being considered. Electron transitions are not predicted by, described by, nor deducible from the Schrödinger evolution equation. In fact, the Schrödinger evolution equation is physically useless. This is the reason for Schrödinger's "Diese verdammte Quantenspringerei" ("this damned quantum jumping"). Decades of belief in unitary evolution have originated countless speculations, contradictions and confusions, with an enormous waste of human talent and time.
Assume then that physicists accept the mistaken nature of unitary evolution and propose its replacement with a novel equation that
a) is consistent with the predictive virtues of H
b) deterministically describes transitions
In principle, a probability-free, common-sense, rational, deterministic, well-constructed replacement of Quantism should be a welcome relief for physicists and chemists, and for philosophers of science as well.
Then, among equations and theories currently accepted by mainstream Physics, which ones would be affected by the eventual replacement of unitary evolution? Here is a short list of prospective candidates, which the reader can extend and refine:
Quantum chemistry
Dirac equation
Quantum field theories
Quantum gravity
Standard model
Lists of physical theories that could be relevant for this question are available at
https://en.wikipedia.org/wiki/Theoretical_physics
https://en.wikipedia.org/wiki/Branches_of_physics
For more on the inconsistencies of Quantism, and details on a theory that could replace it, see our ResearchGate contributions page.
With most cordial regards,
Daniel Crespin
Personally, I don't find objective-collapse theory (QMSL) very appealing: even though some problems were resolved in the 1990s, there are still inconsistencies in terms of diverging particle densities, etc.
However, the Penrose interpretation, which is considered a QMSL variant, is a different story altogether: Penrose suggests that the collapse of the wave function occurs when the energy difference between quantum states reaches a certain threshold, the limit being the Planck mass of the system/object at hand. In effect, this still means that matter can exist in more than one place at one time. Nonetheless, a macroscopic system, like a human being, cannot exist in multiple places at once, as the corresponding energy difference is too large to begin with. A microscopic system, on the other hand, like an electron, can exist in more than one location until its space-time curvature separation reaches the collapse threshold, which could be thousands of years from the emergence of its superposition.
What are your thoughts and opinions on this?
- Note: A macroscopic system, like a human being, could theoretically exist in a superposed state for a very, very short period of time, at scales of the Planck time or less [arXiv:1401.0176], so it is not considered significant (if at all possible).
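For scale, the Planck-mass threshold invoked by the Penrose criterion follows directly from fundamental constants; a quick Python sketch:

import math

hbar = 1.054571817e-34   # J*s
c    = 2.99792458e8      # m/s
G    = 6.67430e-11       # m^3 kg^-1 s^-2

m_planck = math.sqrt(hbar * c / G)
print(m_planck)          # ~2.18e-8 kg, i.e. about 22 micrograms

This sits far above any elementary particle but far below everyday objects, which is why the criterion separates electrons from human beings so cleanly.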
The "collapse" postulate says that if part of the wave-function produces a click in a detector, the rest of the wave-function disappears. In the experiment described here, it is shown that no part of the wave-function disappears, namely, given a superposition of two wave-packets, while one wave-packet produces a click in a detector, the other wave-packet produces observable interference effects.
A quantum system is prepared in a state with at most one particle, a photon, A:
(1) |ψ> = q{ |0>A + p( |1;a>A |0;b>A + e^{iθ} |0;a>A |1;b>A ) },
see figure.
It is shown below that while the wave-packet |1;a>A produces a click in a detector, the wave-packet |1;b>A produces observable interference effects.
The wave-packet |1;b>A illuminates one side of the 50-50% beam-splitter BS, and on the other side lands a coherent beam
(2) |α> = N( |0>B + p e^{iα}|1>B + . . . ),
where N is the normalization factor. Thus, we have the total wave-function
(3) Φ = |α>|ψ> = Nq( |0>B + p e^{iα}|1>B + . . . ){ |0>A + p( |1;a>A |0;b>A + e^{iθ}|0;a>A |1;b>A ) }.
At the beam-splitter the following transformations take place
(4) |1>B → (1/√2) ( |1;c> |0;d> + i|0;c> |1;d>);
(5) |1;b>A → (1/√2) (i|1;c> |0;d> + |0;c> |1;d>).
Introducing these into (3), one gets the following IMPLICATIONS:
(6) For θ = α - π/2, every click in the detector D is preceded by the detection of the wave-packet |1>a in the detector U.
(7) For θ = α + π/2, every click in the detector C is preceded by the detection of the wave-packet |1>a in the detector U.
Thus, one can see that by changing the phase θ, carried by the wave-packet |1;b>A , one can switch between a joint click in D and U, and a joint click in C and U.
CONTRADICTION: one can see in the figure that BS is more distant from the preparation region than the detector U. So, if the collapse hypothesis were correct, the tuning of θ would have no effect, since when the detector U clicks, the wave-packet |1;b>A would disappear instead of reaching the beam-splitter BS.
CONCLUSION: No part of the wave-function disappears - no collapse.
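The implications (6) and (7) can be checked numerically in the single-photon sector: a click at C or D unaccompanied by a U detection can come either from the wave-packet |1;b>A or from the coherent-beam term p e^{iα}|1>B, and these two amplitudes interfere. A minimal NumPy sketch (my own simplification, keeping only the terms of (3) that can produce a lone C or D click, under the beam-splitter conventions (4)-(5)):

import numpy as np

def lone_click_amplitudes(theta, alpha):
    """Amplitudes for one photon at C or D with NO photon in path a
    (hence no U detection), up to a common prefactor q*p."""
    # |1;b>A via (5): amplitude i/sqrt(2) to mode c, 1/sqrt(2) to mode d
    b_c, b_d = 1j / np.sqrt(2), 1 / np.sqrt(2)
    # |1>B via (4): amplitude 1/sqrt(2) to mode c, i/sqrt(2) to mode d
    B_c, B_d = 1 / np.sqrt(2), 1j / np.sqrt(2)
    amp_C = np.exp(1j * theta) * b_c + np.exp(1j * alpha) * B_c
    amp_D = np.exp(1j * theta) * b_d + np.exp(1j * alpha) * B_d
    return amp_C, amp_D

alpha = 0.7                                  # arbitrary coherent-beam phase
for shift, label in [(-np.pi / 2, 'theta = alpha - pi/2'),
                     (+np.pi / 2, 'theta = alpha + pi/2')]:
    amp_C, amp_D = lone_click_amplitudes(alpha + shift, alpha)
    print(label,
          '|C|^2 =', round(abs(amp_C) ** 2, 12),
          '|D|^2 =', round(abs(amp_D) ** 2, 12))

For θ = α - π/2 the lone-D amplitude vanishes, so every D click must come with a U detection, and for θ = α + π/2 the lone-C amplitude vanishes, reproducing the switching claimed in (6) and (7).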
Dear Syed
According to my mathematics, called self-field theory (SFT), there may be two different methods of storing memories: short-term memories are based on electromagnetic (EM) fields, while long-term memories are stored as strong nuclear (SN) fields, probably within DNA. The conversion between short- and long-term memories presumably happens during 'deep' sleep, when the two-dimensional EM fields are somehow converted into three-dimensional gluon-encoded data within quarks. This is implied by the structure of the mathematics and its connections to particle physics.
The mathematics
If this hypothesis is correct, the question is: what happens to 'sound' in long-term memories?
Could this be useful in your research into consciousness?
What lies outside of the boundaries of space? I find the problem in this question is our knowledge: we have a prior belief that we exist inside space, a belief that is not based on any evidence. So I think that before asking "what lies outside of space?" we must first ask: do objects lie inside space or outside of it?
I think that if objects exist outside of space, they will suffer from superposition or uncertainty in position, depending on their distance from space, i.e. how far they are from it.
The standard QM offers no explanation for the collapse postulate.
By Bohmian mechanics (BM) there is no collapse: there exists a particle following some trajectory, and a detector fires if hit by that particle. However, BM has big problems concerning the photon, for which no particle and no trajectory is predicted. Thus, in the case of photons, it is not clear which deterministic mechanism BM suggests instead of the collapse.
The Ghirardi-Rimini-Weber (GRW) theory says that the collapse occurs due to the localization of the wave-function at some point, decided upon by a stochastic potential added to the Schrödinger equation. The probability of localization is very high inside a detector, where the studied particle interacts with the molecules of the material and gathers around itself a bigger and bigger number of molecules. Thus, at some step there grows a macroscopic body which is "felt" by the detector circuitry.
Personally, I have a problem with the idea that the collapse occurs at the interaction of the quantum system with a classical detector. If the quantum superposition is broken at this step, how does it happen that the quantum correlations are not broken?
For instance, in the spin singlet ( |↑>|↓> - |↓>|↑> ), a Stern-Gerlach measurement with the two magnetic fields identically oriented yields either |↑>|↓> or |↓>|↑>. The quantum superposition is broken, but the quantum correlation is preserved: one never obtains, if the magnetic fields have the same orientation, |↑>|↑> or |↓>|↓>.
WHY SO? Practically, what connection can be maintained between the macroscopic body appearing in one detector and the macroscopic body appearing in another detector, far away from the former? Why is the quantum correlation not broken, when the quantum superposition is?
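For concreteness, the correlation in question can be spelled out numerically. A minimal NumPy sketch of the singlet statistics with both fields along the same axis (my own illustration of the standard Born-rule calculation):

import numpy as np

up, dn = np.array([1., 0.]), np.array([0., 1.])

# Spin singlet (|up,dn> - |dn,up>)/sqrt(2)
singlet = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)

# Joint probabilities for the four outcomes, both fields along z
for name1, s1 in [('up', up), ('dn', dn)]:
    for name2, s2 in [('up', up), ('dn', dn)]:
        amp = np.kron(s1, s2) @ singlet
        print(name1, name2, round(abs(amp) ** 2, 3))
# up up -> 0.0, up dn -> 0.5, dn up -> 0.5, dn dn -> 0.0

The zeros for the (up, up) and (dn, dn) outcomes are exactly the correlation that, per the question, somehow survives the breaking of the superposition.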
In fact, I'm working on a thesis project on Quantum Information, specifically on quantum error-correcting codes. I started my research on the subject not long ago, looking in particular at how one can go from a classical signal to a quantum signal in order to describe the algorithms of error-correcting codes in physical channels.
What experimental evidence (or any other) contradicts the use of non-unitary, non-Hermitian mathematics to represent pure quantum states? This question relates to pure states, not mixed states. Note that rational matrices have rational (real) eigenvalues.
The original question was wrong, so it has been completely rewritten.
Context:
Suppose I have a 3D particle placed in a spherical box, an infinite well extending over [-r, r] in all three directions; its position expectation value thus equals 0.
Its surface area thus equals $4 \pi r^2$.
From quantum gravity (existence: https://arxiv.org/pdf/gr-qc/9403008.pdf ; a fairly good numerical approximation: https://en.wikipedia.org/wiki/Planck_length ), we know that space and time may be quantized. Suppose the minimum length equals ds; then the maximum partition of the surface of the spherical ball equals $N = 4 \pi r^2 / ds^2$.
This means that, as r increases, the number of possible segments in which our probe could be placed increases.
By analogy, suppose I have a particle of spin 1/2, placed at the center of the ball, and we measure its spin. Then a ball with larger r has more "segment" area for observation.
Question 1
Is this analysis correct? If not, why not? Further, what are its implications?
Question 2
Suppose I create a pair of such spin-1/2 particles, entangled together: one placed in a ball of radius $r_a$, the other in a ball of radius $r_b$. If $r_b > r_a$, then our measurement could be more "precise" for ball b than for ball a.
In an imaginary extreme case where $N = 4 \pi r_a^2 / ds^2 = 2$, the measurement for ball a could only be up or down.
What happens to the information here? Is it still consistent?
Clarification:
1. In question 2, since it is an infinite well (although not physically realizable), the particle does not have to be exactly at the "center" $(0,0,0)$: given that the particle is not at the boundary, and working in spherical coordinates, by symmetry the position expectation value is at the center. In fact, it does not even have to be at the center; a wave is fine. The setup assumes the tunneling probability is chosen to equal 0, or to be $\ll 1$, so that it can be ignored in numerical calculations.
2. The imaginary extreme ignores the fact that the electron's classical radius is of order 1e-15 m while the Planck length is of order 1e-35 m, so $N = 2$ could not actually happen; it is just to demonstrate the idea.
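As a rough sketch of the arithmetic behind point 2, using the standard values of the classical electron radius and the Planck length:

import math

ds = 1.616e-35           # Planck length, in meters
r  = 2.818e-15           # classical electron radius, in meters

N = 4 * math.pi * r ** 2 / ds ** 2
print(f'{N:.3e}')        # ~3.8e41 surface cells, so N = 2 is indeed far off

So even for an electron-sized ball, the number of Planck-area cells on the surface is astronomically larger than 2, which is why the extreme case is labeled imaginary.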
Feel free to ask question about the context.
Is this meant as solitons? I think I have read that the EM field does not have soliton-like solutions in most cases. But if photons are solitons, how is their ability accounted for to 'feel' the entire space in almost zero time (as is evident from Feynman's path-integral approach)?
In a box, the excitations emerge immediately and comprise the whole length of the box. So there is a probability to detect a photon far from the source. How is this consistent with the constancy of the speed of light c?
I am very clear about how momentum, spin, and polarization measurements performed on entangled particles are found to be correlated.
But I am not understanding:
In what way are position measurements performed on entangled particles found to be correlated?
In relativistic quantum mechanics (quantum field theory) we take coordinate time as a time observable, to bring space and time on an equal footing. Why can't we take proper time as a time observable? By this approach we might overcome the problem called "renormalization."
Preprint PLAYING DICE WITH PROPER TIME
Energy transitions are classified according to spectral series such as Balmer, Lyman, Paschen, etc., each of which assumes a single transition between two levels, generally non-contiguous (except for the series' first transition). What the question really asks is: can cascading transitions, from one level down to another through each contiguous level, emitting a photon at each step, occur, or have they been observed?
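Energy conservation already constrains such cascades: the photon energies emitted at each contiguous step must sum to that of the direct transition. A quick check with the hydrogen level energies, using the standard Rydberg formula:

def E(n):
    """Hydrogen level energy in eV (Bohr/Rydberg formula)."""
    return -13.6057 / n ** 2

# Direct transition 3 -> 1 versus cascade 3 -> 2 -> 1
direct  = E(3) - E(1)                     # one Lyman-beta photon
cascade = (E(3) - E(2)) + (E(2) - E(1))   # Balmer-alpha then Lyman-alpha

print(round(direct, 4), round(cascade, 4))   # both ~12.0939 eV

The cascade emits two photons (here ~1.89 eV and ~10.20 eV) whose energies sum to the single direct photon's energy, so energy conservation permits both routes; which one occurs is a matter of branching ratios.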
As we all know, most of the software used in condensed matter physics is based on DFT (such as VASP, CASTEP, etc.). But these can only handle problems at zero temperature, so is there some ab initio software that can solve the Schrödinger equation at finite temperature?
A non-absorbing detector is supposed, at least in theory, to report that a particle passes through it, while the particle is allowed to exit the detector, so that we can do additional tests on it. No doubt, the non-absorbing detector "collapses" the wave-function, but additional detectors placed downstream may tell us in which state the first detector left the particle, something we could only guess at if we used absorbing detectors.
Now, does somebody know whether we have such detectors "on the shelf", i.e. available in practice?
To understand my question, I invite everybody to read the example below.
In Bohmian mechanics (BM), the velocity formula gives an infinite value for the velocity of the Bohmian particle at points where the wave-function vanishes but its gradient does not.
Do we have any proof that this is wrong, i.e. that attributing superluminal velocity to a particle is wrong? Could it be that this feature of BM is a flaw, implying that BM is wrong?
EXAMPLE:
In a Hanbury Brown and Twiss (HB&T) type experiment with a pair of identical photons, one passing through a slit A and the other through a slit B (see the attached figure), the wave-function of the pair looks as follows:
Ψ(r1, r2) = (1/√2) { exp(ik|r1 - rA|) exp(ik|r2 - rB|) + exp(ik|r1 - rB|) exp(ik|r2 - rA|) }
= √2 exp[iθ(r1, r2)] cos[ϕ(r1, r2)],
where r1, r2 denote the positions of the photons, rA, rB the positions of the slits, and
ϕ(r1, r2) = (k/2) { (|r1 - rA| + |r2 - rB|) - (|r1 - rB| + |r2 - rA|) }.
If ϕ is an odd integer multiple of π/2, the wave-function vanishes. Assume this happens for the pair of points P'1 and P'2. If one places detectors at these two points and, say, the detector at P'1 makes a detection, the detector at P'2 remains silent. By BM, the particle moving towards P'2 jumps over this point with an infinite velocity, and this is why it cannot be detected.
The problem is that the presence of the detector at P'2 reduces the probability Prob(P'1, Q2) of joint detection at P'1 and Q2, where Q2 is any point below P'2. This probability shows no more interference effects; it is given only by the crossed waves, from A to Q2 and from B to P'1.
Obviously, the detector at P'2, although it cannot detect the particle passing through it, does stop something. The probability of joint detection at P'1 and Q2 decreases due to the detector at P'2, not only through the disappearance of the interference, but even below the sum of the isolated probabilities of detection from the crossed rays and from the direct rays.
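The factorization of Ψ used above, and the location of its zeros, can be checked numerically. Here is a minimal NumPy sketch, with slit and detection coordinates chosen arbitrarily for illustration:

import numpy as np

k = 2 * np.pi / 500e-9                                   # 500 nm light
rA, rB = np.array([0., 1e-4]), np.array([0., -1e-4])     # slit positions

def dist(r, rs):
    return np.linalg.norm(r - rs)

def psi(r1, r2):
    """Two-photon HB&T wave function, symmetrized over the two slits."""
    return (np.exp(1j * k * (dist(r1, rA) + dist(r2, rB))) +
            np.exp(1j * k * (dist(r1, rB) + dist(r2, rA)))) / np.sqrt(2)

def phi(r1, r2):
    return (k / 2) * ((dist(r1, rA) + dist(r2, rB))
                      - (dist(r1, rB) + dist(r2, rA)))

r1 = np.array([1.0, 3e-4])
r2 = np.array([1.0, -2e-4])
# |Psi| should equal sqrt(2)*|cos(phi)|, vanishing whenever phi is an
# odd multiple of pi/2
print(abs(psi(r1, r2)), np.sqrt(2) * abs(np.cos(phi(r1, r2))))

The two printed values agree, confirming that the zeros of Ψ are exactly the points where ϕ is an odd multiple of π/2, as used in the argument above.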
- Is the GHZ argument more useful than the BKS theorem, or is it only a misinterpretation of the EPR argument?
How are you progressing?
How have you selected your respondents?
All the best from Copenhagen, Bo
In the case of diffraction of photons by an edge, it can be shown simply, by means of classical relativistic mechanics, that the assumption of quantized angular momentum leads to a diffraction pattern which looks very similar to what is observed in experiments.
Are there any hints that in the very early days of QM, Sommerfeld, Bohr, or somebody else ever considered the idea that the observed "interference" patterns in diffraction experiments with electrons and photons might result from the quantization of angular momentum, and from an interchange of quantized portions of angular momentum during the interaction of an electron or photon with the atoms of an edge, slit, double-slit, etc.?
Weak measurements are a relatively new trend in quantum experiments. They are meant to determine the so-called Bohmian velocity, but in fact they measure the average linear momentum while disturbing the wave-function very little in each trial of the experiment. Thus, in each trial, after such a measurement one can perform additional measurements on the negligibly disturbed wave-function.
For a very clear example see:
Sacha Kocsis, Boris Braverman, Sylvain Ravets, Martin J. Stevens, Richard P. Mirin, L. Krister Shalm, Aephraim M. Steinberg, Observing the Average Trajectories of Single Photons in a Two-Slit Interferometer, Science 332, 1170 (2011); DOI: 10.1126/science.1202218