Decoherence - Science method
Questions related to Decoherence
Quantum computing faces several challenges, including quantum decoherence, where qubits lose their quantum state due to interactions with the environment. Achieving quantum error correction and building robust, scalable quantum hardware are also significant challenges to ensure reliable and stable quantum computations.
Hi folks!
It is well known that the apparent stochasticity of turbulent processes stems from the extreme sensitivity of the DETERMINISTIC underlying differential equations to very small changes in the initial and boundary conditions, changes far too small for human beings to measure.
Given our limitations, for the same measured conditions, a deterministic turbulent flow can thus display a wide array of different behaviours.
Can QUANTUM MECHANICAL random fluctuations also change the initial and boundary conditions in such a way that the turbulent flow would behave in a different manner?
In other words, if we assume that quantum mechanics is genuinely indeterministic, can it propagate that "true" randomness towards (some) turbulent processes and flows?
Or would decoherence hinder this from happening?
I wasn't able to find any peer-reviewed papers on this.
Many thanks for your answers!
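As a toy numerical illustration of the sensitivity argument above (a chaotic map, not a turbulence simulation; the map, parameter and perturbation size are arbitrary choices of mine), the sketch below shows how an initial difference far below any measurement precision is amplified into macroscopically different trajectories:

```python
# Toy illustration (not a turbulence simulation): the logistic map, a standard
# chaotic system, amplifies an initial difference far below any measurement
# precision until the two trajectories become macroscopically distinct.
import numpy as np

def logistic_trajectory(x0, r=3.9, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return np.array(xs)

a = logistic_trajectory(0.400000000000)
b = logistic_trajectory(0.400000000001)   # perturbation of 1e-12

for n in (0, 20, 40, 60):
    print(f"step {n:2d}: |difference| = {abs(a[n] - b[n]):.3e}")
# The difference grows roughly exponentially, so a quantum-scale kick to the
# initial condition could in principle select a different macroscopic outcome.
```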
Literature searches show that there are many papers and books on emerging quantum theory, but it seems that no specific model has ever been proposed. Here is such a proposal: https://www.researchgate.net/publication/361866270 The model describes wave functions as blurred tangles, using ideas by Dirac and by Battey-Pratt and Racey. The tangles are the skeletons of wave functions that determine all their properties. The preprint tries to be as clear and as pedagogical as possible.
In quantum theory, tangles reproduce spin 1/2, Dirac's equation, antiparticles, entanglement, decoherence and wave function collapse. More interestingly, the deviations from quantum theory that the model predicts imply that only a limited choice of elementary fermions and bosons can arise in nature. Classifying (rational) tangles yields the observed elementary and composed particles, and classifying their deformations yields the known gauge theories.
Given that the text aims to be as understandable and enjoyable as possible, feel free to point out any issue that a reader might have.
The main result of decoherence theory is that the non-diagonal elements of a quantum object's density matrix become zero due to uncontrolled interactions with the environment. For me, that only means that there will be no more interference effects between the superposed states. But the diagonal elements of the density matrix still remain, so there is still a superposition of classical alternatives left. How does that solve the measurement problem?
Moreover, doesn't the mathematical derivation of the decoherence effect involve an ensemble average over all possible environmental disturbances? How does this help when we are interested in the behavior of a specific system in a specific environment?
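A minimal single-qubit sketch of the point at issue (pure dephasing put in by hand, with an assumed decay factor exp(-γt)) makes explicit what decoherence does and does not deliver: the off-diagonal terms vanish, but the diagonal mixture remains.

```python
# Minimal sketch (single qubit, pure dephasing): decoherence suppresses the
# off-diagonal elements of the density matrix but leaves the diagonal ones,
# i.e. a classical-looking mixture rather than a single definite outcome.
import numpy as np

alpha, beta = 1/np.sqrt(2), 1/np.sqrt(2)
psi = np.array([alpha, beta])
rho = np.outer(psi, psi.conj())          # pure superposition

def dephase(rho, gamma_t):
    """Multiply the coherences by exp(-gamma*t); the populations are untouched."""
    d = np.exp(-gamma_t)
    out = rho.copy()
    out[0, 1] *= d
    out[1, 0] *= d
    return out

for gt in (0.0, 1.0, 10.0):
    print(f"gamma*t = {gt}:\n{np.round(dephase(rho, gt), 4)}\n")
# For gamma*t >> 1 the matrix tends to diag(|alpha|^2, |beta|^2): interference
# is gone, but the question of which outcome actually occurs (the measurement
# problem) is untouched -- exactly the point raised above.
```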
Dear colleagues, if I am not mistaken, the decoherence time describes how long a particle in superposition takes to fall into a single state. So my question is: does superposition allow a particle to exist in more than a binary state? With this, can a particle be in more than three states? And, if possible, in how many states can a particle be? Thank you! (My apologies if this question seems absurd; I am working on a contradiction in Artificial Intelligence which is equally absurd.)
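For what it is worth, here is a small sketch (the dimension d = 7 is an arbitrary choice for illustration) of the fact that superposition is not restricted to two, or three, basis states; any d-dimensional Hilbert space admits superpositions of all d basis states, and d can be arbitrarily large.

```python
# Sketch: superposition is not limited to two (binary) states.  A particle with
# a d-dimensional state space can be in a superposition of all d basis states;
# d can be arbitrarily large (even infinite, e.g. position or oscillator levels).
import numpy as np

d = 7                                          # any dimension you like
rng = np.random.default_rng(1)
amps = rng.standard_normal(d) + 1j * rng.standard_normal(d)
amps /= np.linalg.norm(amps)                   # normalize so probabilities sum to 1

print("number of superposed basis states:", d)
print("probabilities:", np.round(np.abs(amps)**2, 3))
print("sum of probabilities:", round(float((np.abs(amps)**2).sum()), 6))
```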
In NV centers the electron spin is used as the qubit, while the surrounding nuclear spins act as a source of decoherence by creating a varying magnetic field. If the NV center is in a quantum superposition state, this varying field shifts the energy levels, causes dephasing and, eventually, the loss of the quantum state. To counter this we apply an RF pulse that inverts the state of the NV center, which inverts the effect of the magnetic field on the spin. This is justified by the fact that 'if we have the same time before and after this flip, the effect of the field is canceled and the quantum state is protected.' But how? And will the noise present in the system affect the protected quantum state? Can't these spins be controlled and manipulated in such a way that they act as qubits and help carry extra information?
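A stripped-down sketch of the echo argument quoted above (phases treated as plain numbers, with detunings chosen arbitrarily) may make the cancellation explicit, and also shows why it fails for noise that changes during the sequence:

```python
# Minimal sketch of the Hahn-echo argument: the qubit accumulates a phase
# phi = delta * tau from an unknown (quasi-static) detuning delta caused by the
# nuclear-spin bath.  A pi pulse at time tau flips the sign of the accumulated
# phase, so an equal evolution time after the flip cancels it.
import numpy as np

def echo_phase(delta_before, delta_after, tau):
    # phase before the pi pulse, sign-inverted by the flip, plus phase after
    return -delta_before * tau + delta_after * tau

tau = 1.0
print("static field:      residual phase =", echo_phase(0.3, 0.3, tau))   # exactly 0
print("fluctuating field: residual phase =", echo_phase(0.3, 0.5, tau))   # not cancelled
# The echo therefore protects the state only against noise that is (nearly)
# constant over the echo time 2*tau; faster noise still dephases the NV spin,
# which is why multi-pulse dynamical-decoupling sequences are used in practice.
```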
Consider the wave-function representing single electrons
(1) α|1>_a + β|1>_b ,
with both |α|² < 1 and |β|² < 1. On the path a of the wave-packet, a detector A is placed.
The question is what causes the reaction of the detector, i.e. a recording or staying silent? A couple of possibilities are considered here:
1) The detector reacts only to the electron charge; the probability amplitude α has no influence on the detector response.
2) The detector reacts with certainty to the electron charge only when |α|² = 1. Since |α|² < 1, sometimes the sensitive material in the detector feels the charge, and sometimes nothing happens in the material.
3) It always happens that a few atoms of the material feel the charge, and an entanglement appears involving them, e.g.
(2) α|1>_a |1e>_A1 |1e>_A2 |1e>_A3 . . . + β|1>_b |10>_A1 |10>_A2 |10>_A3 . . .
where |1e>_Aj means that atom no. j is excited (eventually split into an ion-electron pair), and |10>_Aj means that atom no. j is in the ground state.
But the continuation from the state (2) onward, i.e. whether a (macroscopic) avalanche develops, depends on the intensity |α|². Here is a substitute for the "collapse" postulate: since |α|² < 1, the avalanche does not necessarily develop. If |α|² is large, the process often intensifies into an avalanche, but if |α|² is small the avalanche happens rarely. How often the avalanche appears is proportional to |α|².
Which one of these possibilities seems the most plausible? Or does somebody have another idea?
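As a purely numerical aside (it does not decide between the three possibilities), the sketch below spells out the empirical constraint any of them must reproduce: over many single-electron runs, the fraction of runs in which detector A fires approaches |α|². The value of |α|² is an arbitrary choice for illustration.

```python
# This sketch does not decide between possibilities 1-3; it only makes explicit
# the empirical constraint any of them must reproduce: the click fraction of
# detector A approaches |alpha|^2 (the Born rule).
import numpy as np

alpha2 = 0.3                        # |alpha|^2 for the packet heading to detector A
rng = np.random.default_rng(0)
runs = 100_000
clicks = rng.random(runs) < alpha2  # one Bernoulli trial per single-electron run

print("simulated click fraction:", clicks.mean(), " expected |alpha|^2:", alpha2)
```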
In other threads, vacuum space was discussed in connection with scale relativity, squeezed quantum states, and a hierarchy of Plancks. In each of these, a quantum state of local space is associated with the large-scale continuum space of GR.
Roger Penrose has on several occasions published gravity curvature as a possible cause of state vector reduction.
Also, in his book The Emperor's New Mind, the unitary evolution U in a quantum system continues to a critical point where state vector reduction R occurs, followed by a new evolution U to some higher state, and another state vector reduction R. The critical point was said to be an excess of gravitational curvature: the system builds up superimposed quantum states and entanglements until the excess energy causes state reduction, collapse of the wave function, selection of one state and rejection of the competing state.
When applied to vacuum space as Penrose did in the 1996 paper the state vector reduction was argued as a decoherence caused by increasing complexity and some additional triggering device that Penrose proposed as an accumulated gravitational curvature in excess of the system stability limit.
In many threads the researchers have been discussing the limitations of GR in respect to high speed transport in deep space. With little difficulty those discussions could be recast in the terminology of Penrose and state vector reduction.
The implication of the Penrose publications and the conclusions of high speed in deep space is that the many degrees of freedom in vacuum space entangle with the few degrees of freedom in a quantum system. That is to say the vacuum interacts physically with the objects that pass through it.
The present question relates to kinetic field energy at high speed in deep space where the progression of scales and change from one scale to another appears to be the same as U and R but in other terminology.
Does State Vector Reduction Occur In Vacuum Space Time?
Two concepts are being explored relating to gravity:
- that gravity occurs as a result of entropic forces, and that the concept can be used to derive the General Relativity Field Equations (and Newtonian Gravity in the appropriate limit)
- that gravitational time dilation can cause Quantum Decoherence, and thus essentially explain the transition from the quantum world to the classical world
Some researchers claim that experimental evidence on ultra-cold neutron energy levels in gravitational fields invalidates the concept of Entropic Gravity. Is such a conclusion valid, and, if so, does it also invalidate the claim that gravitational time dilation causes Quantum Decoherence?
The explanation of quantum entanglement by "hidden variables" seems to have been eliminated by tests for violation of Bell's Inequalities first by the experiments of Alain Aspect and subsequently by numerous experimenters who produced extended versions which closed "loopholes" in the original tests. All of these seem to confirm that the measurements on the particles, while correlated, must be truly random. However, does that necessarily eliminate hard determinism? If the measurement result comes from decoherence due to interaction with the measuring system, acting as a thermodynamic heatsink, could it actually be the result of the combined properties of the large ensemble of particles such that it is deterministic but unknowable?
Many thinkers reject the idea that large-scale persistent coherence can exist in the brain because it is too warm, wet and noisy, and because it constantly interacts with, and consequently is 'measured' by, the environment via the senses.
The problem of decoherence is, I suggest, in part at least, a problem of perception - the cognitive stance that we adopt toward the problem. If we examine the problem of interaction with the environment, common sense suggests that we perceive the primary utility of this interaction as being the survival of the organism within its environment. It seems to follow that if coherence is involved in the senses then evolution must have found a way of preserving this quantum state in order to preserve its functional utility - a difficult problem to solve!
I believe that this is wrong! I believe that the primary 'utility' of cognition is that it enables large scale coherent states to emerge and to persist. In other words, I believe that we are perceiving the problem in the wrong way. Instead of asking 'How do large scale coherent states exist and persist given the constant interaction with the environment?', we should ask instead - 'How is cognition instrumental in promoting large scale robust quantum states?'
I think the key to this question lies in appreciating that cognition is NOT a reactive process - it is a pre-emptive process!
The standard QM offers no explanation for the collapse postulate.
According to Bohmian mechanics (BM) there is no collapse: there exists a particle following some trajectory, and a detector fires if hit by that particle. Therefore, there is no collapse. However, BM has big problems where the photon is concerned, for which no particle and no trajectory is predicted. Thus, in the case of photons, it is not clear which deterministic mechanism BM suggests instead of the collapse.
The Ghirardi-Rimini-Weber (GRW) theory says that the collapse occurs due to the localization of the wave-function at some point, decided upon by a stochastic potential added to the Schrodinger equation. The probability of localization is very high inside a detector, where the studied particle interacts with the molecules of the material and gathers around itself a bigger and bigger number of molecules. Thus, at some step there grows a macroscopic body which is "felt" by the detector circuitry.
Personally, I have a problem with the idea that the collapse occurs at the interaction of the quantum system with a classical detector. If the quantum superposition is broken at this step, how does it happen that the quantum correlations are not broken?
For instance, in the spin singlet ( |↑>|↓> - |↓>|↑> ), a Stern-Gerlach measurement with the two magnetic fields identically oriented gives either |↑>|↓> or |↓>|↑>. The quantum superposition is broken, but the quantum correlation is preserved: one never obtains, if the magnetic fields have the same orientation, |↑>|↑> or |↓>|↓>.
WHY SO? Practically, what connection may be maintained between the macroscopic body appearing in one detector and the macroscopic body appearing in another detector, far away from the former? Why is the quantum correlation not broken, when the quantum superposition is?
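A small numerical check of the correlation described above (standard two-qubit linear algebra, both Stern-Gerlach fields along z) reproduces the pattern in question:

```python
# Sketch of the quoted correlation: for the singlet (|up,down> - |down,up>)/sqrt(2),
# with both magnets along z, the joint probabilities of same-direction outcomes
# are exactly zero.
import numpy as np

up, down = np.array([1., 0.]), np.array([0., 1.])
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

def joint_prob(a, b, state):
    """Probability of projecting the state onto |a>|b>."""
    return abs(np.kron(a, b).conj() @ state) ** 2

for name_a, a in (("up", up), ("down", down)):
    for name_b, b in (("up", up), ("down", down)):
        print(f"P({name_a},{name_b}) = {joint_prob(a, b, singlet):.2f}")
# Output: P(up,up) = P(down,down) = 0 and P(up,down) = P(down,up) = 0.5.
# The correlation is carried by the entangled state itself, which is exactly
# what the question asks to be maintained once a 'collapse' has supposedly occurred.
```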
Based on the popular conclusion that collapse of a false vacuum destabilizes matter, the creation of false vacuums in higher energy scales might be expected to increase the stability of matter.
In other threads I have been exploring the possibility that extreme kinetic field energy of a fast moving vehicle modifies the quantum state of local space, boosting it to a higher scale of false vacuum.
In this context the normal scale represents a ZPE oscillator with spin angular momentum h. This h is the Planck constant and remains constant over the complete range where General Relativity gives accurate predictions.
Higher scales 2*h, 3*h, 4*h, 5*h are the possible modifications of space in the most extreme cases where Quantum Mechanics can not be ignored. These are called the Hierarchy of Plancks and represent the spin angular momentum of ZPE oscillators in quantum states of higher energy. None of these states have constant h. The states also represent false vacuums of higher scales in Scale Relativity and Squeezed Quantum States. Another way to view the same properties is found in the folding of space into layers described by TGD theory, an essential character of space to make worm holes possible.
Matti Pitkänen might explain the physics differently than my engineering representation. We have never agreed on how to compare his work and mine. He provided missing pieces of technology that allowed completion of my engineering project last year. So I have become involved with TGD in unexpected ways.
Article Topological geometrodynamics
In other threads the effects of extreme speed on machines and people were explored.
Ulla Mattfolk has collaborated with me on the micro-physical foundation of biology and the possible changes that might occur in higher scales of false vacuum at extreme high speed. The greater stability suggests that activation energies increase with scale and that chemical and biological reactions slow down, affecting response times. Together we developed an argument in support of free will some years ago.
In the present question the stability of isotopes is being considered.
If isotopes of uranium, thorium, and plutonium become progressively more stable as the local false vacuum increases from lower scales to higher scales, then a different operating point on control rods would be expected for each quantum state.
Will Nuclear Power Reactors Operate At Different Control Points In Deep Space At High Speed?
Consider this. The box which has the cat is completely transparent and there are two experimenters on site. At time Ts when the experiment starts, one (A) is blindfolded while the other (B) has 20/20 eyesight. For A the cat is in the infamous superposition, while for B it is alive until the cat dies at time Td. At this point in time, B breaks the news to A, who is now "measuring" the collapse of the cat's wave function. Now suppose that instead of blindfolding A we equip him with special glasses whose resolution is larger than the dimensions of the box. Obviously he can't tell what is inside the box and goes back to describing the cat as a superposition.
Yet consider the next twist on the experiment. B is located right at the site of the box while A is being boosted immediately at Ts to a distance which is a lightyear away. Obviously for B the cat is alive for the entire time delT = Td-Ts, while for A it is still in a superposition long after Td.
It seems then that the issue at heart is information vis-à-vis spacetime resolution. Since information travels at the speed of light, the questions of spacetime resolution and information are connected. What A doesn't know is due either to inadequate resolution or to lack of information (or a combination of both).
Now, the LISA Pathfinder rules out breakdown of the quantum superposition principle at the macroscale (due to the mass of the cat - the DP model) -- https://arxiv.org/pdf/1606.04581.pdf. Also note that quantum gravity induced decoherence (as per Ellis et al.) has been ruled out by LISA.
Obviously we can't account for the mechanism of collapse with the known and possible physics from the macroscale down to quantum gravity.
Now, since for B the theory of probability adequately describes the entire time delT, aren't we pushed to conclude that, when it comes to collapse, QM is of use to A but is an incomplete theory for B?
All the research papers I have found so far just show measurement of the squeezing parameter or the quantum Fisher information (QFI). Of course, the authors mention that, due to the large QFI or strong squeezing, the setup can be used for metrological purposes beyond the standard quantum limit (SQL). I could not find any papers which actually perform estimation of an unknown phase and show that the precision is beyond the SQL. I am curious from the point of view of estimation in the presence of decoherence (which is always present). Theoretical papers indicate that entangled states are basically useless if frequency is estimated (e.g. Ramsey spectroscopy).
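As a rough orientation only (idealized, decoherence-free scaling; the atom numbers are arbitrary and the states assumed are an uncorrelated ensemble versus an ideal GHZ state), the quantum Cramér-Rao bound gives the two limits usually quoted:

```python
# Back-of-envelope sketch of the scaling referred to above: the quantum
# Cramer-Rao bound gives delta_phi >= 1/sqrt(F_Q).  For N uncorrelated atoms
# F_Q = N (standard quantum limit); for an ideal GHZ / maximally entangled
# state F_Q = N^2 (Heisenberg limit).  No decoherence is included here.
import numpy as np

for N in (10, 100, 1000):
    sql = 1 / np.sqrt(N)        # standard quantum limit
    heis = 1 / N                # Heisenberg limit
    print(f"N = {N:5d}: SQL delta_phi ~ {sql:.4f}, Heisenberg delta_phi ~ {heis:.4f}")
# With realistic dephasing the gap shrinks; for frequency (Ramsey) estimation
# with Markovian dephasing the optimal scaling returns to ~1/sqrt(N) up to a
# constant factor, consistent with the theoretical papers cited in the question.
```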
It is stated in some interpretations of QM (notably the Copenhagen interpretation) that an object stays in superposition (SP) as long as no human (good physicist – just kidding) brain observes it and so projects it onto a definite state (dead/alive). But I think the following very simple example disproves this statement. Consider a working clock in a closed room. It was left showing 2 o'clock and is not observed (it is even put in vacuum to avoid decoherence induced by the medium) – e.g. closed there for 2 hours – and then, on entering the room, one of course sees that it is showing 4 o'clock instead of 2 o'clock. But according to the above-mentioned interpretations, at every instant it must have been in a superposition of showing the next second or not. If observation is what causes it to move, then there was no such observation, and hence it must still be showing 2 o'clock (analogously, the cat is both dead/alive). So I think this disproves the statement that an object stays in SP until a brain or environment appears to project it. Do you agree?
What are the practical applications for three-dimensional representation of a geometric phenomenon in the fourth dimension (movement / metamorphosis of an object over time)?
With the help of multidimensional descriptive geometry, all information related to the movement or metamorphosis in time of an object can be represented in three-dimensional or two-dimensional projections. I am looking for practical applications of this method which could be used in various fields of activity.
How does the length of a state vector get shortened in the Bloch sphere? When we physically implement a depolarizing channel on a state lying on or inside the Bloch sphere, its length shrinks while its direction remains unchanged. Please explain.
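A minimal sketch, assuming the standard single-qubit depolarizing channel ρ → (1-p)ρ + p·I/2 and an arbitrarily chosen initial pure state, illustrates the shrinking of the Bloch vector:

```python
# Minimal sketch of the effect asked about: the depolarizing channel
# rho -> (1-p)*rho + p*I/2 shrinks the Bloch vector by the factor (1-p)
# while leaving its direction unchanged.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2, dtype=complex)

def bloch_vector(rho):
    return np.real([np.trace(rho @ P) for P in (X, Y, Z)])

def depolarize(rho, p):
    return (1 - p) * rho + p * I / 2

# a pure state with Bloch vector pointing along (1,1,1)/sqrt(3)
r = np.array([1, 1, 1]) / np.sqrt(3)
rho = (I + r[0]*X + r[1]*Y + r[2]*Z) / 2

for p in (0.0, 0.3, 0.6):
    v = bloch_vector(depolarize(rho, p))
    print(f"p = {p}: Bloch vector = {np.round(v, 3)}, length = {np.linalg.norm(v):.3f}")
# The length drops from 1 to (1-p); the direction stays along (1,1,1)/sqrt(3).
```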
Different physicists disagree on whether there is such a thing as the wave function of the universe.
- In favor of its existence is the fact that, in the Big Bang picture, all particles (and hence downstream objects) were correlated at the inception of the Universe, and a correlation that has existed at some point in the past ever so loosely continues thereafter since full decoherence never truly sets in. A number of pictures - Ghirardi-Rimini-Weber, Bohm, even Hugh Everett, et al., - require the existence of the wave function of the universe, denoted Ψ(U).
- Two main categories of objections however belie its existence.
The first category ultimately boils down to a not very solid rejection of non-separability, i.e. to an argument that a full separation between an observer and an observed must always be upheld if any observation or measure is to be objectively valid, and a wave function ascertainable.
The second argument is more compelling, and says that if Ψ exists, then Ψ(U)=Ψ(Ψ,U) in a closed, self-referential loop. Ψ has thereby become a non-observable, unknowable, and as such is better relegated to the realm of metaphysics than physics.
What say you?
Non-locality is a curious feature, yet essentially a quantum attribute, that is linked to the violation of a Bell inequality of any form. It arises from the impossibility of simultaneous joint measurements of observables. The Clauser-Horne-Shimony-Holt (CHSH) inequality is the only extremal Bell inequality with two settings and two outcomes per site. This inequality provides a basis to compare predictions of quantum theories with those linked to local realism.
During non-Markovian dynamics of open quantum systems, the well-known Markovian model breaks down. This may occur due to strong system-environment coupling or when non-factorized initial conditions exist between the system and environment. Notably, a statistical interpretation of the density matrix is not defined for non-Markovian evolution.
My question is: is there increased non-locality when a system undergoes non-Markovian dynamics and, if so, how can this be quantified? I used the word "increased" because non-locality may be present in the case of Markovian dynamics, and the query focuses on whether certain aspects of non-Markovian dynamics accentuate non-locality.
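One concrete, if crude, way to quantify this is to track a CHSH value along the evolution. The sketch below (standard two-qubit algebra; white-noise mixing is used only as a stand-in for an actual non-Markovian map, which would replace it) computes the CHSH value of a singlet at the optimal settings and shows how noise reduces it:

```python
# Sketch relevant to quantifying non-locality: the CHSH value for a two-qubit
# state rho is S = Tr(rho * B_CHSH).  For the singlet at the optimal angles
# |S| = 2*sqrt(2) (Tsirelson bound); mixing in white noise lowers |S|.  Tracking
# S(t) along a non-Markovian evolution is one simple way to see whether
# Bell-violating behaviour revives when information flows back from the bath.
import numpy as np

def spin(theta):
    # measurement operator cos(theta)*Z + sin(theta)*X in the x-z plane
    return np.array([[np.cos(theta), np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

up, down = np.array([1., 0.]), np.array([0., 1.])
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)
rho_singlet = np.outer(singlet, singlet)

A, Ap = spin(0.0), spin(np.pi/2)
B, Bp = spin(np.pi/4), spin(-np.pi/4)
B_chsh = np.kron(A, B) + np.kron(A, Bp) + np.kron(Ap, B) - np.kron(Ap, Bp)

def chsh(rho):
    return np.real(np.trace(rho @ B_chsh))

for p in (0.0, 0.2, 0.4):                      # white-noise fraction p
    rho = (1 - p) * rho_singlet + p * np.eye(4) / 4
    print(f"noise p = {p}: |S| = {abs(chsh(rho)):.3f}  (local bound 2, Tsirelson 2.828)")
```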
In many experiments in quantum mechanics, a single photon is sent to a mirror which it passes through or bounces off with 50% probability, then the same for some more similar mirrors, and at the end we get interference between the various paths. This is fairly easy to observe in the laboratory.
The interference means there is no which-path information stored anywhere in the mirrors. The mirrors are made of 10^20-something atoms, they aren't necessarily ultra-pure crystals, and they're at room temperature. Nonetheless, they act on the photons as very simple unitary operators. Why is it that the mirrors retain no or very little trace of the photon's path, so that very little decoherence occurs?
In general, how do I look at a physical situation and predict when there will be enough noisy interaction with the environment for a quantum state to decohere?
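A rough back-of-envelope sketch (a one-dimensional Gaussian model, with mirror mass, temperature and photon wavelength being my own assumed values) of the standard decoherence estimate may help: the only which-path "record" a mirror could keep is the photon's momentum kick, and its distinguishability is measured by the overlap of the two mirror states.

```python
# Rough back-of-envelope sketch (simplified 1-D Gaussian model; mass, temperature
# and wavelength are assumed values): the which-path record a mirror could keep
# is the tiny momentum kick from the reflected photon.  For Gaussian momentum
# distributions of width sigma_p displaced by dp, the overlap of the two mirror
# states is roughly exp(-dp^2 / (8*sigma_p^2)).  Because dp is absurdly small
# compared with the mirror's thermal momentum spread, the overlap is ~1 and
# essentially no decoherence of the photon path occurs.
import numpy as np

h, kB = 6.626e-34, 1.381e-23
wavelength = 500e-9          # visible photon (assumed)
m, T = 1e-3, 300.0           # 1 g mirror at room temperature (assumed)

dp = h / wavelength                       # photon momentum kick ~ 1.3e-27 kg m/s
sigma_p = np.sqrt(m * kB * T)             # thermal momentum spread ~ 2e-12 kg m/s
overlap = np.exp(-dp**2 / (8 * sigma_p**2))

print(f"kick/spread ratio          = {dp/sigma_p:.1e}")
print(f"decoherence factor |<E_a|E_b>| ~ {overlap:.15f}")
# The environment (mirror) states for the two paths are essentially identical,
# so no which-path information is recorded and the interference survives.  The
# general rule of thumb: decoherence is strong only when the environment states
# correlated with the alternatives become (nearly) orthogonal.
```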
Take the example of a spin measurement on an entangled pair of particles, particle 1 and particle 2. The correlation between the spin of particle 1 (spin1) and the spin of particle 2 (spin2) is the result of angular momentum conservation. If the decoherence interpretation is correct, then the angular momentum is conserved within each of the infinite number of decohering branches separately. However, the conservation laws in quantum mechanics (formulated in the Heisenberg picture) only demand that the commutator of the operator representing the conserved observable with the total Hamiltonian of the whole system is zero. There seems to be no reason why it should be conserved within each decohering branch separately.
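A small numerical check for the simplest case (the singlet with z-basis branches; standard two-qubit algebra) shows that at least here the conservation does hold branch by branch, even though, as noted above, the Heisenberg-picture statement by itself only constrains the full superposition:

```python
# Small check of the point at issue, for the simplest case: the singlet is an
# eigenstate of the total S_z with eigenvalue 0, and each decohering branch
# selected by a z-basis measurement (|up,down> and |down,up>) is individually
# also an eigenstate with the same eigenvalue.
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=float) / 2          # S_z in units of hbar
I2 = np.eye(2)
Sz_total = np.kron(Z, I2) + np.kron(I2, Z)

up, down = np.array([1., 0.]), np.array([0., 1.])
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)
branch1 = np.kron(up, down)
branch2 = np.kron(down, up)

for name, v in (("singlet", singlet), ("branch |ud>", branch1), ("branch |du>", branch2)):
    mean = v @ Sz_total @ v
    var = v @ Sz_total @ Sz_total @ v - mean**2
    print(f"{name:12s}: <S_z_total> = {mean:+.2f}, variance = {var:.2f}")
```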
The equations of the Standard Model of quantum field theory are time-reversal invariant. However, there are papers that suggest decoherence leads to entropy, e.g. "Decoherence and Dynamical Entropy Generation in Quantum Field Theory", Jurjen F. Koksma et al., Phys. Lett. B (2011). There are other papers that argue entropy emerges from simple two-particle entanglement. Is there any definitive answer? Regarding general relativity, there have been numerous papers arguing that gravity is an entropic force. But, as I understand it, the proper way of demonstrating this theoretically is still unsettled. Is that right?
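A minimal sketch of the 'entropy from two-particle entanglement' point (a Bell state chosen purely for illustration): the global state is pure, with zero von Neumann entropy, while each single-particle reduced state is maximally mixed and carries nonzero entropy.

```python
# Sketch: the global two-qubit state below is pure (zero von Neumann entropy),
# yet each particle's reduced density matrix is mixed, so an observer with
# access to only one particle assigns it a nonzero entropy.
import numpy as np

up, down = np.array([1., 0.]), np.array([0., 1.])
bell = (np.kron(up, up) + np.kron(down, down)) / np.sqrt(2)
rho = np.outer(bell, bell)                          # pure global state

def von_neumann_entropy(r):
    ev = np.linalg.eigvalsh(r)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log2(ev)).sum())

# partial trace over qubit 2: rho_A[i, j] = sum_k rho[(i,k), (j,k)]
rho4 = rho.reshape(2, 2, 2, 2)
rho_A = np.einsum('ikjk->ij', rho4)

print("entropy of global state :", von_neumann_entropy(rho), "bits")
print("entropy of one particle :", von_neumann_entropy(rho_A), "bits")
```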