Decoherence - Science method

Explore the latest questions and answers in Decoherence, and find Decoherence experts.
Questions related to Decoherence
  • asked a question related to Decoherence
Question
4 answers
Quantum computing faces several challenges, including quantum decoherence, where qubits lose their quantum state due to interactions with the environment. Achieving quantum error correction and building robust, scalable quantum hardware are also significant challenges to ensure reliable and stable quantum computations.
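As a toy illustration of why error correction helps at all, here is a hedged sketch: a classical three-qubit repetition code with independent bit flips and majority-vote decoding. The flip probabilities are arbitrary, and real quantum error correction must also handle phase errors and avoid measuring the encoded state directly, so this is only a caricature of the idea.

# Minimal sketch: 3-qubit repetition code under independent bit-flip noise.
# Assumption: each physical qubit flips independently with probability p;
# decoding is a majority vote over the three copies.
import random

def logical_error_rate(p, trials=100_000):
    errors = 0
    for _ in range(trials):
        flips = [random.random() < p for _ in range(3)]
        if sum(flips) >= 2:          # a majority of physical qubits flipped
            errors += 1              # -> the logical bit is decoded incorrectly
    return errors / trials

for p in (0.01, 0.05, 0.1):
    print(p, logical_error_rate(p))  # logical rate ~ 3*p**2, below p for small p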
Relevant answer
Answer
Dear Professor/Researcher,
Book chapter proposals are invited for the edited book titled “Quantum Machine Learning (QML): Platform, Tools & Applications”.
The main goal of this book is to deliberate upon the various aspects of Quantum Machine Learning in distributed systems, cryptography, and security, by a galaxy of intellectuals from academia, research, the professional community, and industry. While this book will dwell on the foundations of Quantum Machine Learning (transparency, scalability, integrity, and security), it will also focus on contemporary topics for research and development on QML.
Topics for which Chapter proposals are invited:
Topic 4. Quantum Error Mitigation (QEM)
4.1 Introduction to quantum errors and noise
4.2 Quantum error mitigation techniques
4.3 Integrating QEM into the QML framework
Topic 5. Quantum Error Correction (QEC)
5.1. Introduction to quantum error correction
5.2 Quantum error correction techniques
5.3 Fault-tolerant quantum computing
Publisher:
ELSEVIER
Series: Advances in Computers Serial
Volume 140
Editors
Prof. Shiho Kim [Chief Editor]
School of Integrated Technology, Yonsei University, South Korea
Ganesh Chandra Deka
Directorate General of Training, Ministry of Skill Development and Entrepreneurship, INDIA
With warm regards,
Shiho Kim
GC Deka
  • asked a question related to Decoherence
Question
3 answers
Hi folks!
It is well known that the apparent stochasticity of turbulent processes stems from the extreme sensitivity of the DETERMINISTIC underlying differential equations to very small changes in the initial and boundary conditions that human beings are unable to measure.
Given our limitations, for the same measured conditions, a deterministic turbulent flow can thus display a wide array of different behaviours.
Can QUANTUM MECHANICAL random fluctuations also change the initial and boundary conditions in such a way that the turbulent flow would behave in a different manner?
In other words, if we assume that quantum mechanics is genuinely indeterministic, can it propagate that "true" randomness towards (some) turbulent processes and flows?
Or would decoherence hinder this from happening?
I wasn't able to find any peer-reviewed papers on this.
Many thanks for your answers!
Relevant answer
Answer
Howdy Marc Fischer,
I applaud your clear statement that "random" and "stochastic" are terms to be applied to human perceptions and efforts, since the underlying dynamics of fluids in turbulence is deterministic. Very rare observation, very welcome! (The fluid knows.)
As far as "quantum mechanics is genuinely indeterministic" goes, some of us think the jury is still out on that, because your comments on turbulence, about human limitations including ignorance, apply there as well.
As for the "QUANTUM MECHANICAL random fluctuations" that turbulent processes could possibly experience, I doubt that they could affect the state of the turbulence, since they carry too little energy. "Random" absorption of a photon by a molecule that enhanced the molecule's internal energy would have an effect, of course, by increasing the thermal-motion energy of the fluid, but I doubt it could be noticed in the turbulence, even by the flow being triggered to new behavior. This effect is very different from the application of human precision in initial conditions, which enters the mathematical expressions with such strong leverage because of the mathematical non-linearity.
I also applaud the association you have made as one of the ways to trigger insight, but too often the answer in a specific case is no.
Happy Trails, Len
  • asked a question related to Decoherence
Question
1 answer
Literature searches show that there are many papers and books on emerging quantum theory, but it seems that no specific model has ever been proposed. Here is such a proposal: https://www.researchgate.net/publication/361866270 The model describes wave functions as blurred tangles, using ideas by Dirac and by Battey-Pratt and Racey. The tangles are the skeletons of wave functions that determine all their properties. The preprint tries to be as clear and as pedagogical as possible.
In quantum theory, tangles reproduce spin 1/2, Dirac's equation, antiparticles, entanglement, decoherence and wave function collapse. More interestingly, the deviations from quantum theory that the model predicts imply that only a limited choice of elementary fermions and bosons can arise in nature. Classifying (rational) tangles yields the observed elementary and composed particles, and classifying their deformations yields the known gauge theories.
Given that the text aims to be as understandable and enjoyable as possible, feel free to point out any issue that a reader might have.
Relevant answer
Answer
A related development, by Torsten Asselmeyer-Maluga, is https://arxiv.org/pdf/1910.09966.pdf
  • asked a question related to Decoherence
Question
3 answers
The main result of decoherence theory is that the non-diagonal elements of a quantum object's density matrix become zero due to uncontrolled interactions with the environment. For me, that only means that there will be no more interference effects between the superposed states. But there still remain the diagonal elements of the density matrix. So there is still a superposition of classical alternatives left. How does that solve the measurement problem?
Moreover, doesn't the mathematical derivation of the decoherence effect involve an ensemble average over all possible environmental disturbances? How does this help when we are interested in the behavior of a specific system in a specific environment?
Relevant answer
Answer
Thanks to 'Juan Weisz' and 'L. I. Plimak' for your quick answers!
I just want to add that the reason for my question was an article ( https://arxiv.org/pdf/1612.00676.pdf ) in which physicists were surveyed about their attitudes concerning the foundations of quantum mechanics. I was shocked to see (in Fig. 6) that 29% considered the measurement problem as solved by decoherence, and 17% even considered it a pseudoproblem. In my opinion, the measurement problem is absolutely important, but still unsolved.
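To make the point of the question concrete, here is a minimal numerical sketch of pure dephasing for a single qubit; the basis, initial state, and decay rate are illustrative assumptions only, not tied to any specific system.

# Single qubit prepared in (|0> + |1>)/sqrt(2); pure dephasing damps only
# the off-diagonal (coherence) terms and leaves the diagonal populations intact.
import numpy as np

psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho0 = np.outer(psi, psi.conj())          # initial density matrix

def dephase(rho, gamma, t):
    """Off-diagonals decay as exp(-gamma * t); diagonals are untouched."""
    rho_t = rho.copy()
    rho_t[0, 1] *= np.exp(-gamma * t)
    rho_t[1, 0] *= np.exp(-gamma * t)
    return rho_t

for t in (0.0, 1.0, 10.0):
    print(t, "\n", np.round(dephase(rho0, gamma=1.0, t=t), 4))
# The late-time matrix is diag(0.5, 0.5): no interference terms remain,
# but it is still a 50/50 mixture of the two alternatives, which is exactly the point of the question.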
  • asked a question related to Decoherence
Question
12 answers
Dear colleagues, if I am not mistaken, the decoherence time indicates how long it takes before a particle in superposition falls into a single state. So my question is: Does superposition allow a particle to exist in more than a binary state? That is, can a particle be in more than three states? And, if possible, in how many states can a particle be? Thank you! (My apologies if this question seems absurd. I am working on a contradiction in Artificial Intelligence which is equally absurd.)
Relevant answer
Answer
Hello, Mr. Steinburg, there is a very fundamental difference between how classical and quantum mechanics are described. In classical mechanics, a particle or an electron can be described deterministically, but not in the quantum world, because the uncertainty principle says we cannot measure both position and momentum at the same time; we do not know the initial state of a particle with sufficient accuracy. In quantum mechanics, what we do is measure the probability of finding a particle. As there is no deterministic way to describe a particle in quantum mechanics, we use a wave function to describe it. So, within the allowed range, we can find the particle anywhere. If the range is from minus infinity to plus infinity (say, the whole universe), then the particle is somewhere within that range, and we do not know its position unless we measure it.

Going back to your example: in the quantum world you do not name a million objects with a single word; rather, you give a name to a single entity, but unless you observe it, that word does not deterministically say anything about your desired entity. The example of the electron that you gave follows the same logic: we will not have any unique electron unless we measure it. That is why an electron in an orbital is described by an electron cloud in the atom. As for your other question, whether the electron is in the orbit and in the universe at the same time, that is actually resolved by the measurement range: we are not measuring an electron of Mg or Al anywhere in the universe, we measure it within a substance. But yes, if the electron is free within the universe, it can be anywhere in the universe, and we will only know about that electron when we measure it.

So, back to your first question: in quantum mechanics, in a superposition state, a particle can occupy an infinite number of states within a range, and the probability of finding it in any given part of that range can take any value between 0 and 1. I hope that answers your query. Thank you.
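To illustrate numerically that superposition is not limited to two ("binary") alternatives, here is a minimal sketch; the dimension and the random amplitudes are arbitrary choices, not tied to any physical system.

# A quantum state can be a superposition of arbitrarily many basis states;
# the Born rule turns the amplitudes into probabilities that sum to 1.
import numpy as np

d = 5                                  # any number of basis states (not just 2)
rng = np.random.default_rng(0)
amps = rng.normal(size=d) + 1j * rng.normal(size=d)
amps /= np.linalg.norm(amps)           # normalize the state vector

probs = np.abs(amps) ** 2              # probability of finding each outcome
print(np.round(probs, 3), probs.sum()) # the probabilities sum to 1.0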
  • asked a question related to Decoherence
Question
3 answers
NV centers are surrounded by electrons, and the electron spin is used as the qubit, but nuclear spin acts as a source of decoherence by creating a varying magnetic field. Now, if the NV center is in a quantum superposition state, the decoherence-induced changes in the energy levels will cause dephasing and, eventually, loss of the quantum state. To counter this, we apply an RF pulse to invert the state of the NV center, which inverts the effect of the magnetic field on the spin. This is justified by the fact that 'if we have the same time before and after this flip, the effect of the field is canceled and the quantum state is protected.' But how? And will the noise present in the system affect the protected quantum state? Can't these controlled spins be manipulated in such a way that they act as qubits and help carry extra information?
Relevant answer
Answer
In spin systems in particular, commonly used protocols for dynamical decoupling include the Carr-Purcell and the Carr-Purcell-Meiboom-Gill schemes. They are based on the Hahn spin echo technique of applying periodic pulses to enable refocusing and hence extend the coherence times of qubits.
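To make the refocusing argument concrete, here is a minimal ensemble sketch, assuming purely quasi-static detuning noise, an idealized instantaneous pi pulse, and arbitrary units; real NV experiments are of course more involved.

# Toy ensemble model of a Hahn echo: each spin sees a random, static detuning delta.
# Free evolution dephases the ensemble-averaged coherence; a pi pulse at t/2
# reverses the phase accumulated so far, so at time t the phases cancel (echo).
import numpy as np

rng = np.random.default_rng(0)
deltas = rng.normal(0.0, 1.0, size=10_000)       # quasi-static noise, arbitrary units

def coherence_free(t):
    return np.abs(np.mean(np.exp(1j * deltas * t)))

def coherence_echo(t):
    # phase from [0, t/2] is sign-flipped by the pi pulse, then re-accumulated in [t/2, t]
    phase = -(deltas * t / 2) + deltas * t / 2
    return np.abs(np.mean(np.exp(1j * phase)))

for t in (1.0, 3.0, 10.0):
    print(t, round(coherence_free(t), 3), round(coherence_echo(t), 3))
# The free decay falls quickly; the echoed coherence stays ~1 because static noise
# cancels exactly. Noise that fluctuates during the sequence is what CPMG's repeated pulses target.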
  • asked a question related to Decoherence
Question
67 answers
Consider the wave-function representing single electrons
(1) α|1>_a + β|1>_b ,
with both |α|² < 1 and |β|² < 1. A detector A is placed on the path of the wave-packet |a>.
The question is what causes the reaction of the detector, i.e. a recording or staying silent. A couple of possibilities are considered here:
1) The detector reacts only to the electron charge; the amplitude of probability α has no influence on the detector response.
2) The detector reacts with certainty to the electron charge only when |α|² = 1. Since |α|² < 1, sometimes the sensitive material in the detector feels the charge, and sometimes nothing happens in the material.
3) It always happens that a few atoms of the material feel the charge, and an entanglement appears involving them, e.g.
(2) α|1>_a |1e>_A1 |1e>_A2 |1e>_A3 . . . + β|1>_b |10>_A1 |10>_A2 |10>_A3 . . .
where |1e>_Aj means that atom no. j is excited (eventually split into an ion-electron pair), and |10>_Aj means that atom no. j is in the ground state.
But the continuation from state (2) on, i.e. whether a (macroscopic) avalanche would develop, depends on the intensity |α|². Here is a substitute for the "collapse" postulate: since |α|² < 1, the avalanche does not develop compulsorily. If |α|² is large, the process often intensifies into an avalanche, but if |α|² is small, the avalanche happens rarely. How often the avalanche appears is proportional to |α|².
Which one of these possibilities seems the most plausible? Or does somebody have another idea?
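As a numerical caricature of option 3), under the standard Born rule the firing frequency simply tracks |α|². This is a minimal sampling sketch; the chosen amplitudes are arbitrary and the avalanche dynamics themselves are not modelled.

# Born-rule sampling: a detector placed on branch 'a' of alpha|1>_a + beta|1>_b
# fires in a fraction |alpha|^2 of the trials and stays silent otherwise.
import numpy as np

alpha, beta = np.sqrt(0.3), np.sqrt(0.7)   # |alpha|^2 + |beta|^2 = 1
rng = np.random.default_rng(1)

trials = 100_000
fires = rng.random(trials) < abs(alpha) ** 2
print(fires.mean())   # ~0.3: the avalanche frequency is proportional to |alpha|^2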
Relevant answer
Answer
Yes, Dinesh wants everything to be Classical Mechanics; unfortunately, he is very wrong.
Classical Mechanics is identified with Newton and his followers, Lagrange and Hamilton. The special theory of relativity is also included. Classical field theory then covers Maxwell's theory of electromagnetism and the general theory of relativity. In summary, almost everything that is non-quantum is called Classical.
The quantum marks a sharp break in methods and results. In fact, irreducible randomness is a characteristic, which Dinesh would rather not admit.
Thermodynamics and Statistical Mechanics also admit randomness, although at a not so fundamental level, unless it is quantum statistical mechanics.
Question
5 answers
In other threads vacuum space was discussed for possibilities of scale relativity, squeezed quantum states, and a hierarchy of Plancks. In each of these a quantum state of local space is associated with the large scale continuum space of GR.
Roger Penrose has on several occasions published gravitational curvature as a possible cause of state vector reduction.
Also, in his book The Emperor's New Mind, the unitary evolution U of a quantum system continues to a critical point where state vector reduction R occurs, followed by a new evolution U to some higher state and another state vector reduction R. The critical point was said to be an excess of gravitational curvature, building up a system of superimposed quantum states and entanglements until the excess energy causes state reduction: collapse of the wave function, selection of one state, and rejection of the competing state.
When applied to vacuum space, as Penrose did in the 1996 paper, state vector reduction was argued to be a decoherence caused by increasing complexity plus some additional triggering mechanism, which Penrose proposed to be an accumulated gravitational curvature in excess of the system's stability limit.
In many threads the researchers have been discussing the limitations of GR in respect to high speed transport in deep space. With little difficulty those discussions could be recast in the terminology of Penrose and state vector reduction.
The implication of the Penrose publications and the conclusions of high speed in deep space is that the many degrees of freedom in vacuum space entangle with the few degrees of freedom in a quantum system. That is to say the vacuum interacts physically with the objects that pass through it.
The present question relates to kinetic field energy at high speed in deep space where the progression of scales and change from one scale to another appears to be the same as U and R but in other terminology.
Does State Vector Reduction Occur In Vacuum Space Time?
Relevant answer
Answer
Thank you, Stam Nicolis, for the reply.
I edited the title to be Vacuum Space Time rather than Vacuum Space, where time is assumed. You make a number of points that should be considered.
In quantum systems, phase space may differ from space time depending on the system. When vacuum space time is represented as the quantum system under consideration, then the phase space must be the same as space time or be contained within it.
I will argue that the ground state of quantum space time can be uniquely defined by isotropic CMB in flat space, a view shared by some famous names, but not generally accepted in the research community.
In your explanation the other operator M is represented in my system as a fast moving vehicle which locally meets the measurement requirement. In most cases and in all of those described by GR, the moving vehicle referenced to isotropic CMB will not generate a high probability of excited states in vacuum space time.
Unusual cases of very high speed and kinetic energy are most interesting to me. This is where local quantum effects cannot be ignored in GR. I have argued that the probabilities of excited states become significant and the states become superimposed upon each other and upon the ground state.
As kinetic field energy continues to increase, some limit is reached at which a state vector reduction may occur. Physically it is represented as a change of spin angular momentum of the space time quanta from h to 2h. This is the hierarchy of Plancks that occurs in other theories and research of the past 40 years.
If a state vector reduction occurs as R, the unitary evolution U continues locally as kinetic energy continues to increase, but this is not described by GR because h has become a local variable and local space time is in a squeezed quantum state. In this representation the velocity relative to CMB is approaching a substantial fraction of light speed, and the kinetic field energy is sufficient to change the quantum state of local space time as the vehicle passes through it.
In the present question I am asking if quantized vacuum space time can undergo a state vector reduction.
  • asked a question related to Decoherence
Question
5 answers
Two concepts are being explored relating to gravity:
- that gravity occurs as a result of entropic forces, and that the concept can be used to derive the General Relativity Field Equations (and Newtonian Gravity in the appropriate limit)
- that gravitational time dilation can cause Quantum Decoherence, and thus essentially explain the transition from the quantum world to the classical world
Some researchers claim that experimental evidence on ultra-cold neutron energy levels in gravitational fields invalidates the concept of Entropic Gravity. Is such a conclusion valid, and, if so, does it also invalidate the claim that gravitational time dilation causes Quantum Decoherence?
Relevant answer
Answer
The so-called "entropy" doesn't exist at all.
During the process of deriving the so-called entropy, ΔQ/T cannot in fact be turned into dQ/T. That is, the so-called "entropy" doesn't exist at all.
The so-called entropy was a concept derived by mistake in history.
It is well known that calculus has a definition, and any theory should follow the same principles of calculus; thermodynamics, of course, is no exception, for there is no other calculus at all. This is common sense.
Based on the definition of calculus, we know that for the definite integral ∫_T f(T)dQ, only when Q = F(T) is ∫_T f(T)dQ = ∫_T f(T)dF(T) meaningful.
As long as Q is not a single-valued function of T, namely Q = F(T, X, …), then ∫_T f(T)dQ = ∫_T f(T)dF(T, X, …) is meaningless.
1) On the one hand, we all know that Q is not a single-valued function of T; this alone is enough to determine that the definite integral ∫_T f(T)dQ = ∫_T (1/T)dQ is meaningless.
2) On the other hand, in fact Q = f(P, V, T), so ∫_T (1/T)dQ = ∫_T (1/T)df(T, V, P) = ∫_T dF(T, V, P) is certainly meaningless (in ∫_T, T is a subscript).
We know that dQ/T is used for the definite integral ∫_T (1/T)dQ; since ∫_T (1/T)dQ is meaningless, ΔQ/T cannot be turned into dQ/T at all.
That is, the so-called "entropy" doesn't exist at all.
  • asked a question related to Decoherence
Question
48 answers
The explanation of quantum entanglement by "hidden variables" seems to have been eliminated by tests for violation of Bell's Inequalities first by the experiments of Alain Aspect and subsequently by numerous experimenters who produced extended versions which closed "loopholes" in the original tests. All of these seem to confirm that the measurements on the particles, while correlated, must be truly random. However, does that necessarily eliminate hard determinism? If the measurement result comes from decoherence due to interaction with the measuring system, acting as a thermodynamic heatsink, could it actually be the result of the combined properties of the large ensemble of particles such that it is deterministic but unknowable?
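For reference, the quantum prediction that those experiments test can be reproduced in a few lines; this is a sketch assuming the spin-singlet state and one standard choice of CHSH measurement angles.

# CHSH combination for the spin singlet: S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
# Quantum mechanics gives |S| = 2*sqrt(2) for the optimal angles, above the
# local-hidden-variable bound of 2.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def spin(theta):
    # Spin observable along a direction in the x-z plane, angle measured from z.
    return np.cos(theta) * sz + np.sin(theta) * sx

def E(a, b):
    # Correlation <A(a) B(b)> in the singlet state.
    return np.real(singlet.conj() @ np.kron(spin(a), spin(b)) @ singlet)

a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(S)   # about -2.828, i.e. |S| = 2*sqrt(2) > 2, the local-realist bound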
Relevant answer
Answer
Dear Sofia,
Sorry for the long delay.
SW: Nooooo! Oh, I thought that we know what we mean (in the domain of the foundations of QM) by the phrase "common cause". So, you don't know this expression. I'll explain it to you.
I know it well but perhaps in a different form so I'll explain my version. In some studies we might find that two parameters, say A and B, have a relationship and if A occurs before B we might conclude that "A causes B". However that may not be the case because it may be that both A and B have been caused by a third influence C which preceded both, thus the correct picture would be "C causes A and C causes B".
SW: Consider two or more entangled particles - let's speak of two for simplicity, A and B. Common cause of the results produced by A and B when measured, is a hidden variable Λ which takes values at the preparation step, i.e. when a pair is prepared, both A and B are attributed a value of λ of Λ, the same for both particles.
Yes, that seems to be the same as my general version.
SW: The word "common" means common to the two particles in the pair, not common to two subsequent pairs.
Yes, that is what I said, so the idea of delaying one measurement so that the pairs overlap in time doesn't have any consequence as far as I can see: what is common to one pair remains separate from what is common to the other pair.
George: "Right, but the wave-function that defines one pair of entangled electrons is independent of the wave-function that defines the next pair; there isn't a single hidden variable, local or non-local, associated with one pair that would influence the other pair."
Wooow! My God! Let's make order! The wave-function, George, is one and the same all along the experiment. In each and every trial of the experiment the pair is prepared in the same way, i.e. with the same wave-function. For instance, if the wave-function is, say, 2^(-1/2)( α|x>_A |x>_B + β|y>_A |y>_B ), each pair of particles exits the preparation region in the state 2^(-1/2)( α|x>_A |x>_B + β|y>_A |y>_B ).
OK, it is my terminology that is probably a little off the conventional. I understand that that one equation defines the nature of the pair but what I was saying was that two trials are independent so in a sense two separate examples of the same wavefunction. I'm not familiar with the correct jargon here.
SW: The wave-function is not a hidden variable!
Oh certainly not. If anything, I think it is called "hidden" specifically because it would be a physical parameter which does not appear in the wave-function.
SW: If the issue is not clear to you I invite you to tell me. Please reformulate according to the correct terminology and concepts.
It is quite clear and entirely agreed, my layman language is causing confusion.
Not knowing the correct terminology gives me a problem, see if these diagrams help:
SW: So, as you say that you admit that there are no HVs, you rule out common causes.
Not quite, that is where I have a doubt. I've added two diagrams
In the first, we have a spacetime diagram showing the worldlines of two entangled photons (red lines) from where they are generated "G" to their measurement at "A" and "B". Each measurement involves an interaction (horizontal green lines) with an instrument which constitutes a local environment Ea and Eb respectively. The interaction causes decoherence giving a definite result. The idea of "hidden variables" would be a parameter "hv" carried by the photons but not visible, this it seems has been ruled out by experiment.
My question then is whether there might be a back door as shown in the second diagram. The state of the environments is dependent only on what lay within their past light cones (slanting green lines). While they don't fully overlap, the conditions of everything of relevance at some time in the past (blue horizontal line) determines each, Ea by the conditions in region P-Q and Eb by the region R-S. Now the possibility is that there might be a common cause "C.C." which influenced the whole of the region P-S.
What that requires is not that "decoherence causes the measured value" but that "the state of the environment Ea is a common cause of both decoherence and the value measured as A", and similarly for Eb and B.
I'm not arguing that this is the case, just wondering how confidently it can be eliminated.
  • asked a question related to Decoherence
Question
41 answers
Many thinkers reject the idea that large scale persistent coherence can exist in the brain because it is too warm, wet, noisy and constantly interacts, and consequently, is 'measured' by the environment via the senses.
The problem of decoherence is, I suggest, in part at least, a problem of perception - the cognitive stance that we adopt toward the problem. If we examine the problem of interaction with the environment, common sense suggests that we perceive the primary utility of this interaction as being the survival of the organism within its environment. It seems to follow that if coherence is involved in the senses then evolution must have found a way of preserving this quantum state in order to preserve its functional utility - a difficult problem to solve!
I believe that this is wrong! I believe that the primary 'utility' of cognition is that it enables large scale coherent states to emerge and to persist. In other words, I believe that we are perceiving the problem in the wrong way. Instead of asking 'How do large scale coherent states exist and persist given the constant interaction with the environment?', we should ask instead - 'How is cognition instrumental in promoting large scale robust quantum states?'
I think the key to this question lies in appreciating that cognition is NOT a reactive process - it is a pre-emptive process!
Relevant answer
Answer
Let us take an extreme position and see if we can make progress
Let us assume that, instead of quantum coherence being a subsequent add-on to the living process, it is in fact intrinsic to the living process. And let us further assume that quantum coherence in living systems is intrinsically robust, and necessarily so, in order to perform its biological function. Then we may be able to address the problem a different way: by paring the issue down to its very basics we may simplify it enough to see the way forward:
If it is true that consciousness correlates with a macroscopic quantum coherent state,
and if it is also true that this coherent state can effect change in the world of classical physics,
then, given the evidence of our own ontology, the beginning of life on this planet would have coincided with the moment that quantum coherence found a way of breaking through the decoherence barrier and maintaining coherence [employing Occam's razor] as a direct consequence of the way in which that change is effected.
If this argument holds, and if the soliton instrumental in the process of catalysis maintains coherence through the process, then we should discover that cognition is not an aspect of life but definitive of it.
  • asked a question related to Decoherence
Question
37 answers
The standard QM offers no explanation for the collapse postulate.
According to Bohmian mechanics (BM) there is no collapse: there exists a particle following some trajectory, and a detector fires if hit by that particle. However, BM has big problems in what concerns the photon, for which no particle and no trajectory is predicted. Thus, in the case of photons, it is not clear which deterministic mechanism BM suggests instead of the collapse.
The Ghirardi-Rimini-Weber (GRW) theory says that the collapse occurs due to the localization of the wave-function at some point, decided upon by a stochastic potential added to the Schrodinger equation. The probability of localization is very high inside a detector, where the studied particle interacts with the molecules of the material and gathers around itself a bigger and bigger number of molecules. Thus, at some step there grows a macroscopic body which is "felt" by the detector circuitry.
Personally, I have a problem with the idea that the collapse occurs at the interaction of the quantum system with a classical detector. If the quantum superposition is broken at this step, how does it happen that the quantum correlations are not broken?
For instance, in the spin singlet ( |↑>|↓> - |↓>|↑> ) one gets, in a Stern-Gerlach measurement with the two magnetic fields identically oriented, either |↑>|↓> or |↓>|↑>. The quantum superposition is broken. But the quantum correlation is preserved: one never obtains, if the magnetic fields have the same orientation, |↑>|↑> or |↓>|↓>.
WHY SO? Practically, what connection may be maintained between the macroscopic body appearing in one detector and the macroscopic body appearing in another detector, far away from the former? Why is the quantum correlation not broken, while the quantum superposition is broken?
Relevant answer
Answer
Dear Sofia,
I have written about crises, about philosophy, about the history of physics, and about free will, since these issues are directly relevant to your question. It is impossible to answer your question correctly without an understanding of Kant's philosophy. Your question is VERY difficult without an understanding of Kant's philosophy, and the answer to it will be simple and unambiguous if you, like Einstein, saturate yourself with Kant's philosophy. You write: “In the 21th century the result may be at all passed to a completely automatic process. But not this is the issue, whether the observer is human or a machine”. This opinion is typical of most modern scientists, who are sure that we can know Nature as a ’thing-in-itself’ and who do not make the distinction between ’res cogitans’ (thinking entities) and ’res extensa’ (extended entities). I draw your attention to the problem of free will in quantum mechanics since this problem is relevant only if the observer is human, since no machine can have free will. If the observer is a machine and the act of observation in quantum mechanics is a real interaction of the quantum system with a macroscopic body or apparatus, as you think, then the Nobel prize winner Gerard 't Hooft is trying to solve a nonexistent problem of free will, and numerous publications refuting realism (for example The BIG Bell Test Collaboration, Challenging local realism with human choices, Nature 557, 212-216 (2018)) make no sense. I tried to draw attention in my letter to an Associate Editor of Nature (see attached file) to the fact that the believers in quantum mechanics belong to different confessions: the believers of one confession refute realism, whereas the believers of the other confession don't want to hear that quantum mechanics contradicts realism. Your questions and remarks witness that you belong to the latter confession, as do most physicists of the Soviet school. You are sure that quantum mechanics describes real processes. Therefore it is not obvious to you that the "collapse" of the wave-function occurs under the influence of the mind of the observer. It was not obvious to most physicists, but it was obvious to Einstein, Schrodinger, Bell and other critics of quantum mechanics. It is obvious according to the Born interpretation of the wave-function. In order to understand that the collapse of the wave-function occurs under the influence of the mind of the observer, one needs to ask: ”Why was the realistic understanding of the wave function proposed by Schrodinger rejected, and Born's interpretation accepted?” The answer is obvious: ”We cannot think that a real density can change due to an observation, whereas we know from our everyday experience that a probability of observation changes at the first observation.” For example, your friend takes away one dice, without looking, from a box with one white dice and one black dice. You know, before observation of the remaining dice, that your friend will see the white dice with probability 0.5. The probability will become 1 when you see the black dice and 0 when you see the white dice, regardless of the distance between you and your friend. Thus, the wave function describes first of all the state of the mind of the observer, according to Born's interpretation. We can think that only the observer's knowledge changes because of observation in the case of the dice. But the wave-particle duality observed, for example, in the double-slit interference experiment cannot be described if only the observer's knowledge changes. Therefore Dirac postulated in 1930 ”that a measurement always causes the system to jump into an eigenstate of the dynamical variable that is being measured”. The wave function describes the states of both the mind of the observer and the quantum system according to the Dirac jump, or the collapse of the wave-function. This renouncement of the distinction between reality and our knowledge of reality creates the illusion that quantum mechanics can describe the wave-particle duality and other paradoxical quantum phenomena. But it is very strange that only a few scientists understood that this renouncement of realism leads to logical absurdity.
  • asked a question related to Decoherence
Question
2 answers
Based on the popular conclusion that collapse of a false vacuum destabilizes matter, the creation of false vacuums in higher energy scales might be expected to increase the stability of matter.
In other threads I have been exploring the possibility that extreme kinetic field energy of a fast moving vehicle modifies the quantum state of local space, boosting it to a higher scale of false vacuum.
In this context the normal scale represents a ZPE oscillator with spin angular momentum of h. This h is Planck's constant and it extends over the complete range where General Relativity gives accurate predictions.
Higher scales 2*h, 3*h, 4*h, 5*h are the possible modifications of space in the most extreme cases where Quantum Mechanics can not be ignored. These are called the Hierarchy of Plancks and represent the spin angular momentum of ZPE oscillators in quantum states of higher energy. None of these states have constant h. The states also represent false vacuums of higher scales in Scale Relativity and Squeezed Quantum States. Another way to view the same properties is found in the folding of space into layers described by TGD theory, an essential character of space to make worm holes possible.
Matti Pitkänen might explain the physics differently than my engineering representation. We have never agreed on how to compare his work and mine. He provided missing pieces of technology that allowed completion of my engineering project last year. So I have become involved with TGD in unexpected ways.
In other threads the effects of extreme speed on machines and people were explored.
Ulla Mattfolk has collaborated with me on the micro-physical foundation of biology and the possible changes that might occur in higher scales of false vacuums at extremely high speed. Greater stability suggests that activation energies increase with scale, and chemical and biological reactions slow down, affecting response times. Together we developed an argument in support of free will some years ago.
In the present question the stability of isotopes is being considered.
If isotopes of uranium, thorium, and plutonium become progressively more stable as the local false vacuum increases from lower scales to higher scales, then a different operating point on control rods would be expected for each quantum state.
Will Nuclear Power Reactors Operate At Different Control Points In Deep Space At High Speed?
Relevant answer
Answer
A clue is found in the squeezed quantum states of lasers like the ones used in LIGO facilities, where the uncertainty principle is affected, lowering the uncertainty of energy, momentum, time, and distance.
Uncertainty is a factor in the decay modes and half-lives of unstable particles, suggesting again that smaller uncertainty provides greater stability.
  • asked a question related to Decoherence
Question
14 answers
Consider this. The box which contains the cat is completely transparent and there are two experimenters on site. At time Ts, when the experiment starts, one (A) is blindfolded while the other (B) has 20/20 eyesight. For A the cat is in the infamous superposition, while for B it is alive until the cat dies at time Td. At this point in time, B breaks the news to A, who is now "measuring" the collapse of the cat's wave function. Now suppose that instead of blindfolding A we equip him with special glasses whose resolution is larger than the dimensions of the box. Obviously he can't tell what is inside the box and goes back to describing the cat as a superposition.
Yet consider the next twist on the experiment. B is located right at the site of the box while A is being boosted immediately at Ts to a distance which is a lightyear away. Obviously for B the cat is alive for the entire time delT = Td-Ts, while for A it is still in a superposition long after Td.
It seems then that the issue at heart is information vis-a-vis spacetime resolution. Since information travels at the speed of light, the questions of spacetime resolution and information are connected. What A doesn't know is due either to inadequate resolution or to lack of information (or a combination of both).
Now, the LISA Pathfinder rules out breakdown of the quantum superposition principle at the macroscale (due to the mass of the cat - the DP model) -- https://arxiv.org/pdf/1606.04581.pdf. Also note, that quantum gravity induced decoherence (as per Ellis et al.) has been ruled out by LISA.
Obviously we can't account for the mechanism of collapse with the known and possible physics from the macroscale down to quantum gravity.
Now, since for B the theory of probability adequately describes the entire time delT, aren't we pushed to conclude that when it comes to collapse QM is of use to A but is an incomplete theory for B?
Relevant answer
Answer
The mystery vanishes in the de Broglie-Bohm theory. The fate of the cat depends only on the initial position of the particle inside the wave density, which delivers a radioactive decay or not. An observer plays no role. There is no measurement effect on the cat until you modify the experiment/wave function.
See, for example, the YouTube videos.
  • asked a question related to Decoherence
Question
3 answers
All the research papers I have found so far just show measurement of the squeezing parameter or the quantum Fisher information (QFI). Of course, the authors mention that, due to large QFI or strong squeezing, the setup can be used for metrological purposes beyond the standard quantum limit (SQL). I could not find any papers which actually perform estimation of an unknown phase and show that the precision is beyond the SQL. I am curious from the point of view of estimation in the presence of decoherence (which is always present). Theoretical papers indicate that entangled states are basically useless if frequency is estimated (e.g. Ramsey spectroscopy).
Relevant answer
Answer
I am not actually an expert in this field and I am not sure whether the following paper is of help to you; I just mention it in the hope of learning something from you and others.
Entanglement-free Heisenberg-limited phase estimation
BL Higgins, DW Berry, SD Bartlett, HM Wiseman, GJ Pryde, Nature 450 (7168), 393
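Complementing the cited paper, here is a minimal sketch of where the claimed advantage comes from in the noiseless case: the quantum Fisher information F_Q for a phase generated by J_z, and the Cramér-Rao bound Δφ ≥ 1/sqrt(ν F_Q). The state choices and N below are illustrative, and decoherence, which is the point of the question, is not included.

# Quantum Fisher information for a phase imprinted by U = exp(-i*phi*Jz) on pure states:
# F_Q = 4 * Var(Jz).  The product state |+>^N gives F_Q = N (standard quantum limit),
# the GHZ state gives F_Q = N^2 (Heisenberg limit).  Cramer-Rao: dphi >= 1/sqrt(nu * F_Q).
import numpy as np
from functools import reduce

def kron_all(ops):
    return reduce(np.kron, ops)

N = 4
I2 = np.eye(2)
sz = np.diag([1.0, -1.0])
Jz = sum(kron_all([sz / 2 if i == j else I2 for i in range(N)]) for j in range(N))

plus = np.ones(2) / np.sqrt(2)
product = kron_all([plus] * N)                              # |+>^N
ghz = np.zeros(2 ** N); ghz[0] = ghz[-1] = 1 / np.sqrt(2)   # (|0..0> + |1..1>)/sqrt(2)

def qfi(state):
    mean = state @ Jz @ state
    mean_sq = state @ Jz @ Jz @ state
    return 4 * (mean_sq - mean ** 2)

print(qfi(product), qfi(ghz))    # N and N^2, here 4.0 and 16.0
# Under dephasing the GHZ advantage degrades quickly, which is exactly the concern in the question.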
  • asked a question related to Decoherence
Question
104 answers
It is stated in some interpretations of QM (notably the Copenhagen interpretation) that an object stays in superposition (SP) as long as no human brain (of a good physicist, just kidding) observes it and thereby projects it onto a definite state (dead/alive). But I think the following very simple example disproves this statement. Consider a working clock in a closed room. It was left showing 2 o'clock and is not observed (it is even put in vacuum to avoid decoherence induced by the medium), e.g. closed there for 2 hours; then, on entering the room, one of course sees that it is showing 4 o'clock instead of 2 o'clock. But according to the above-mentioned interpretations, at every instant it must have been in a superposition of showing the next second or not. If observation is what causes it to move, then there was no observation, and hence it must still be showing 2 o'clock (analogously to the cat being both dead and alive). So I think this disproves the statements about staying in SP until a brain or the environment appears to project it. Do you agree?
Relevant answer
Answer
Danielle,
I wonder, in your theory, what is the width of a photon? It turns out that it has macroscopic dimensions? (X = lambda, and even more, d + 2X.) All current data show that it is pointlike. At least, in your theory it can react with an electron, which is much smaller.
Very important: does your photon split in two in the DSE, and why is it never observed split?
  • asked a question related to Decoherence
Question
5 answers
What are the practical applications for three-dimensional representation of a geometric phenomenon in the fourth dimension (movement / metamorphosis of an object over time)? 
With the help of multidimensional descriptive geometry, all information related to the movement or metamorphosis of an object in time can be represented in three-dimensional or two-dimensional projections. I am looking for practical applications of this method which could be used in various fields of activity.
Relevant answer
Answer
It is necessary to split the four-dimensional manifold into 3-space and time. If 4-space is flat (Minkowski space), this splitting is simplest in a reference system that moves with a constant velocity; the 3-space is orthogonal to the time in this case. If we use the curved space-time of GR, this operation is possible only locally (at every 4-point). We split the space-time (locally) into 3-space (which can rotate, deform, and move with acceleration) and time (whose rate can depend on the velocity of the 3-space rotation and on the gravitational potential at every point). This operation is shown in detail in Zelmanov's theory of physically observable values. Maybe it is possible to find other analogous theories, built on a space whose metric is the sum of the fundamental metric tensor and the product of the unit 4-velocity vectors connected with an observer. But Zelmanov's theory contains an exact method for calculating all spatial characteristics (gravitational force, rotation, deformation, and the 3-curvature tensor). It also allows one to calculate the rate of time.
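For readers unfamiliar with this construction, the local observer split can be summarized with the standard projection tensor, written here in a generic textbook form with signature (-,+,+,+), not necessarily in Zelmanov's own notation:

h_{\mu\nu} = g_{\mu\nu} + u_{\mu} u_{\nu} , \qquad u_{\mu} u^{\mu} = -1 , \qquad h_{\mu\nu} u^{\nu} = 0 ,

so that h_{\mu\nu} projects any tensor onto the local 3-space orthogonal to the observer's 4-velocity u^{\mu}, while -u_{\mu} dx^{\mu} gives the increment of the observer's proper time.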
  • asked a question related to Decoherence
Question
3 answers
How does the length of a vector (state) get shortened in the Bloch sphere? When we physically implement a depolarizing channel on a state lying in or on the Bloch sphere, its length is depolarized (shortened) while its direction remains unchanged. Please explain.
Relevant answer
Answer
It may be helpful to think about depolarizing channels in another way:  with some probability the qubit is discarded and replaced with a qubit in a maximally mixed state; otherwise it is unchanged.  So the state of the qubit becomes a mixture of the original state and the maximally mixed state.
In the Bloch sphere picture, the original state is represented by some vector, and the maximally mixed state is represented by the zero vector. A mixture of the two will be represented by a linear combination of the two vectors, which will be in the same direction as the vector for the original state, but shorter.
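The same statement in a few lines of Python; this is a sketch only, and the input Bloch vector and error probabilities are arbitrary.

# Depolarizing channel: rho -> (1 - p) * rho + p * I/2.
# In the Bloch picture the Bloch vector keeps its direction but shrinks by (1 - p).
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch(rho):
    return np.real([np.trace(rho @ s) for s in (sx, sy, sz)])

r = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)          # some pure-state Bloch vector
rho = 0.5 * (np.eye(2) + r[0] * sx + r[1] * sy + r[2] * sz)

for p in (0.0, 0.3, 0.6):
    rho_out = (1 - p) * rho + p * np.eye(2) / 2
    print(p, np.round(bloch(rho_out), 3))            # same direction, length scaled by (1 - p)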
  • asked a question related to Decoherence
Question
98 answers
Different physicists disagree on whether there is such a thing as the wave function of the universe.
- In favor of its existence is the fact that, in the Big Bang picture, all particles (and hence downstream objects) were correlated at the inception of the Universe, and a correlation that has existed at some point in the past ever so loosely continues thereafter since full decoherence never truly sets in. A number of pictures  - Ghirardi-Rimini-Weber, Bohm, even Hugh Everett, et al., - require the existence of the wave function of the universe, denoted Ψ(U).
- Two main categories of objections however belie its existence.
The first category ultimately boils down to a not very solid rejection of non-separability, i.e. to an argument that a full separation between an observer and an observed must always be upheld if any observation or measure is to be objectively valid, and a wave function ascertainable.
The second argument is more compelling, and says that if Ψ exists, then Ψ(U)=Ψ(Ψ,U) in a closed, self-referential loop. Ψ has thereby become a non-observable, unknowable quantity, and as such is better relegated to the realm of metaphysics than physics.
 
What say you?
Relevant answer
Answer
The question puts me in the position of answering yes twice, where both answers are based on reasons that belong much more to those who reject this theory than to those who accept it.
1. I am much more a convinced fan of the determinist views of physics than of the non-determinist theories. The paradox is that, as a determinist, I agree more with the existence of a wave function of the universe than with its non-existence. A general coherence, or what remains of it, matches better with my intuition of a strict and many-dimensional chain of causalities.
2. I cannot accept the big bang as a possible beginning of the whole universe. I accept it only as one accepts the hypothesis of a local event that determines all the visible universe and maybe much more, but still just a small neighborhood of ours relative to the universe. Seen like this, the big bang is also a theory with local applicability, as all other physical theories before it. But exactly because of its locality, I have one reason fewer to deny the existence of the universal wave function, although the circularity Ψ(U)=Ψ(Ψ,U) would locally persist!
Isn't this a funny situation?
  • asked a question related to Decoherence
Question
12 answers
Non-locality is a curious feature, yet essentially a quantum attribute that is linked to the violation of Bell inequality of any form. It arises from the impossibility of simultaneous joint measurements of observables. The Clauser-Horne-Shimony-Holt (CHSH) inequality is the only extremal Bell inequality with two settings and two outcomes per site. This inequality provides a basis to compare predictions of quantum theories with those linked local realism.
During non-Markovian dynamics of open quantum systems, there is a breakdown of the well-known Markovian model. This may occur due to strong system-environment coupling or when unfactorized initial conditions exist between the system and environment. Notably, a statistical interpretation of the density matrix is not defined for non-Markovian evolution.
My question is: Is there increased non-locality when a system undergoes non-Markovian dynamics and, if so, how can this be quantified? I used the word "increased" because non-locality may be present in the case of Markovian dynamics, and the query focuses on whether certain aspects of non-Markovian dynamics accentuate non-locality.
Relevant answer
Answer
@Leyvraz Non-Markovianity has been linked quantitatively to different distance measures (e.g., trace distance, Bures distance, Hilbert-Schmidt distance). However, these measures vary due to their inherently different characteristics, and as such there is no unique definition of non-Markovianity in quantum systems. In a sense, if there is a measure that responds to deviations from the continuous, memoryless, completely positive semigroup feature of Markovian evolution, then it may qualify as a tool to quantify non-Markovianity. The concepts of divisibility and distinguishability have been very useful in identifying violations of complete positivity during the evolution of a system. So to answer your question: there is no single quantity that I have in mind; as long as "it" is linked to the breakdown of complete positivity during evolution, it will do OK. Even the increase of trace distance during time intervals (for instance) may be taken as a sufficient but not necessary signature of non-Markovianity. This shows the complexities of non-Markovian dynamics.
It is challenging as well to quantify Non-locality, and there are attempts to involve  the  Bell inequality and variations thereof  to quantify the boundaries of non-local events. So we have hurdles in defining the problem even before resorting to solving it!
Perhaps it is a good start to seek a link (implicit or otherwise) between non-Markovianity and non-locality. To this end, it would be interesting to examine the role of the mathematical mappings known as assignment or extension maps within the context of this problem. Assignment maps provide a description of how a subsystem is embedded within a larger system. These maps can take on negative values in some instances, and the quantum system then proceeds to evolve with signatures of non-Markovian dynamics for a given period of time. Will further examination of assignment maps help establish a link between increased non-Markovianity and non-locality? What is the role of Minkowski space and geometric algebra in this regard? Are they better platforms with which to examine this problem, which appears ill defined at the moment?
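As a concrete handle on the trace-distance (BLP-type) witness mentioned above, here is a minimal sketch; the qubit dephasing model and the coherence function are illustrative assumptions, not a derived open-system model.

# BLP-style witness: evolve two distinguishable initial states through the same
# dephasing channel and track their trace distance. Any increase of the distance
# in time signals information flowing back from the environment (non-Markovianity).
import numpy as np

def dephase(rho, f):
    out = rho.copy()
    out[0, 1] *= f
    out[1, 0] *= f
    return out

def trace_distance(r1, r2):
    eig = np.linalg.eigvalsh(r1 - r2)
    return 0.5 * np.sum(np.abs(eig))

plus = np.array([1.0, 1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)
rp, rm = np.outer(plus, plus), np.outer(minus, minus)

for t in np.linspace(0, 3, 7):
    f = np.exp(-0.3 * t) * np.cos(2 * t)     # non-monotonic |f|: toy memory effects
    print(round(float(t), 2), round(trace_distance(dephase(rp, f), dephase(rm, f)), 3))
# The distance first shrinks, then partially revives; the revival is the witness.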
  • asked a question related to Decoherence
Question
8 answers
In many experiments in quantum mechanics, a single photon is sent to a mirror which it passes through or bounces off with 50% probability, then the same for some more similar mirrors, and at the end we get interference between the various paths. This is fairly easy to observe in the laboratory.
The interference means there is no which-path information stored anywhere in the mirrors. The mirrors are made of 10^20-something atoms, they aren't necessarily ultra-pure crystals, and they're at room temperature. Nonetheless, they act on the photons as very simple unitary operators. Why is it that the mirrors retain no or very little trace of the photon's path, so that very little decoherence occurs?
In general, how do I look at a physical situation and predict when there will be enough noisy interaction with the environment for a quantum state to decohere?
Relevant answer
Answer
I cannot give quite a satisfactory answer. However, I do believe an essential ingredient is the fact that the interaction of any photon with any particular atom in the beam splitter is exceedingly small. The reflection arises as a collective, or coherent, effect. For this reason, at least, the large number of atoms does not necessarily lead to decoherence. As for temperature, I would argue that room temperature does not lead to the presence of optical thermal photons, so that for an optical photon things are OK. In other words, thermal lattice vibrations do not interact with the incident photon, because of a considerable energy and frequency mismatch.
Initially I might have said: it is because no energy gets transferred from the photon to the beam splitter. This, however, is not true, since there are plenty of instances in which decoherence arises without energy transfer. 
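One minimal way to quantify this point: if the two paths leave the mirror/environment in states E_a and E_b, the fringe visibility is |<E_a|E_b>|. The sketch below assumes a crude product-state model in which each of n environment modes is displaced by a tiny amount eps per path, so the overlap is exp(-n*eps^2/2); the numbers are purely illustrative.

# Which-path decoherence in one line of algebra: visibility = |<E_a|E_b>|.
# Assumed toy model: n environment modes, each shifted by eps depending on the path,
# giving a Gaussian (coherent-state-like) overlap exp(-n * eps**2 / 2).
import numpy as np

def visibility(n_modes, eps):
    return np.exp(-n_modes * eps**2 / 2)

print(visibility(n_modes=1e20, eps=1e-15))   # ~1: essentially no which-path record, interference survives
print(visibility(n_modes=1e20, eps=1e-9))    # ~0: a macroscopic record, full decoherence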
  • asked a question related to Decoherence
Question
19 answers
Take the example of a spin measurement on an entangled pair of particles, particle 1 and particle 2. The correlation between the spin of particle 1 (spin1) and the spin of particle 2 (spin2) is the result of angular momentum conservation. If the decoherence interpretation is correct, then the angular momentum is conserved within each of the infinite number of decohering branches separately. However, the conservation laws in quantum mechanics (formulated in the Heisenberg picture) only demand that the commutator of the operator representing the conserved observable with the total Hamiltonian of the whole system is zero. There seems to be no reason why it should be conserved within each decohering branch separately.
Relevant answer
Answer
It is somewhat disheartening that, apparently, many physicists are not even capable of correctly describing a Stern-Gerlach spin-component measurement. In such a measurement, a specific COMPONENT of the quantum-mechanical spin of, e.g., an atom is measured - let's say the z-component. In this experiment, the measuring device does NOT exhibit full rotational symmetry, but only symmetry under rotations around the z-axis (the device typically involves an external magnetic field parallel to the z-axis, which breaks full rotational invariance but preserves the symmetry of rotations around the z-axis). Hence, in the course of a measurement, only the z-component of the total spin of the observed system is conserved! The total spin, as measured by \vec{S}\cdot\vec{S}, is NOT conserved in the measurement. (For two atoms, each with spin 1/2, prepared in a spin-singlet initial state (i.e., s = 0), the state AFTER a measurement of the z-component of one of the two atoms is a superposition of states of spin s = 0 and spin s = 1; but the z-component of the total spin remains 0!)
Obviously, it is IMPOSSIBLE to measure ALL components of a quantum-mechanical spin in a single measurement, because the operators representing these components do not commute among each other.
When one observes photons instead of atoms, one measures HELICITIES of photons (rather than "spin"). The direction of propagation of the photons (as determined, for example, by some optical fiber through which they propagate) then breaks rotational symmetry. It is therefore inaccurate to say that the observed correlations between the helicities of two photons are a consequence of the initial state of the two photons having "spin = 0"!
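The claim in the parenthetical remark above can be checked directly; this is a small numerical sketch with ħ = 1, where the projector models an ideal measurement of S_z on particle 1 with outcome "up".

# Measuring S_z of one spin in the singlet preserves the total z-component (still 0)
# but not the total spin S^2 (no longer 0): the post-measurement state mixes s = 0 and s = 1.
import numpy as np

sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)   # (|01> - |10>)/sqrt(2)

Sz_tot = np.kron(sz, I2) + np.kron(I2, sz)
S_tot = [np.kron(s, I2) + np.kron(I2, s) for s in (sx, sy, sz)]
S2 = sum(S @ S for S in S_tot)

P_up = np.kron(np.diag([1.0, 0.0]), I2)          # projector: particle 1 found "up"
post = P_up @ singlet
post = post / np.linalg.norm(post)               # state after the measurement

def expval(op, psi):
    return np.real(psi.conj() @ op @ psi)

print(expval(Sz_tot, singlet), expval(S2, singlet))   # 0.0, 0.0  (the singlet has s = 0)
print(expval(Sz_tot, post), expval(S2, post))         # 0.0, 1.0  (Sz still 0, S^2 no longer 0)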
  • asked a question related to Decoherence
Question
16 answers
The equations from the Standard Model of quantum field theory are time-reversal invariant. However, there are papers that suggest decoherence leads to entropy, e.g. "Decoherence and Dynamical Entropy Generation in Quantum Field Theory", Jurjen F. Koksma et al., Phys. Lett. B (2011). There are other papers that argue entropy emerges from simple two-particle entanglement. Is there any definitive answer? Regarding general relativity, there have been numerous papers arguing that gravity is an entropic force. But, as I understand it, the proper way of demonstrating this theoretically is still unsettled. Is that right?
Relevant answer
Answer
Re. entropy and QM, this paper is interesting: http://cdn.intechopen.com/pdfs-wm/29594.pdf
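Regarding the "entropy from two-particle entanglement" argument mentioned in the question, the basic mechanism fits in a few lines; this sketch uses a Bell state, and the choice of state is illustrative only.

# The global two-particle state is pure (zero entropy), yet tracing out one particle
# leaves a mixed state with nonzero von Neumann entropy: entropy from entanglement alone.
import numpy as np

def von_neumann_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)    # (|00> + |11>)/sqrt(2)
rho_AB = np.outer(bell, bell.conj())

# partial trace over the second particle
rho_A = rho_AB.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(von_neumann_entropy(rho_AB))   # 0.0 : the pair as a whole is pure
print(von_neumann_entropy(rho_A))    # 1.0 : one bit of entropy for the subsystem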