Uploaded by Sergei Viznyuk, Jun 30, 2020.
I critically assess some published values for decoherence times. I show there
are two characteristic times, inversely proportional to each other: the decoherence
time and the probability decay time, with the second often mistaken for the first. I
present formulas for the decoherence and decay times.
While decoherence has been extensively studied, surprisingly little is available in terms of
numeric results, either calculated from formulas [1, 2] or measured in experiments [3, 4]. The
reason for researchers shying away from publishing numbers becomes apparent once these
numbers are estimated from widely quoted expressions [5, 1]. The estimated decoherence
times span an incredible range, from the value quoted for a large molecule immersed in the cosmic
microwave background to the value quoted for a "canonical" classical object of given mass,
characteristic length, and temperature, with an assumed characteristic "relaxation" time.
An interval that short cannot conceivably correspond to any physical process in a macroscopic
object, as no parameter can change in any meaningful way during such a time. To note, even
light crossing a proton radius takes about 3×10⁻²⁴ s, many orders of magnitude longer. Due to
time-energy uncertainty, a change in the object's state of such duration would result in an energy
uncertainty large enough to evaporate the object. Such improbable times call into question
the premise the decoherence estimates are based on.
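The scale of this argument can be sketched numerically. In the snippet below, the 10⁻⁴⁰ s decoherence interval is an assumed placeholder standing in for the extreme published values (the specific figures did not survive in this copy); the physical constants are standard:

```python
# Sanity check on ultra-short decoherence times (illustrative values only).
HBAR = 1.054_571_817e-34       # reduced Planck constant, J*s
C = 2.997_924_58e8             # speed of light, m/s
E_CHARGE = 1.602_176_634e-19   # J per eV
R_PROTON = 0.84e-15            # proton charge radius, m

# Time for light to cross a proton radius, ~2.8e-24 s:
t_proton = R_PROTON / C
print(f"light crossing proton radius: {t_proton:.2e} s")

# A hypothetical decoherence interval of 1e-40 s (assumed placeholder):
dt = 1e-40
dE = HBAR / dt                 # time-energy uncertainty, Joules
print(f"t_proton / dt: {t_proton / dt:.1e}")
print(f"implied energy uncertainty: {dE / E_CHARGE:.1e} eV")
```

Even this crude estimate shows the implied energy uncertainty exceeds any realistic binding energy of a macroscopic object by many orders of magnitude.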
Part of the problem, at least for some authors, is that "no clear, unambiguous and universally
accepted definition of coherence (which is supposed to get lost in the process) is available".
This leads to confusion as to which process's characteristic time to take as the decoherence
time. The common view links coherence with the presence of interference terms in the density matrix,
and their reduction with decoherence. With this understanding, can the characteristic time of the
reduction of interference be taken as the decoherence time? As I explain below, it cannot.
The reduction of the object's density matrix is achieved by tracing out an entangled ancilla system,
often assumed to be the environment. As has been shown elsewhere, the "tracing out"
operation is nothing other than a measurement performed on the ancilla. The decoherence time τ_d is,
therefore, the interval between state preparation and the subsequent measurement, even if only on the
ancilla part of the state. The classical information extracted by the measurement on the ancilla reduces
the ambiguity in the measurement of the entangled object's state, i.e. reduces the interference terms
in the density matrix. The reduction of the density matrix signifies the decay of the object's
probability distribution.
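As an illustration of the reduction described above, the following sketch (my own example, not taken from the cited works) traces out the ancilla from a maximally entangled two-qubit state and shows the interference terms vanish from the object's density matrix:

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Unentangled object in a pure superposition (|0> + |1>)/sqrt(2):
plus = (ket0 + ket1) / np.sqrt(2)
rho_pure = np.outer(plus, plus.conj())
print(np.round(rho_pure.real, 3))   # off-diagonal (interference) terms = 0.5

# Object entangled with an ancilla: (|00> + |11>)/sqrt(2)
psi = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
rho_full = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)

# Trace out the ancilla (second index of each pair):
rho_obj = np.einsum('ikjk->ij', rho_full)
print(np.round(rho_obj.real, 3))    # off-diagonals vanish: complete mixture
```

The reduced matrix is the complete mixture I/2: the point at the center of the Bloch ball, in the language used below.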
One would intuitively expect that the faster the ancilla is measured, the faster the interference decays.
In fact, the relation between the decoherence time defined above and the decay of the probability
distribution is inverse: the smaller the decoherence time τ_d, the slower the decay. One can understand
this by considering measurements on the ancilla as a random walk from the surface of the generalized
Bloch ball towards its interior. A point on the surface of the Bloch ball corresponds to the initial pure
state, and the point at the center corresponds to the complete mixture. The larger the steps (i.e. the
larger τ_d), the fewer steps are needed to reduce the density matrix, and the faster the probability decay.
It can also be schematically proven as follows. The distance from the surface of the Bloch ball signifies
the probability decay. The random walk is described by a binomial distribution with probability p = 1/2
of a step in either direction. The variance of the binomial distribution after N random steps is given by:

σ² = N p (1 − p) = N/4    (1)

(*) The measurement is performed on the ancilla, and not by the ancilla, as often mistakenly claimed.
(**) The preparation is also a measurement, just with a different device, in a different measurement basis.
(***) This is also known as the quantum Zeno effect.
The standard deviation √N/2, multiplied by the length of each step, gives the distance gained from
the surface of the Bloch ball, i.e. the probability decay. If the elapsed time is t and the decoherence
time per step is τ_d, then N = t/τ_d. Using (1), the distance gained grows as the step length times
√(t/τ_d); since the step length grows with τ_d, the decay is faster for larger τ_d, i.e. the probability
decay time is inversely proportional to the decoherence time τ_d. The expression (3) for the probability
decay has been obtained in Section 4 of the cited work; there, t is the elapsed time, τ_p is the
characteristic decay time, ε is explained below, and the remaining parameter is calculated from the
object's energy spectrum as (4).
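The random-walk scaling can be checked with a short Monte Carlo sketch (my own one-dimensional simplification; it assumes the step length is proportional to the decoherence time per step, as in the argument above):

```python
import random

def walk_distance(tau_d, t_total, trials=2000, seed=1):
    """Mean |displacement| of a 1-D random walk after time t_total,
    taking a step of length tau_d every tau_d time units (p = 1/2 each way)."""
    rng = random.Random(seed)
    n_steps = round(t_total / tau_d)
    total = 0.0
    for _ in range(trials):
        pos = 0.0
        for _ in range(n_steps):
            pos += tau_d if rng.random() < 0.5 else -tau_d
        total += abs(pos)
    return total / trials

# Same elapsed time, two different decoherence times per step:
d_small = walk_distance(tau_d=0.01, t_total=1.0)
d_large = walk_distance(tau_d=0.04, t_total=1.0)
print(d_small, d_large)   # larger tau_d -> larger distance from the surface
```

The walker with the larger τ_d covers roughly twice the distance here (the displacement scales as √(t·τ_d)), illustrating that a larger decoherence time per step yields faster probability decay.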
A generic formula (5) for the decoherence time τ_d has been obtained using Fermi's golden rule;
there ρ is the density of states, i.e. the number of states per unit energy interval around the energy
of the object. From (3) and (5), the characteristic probability decay time τ_p is given by (6).
For a two-level system, the energy-spectrum parameter is simply the difference in energy levels,
ΔE = E₁ − E₀. Assuming no degeneracy, there is one state per energy interval ΔE, i.e. ρ = 1/ΔE.
Then, from (5) and (6), one obtains (7): for a non-degenerate two-level system, the probability decay
time and the decoherence time are both equal to the Margolus-Levitin bound. A sensible value for the
decoherence time has been obtained from experimental data on spectral linewidth, effectively using
formula (7), even though (7) is only applicable to non-degenerate two-level systems.
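For orientation, the Margolus-Levitin bound t = πħ/(2E), with E the mean energy above the ground state, can be evaluated directly; the 1 eV energy in the sketch below is an assumed example value:

```python
import math

HBAR = 1.054_571_817e-34   # reduced Planck constant, J*s
EV = 1.602_176_634e-19     # J per eV

def margolus_levitin(energy_joules):
    """Minimum time to evolve to an orthogonal state, for mean energy
    E above the ground state (Margolus-Levitin bound: pi*hbar / (2E))."""
    return math.pi * HBAR / (2 * energy_joules)

# Assumed example: mean energy 1 eV above the ground state
tau = margolus_levitin(1.0 * EV)
print(f"{tau:.2e} s")   # ~1.0e-15 s, femtosecond scale
```

For atomic-scale energies the bound lands in the femtosecond range, a far more plausible decoherence timescale than the extreme values discussed above.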
The value of the density of states ρ can be quite large for macroscopic objects. That explains why the
probability decay time (6) can be incredibly small. One has to keep in mind that the probability decay
(3) does not describe a change in any particular object's state. It only means a reduction in the
correlation between measurement outcomes obtained by different devices when performing measurements
on an ensemble of identically prepared objects. Each measurement and the ensuing [partial] decoherence
has the characteristic time (5). One cannot measure the probability distribution (3) on timescales
shorter than (5); on such timescales, (3) is a mere abstraction.
I shall now expound on the value of the parameter ε in (3). A measurement outcome is one of K
possible events. The measurement event sample represents the collected information about the
object's state; here nᵢ is the number of occurrences of event i in the sample. There are W
distinct ways to collect the event sample, where W is the statistical weight of the sample:
W = n!/(n₁!⋯n_K!) = e^S, where n is the sample size and S is Boltzmann's entropy of the sample.
Different ways to collect the sample (i.e. different event sequences) correspond to different
correlations between events. One of the ways to collect the event sample has the same sequence of
events as the measurement sample collected on the initial pure state. In full decoherence, all ways
to collect the event sample have equal probability; thus ε = 1/W = e^(−S) is the minimum probability
that the measured object is in the initial pure state, with the confidence (fidelity) provided in
the cited work.
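The statistical weight and entropy of an event sample, and the corresponding minimum probability (which I denote ε, for the parameter in (3)), can be computed directly; the sample counts below are an assumed example:

```python
import math

def sample_weight(counts):
    """Statistical weight W = n!/(n_1! ... n_K!) of an event sample,
    where counts[i] is the number of occurrences of event i."""
    n = sum(counts)
    w = math.factorial(n)
    for ni in counts:
        w //= math.factorial(ni)   # exact integer division, W is always integral
    return w

counts = [3, 2, 1]        # assumed sample: n = 6 events of K = 3 kinds
W = sample_weight(counts)
S = math.log(W)           # Boltzmann entropy of the sample
eps = 1.0 / W             # minimum probability of the initial pure state
print(W, round(S, 3), eps)
```

For macroscopic samples W grows combinatorially, so ε = e^(−S) becomes astronomically small, consistent with the discussion of macroscopic objects above.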
[1] M. Schlosshauer, "Quantum Decoherence," arXiv:1911.06282 [quant-ph], 2019.
[2] J. Anglin, J. Paz and W. Zurek, "Deconstructing Decoherence," arXiv:quant-ph/9611045, 1996.
[3] A. Berkley, H. Xu, R. Ramos, M. Gubrud, F. Strauch, P. Johnson, J. Anderson, A. Dragt, C. Lobb and F. Wellstood, "Entangled Macroscopic Quantum States in Two Superconducting Qubits," Science, vol. 300, no. 5625, pp. 1548-1550, 2003.
[4] M. Steffen, M. Ansmann, R. Bialczak, N. Katz, E. Lucero, R. McDermott, M. Neeley, E. Weig, A. Cleland and J. Martinis, "Measurement of the Entanglement of Two Superconducting Qubits via State Tomography," Science, vol. 313, no. 5792, pp. 1423-1425, 2006.
[5] W. Zurek, "Reduction of the Wavepacket: How Long Does it Take?," arXiv:quant-ph.
[6] E. Okon and D. Sudarsky, "Less Decoherence and More Coherence in Quantum Gravity, Inflationary Cosmology and Elsewhere," arXiv:1512.05298 [quant-ph], 2015.
[7] S. Viznyuk, "No decoherence by entanglement," 2020. [Online].
[8] D. Aerts and M. Sassoli de Bianchi, "The extended Bloch representation of quantum mechanics and the hidden-measurement solution to the measurement problem," Annals of Physics, vol. 351, pp. 975-1025, 2014.
[9] S. Viznyuk, "From QM to KM," 2020. [Online].
[10] N. Margolus and L. Levitin, "The maximum speed of dynamic evolution," arXiv:quant-ph.
[11] S. Viznyuk, "The measurement and the state evaluation," 2020. [Online].
[12] B. Misra and E. Sudarshan, "The Zeno's paradox in quantum theory," Journal of Mathematical Physics, vol. 18, no. 4, pp. 756-763, 1977.