arXiv:quant-ph/0603135v1 15 Mar 2006
Interaction in Quantum Communication
Hartmut Klauck, Ashwin Nayak, Amnon Ta-Shma and David Zuckerman
Hartmut is with the Department of Computer Science and Mathematics, University of Frankfurt, Robert Mayer
Strasse 11-15, 60054 Frankfurt am Main, Germany. His research is supported by DFG grant KL 1470/1. E-
mail: klauck@thi.informatik.uni-frankfurt.de. Most of this work was done while Hartmut was with the University
of Frankfurt, and later with CWI, supported by the EU 5th framework program QAIP IST-1999-11234 and
by NWO grant 612.055.001. Ashwin is with Department of Combinatorics and Optimization, and Institute for
Quantum Computing, University of Waterloo, 200 University Ave. W., Waterloo, ON N2L 3G1, Canada, E-mail:
anayak@math.uwaterloo.ca. He is also Associate Member, Perimeter Institute for Theoretical Physics, Canada.
Ashwin’s research is supported in part by NSERC, CIAR, MITACS, CFI, and OIT (Canada). Parts of this work
were done while Ashwin was at University of California, Berkeley, DIMACS Center and AT&T Labs, and California
Institute of Technology. Amnon is with the Dept. of Computer Science, Tel-Aviv University, Israel 69978, E-mail:
amnon@post.tau.ac.il. This research was supported in part by Grant No 2004390 from the United States-Israel
Binational Science Foundation (BSF), Jerusalem, Israel. A part of this work was done while Amnon was at the
University of California at Berkeley, and supported in part by a David and Lucile Packard Fellowship for Science
and Engineering and NSF NYI Grant CCR-9457799. David is with the Dept. of Computer Science, University
of Texas, Austin, TX 78712, E-mail: diz@cs.utexas.edu. This work was done while David was on leave at the
University of California at Berkeley. Supported in part by a David and Lucile Packard Fellowship for Science
and Engineering, NSF Grant CCR-9912428, NSF NYI Grant CCR-9457799, and an Alfred P. Sloan Research
Fellowship.
February 1, 2008 DRAFT
Abstract
In some scenarios there are ways of conveying information with many fewer, even exponentially fewer, qubits than possible classically [1], [2], [3]. Moreover, some of these methods have a very simple structure: they involve only a few message exchanges between the communicating parties. It is therefore natural to ask whether every classical protocol may be transformed into a "simpler" quantum protocol, one that has similar efficiency but uses fewer message exchanges.
We show that for any constant k, there is a problem such that its k + 1 message classical communication complexity is exponentially smaller than its k message quantum communication complexity. This, in particular, proves a round hierarchy theorem for quantum communication complexity, and implies, via a simple reduction, an Ω(N^{1/k}) lower bound for k message quantum protocols for Set Disjointness for constant k.
En route, we prove information-theoretic lemmas and define a related measure of correlation, the informational distance, that we believe may be of significance in other contexts as well.
I. Introduction
A recurring theme in quantum information processing has been the idea of exploiting
the exponential resources afforded by quantum states to encode information in very non-
obvious ways. One representative result of this kind is due to Ambainis, Schulman, Ta-
Shma, Vazirani, and Wigderson [2]. They show that two players can deal a random set of √N cards each, from a pack of N cards, by the exchange of O(log N) quantum bits between them. Another example is given by Raz [3] who shows that a natural geometric
promise problem that has an efficient quantum protocol, is hard to solve via classical
communication. Both are examples of problems for which exponentially fewer quantum
bits are required to accomplish a communication task, as compared to classical bits. A
third example is the O(√N logN) qubit protocol for Set Disjointness due to Buhrman,
Cleve, and Wigderson [1], which represents quadratic savings in the communication cost
over classical protocols.
The protocols presented by Ambainis et al. [2] and Raz [3] share the feature that they
require minimal interaction between the communicating players. For example, in the
protocol of Ambainis et al. [2] one player prepares a set of qubits in a certain state and
sends half of the qubits across as the message, after which both players measure their qubits
to obtain the result. In contrast, the protocol of Buhrman, Cleve and Wigderson [1] for
checking set disjointness (DISJ) requires Ω(√N) messages. This raises a natural question:
Can we exploit the features of quantum communication and always reduce interaction
while maintaining the same communication cost? In particular, are there efficient quantum
protocols for DISJ that require only a few messages?
Kitaev and Watrous [4] show that every efficient quantum interactive proof can be trans-
formed into a protocol with only three messages of similar total length. This suggests that
it might be possible to reduce interaction in other protocols as well. In this paper we show
that for any constant k, there is a problem such that its k + 1 message classical commu-
nication complexity is exponentially smaller than its k message quantum communication
complexity, thus answering the above question in the negative. This, in particular, proves
a round hierarchy theorem for quantum communication complexity, and implies, via a
simple reduction, polynomial lower bounds for constant round quantum protocols for Set
Disjointness.
Our Separation Results
The role of interaction in classical communication is well-studied, especially in the context of the Pointer Jumping function [5], [6], [7], [8], [9]. Our first result is for a subproblem S_k of Pointer Jumping that is singled out in Miltersen et al. [10] (see Section V-A for a formal definition of S_k). We show:
Theorem I.1: For any constant k, there is a problem S_{k+1} such that any quantum protocol with only k messages and constant probability of error requires Ω(N^{1/(k+1)}) communication qubits, whereas it can be solved with k + 1 messages by a deterministic protocol with O(log N) bits.
A more precise version of this theorem is given in Section V-D and implies a round hierarchy even when the number of messages k grows as a function of the input size N, up to k = Θ(log N / log log N). Our analysis of S_k follows the same intuition as that behind the result of Miltersen et al. [10], but relies on entirely new ideas from quantum information theory. The resulting lower bound is optimal for a constant number of rounds.
Next, we study the Pointer Jumping function itself. Let f_k denote the Pointer Jumping function with path length k + 1 on graphs with 2n vertices, as defined in Section VI. The input length for the Pointer Jumping function f_k is N = 2n log n, independent of k,
whereas the input length for S_k is exponential in k. The function f_k is thus usually more appropriate for studying the effect of rounds on communication when k grows rapidly as a function of the input length.
We first show an improved upper bound on the classical complexity of Pointer Jumping,
further closing the gap between the known classical upper and lower bounds. We then
turn to proving a quantum lower bound. We prove:
Theorem I.2: For any constant k, there is a classical deterministic protocol with k message exchanges that computes f_k with O(log n) bits of communication, while any k − 1 round quantum protocol with constant error for f_k needs Ω(n) qubits of communication.
The lower bound of Theorem I.2 decays exponentially in k, and leads to separation results only for k = O(log N). We believe it is possible to improve this dependence on k, but
leave it as an open problem. Note that in the preliminary version of this paper [11] this
decay was even doubly exponential, and the improvement here is obtained by using a
quantum version of the Hellinger distance.
Our lower bounds for S_k and Pointer Jumping also have implications for Set Disjointness. The problem of determining the quantum communication complexity of DISJ has inspired much research in the last few years, yet the best known lower bound prior to this work was Ω(log n) [2], [12]. We mentioned earlier the protocol of Buhrman et al. [1] which solves DISJ with O(√N log N) qubits and Ω(√N) messages. Buhrman and de Wolf [12] observed (based on a lower bound for random access codes [13], [14]) that any one message quantum protocol for DISJ has linear communication complexity. We describe a simple reduction from Pointer Jumping in a bounded number of rounds to DISJ and prove:
Corollary I.3: For any constant k, the communication complexity of any k-message quantum protocol for Set Disjointness is Ω(N^{1/k}).
A model of quantum communication complexity that has also been studied in the lit-
erature is that of communication with prior entanglement (see, e.g., Refs. [15], [12]). In
this model, the communicating parties may hold an arbitrary input-independent entangled
state in the beginning of a protocol. One can use superdense coding [16] to transmit n
classical bits of information using only ⌈n/2⌉ qubits when entanglement is allowed. The
players may also use measurements on EPR-pairs to create a shared classical random key.
While the first idea often decreases the communication complexity by a factor of two, the
second sometimes saves log n bits of communication. It is unknown if shared entanglement may sometimes decrease the communication more than that. Currently no general
methods for proving super-logarithmic lower bounds on the quantum communication com-
plexity with prior entanglement and unrestricted interaction are known. Our results all
hold in this model as well.
Our interest in the role of interaction in quantum communication also springs from the
need to better understand the ways in which we can access and manipulate information
encoded in quantum states. We develop information-theoretic techniques that expose
some of the limitations of quantum communication. We believe our information-theoretic
results are of independent interest.
The paper is organized as follows. In Section II we give some background on classical
and quantum information theory. We recommend Preskill’s lecture notes [17] or Nielsen
and Chuang’s book [18] as thorough introductions into the field. In Section III we present
new lower bounds on the quantum relative entropy function (Section III-A) and introduce
the informational distance (Section III-B). In Section IV we explain the communication
complexity model, followed by Section V where we prove our separation results and the
reduction to Set Disjointness (Section V-C). In Section VI we give our new upper bound
(Section VI-B) and quantum lower bound (Section VI-C) for the pointer-jumping problem.
Subsequent Results
Subsequent to the publication of the preliminary version of this paper [11] several new
related results have appeared.
First, Razborov proves in Ref. [19] that the quantum communication complexity of the Set Disjointness problem is indeed Ω(√N), no matter how many rounds are allowed. An upper bound of O(√N) is given by Aaronson and Ambainis [20]. A result by Jain, Radhakrishnan, and Sen in Ref. [21] shows that the
complexity of protocols solving this problem in k rounds is at least Ω(n/k²). The same authors show in Ref. [22] that quantum protocols with k − 1 rounds for the Pointer Jumping function f_k have complexity Ω(n/k⁴), but this result seems to hold only for the case of protocols without prior entanglement. The same authors [23] also consider the complexity of quantum protocols for the version of the Pointer Jumping function, in which not only
one bit of the last vertex has to be computed, but its full name. Several papers ([24], [25],
[21], [22], [26]) have used the information theoretic techniques developed in the present
paper.
In this paper, we improve the dependence of communication complexity lower bounds
on the number of rounds, as compared to our results in Ref. [11]. To achieve this, we use a
different information-theoretic tool based on the quantum Hellinger distance. The version
of our Average Encoding Theorem based on Hellinger distance was independently found
by Jain et al. [21].
II. Information Theory Background
The quantum mechanical analogue of a random variable is a probability distribution over superpositions, also called a mixed state. For the mixed state X = {p_i, |φ_i⟩}, where |φ_i⟩ has probability p_i, the density matrix is defined as ρ_X = Σ_i p_i |φ_i⟩⟨φ_i|. Density matrices are Hermitian, positive semi-definite, and have trace 1. I.e., a density matrix has an eigenvector basis, all the eigenvalues are real and between zero and one, and they sum up to one.
A. Trace Norm And Fidelity
The trace norm of a matrix A is defined as ‖A‖_t = Tr √(A†A), which is the sum of the magnitudes of the singular values of A. Note that if ρ is a density matrix, then it has trace norm one. If φ_1, φ_2 are pure states then
‖ |φ_1⟩⟨φ_1| − |φ_2⟩⟨φ_2| ‖_t = 2 √(1 − |⟨φ_1|φ_2⟩|²).
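As a quick numerical sanity check (not part of the original text), the pure-state identity above can be verified with numpy; the helper name `trace_norm` is ours:

```python
import numpy as np

def trace_norm(A):
    # Trace norm = sum of the singular values of A.
    return np.linalg.svd(A, compute_uv=False).sum()

rng = np.random.default_rng(0)
v1 = rng.normal(size=4) + 1j * rng.normal(size=4)
v2 = rng.normal(size=4) + 1j * rng.normal(size=4)
v1 /= np.linalg.norm(v1)
v2 /= np.linalg.norm(v2)

# || |v1><v1| - |v2><v2| ||_t  versus  2 sqrt(1 - |<v1|v2>|^2)
lhs = trace_norm(np.outer(v1, v1.conj()) - np.outer(v2, v2.conj()))
rhs = 2 * np.sqrt(1 - abs(np.vdot(v1, v2)) ** 2)
assert abs(lhs - rhs) < 1e-10
```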
We will need the following consequence of Kraus representation theorem (see for example
Preskill’s lecture notes [17]):
Lemma II.1: For each Hermitian matrix ρ and each trace-preserving completely positive superoperator T: ‖T(ρ)‖_t ≤ ‖ρ‖_t.
A useful alternative to the trace metric as a measure of closeness of density matrices is fidelity. Let ρ be a mixed state with support in a Hilbert space H. A purification of ρ is any pure state |φ⟩ in an extended Hilbert space H ⊗ K such that Tr_K |φ⟩⟨φ| = ρ. Given
two density matrices ρ1, ρ2 on the same Hilbert space H, their fidelity is defined as
F(ρ1, ρ2) = sup |⟨φ_1|φ_2⟩|²,
where the supremum is taken over all purifications |φ_i⟩ of ρ_i in the same Hilbert space. Jozsa [27] gave a simple proof, for the finite dimensional case, of the following remarkable equivalence first established by Uhlmann [28].
Fact II.2 (Jozsa): For any two density matrices ρ1, ρ2 on the same finite dimensional space H,
F(ρ1, ρ2) = (Tr √(ρ1^{1/2} ρ2 ρ1^{1/2}))² = ‖√ρ1 √ρ2‖_t².
Using this equivalence, Fuchs and van de Graaf [29] relate fidelity to the trace distance.
Fact II.3 (Fuchs, van de Graaf): For any two mixed states ρ1, ρ2,
1 − √F(ρ1, ρ2) ≤ (1/2) ‖ρ1 − ρ2‖_t ≤ √(1 − F(ρ1, ρ2)).
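Fact II.3 can likewise be checked numerically. The sketch below (our own; the helper names are assumptions) computes fidelity via the Uhlmann formula of Fact II.2 and the trace distance for random density matrices:

```python
import numpy as np
from scipy.linalg import sqrtm

def random_density(n, rng):
    # Random full-rank density matrix (Wishart-style construction).
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = A @ A.conj().T
    return rho / np.trace(rho)

def fidelity(r1, r2):
    # Uhlmann fidelity via Fact II.2: (Tr sqrt(sqrt(r1) r2 sqrt(r1)))^2.
    s = sqrtm(r1)
    return float(np.real(np.trace(sqrtm(s @ r2 @ s))) ** 2)

def trace_dist(r1, r2):
    # (1/2) ||r1 - r2||_t, i.e. half the sum of singular values.
    return 0.5 * np.linalg.svd(r1 - r2, compute_uv=False).sum()

rng = np.random.default_rng(1)
r1, r2 = random_density(3, rng), random_density(3, rng)
F, T = fidelity(r1, r2), trace_dist(r1, r2)
assert 1 - np.sqrt(F) <= T + 1e-9       # lower half of Fact II.3
assert T <= np.sqrt(1 - F) + 1e-9       # upper half of Fact II.3
```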
While the definition of fidelity uses purifications of the mixed states and relates them
via the inner product, fidelity can also be characterized via measurements (see Nielsen and
Chuang [18]).
Fact II.4: For two probability distributions p,q on finite sample spaces, let F(p,q) =
√piqi)2denote their fidelity. Then, for any two mixed states ρ1,ρ2,
(?
i
F(ρ1,ρ2)= min
{Em}F(pm,qm),
where the minimum is over all POVMs {Em}, and pm= Tr(ρ1Em),qm= Tr(ρ2Em) are
the probability distributions created by the measurement on the states.
A useful property of the trace distance ‖ρ1 − ρ2‖_t as a measure of distinguishability is that it is a metric, and hence satisfies the triangle inequality. This is not true for fidelity F(ρ1, ρ2) or for 1 − F(ρ1, ρ2). Fortunately, a variant of fidelity is actually a metric. Denote by
h(ρ1, ρ2) = √(1 − √F(ρ1, ρ2))
the quantum Hellinger distance. Clearly h(ρ1, ρ2) inherits most of the desirable properties of fidelity, like unitary invariance, definability as a maximum over all measurements of the classical Hellinger distance of the resulting distributions, and so on. To see that h(ρ1, ρ2)
is actually a metric one can simply use Fact II.4 to reduce this problem to showing that
the classical Hellinger distance is a metric, which is well known.
Analogously to Lemma II.1, due to the monotonicity of fidelity [18], we have:
Lemma II.5: For all density matrices ρ1, ρ2 and each trace-preserving completely positive superoperator T: h(T(ρ1), T(ρ2)) ≤ h(ρ1, ρ2).
Let us also note the following relation between the Hellinger distance and the trace norm that follows directly from Fact II.3.
Lemma II.6: For any two mixed states ρ1, ρ2,
h²(ρ1, ρ2) ≤ (1/2) ‖ρ1 − ρ2‖_t ≤ √2 · h(ρ1, ρ2).
We will sometimes work with h²(·,·) instead of h(·,·). This is not a metric, but it is true that for all density matrices ρ1, ρ2, ρ3:
h²(ρ1, ρ2) ≤ (h(ρ1, ρ3) + h(ρ3, ρ2))² ≤ 2h²(ρ1, ρ3) + 2h²(ρ3, ρ2).
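A small numerical sketch (our own, with assumed helper names) checking Lemma II.6 and the quasi-triangle inequality for h² on random density matrices:

```python
import numpy as np
from scipy.linalg import sqrtm

def random_density(n, rng):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = A @ A.conj().T
    return rho / np.trace(rho)

def hellinger(r1, r2):
    # h = sqrt(1 - sqrt(F)), with sqrt(F) = Tr sqrt(sqrt(r1) r2 sqrt(r1)).
    s = sqrtm(r1)
    root_fid = np.real(np.trace(sqrtm(s @ r2 @ s)))
    return np.sqrt(max(0.0, 1.0 - root_fid))

def trace_norm(A):
    return np.linalg.svd(A, compute_uv=False).sum()

rng = np.random.default_rng(4)
r1, r2, r3 = (random_density(3, rng) for _ in range(3))
h12 = hellinger(r1, r2)
# Lemma II.6: h^2 <= (1/2)||r1 - r2||_t <= sqrt(2) h.
assert h12**2 <= 0.5 * trace_norm(r1 - r2) + 1e-9
assert 0.5 * trace_norm(r1 - r2) <= np.sqrt(2) * h12 + 1e-9
# Quasi-triangle inequality for h^2.
assert h12**2 <= 2 * hellinger(r1, r3)**2 + 2 * hellinger(r3, r2)**2 + 1e-9
```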
B. Local Transition Between Bipartite States
Jozsa [27] proved:
Theorem II.7 (Jozsa): Suppose |φ_1⟩, |φ_2⟩ ∈ H ⊗ K are purifications of two density matrices ρ1, ρ2 in H. Then, there is a local unitary transformation U on K such that F(ρ1, ρ2) = |⟨φ_1|(I ⊗ U)|φ_2⟩|².
As noticed by Lo and Chau [30] and Mayers [31], Theorem II.7 immediately implies that if two states have close reduced density matrices, then there exists a local unitary transformation mapping one state close to the other. Formally,
Lemma II.8 (Local Transition Lemma, based on Refs. [30], [31], [27], [29]): Let ρ1, ρ2 be two mixed states with support in a Hilbert space H. Let K be any Hilbert space of dimension at least dim(H), and |φ_i⟩ any purifications of ρ_i in H ⊗ K.
Then, there is a local unitary transformation U on K that maps |φ_2⟩ to |φ′_2⟩ = (I ⊗ U)|φ_2⟩ such that
h(|φ_1⟩⟨φ_1|, |φ′_2⟩⟨φ′_2|) = h(ρ1, ρ2).
Furthermore,
‖ |φ_1⟩⟨φ_1| − |φ′_2⟩⟨φ′_2| ‖_t ≤ 2 ‖ρ1 − ρ2‖_t^{1/2}.
Proof (of Lemma II.8): By Theorem II.7, there is a (local) unitary transformation U on K such that (I ⊗ U)|φ_2⟩ = |φ′_2⟩, a state which achieves the fidelity: F(ρ1, ρ2) = |⟨φ_1|φ′_2⟩|². Hence the statement about the Hellinger distance holds.
By Lemma II.6,
‖ |φ_1⟩⟨φ_1| − |φ′_2⟩⟨φ′_2| ‖_t ≤ 2√2 · h(|φ_1⟩⟨φ_1|, |φ′_2⟩⟨φ′_2|) = 2√2 · h(ρ1, ρ2) ≤ 2 ‖ρ1 − ρ2‖_t^{1/2}.
C. Entropy, Mutual Information, And Relative Entropy.
H(·) denotes the binary entropy function H(p) = p log(1/p) + (1 − p) log(1/(1 − p)). The Shannon entropy S(X) of a classical random variable X on a finite sample space is Σ_x p_x log(1/p_x), where p_x is the probability the random variable X takes value x. The mutual information I(X : Y) of a pair of random variables X, Y is defined to be I(X : Y) = H(X) + H(Y) − H(X, Y). For other equivalent definitions, and more background on the subject see, e.g., the book by Cover and Thomas [32].
We use a simple form of Fano’s inequality.
Fact II.9 (Fano's inequality): Let X be a uniformly distributed Boolean random variable, and let Y be a Boolean random variable such that Prob(X = Y) = p. Then I(X : Y) ≥ 1 − H(p).
The Shannon entropy and the mutual information functions have natural generalizations to the quantum setting. The von Neumann entropy S(ρ) of a density matrix ρ is defined as S(ρ) = −Tr ρ log ρ = −Σ_i λ_i log λ_i, where {λ_i} is the multi-set of all the eigenvalues of ρ. Notice that the eigenvalues of a density matrix form a probability distribution. In fact, we can think of the density matrix as a mixed state that takes the i'th eigenvector with probability λ_i. The von Neumann entropy of a density matrix ρ is, thus, the entropy of the classical distribution that ρ defines over its eigenstates.
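As an illustration (not part of the original text), the von Neumann entropy can be computed directly from the eigenvalues, exactly as the definition suggests; `von_neumann_entropy` is our own helper name:

```python
import numpy as np

def von_neumann_entropy(rho):
    # S(rho) = -sum_i lambda_i log2 lambda_i over the eigenvalues of rho.
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # 0 log 0 = 0 convention
    return float(-(evals * np.log2(evals)).sum())

# The maximally mixed qubit has one bit of entropy,
mixed = np.eye(2) / 2
assert abs(von_neumann_entropy(mixed) - 1.0) < 1e-12
# while any pure state has zero entropy.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
assert von_neumann_entropy(np.outer(psi, psi)) < 1e-9
```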
The mutual information I(X : Y) of two disjoint quantum systems X, Y is defined to be I(X : Y) = S(X) + S(Y) − S(XY), where XY is the density matrix of the system that includes the qubits of both systems. Then
I(X : Y Z) = I(X : Y) + I(XY : Z) − I(Y : Z),   (1)
I(X : Y Z) ≥ I(X : Y).   (2)
Equation (2) is in fact equivalent to the strong sub-additivity property of von Neumann entropy.
We need the following slight generalization of Theorem 2 in Cleve et al. [15].
Lemma II.10: Let Alice own a state ρ_A of a register A. Assume Alice and Bob communicate and apply local transformations, and at the end register A is measured in the standard basis. Assume Alice sends Bob at most k qubits, and Bob sends Alice arbitrarily many qubits. Further assume all these local transformations do not change the state of register A, if A is in a classical state. Let ρ_AB be the final state of A and Bob's private qubits B. Then I(A : B) ≤ 2k.
Proof: Considering the joint state of register A and Bob's qubits, there cannot be any interference between basis states differing on A. Thus we can assume that ρ_A is measured in the beginning, i.e., that ρ_A is classical. In this case the result directly follows from Theorem 2 in Ref. [15].
Note that in the above lemma Alice and Bob can use Bob’s free communication to set
up an arbitrarily large amount of entanglement independent of ρ_A.
The relative von Neumann entropy of two density matrices is defined by S(ρ‖σ) = Tr ρ log ρ − Tr ρ log σ. One useful fact to know about the relative entropy function is that I(A : B) = S(ρ_AB ‖ ρ_A ⊗ ρ_B). For more properties of this function see Refs. [17], [18].
III. Informational Distance And New Lower Bounds On Relative
Entropy
A. New Lower Bounds On Relative Entropy
We now prove that the relative entropy S(ρ1‖ρ2) is lower bounded by Ω(‖ρ1 − ρ2‖_t²) and by Ω(h²(ρ1, ρ2)). We believe these results are of independent interest. A classical
version of the theorem can be found in, e.g., Cover and Thomas’ book on Information
Theory [32].
Theorem III.1: For all density matrices ρ1, ρ2:
S(ρ1‖ρ2) ≥ (1/(2 ln 2)) ‖ρ1 − ρ2‖_t².
Although this relationship has appeared in the literature [33], it was rediscovered by
several authors, including us. Below we give a proof of this theorem for completeness. The
earlier version of our paper [11] contained a more complicated proof.
Proof (Theorem III.1): The proof goes by reduction to the classical case. Consider the classical distributions ρ̃1, ρ̃2 obtained by measuring ρ1, ρ2 in the basis diagonalizing their difference ρ1 − ρ2. It is known [17], [18] that
‖ρ̃1 − ρ̃2‖_1 = ‖ρ1 − ρ2‖_t.
Due to the Lindblad–Uhlmann monotonicity of relative von Neumann entropy [17], [18],
S(ρ1‖ρ2) ≥ S(ρ̃1‖ρ̃2).
The classical version of the theorem [32] now gives
S(ρ̃1‖ρ̃2) ≥ (1/(2 ln 2)) ‖ρ̃1 − ρ̃2‖_1² = (1/(2 ln 2)) ‖ρ1 − ρ2‖_t².
This completes the proof.
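A numerical spot-check of Theorem III.1 (our own sketch, not from the paper; relative entropy is taken in bits here, matching the log base behind the 1/(2 ln 2) constant, and the random densities are full rank so `logm` is well defined):

```python
import numpy as np
from scipy.linalg import logm

def random_density(n, rng):
    # Random full-rank density matrix.
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = A @ A.conj().T
    return rho / np.trace(rho)

def rel_entropy_bits(r1, r2):
    # S(r1 || r2) = Tr r1 log r1 - Tr r1 log r2, converted to bits.
    return float(np.real(np.trace(r1 @ (logm(r1) - logm(r2)))) / np.log(2))

def trace_norm(A):
    return np.linalg.svd(A, compute_uv=False).sum()

rng = np.random.default_rng(2)
r1, r2 = random_density(3, rng), random_density(3, rng)
bound = trace_norm(r1 - r2) ** 2 / (2 * np.log(2))
assert rel_entropy_bits(r1, r2) >= bound - 1e-9   # Theorem III.1
```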
Now we show an analogous result for the quantum Hellinger distance.
Theorem III.2: For all density matrices ρ1, ρ2:
S(ρ1‖ρ2) ≥ (2/ln 2) h²(ρ1, ρ2).
This theorem has also been shown independently by Jain et al. [21].
Proof: We first show that the theorem holds when ρ1 and ρ2 are classical distributions, and then generalize this to the quantum case.
In the classical case we first show S(ρ1‖ρ2) ≥ −2 log(1 − h²(ρ1, ρ2)). This was shown by Dacunha-Castelle in Ref. [34].
log(1 − h²(ρ1, ρ2)) = log √F(ρ1, ρ2)
  = log Σ_i √(ρ1(i) ρ2(i))
  = log Σ_i ρ1(i) √(ρ2(i)/ρ1(i))
  ≥ Σ_i ρ1(i) log √(ρ2(i)/ρ1(i))
  = −(1/2) S(ρ1‖ρ2).
The first equation is by definition of h, the second by definition of the classical fidelity function, and the inequality is by an application of Jensen's inequality.
Having that, S(ρ1‖ρ2) ≥ (2/ln 2) h²(ρ1, ρ2) using −ln(1 − x) ≥ x for all 0 ≤ x ≤ 1, and so the theorem holds in the classical case.
To show the quantum case recall that both h(·,·) and S(·‖·) can be defined as the maximum over all POVM measurements of the classical versions of these functions on the distributions obtained by the measurements. Fix a POVM {E_m} that maximizes h(p, q) for the distributions p, q obtained from ρ1, ρ2. Then S(ρ1‖ρ2) ≥ S(p‖q) by Lindblad–Uhlmann monotonicity, and S(p‖q) ≥ (2/ln 2) h²(p, q) = (2/ln 2) h²(ρ1, ρ2) because h(p, q) = h(ρ1, ρ2). The result follows.
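Theorem III.2 can be spot-checked the same way (our own sketch; `hellinger_sq` computes h² = 1 − √F via the Uhlmann formula of Fact II.2):

```python
import numpy as np
from scipy.linalg import logm, sqrtm

def random_density(n, rng):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = A @ A.conj().T
    return rho / np.trace(rho)

def rel_entropy_bits(r1, r2):
    # S(r1 || r2) in bits; full-rank inputs assumed.
    return float(np.real(np.trace(r1 @ (logm(r1) - logm(r2)))) / np.log(2))

def hellinger_sq(r1, r2):
    # h^2 = 1 - sqrt(F), with sqrt(F) = Tr sqrt(sqrt(r1) r2 sqrt(r1)).
    s = sqrtm(r1)
    return float(1.0 - np.real(np.trace(sqrtm(s @ r2 @ s))))

rng = np.random.default_rng(3)
r1, r2 = random_density(3, rng), random_density(3, rng)
# Theorem III.2: S(r1 || r2) >= (2 / ln 2) h^2(r1, r2).
assert rel_entropy_bits(r1, r2) >= (2 / np.log(2)) * hellinger_sq(r1, r2) - 1e-9
```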
B. Informational Distance
From Theorem III.2 it follows that for a bipartite state ρ_AB,
I(A : B) = S(ρ_AB ‖ ρ_A ⊗ ρ_B) ≥ (2/ln 2) h²(ρ_AB, ρ_A ⊗ ρ_B).
Thus the distance between the tensor product state and the "real" (possibly entangled) bipartite state can be bounded in terms of the Hellinger distance. We call the quantity D(A : B) = h(ρ_AB, ρ_A ⊗ ρ_B) the "informational distance." D(A : B) measures the amount of correlation between the quantum registers A and B, and can be positive even when the system is classical or not entangled. Later we state some of its properties and use it for proving the quantum communication lower bound on the pointer jumping problem.
The next lemma collects a few immediate properties of informational distance.
Lemma III.3: For all states ρ_XYZ the following hold:
1. D(X : Y) = D(Y : X),
2. 0 ≤ D(X : Y) ≤ 1,
3. D(X : Y) ≥ h(T(ρ_XY), T(ρ_X ⊗ ρ_Y)) for all completely positive, trace-preserving superoperators T,
4. D(XY : Z) ≥ D(X : Z),
5. D(X : Y) ≤ √(I(X : Y)).
Proof: (1) is true by definition, (2) follows from the definition and the triangle inequality, (3, 4) follow from Lemma II.5 and (5) from Theorem III.2.
We now examine the informational distance in the special case where ρ_QX is block diagonal, with classical ρ_X. We denote by ρ_Q^{(x)} the density matrix obtained by fixing X to some classical value x and normalizing. Pr(x) is the probability of X = x.
Lemma III.4: For all block diagonal ρ_QX, where ρ_X corresponds to a classical distribution,
1. D²(Q : X) = E_x h²(ρ_Q^{(x)}, ρ_Q).
2. Further assume X is Boolean with Pr(X = 1) = Pr(X = 0) = 1/2. Let there be a measurement acting on the Q system only, yielding a Boolean random variable Y with Pr(X = Y) ≥ 1 − ǫ and Pr(X ≠ Y) ≤ ǫ. Then D²(Q : X) ≥ 1/8 − ǫ/2.
The first item is true because ρ_QX is block diagonal with respect to X. In the second item, notice that the same measurement applied to ρ_X ⊗ ρ_Q yields a distribution with Pr(X = Y) = Pr(X ≠ Y) = 1/2, because Q is independent of X, and X is uniform. Observe that ‖ρ_XQ − ρ_X ⊗ ρ_Q‖_t ≥ ‖ρ_XY − ρ_X ⊗ ρ_Y‖_t ≥ 1 − 2ǫ and then apply Lemma II.6. Note that this is a rather crude estimate, since D(Q : X) approaches 1 − 1/√2 when ǫ goes to zero.
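For intuition, here is a purely classical instance of Lemma III.4(2) (our own sketch): X is a uniform bit and Q is X flipped with probability ε. All density matrices are diagonal, so D² reduces to an average of classical Hellinger distances:

```python
import numpy as np

def hellinger_sq(p, q):
    # Classical h^2(p, q) = 1 - sum_i sqrt(p_i q_i).
    return 1.0 - np.sqrt(p * q).sum()

for eps in (0.0, 0.05, 0.1, 0.25):
    # Conditional distributions of Q given X = 0 and X = 1.
    cond = [np.array([1 - eps, eps]), np.array([eps, 1 - eps])]
    marg = np.array([0.5, 0.5])       # marginal of Q for uniform X
    # Item 1 of the lemma: D^2(Q:X) = E_x h^2(rho_Q^(x), rho_Q).
    D_sq = 0.5 * hellinger_sq(cond[0], marg) + 0.5 * hellinger_sq(cond[1], marg)
    # Item 2: D^2(Q:X) >= 1/8 - eps/2.
    assert D_sq >= 1 / 8 - eps / 2 - 1e-12
```

Here reading Q is itself the measurement Y, with Pr(X = Y) = 1 − ε, so the hypothesis of item 2 holds with equality.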
C. The Average Encoding Theorem
A corollary of Theorems III.1 and III.2 is the following "Average encoding theorem":
Theorem III.5 (Average encoding theorem): Let x ↦ ρ_x be a quantum encoding mapping an m bit string x ∈ {0,1}^m into a mixed state with density matrix ρ_x. Let X be distributed over {0,1}^m, where x ∈ {0,1}^m has probability p_x, let Q be the encoding of X according to this map, and let ρ̄ = Σ_x p_x ρ_x. Then,
Σ_x p_x ‖ρ̄ − ρ_x‖_t ≤ [(2 ln 2) I(Q : X)]^{1/2}
and
Σ_x p_x h²(ρ̄, ρ_x) ≤ (ln 2 / 2) I(Q : X).
In other words, if an encoding Q is only weakly correlated to a random variable X, then the "average encoding" ρ̄ is in expectation (over a random string) a good approximation of any encoded state. Thus, in certain situations, we may dispense with the encoding altogether, and use the single state ρ̄ instead. The preliminary version of our paper [11] did not include the second statement. The present stronger version was also observed independently by Jain et al. [21].
Proof (of Theorem III.5): In the setting of the Average encoding theorem we have a random variable that is distributed over {0,1}^m, and a quantum encoding x ↦ ρ_x mapping m bit strings x ∈ {0,1}^m into mixed states with density matrices ρ_x. Let X be the register holding the input x and Q be the register holding the encoding. Let us also define the average encoding ρ̄ = Σ_x p_x ρ_x. Then, by Theorem III.1,
I(Q : X) = S(ρ_QX ‖ ρ_Q ⊗ ρ_X) ≥ (1/(2 ln 2)) ‖ρ_QX − ρ_Q ⊗ ρ_X‖_t².
The density matrix ρ_X of the X register alone is diagonal and contains the values p_x on the diagonal, the density matrix ρ_Q of the Q register alone is ρ̄, and the density matrix ρ_Q ⊗ ρ_X is block diagonal, where the x'th block is of the form p_x ρ̄. Also, the density matrix ρ_QX of the whole system is block diagonal, with p_x ρ_x in the x'th block. Thus, ‖ρ_QX − ρ_Q ⊗ ρ_X‖_t = Σ_x p_x ‖ρ_x − ρ̄‖_t, and so E_x ‖ρ_x − ρ̄‖_t ≤ √(2 ln 2) · √(I(Q : X)).
The second statement follows analogously using Theorem III.2.
IV. The Communication Complexity Model
In the quantum communication complexity model [35], two parties Alice and Bob hold qubits. When the game starts Alice holds a classical input x and Bob holds y, and so the initial joint state is simply |x⟩ ⊗ |y⟩. Furthermore each player has an arbitrarily large supply of private qubits in some fixed basis state. The two parties then play in turns.
supply of private qubits in some fixed basis state. The two parties then play in turns.
Suppose it is Alice’s turn to play. Alice can do an arbitrary unitary transformation on
her qubits and then send one or more qubits to Bob. Sending qubits does not change
the overall superposition, but rather changes the ownership of the qubits, allowing Bob
to apply his next unitary transformation on the newly received qubits. Alice may also
(partially) measure her qubits during her turn. At the end of the protocol, one player
makes a measurement and declares the result of the protocol. In a classical probabilistic
protocol the players may only exchange classical messages.
In both the classical and quantum settings we can also define a public coin model.
In the classical public coin model the players are also allowed to access a shared source
of random bits without any communication cost. The classical public and private coin
models are strongly related [36]. Similarly, in the quantum public coin model Alice and
Bob initially share an arbitrary number of quantum bits which are in some pure state
that is independent of the inputs. This is better known as communication with prior
entanglement [15], [12].
The complexity of a quantum (or classical) protocol is the number of qubits (respectively, bits) exchanged between the two players. We say a protocol computes a function f : X × Y → {0,1} with ǫ ≥ 0 error if, for any input x ∈ X, y ∈ Y, the probability that the two players compute f(x, y) is at least 1 − ǫ. Q_ǫ(f) (resp. R_ǫ(f)) denotes the complexity of the best quantum (resp. probabilistic) protocol that computes f with at most ǫ error. For a player P ∈ {Alice, Bob}, Q_ǫ^{c,P}(f) denotes the complexity of the best quantum protocol that computes f with at most ǫ error with only c messages (called rounds in the literature), where the first message is sent by P. If the name of the player is omitted from the superscript, either player is allowed to start the protocol. We say a protocol P computes f with ǫ error with respect to a distribution µ on X × Y, if
Prob_{(x,y)∈µ,P}(P(x, y) = f(x, y)) ≥ 1 − ǫ.
Q_{µ,ǫ}^{c,P}(f) is the complexity of computing f with at most ǫ error with respect to µ, with only c messages where the first message is sent by player P. We will use the notation Q̃ (rather than Q*, as in the literature) for communication complexity in the public coin model.