
Coding Perspectives for Collaborative Estimation Over Networks

Sivagnanasundaram Ramanan and John MacLaren Walsh

Dept. of Electrical and Computer Engineering, Drexel University, Philadelphia, PA 19104, USA

E-mail: sur23@drexel.edu, jwalsh@coe.drexel.edu

Abstract—A collaborative distributed estimation problem over a communication constrained network is considered from an information theory perspective. A suitable architecture for the codes for this multiterminal information theory problem is determined under source-channel separation. In particular, distributed source codes in which each node multicasts a different message to each subset of other nodes are studied. This code construction hybridizes multiple description codes and codes for the CEO problem. The goal of this paper is to determine the fundamental relationship between the multicast communication rates and the estimation performance obtainable. An achievable rate distortion region is proved for this problem and its structural properties are studied. This achievable rate region is also shown to simplify to known bounds for some simpler problems.

I. INTRODUCTION

Consider a network of M nodes deployed to monitor a common phenomenon embodied by a sequence of random variables T^(n). Each node j ∈ [M] ({1, ..., M} is denoted as [M]) in the network makes indirect observations of this phenomenon, embodied as another sequence of random variables Y^(n)_j statistically related to T^(n). Let the sequence (T^(n), Y^(n)_1, ..., Y^(n)_M) be i.i.d. according to the joint probability distribution p_{T,Y_1,...,Y_M}.

Each node could use the local observations to obtain Bayesian estimates T̃^(n)_j of T^(n) that minimize some local cost function (1/N) Σ_{n=1}^{N} E[ d_j(T̃^(n)_j, T^(n)) | Y^N_j ]. (The vector (Y^(1)_j, ..., Y^(N)_j) is denoted as Y^N_j.) Alternatively, the nodes in the network could communicate with each other in hopes of improving their estimates.

We will study such collaborative distributed estimation schemes

which accomplish this with separated network/channel and source

coding (despite the fact that such a separation is known to be

suboptimal in some multiterminal problems). The network/channel

codes see to it that messages sent over the network arrive at the

intended receivers unaltered, while the distributed source code sees

to it that the content of these messages provides the right information

extracted from the observations at a node in order to lower the

estimation error at the destination.

Our first insight, made in Section II, is that under this decomposition the proper source coding model reflecting the capabilities of the network code is one in which each node multicasts a different message to every possible subset of other nodes in the network. In particular, the source encoder at each node j encodes its observations Y^N_j into a common message Q_{j→A} ∈ {1, 2, ..., 2^{N R_{j→A}}} for each of the nodes with indices in some subset A of the other nodes, using an average of R_{j→A} bits per observation symbol. A different such message can be encoded at each node j for each such subset A ⊆ [M]\j, and then reliably multicast (e.g. with the aid of some

S. Ramanan and J. M. Walsh were supported in part by the National Science

Foundation under grant CCF-0728496 and part by the Air Force Office of

Scientific Research under grant FA9550-09-C-0014. They wish to thank Jun

Chen of McMaster University for helpful comments and suggestions in the

early stages of this work.


channel and network codes) to the nodes in A. For example, in the lowest dimensional M = 3 nontrivial case, each node will create multiple descriptions of its observations, one for each of the other two nodes in the network individually, and one for both of them, as shown in Fig. 1.

Fig. 1. The "peace" network, which depicts the lowest dimensional M = 3 nontrivial case of the problem of collaborative distributed estimation. To determine the direction of travel of a message, note that the messages flow in the direction in which they are read.

We employ a classic technique from multiterminal information theory [1], [2] to study the relationship between the rates {R_{j→A} | j ∈ [M], A ∈ 2^{[M]\j}} (2^S is the power set of S) of the source code used, and the estimation errors D_j that each of the nodes can obtain in estimating the sequence T^(n), n ∈ [N], from their own observations Y^N_j and the messages Q_{D_j} := {Q_{i→A} | j ∈ A, A ∈ 2^{[M]\i}} they have received.

One might view this model as a generalization of two classes of multiterminal information theory problems: the CEO problem [1], [3], [4], [2], [5] and the multiple descriptions problem [6]. The CEO problem studies the rate vs. estimation error performance at the fusion center, which estimates an underlying sequence of parameters solely based on the rate constrained messages received from a collection of nodes which independently encode their noisy local observations. Two variations on the CEO problem are studied in [7], where decoder side information [8] is available at the CEO, and in [9], under the name of the many-help-one problem, where one of the nodes directly observes the underlying sequence. In the multiple descriptions problem, the rate vs. estimation error performance is studied in the case in which a node encodes many descriptions of a source and sends different subsets of descriptions to different decoders, which use those descriptions to reproduce the source. A reader who is familiar with these two classes of problems may question the purpose of studying this model when both classes of problems are yet unsolved. Interestingly, we show that some known bounds for the CEO problem and the multiple


Fig. 2. This network demonstrates that considering a source code at node 1 which only encodes a dedicated message to node 2 and a dedicated message to node 3 is not general enough. Instead, the source encoder at node 1 should encode a separate message for each possible subset of other nodes in the network.

descriptions problem can be recovered from the results we derive

in this paper by hybridizing the techniques from both classes of

problems.

The paper is organized as follows. We discuss the best model for the distributed source code in Section II. In Section III, we present our main results on the achievable rate distortion region. In Section IV, we simplify the inner bound for some simpler problems.

II. DISTRIBUTED ESTIMATION AND MULTITERMINAL SOURCE

CODING

As outlined in the introduction, suppose we aim to separate the

source coding part of the distributed estimation problem from the

network/channel coding part, despite the fact that such a separation

may be suboptimal. Here we argue that the best model for the

distributed source code is one in which each encoder multicasts a

message to each subset of other nodes in the network, rather than

sending an individual message to each other node in the network.

To see that such a model is the appropriate one, consider the simple wired network depicted in Fig. 2, in which three nodes (1, 2, 3) making local observations Y^(n)_1, Y^(n)_2, Y^(n)_3 statistically related to a common underlying sequence T^(n) would like to communicate over the butterfly network in order to form local estimates T̂^(n)_1, T̂^(n)_2, T̂^(n)_3 of T^(n). Because of the unidirectionality of the links, only node 1

may transmit information. Suppose further that the observations at nodes 2 and 3 are statistically identical and the distortion metrics are the same, and that we wish to obtain the same target average estimation error D2 = D3 at the two nodes. If node 1 encodes a separate message for node 2 and node 3, then it would suffice to take these two messages to be the same in this symmetric case. However, the network code cannot know this, because we have forced the source coding construction to have a separate message for each of nodes 2 and 3. Thus, the network code is forced to attempt to transmit two unicasts, one between 1 and 2 with rate R1→2, and one between 1 and 3 with rate R1→3. If each link in the network has unit capacity, and the network code is forced to treat the information flowing between nodes 1 and 2 as an independent unicast from the unicast between 1 and 3, then the highest symmetric rate R = R1→2 = R1→3 which can be obtained is 3/2. However, had we chosen our source code to output three messages Q1→2, Q1→3, Q1→2,3, so that we included one which is multicast from 1 to both 2 and 3, then the network code could support a symmetric rate of R1→2,3 = 2 [10], sending no unicast information at all (R1→2 = R1→3 = 0). This way 33% more useful information flows from 1 to 2 and 3 than would have had we required only unicasts, and the distortion obtained at nodes 2 and 3 will thus be lower.
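The network coding gain on the butterfly network invoked above can be sketched concretely. The following toy snippet (ours, not from the paper; variable names are illustrative) shows the classic XOR construction by which the bottleneck link carries a combination of both messages, letting both sinks recover two bits per network use, whereas routing alone is limited to the symmetric unicast rate 3/2:

```python
# Classic butterfly-network coding sketch. Node 1 multicasts two unit-rate
# messages a and b to nodes 2 and 3. With routing only, the shared bottleneck
# must carry a or b; with coding, it carries a XOR b and both sinks decode.

a, b = 1, 0  # two one-bit messages per network use

# Edge messages under network coding on the butterfly:
#   the direct edge toward sink 2 carries a, the one toward sink 3 carries b,
#   and the shared unit-capacity bottleneck carries a ^ b to both sinks.
direct_to_2 = a
direct_to_3 = b
bottleneck = a ^ b

# Sink 2 recovers b from (a, a^b); sink 3 recovers a from (b, a^b).
decoded_at_2 = (direct_to_2, direct_to_2 ^ bottleneck)
decoded_at_3 = (direct_to_3 ^ bottleneck, direct_to_3)

assert decoded_at_2 == (a, b) and decoded_at_3 == (a, b)
```

Because the bottleneck serves both sinks simultaneously, the multicast rate 2 is achieved, matching the R1→2,3 = 2 figure cited from [10].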

From this simple example we can easily infer that a proper separated source and network/channel coding approach treats the source code within network node i as producing an array of 2^(M−1) multicast messages, with one message Q_{i→A} for each subset A ⊆ [M] \ i. The capabilities of the possible network/channel codes are then summarized by a region C of vectors of such multicast rates

r := [R_{j→A} | j ∈ [M], A ⊆ [M] \ j]    (1)

which are simultaneously supportable by the network infrastructure. The capabilities of the possible source codes are summarized by a rate distortion region RD describing the set of simultaneously achievable multicast rates r and average estimation errors

d := [D_i | i ∈ [M]],  D_i := (1/N) Σ_{n=1}^{N} E[ d_i(T^(n), T̂^(n)_i) ]    (2)

An overall source channel code achieving average estimation errors lower than d is selected by choosing a rate vector r that lies in C and for which (r, d) ∈ RD. We now focus our efforts

on describing the rate distortion region for the associated family of

source codes we have selected.
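The selection rule just described, a rate vector supportable by the network (r ∈ C) whose source-coding distortion promise is best, can be illustrated with a deliberately toy example. The regions and the distortion function below are stand-ins of our own invention, not the paper's actual C or RD:

```python
# Toy illustration of the separation principle: search for a rate vector r
# inside a stand-in capacity region C that minimizes a stand-in per-node
# rate-distortion tradeoff. Both regions here are hypothetical placeholders.

def in_C(r1, r2):            # toy capacity region: total rate at most 2
    return r1 + r2 <= 2.0

def min_distortion(r1, r2):  # toy tradeoff: D(r) = 2^-r at each of two nodes
    return 2.0 ** -r1 + 2.0 ** -r2

grid = [i / 10 for i in range(0, 21)]
best = min(
    (min_distortion(r1, r2), r1, r2)
    for r1 in grid for r2 in grid if in_C(r1, r2)
)
# By symmetry and convexity the optimum splits the capacity evenly.
assert (best[1], best[2]) == (1.0, 1.0)
```

In the paper's setting the same search runs over the multicast rate vectors of (1) constrained jointly by C and the region RD of (2).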

III. ACHIEVABLE RATE DISTORTION REGION

The rate distortion region explains the relationship between the length in bits of the different messages multicast between the nodes and the estimation errors (measured in terms of average costs for Bayesian estimation) that decoder/estimators at these nodes can obtain. In particular, the vector (r, d) of multicast rates r := [R_{j→A} | j ∈ [M], A ∈ 2^{[M]\j}] and average estimation errors d := [D_j | j ∈ [M]] is said to be achievable if there exist a block length N, encoders and decoders

f^N_{j→A} : Y^N_j → [L^N_{j→A}],   g^N_i : Y^N_i × Π_{(j→A)∈D_i} [L^N_{j→A}] → T̂^N_i    (3)

with T̂^N_i = g^N_i(Y^N_i, Q_{D_i}) such that

R_{j→A} ≥ (1/N) log L^N_{j→A},   E[ (1/N) Σ_{n=1}^{N} d_i(T^(n), T̂^(n)_i) ] ≤ D_i    (4)

The rate distortion region RD for this problem is defined as the closure of the region of achievable vectors (r, d).

Denote the set of message indices leaving node i by S_i := {(i → A) | A ∈ 2^{[M]\i}}, and denote the set {U_{i→A} | A ∈ 2^{[M]\i}} as U_{S_i}. If we define S := ∪_{i∈[M]} S_i, then we have the following theorem.

Theorem 1: Given a joint distribution p_{T,Y_[M]}(t, y_[M]), let Ξ(d) be the collection of random vectors ξ = U_S which are jointly distributed with T and Y_[M] such that the following conditions are satisfied:

1) (T, Y_{[M]\i}, U_{S\S_i}) ↔ Y_i ↔ U_{S_i} for all i ∈ [M]
2) There exists a decoding function g_i : U_{D_i} × Y_i → T̂_i such that E[d_i(T, g_i(U_{D_i}, Y_i))] ≤ D_i for all i ∈ [M]

For each ξ ∈ Ξ(d), define Φ(ξ) as in (5). Also, for each ξ ∈ Ξ(d) and for each φ ∈ Φ(ξ), define RD_in(ξ, φ) as in (6). Let

RD_in := ∪_{ξ∈Ξ(d)} ∪_{φ∈Φ(ξ)} RD_in(ξ, φ)

Then, the convex hull conv(RD_in) of RD_in is an inner bound to the rate distortion region, i.e. conv(RD_in) ⊆ RD.

Φ(ξ) = { R̃_S : Σ_{(j→A)∈P_j} R̃_{j→A} > Σ_{(j→A)∈P_j} H(U_{j→A}) − H(U_{P_j} | Y_j), ∀ P_j ⊆ S_j, j ∈ [M] }    (5)

RD_in(ξ, φ) = { R_S : Σ_{(j→A)∈C_i} R_{j→A} ≥ Σ_{(j→A)∈C_i} [ R̃_{j→A} − H(U_{j→A}) ] + H(U_{C_i} | U_{D_i\C_i}, Y_i), ∀ C_i ⊆ D_i, i ∈ [M] }    (6)

Proof idea: This result is an adaptation of a well known inner bound in the multiterminal source coding community, the Berger-Tung inner bound as clarified by Han and Kobayashi [1], with the twist that the multiple (dependent) descriptions at each encoder require an additional set of encoder inequalities. A sketch of the proof is provided in Appendix A. A more detailed proof may be found in [11]. □

We next analyze the structure of the achievable rate region, because knowing the structure of the rate region may be helpful when optimizing some function of the rates over it. We indeed use some structural properties of the inner bound to simplify our bound to simpler problems in Section IV, and thus present those structural properties below.

Proposition 1: For each ξ ∈ Ξ(d), Φ(ξ) is a contra-polymatroid.

Proof: The set S is taken to be the ground set, and the rank function ρ : 2^S → R is defined as

ρ(P) := Σ_{j∈[M]} [ Σ_{(j→A)∈P∩S_j} H(U_{j→A}) − H(U_{P∩S_j} | Y_j) ]    (7)

We must show that ρ is indeed a rank function. Consider two sets Q and P such that Q ⊆ P ⊆ S; then

ρ(P) − ρ(Q) = Σ_{j∈[M]} [ Σ_{(j→A)∈L_j} H(U_{j→A}) − H(U_{P∩S_j} | Y_j) + H(U_{Q∩S_j} | Y_j) ]
            = Σ_{j∈[M]} [ Σ_{(j→A)∈L_j} H(U_{j→A}) − H(U_{L_j} | U_{Q∩S_j}, Y_j) ] ≥ 0

where L_j := (P ∩ S_j) \ (Q ∩ S_j). This establishes that ρ is non-decreasing. Next consider any two sets P ⊆ S and Q ⊆ S. We have

ρ(P) + ρ(Q) − ρ(P∩Q) − ρ(P∪Q)
  = Σ_{j∈[M]} ( H(U_{P∩Q∩S_j} | Y_j) + H(U_{(P∪Q)∩S_j} | Y_j) − H(U_{P∩S_j} | Y_j) − H(U_{Q∩S_j} | Y_j) )
  = Σ_{j∈[M]} ( H(U_{P∩Q^c∩S_j} | U_{Q∩S_j}, Y_j) − H(U_{P∩Q^c∩S_j} | U_{P∩Q∩S_j}, Y_j) ) ≤ 0

which implies that ρ is a rank function of a contra-polymatroid. To see that this contra-polymatroid is equal to Φ(ξ), simply note that evaluating the rank function ρ and writing the corresponding inequality for every subset of S_j gives the list of inequalities for node j. The collection of these inequalities over j ∈ [M] then yields Φ(ξ). Finally, note that evaluating the rank function at any collection of indices corresponding to messages sent from different encoders simply sums the corresponding individual inequalities for the different encoders. □

Corollary 1: For each ξ ∈ Ξ(d), the generating vertices of the polyhedron Φ(ξ) are exactly {φ(π) | π ∈ Π(S)}, where Π(S) is the set of permutations of the indices in S and φ(π) is the vector given by

φ_{π(1)}(π) := ρ({π(1)}) = I(U_{π(1)}; Y_[M])

and, for every i ∈ {2, ..., |S|},

φ_{π(i)}(π) := ρ({π(1), ..., π(i)}) − ρ({π(1), ..., π(i−1)}) = I(U_{π(i)}; U_{{π(1),...,π(i−1)}}, Y_[M])    (8)

where ρ is the rank function defined in (7). Additionally, for any λ ∈ R_+^{|S|}, the solution to the linear program min_{φ∈Φ(ξ)} λ · φ is attained by φ(π) for π any permutation of the elements of S such that λ_{π(1)} ≥ ··· ≥ λ_{π(|S|)}.

Proof: These are standard properties of contra-polymatroids. See, for instance, Lemma 3.3 of [12]. □
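The greedy-vertex property in Corollary 1 can be checked mechanically on a toy contra-polymatroid. The sketch below (ours; the rank function |P|^2 is an arbitrary supermodular, non-decreasing stand-in for (7)) builds the vertex φ(π) by successive rank differences and verifies that sorting λ in decreasing order minimizes λ · φ over all permutation vertices:

```python
# Greedy vertices of a toy contra-polymatroid and the linear-program property
# of Corollary 1, using an illustrative rank function rho(P) = |P|^2.
from itertools import permutations, chain, combinations

def rho(P):  # supermodular, non-decreasing, rho(empty) = 0
    return len(P) ** 2

S = (0, 1, 2)

def vertex(pi):
    # phi_{pi(i)} = rho({pi(1..i)}) - rho({pi(1..i-1)}), as in (8)
    return {pi[i]: rho(pi[:i + 1]) - rho(pi[:i]) for i in range(len(pi))}

def subsets(S):
    return chain.from_iterable(combinations(S, k) for k in range(len(S) + 1))

# every vertex satisfies all inequalities sum_{e in P} phi_e >= rho(P)
for pi in permutations(S):
    v = vertex(pi)
    for P in subsets(S):
        assert sum(v[e] for e in P) >= rho(P) - 1e-9

lam = {0: 3.0, 1: 2.0, 2: 1.0}
best_pi = tuple(sorted(S, key=lambda e: -lam[e]))  # lambda sorted descending
best_val = sum(lam[e] * vertex(best_pi)[e] for e in S)

# no other permutation vertex achieves a smaller value of lam . phi
for pi in permutations(S):
    val = sum(lam[e] * vertex(pi)[e] for e in S)
    assert best_val <= val + 1e-9
```

The same greedy evaluation, with ρ from (7), is what Section IV uses to pick out the two sum-rate-optimal vertices of the encoder region.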

We next use these structural properties of the achievable rate

distortion region to simplify this bound to two simpler problems:

the multiple descriptions problem and the CEO problem.

IV. SIMPLIFICATION OF BOUNDS TO SIMPLER PROBLEMS

Because we have argued that the collaborative distributed estima-

tion problem is essentially a hybrid between a collection of CEO

problems and a multiple descriptions problem, it is important to show

that the inner bound we have given specializes to known inner bounds

for these problems in special cases.

A. Simplification to Multiple Descriptions Problem

The multiple descriptions problem for two descriptions can be obtained as a special case of our collaborative estimation problem for M = 4 nodes. Only one node, say node 1, gets to make observations which it would like to inform the other 3 network nodes about, so that Y^(n)_1 = T^(n) and Y^(n)_i = 0 for all i ≠ 1. Additionally, node 1 structures its encodings so that nodes 2 and 3 receive different encodings, while node 4 receives everything that is available to nodes 2 and 3. The coding strategy introduced in [6] for this problem can be accomplished by dividing Q_{1→{4}} up into two parts Q_{1→{4}} = (Q^1_{1→{4}}, Q^2_{1→{4}}) containing ∆1 ≥ 0 and ∆2 ≥ 0 bits per symbol with ∆1 + ∆2 = R_{1→{4}}, and forming two descriptions X1 := (Q_{1→{2,4}}, Q^1_{1→{4}}) and X2 := (Q_{1→{3,4}}, Q^2_{1→{4}}). When only one of the two descriptions X1 or X2 is available, the achievability coding strategy introduced in [6] simply discards the part of the description associated with Q_{1→{4}} and utilizes only U_{1→{2,4}} or U_{1→{3,4}}, respectively. When both descriptions are available, the achievability coding strategy introduced in [6] uses all of the encodings (Q_{1→{2,4}}, Q_{1→{3,4}}, Q_{1→{4}}). Additionally, since R1 = R_{2,4} + ∆1 and R2 = R_{3,4} + ∆2, we can remove the redundant variables ∆1 and ∆2, and rewrite the constraint for R4 as R4 = R1 − R_{2,4} + R2 − R_{3,4}. These identifications may be summarized with the following notation

summarized with the following notation

U_{1→{2,4}} := U_{2,4},  U_{1→{3,4}} := U_{3,4},  U_{1→{4}} := U_4    (9)
R_{1→{2,4}} := R_{2,4},  R_{1→{3,4}} := R_{3,4},  R_{1→{4}} := R_4    (10)
R̃_{1→{2,4}} := R̃_{2,4},  R̃_{1→{3,4}} := R̃_{3,4},  R̃_{1→{4}} := R̃_4    (11)
U_{j→A} = ∅,  R_{j→A} = 0,  R̃_{j→A} = 0  for all other (j → A)

where the auxiliary random variables U_4, U_{2,4}, U_{3,4} are selected such that

p(U_4, U_{2,4}, U_{3,4}, T) = p(T) p(U_4, U_{2,4}, U_{3,4} | T)    (12)

D1 ≥ E[d(T, T̂_2)],  D2 ≥ E[d(T, T̂_3)],  D0 ≥ E[d(T, T̂_4)]    (13)

Under these identifications, the inner bound becomes

R4 ≥ R̃_4 − H(U_4) + H(U_4 | U_{3,4}, U_{2,4})    (14)
R_{2,4} ≥ R̃_{2,4}    (15)
R_{3,4} ≥ R̃_{3,4}    (16)

Having the inequalities R1 ≥ R_{2,4} and R2 ≥ R_{3,4} (because ∆1, ∆2 ≥ 0) in hand, we replace R4 with R1 − R_{2,4} + R2 − R_{3,4} in (14) and use the inequalities (14)-(16) to obtain a bound on the rate region (R1, R2), which is given by

R1 ≥ R̃_{2,4}    (17)
R2 ≥ R̃_{3,4}    (18)
R1 + R2 ≥ R̃_{2,4} + R̃_{3,4} + R̃_4 − H(U_4) + H(U_4 | U_{3,4}, U_{2,4})    (19)

We note that the minimum of R̃_{2,4} + R̃_{3,4} + R̃_4 subject to the encoder inequalities is

H(U_4) + H(U_{2,4}) + H(U_{3,4}) − H(U_4, U_{2,4}, U_{3,4} | T)

Thus the right hand side of (19) becomes

H(U_{2,4}) + H(U_{3,4}) − H(U_4, U_{2,4}, U_{3,4} | T) + H(U_4 | U_{2,4}, U_{3,4})
  = H(U_{2,4}) + H(U_{3,4}) − H(U_4, U_{2,4}, U_{3,4} | T) + H(U_4, U_{2,4}, U_{3,4}) − H(U_{2,4}, U_{3,4})
  = I(U_{2,4}; U_{3,4}) + I(T; U_4, U_{2,4}, U_{3,4})

We next point out that, by the contra-polymatroid property of the source encoder region describing the collection of variables R̃_{2,4}, R̃_{3,4}, R̃_4 (Corollary 1), this minimum is attained at the 6 vertices corresponding to the permutations of λ1 = λ2 = λ3 = 1. However, we are interested in only two of the 6 solutions, which are useful in finding the region of (R1, R2); we present their values of R̃_{2,4} and R̃_{3,4} below.

1) R̃_{2,4} = I(U_{2,4}; T),  R̃_{3,4} = I(U_{3,4}; U_{2,4}, T)
2) R̃_{2,4} = I(U_{2,4}; U_{3,4}, T),  R̃_{3,4} = I(U_{3,4}; T)

Using a time sharing argument between these two solutions, we write the region of rates (R1, R2) as

R1 ≥ I(U_{2,4}; T) + α I(U_{2,4}; U_{3,4} | T)    (20)
R2 ≥ I(U_{3,4}; T) + (1 − α) I(U_{2,4}; U_{3,4} | T)    (21)
R1 + R2 ≥ I(U_{2,4}; U_{3,4}) + I(T; U_4, U_{2,4}, U_{3,4})    (22)

where 0 ≤ α ≤ 1. We next show that any point in the achievable rate region (the EGC region) proved in [6] also lies in the region we proved above. To prove this, we rewrite the EGC region in the following form

r1 ≥ I(U_{2,4}; T)
r2 ≥ max{ I(U_{3,4}; T),  I(U_{2,4}; U_{3,4}) + I(T; U_4, U_{2,4}, U_{3,4}) − r1 }

and let

α = min{ (r1 − I(U_{2,4}; T)) / I(U_{2,4}; U_{3,4} | T),  1 }    (23)

Then

R1 ≥ min{ r1,  I(U_{2,4}; T) + I(U_{2,4}; U_{3,4} | T) } ≤ r1    (24)

and

R2 ≥ I(U_{3,4}; T) + I(U_{2,4}; U_{3,4} | T) − min{ r1 − I(U_{2,4}; T),  I(U_{2,4}; U_{3,4} | T) }
   = max{ I(U_{3,4}; T),  I(U_{2,4}; T) + I(U_{3,4}; T) + I(U_{2,4}; U_{3,4} | T) − r1 }
   ≤ r2

In the above proof we used the following inequality, which can be easily proved:

I(U_{2,4}; U_{3,4}) + I(T; U_4, U_{2,4}, U_{3,4}) ≥ I(U_{2,4}; T) + I(U_{3,4}; T) + I(U_{2,4}; U_{3,4} | T)

This completes the proof that our inner bound contains every point in the EGC region.
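The final inequality above holds for every joint distribution (the gap between the two sides is I(T; U_4 | U_{2,4}, U_{3,4}) ≥ 0). A brute-force numerical check over random binary joints (our sketch; the entropy helpers and variable names are ours) can be run as a sanity test:

```python
# Numerical sanity check of the inequality
#   I(U1;U2) + I(T;U0,U1,U2) >= I(U1;T) + I(U2;T) + I(U1;U2|T)
# over randomly drawn joint pmfs p(t,u0,u1,u2) on binary alphabets,
# where (U0, U1, U2) stand in for (U_4, U_{2,4}, U_{3,4}).
import itertools, math, random

random.seed(0)

def H(p):  # entropy of a pmf given as dict value -> probability
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

def marginal(p, keep):  # marginalize joint pmf onto the coordinates in `keep`
    m = {}
    for x, q in p.items():
        key = tuple(x[i] for i in keep)
        m[key] = m.get(key, 0.0) + q
    return m

def I(p, A, B):  # mutual information between coordinate groups A and B
    return H(marginal(p, A)) + H(marginal(p, B)) - H(marginal(p, A + B))

def Icond(p, A, B, C):  # conditional mutual information I(A;B|C)
    return (H(marginal(p, A + C)) + H(marginal(p, B + C))
            - H(marginal(p, A + B + C)) - H(marginal(p, C)))

# coordinates: 0 = T, 1 = U0, 2 = U1, 3 = U2
for _ in range(100):
    w = [random.random() for _ in range(16)]
    s = sum(w)
    p = {x: wi / s for x, wi in zip(itertools.product((0, 1), repeat=4), w)}
    lhs = I(p, (2,), (3,)) + I(p, (0,), (1, 2, 3))
    rhs = I(p, (2,), (0,)) + I(p, (3,), (0,)) + Icond(p, (2,), (3,), (0,))
    assert lhs >= rhs - 1e-9
```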

B. Simplification to CEO problem

We next show that the CEO problem can be obtained as a simplification of our model and that our inner bound simplifies to the Berger-Tung inner bound for this case. To see this, suppose that the nodes i ∈ [M − 1] observe the common phenomenon embodied by the sequence T^(n) and send one message each to the CEO node M. Using these messages received from the nodes i ∈ [M − 1], the CEO node produces an estimate T̂ (T̂_M = T̂) of T such that the expected distortion E[d(T, T̂)] < D.

Since the nodes i ∈ [M − 1] send messages only to node M, we set the rates corresponding to the other messages to 0 and redefine the rates and variables relevant to this problem as follows.

R_{j→M} := R_j,  U_{j→M} := U_j  ∀ j ∈ [M − 1]
R_{j→A} = 0,  R̃_{j→A} = 0,  U_{j→A} = ∅  for all other A, j ∈ [M − 1]
D_M := D,  R_{M→A} = 0,  R̃_{M→A} = 0,  U_{M→A} = ∅

Note that the random vectors ξ = U_{[M−1]} satisfy the following constraints.

• (T, Y_{[M−1]\j}, U_{[M−1]\j}) ↔ Y_j ↔ U_j for all j ∈ [M − 1]
• There exists a decoding function g : U_{[M−1]} → T̂ such that D > E[d(T, T̂)]

If we denote the set [M − 1] by D := [M − 1], then Φ(ξ) becomes

Φ(ξ) = { R̃_D | R̃_j > H(U_j) − H(U_j | Y_j),  ∀ j ∈ [M − 1] }

Here, R̃_j can be selected such that R̃_j = I(U_j; Y_j) + ε_j for all j ∈ [M − 1], where ε_j can be made arbitrarily small; selecting the rates in this way does not change the rate region. If we select R̃_j = I(U_j; Y_j) + ε_j, there will be only one rate vector R̃_D in the set Φ(ξ). Thus the decoder region Ψ(ξ, φ) := RD_in(ξ, φ) is only a function of ξ, i.e. Ψ(ξ, φ) = Ψ(ξ). Hence, Ψ(ξ) is the collection of rate vectors R_D ≥ 0 obeying

Σ_{j∈C} R_j > Σ_{j∈C} (R̃_j − H(U_j)) + H(U_C | U_{D\C})
   = H(U_C | U_{D\C}) − Σ_{j∈C} H(U_j | Y_j)
   = H(U_C | U_{D\C}) − H(U_C | Y_C)
   = H(U_C | U_{D\C}) − H(U_C | Y_C, U_{D\C})
   = I(U_C; Y_C | U_{D\C})

for all C ⊆ D. Here, we have used the facts that node M (the CEO) does not have any side information (Y_M = 0) and that U_C ↔ Y_C ↔ U_{D\C}. Thus the inner bound for the rate distortion region for the CEO problem becomes

RD_in = { (R_{[M−1]}, D) | R_{[M−1]} ∈ ∪_{ξ∈Ξ(D)} Ψ(ξ) }

where Ξ(D) is the collection of random vectors ξ. This is exactly the Berger-Tung inner bound for the CEO problem given in [4].
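The entropy bookkeeping in the chain above, in particular the step Σ_{j∈C} H(U_j | Y_j) = H(U_C | Y_C) = H(U_C | Y_C, U_{D\C}), relies on each U_j depending on Y_j alone. A small numerical check for a two-agent toy (our construction; the channel crossover values are arbitrary) confirms H(U_1 | U_2) − H(U_1 | Y_1) = I(U_1; Y_1 | U_2):

```python
# Toy CEO-style joint p(t,y1,y2,u1,u2) = p(t) p(y1|t) p(y2|t) p(u1|y1) p(u2|y2)
# over binary alphabets, checking H(U1|U2) - H(U1|Y1) = I(U1;Y1|U2).
import itertools, math

def H(p):
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

def marginal(p, keep):
    m = {}
    for x, q in p.items():
        key = tuple(x[i] for i in keep)
        m[key] = m.get(key, 0.0) + q
    return m

def Hcond(p, A, B):  # conditional entropy H(A|B)
    return H(marginal(p, A + B)) - H(marginal(p, B))

flip = lambda a, b, e: 1 - e if a == b else e  # binary symmetric channel
p = {}
for t, y1, y2, u1, u2 in itertools.product((0, 1), repeat=5):
    p[(t, y1, y2, u1, u2)] = (0.5 * flip(y1, t, 0.1) * flip(y2, t, 0.2)
                              * flip(u1, y1, 0.05) * flip(u2, y2, 0.15))

# coordinates: 0=T, 1=Y1, 2=Y2, 3=U1, 4=U2; take C = {1}, D\C = {2}
lhs = Hcond(p, (3,), (4,)) - Hcond(p, (3,), (1,))
rhs = Hcond(p, (3,), (4,)) - Hcond(p, (3,), (1, 4))  # I(U1;Y1|U2)
assert abs(lhs - rhs) < 1e-9
```

The equality holds because, conditioned on Y_1, the description U_1 is independent of (T, Y_2, U_2), which is exactly the Markov structure imposed by Theorem 1.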

C. Simplification to Side Information May be Absent case

The problem studied in [13] can also be obtained as a simplification of our model. To see this, let the number of nodes be M = 3 and suppose that node 3 directly observes the source, i.e. Y_3 = T, and that node 1 has side information about the source, Y_1 = Y, while node 2 has no side information. Also, suppose that node 3 sends a common description to both nodes 1 and 2 and an individual description to node 1 only, as is implicitly done in [13]. We can show that the sum of the rates of these two descriptions derived from our inner bound is equal to the rate distortion function proved for the sum rate in [13]. We omit the proof to conserve space and refer the interested reader to [11].

We can also show that the sum rate result proved for the general

problem with degraded side information can be retrieved from our

inner bound.

V. CONCLUSION

We analyzed optimized code constructions for collaborative distributed estimation via multiterminal information theory. We argued that the proper model for a distributed source code for collaborative distributed estimation involves multiple multicast messages from each encoder rather than unicast messages, yielding a hybrid coding problem between multiple descriptions and the CEO problem. An achievable rate region which hybridizes the Berger-Tung inner bound and multiple descriptions proof techniques was presented. The inner bound was shown to be equal to the known bounds for some simpler problems by exploiting the structural properties of the rate region.

APPENDIX A

PROOF OF THEOREM

We present a sketch of the proof of the inner bound given in Section III here.

Proof: Select a joint conditional distribution p(u_S | t, y_[M]), a set of encoding functions {f^N_{j→A} | (j → A) ∈ S} and a set of decoding functions {g^N_i | i ∈ [M]} such that the rates R_S are in RD_in. Calculate the marginal distributions p(u_{j→A}).

Codebook Generation: At each node j ∈ [M], for each subset of nodes A ∈ 2^{[M]\j}, generate a codebook with 2^{N R̃_{j→A}} length-N codewords by randomly drawing the elements such that they are i.i.d. according to the distribution p(u_{j→A}), where Σ_{(j→A)∈P_j} R̃_{j→A} > Σ_{(j→A)∈P_j} H(U_{j→A}) − H(U_{P_j} | Y_j) for each P_j ⊆ S_j. Index the codewords by m_{j→A} ∈ {1, ..., 2^{N R̃_{j→A}}}. Partition the codewords into 2^{N R_{j→A}} bins by randomly and uniformly assigning the indices to the bins. Index the bins by b_{j→A} ∈ {1, ..., 2^{N R_{j→A}}} and denote the set of codewords in bin b_{j→A} by B_{j→A}(b_{j→A}).

Encoding: At each node j ∈ [M], encode the observation sequence Y^N_j by selecting one codeword U^N_{j→A}(m_{j→A}) from each codebook C_{j→A}, for each (j → A) ∈ S_j, such that (U^N_{S_j}(m_{S_j}), Y^N_j) ∈ A*_ε(U_{S_j}, Y_j), where A*_ε is the set of strongly typical sequences. If there is more than one such U^N_{S_j}(m_{S_j}), select the codewords with the smallest indices under lexicographic ordering. If there is no such U^N_{S_j}(m_{S_j}), select an arbitrary set of codewords. For each subset of nodes A ∈ 2^{[M]\j}, send the index b_{j→A} of the bin that contains U^N_{j→A}(m_{j→A}), i.e. U^N_{j→A}(m_{j→A}) ∈ B_{j→A}(b_{j→A}), to the nodes in A. This requires R_{j→A} bits to multicast a message to the subset of nodes A.

Decoding: At each node i ∈ [M], decode the messages received at the node by selecting the codeword U^N_{j→A}(ℓ_{j→A}) in bin B_{j→A}(b_{j→A}) for each (j → A) ∈ D_i such that (U^N_{D_i}(ℓ_{D_i}), Y^N_i) ∈ A*_ε(U_{D_i}, Y_i), where U_{D_i} := (U_{j→A})_{(j→A)∈D_i}. If there is no such set of codewords, select an arbitrary set of codewords. Reproduce the underlying sequence T^N by T̂^N_i = g^N_i(Y^N_i, U^N_{D_i}(ℓ_{D_i})). □
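The binning mechanism in the proof, transmitting only a bin index and letting the decoder resolve it against side information, can be sketched in miniature. The snippet below (our toy, not the paper's exact typicality-based construction; Hamming closeness stands in for joint typicality, and all parameters are illustrative) shows an encoder describing a 10-bit observation with a 7-bit bin index:

```python
# Minimal random-binning sketch: the encoder sends a bin index (a random hash)
# of its length-N observation; the decoder searches the bin for a sequence
# close to its side information, mirroring the codebook binning above.
import itertools, random

random.seed(1)
N, BIN_BITS = 10, 7

# random uniform assignment of every length-N sequence to one of 2^BIN_BITS bins
bin_of = {x: random.randrange(2 ** BIN_BITS)
          for x in itertools.product((0, 1), repeat=N)}

x = tuple(random.randrange(2) for _ in range(N))   # encoder's observation
y = list(x); y[3] ^= 1; y = tuple(y)               # side info: x with one bit flipped

b = bin_of[x]                                       # encoder transmits 7 bits, not 10

def hamming(a, c):
    return sum(ai != ci for ai, ci in zip(a, c))

# decoder: sequences in bin b within Hamming distance 1 of the side information
candidates = [c for c in bin_of if bin_of[c] == b and hamming(c, y) <= 1]
assert x in candidates  # the true sequence always survives the search
```

When the bin rate exceeds the conditional uncertainty of the observation given the side information, the surviving candidate is unique with high probability, which is the role the rate inequalities of (5) and (6) play in the actual proof.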

REFERENCES

[1] T. S. Han and K. Kobayashi, “A unified achievable rate region for a gen-

eral class of multiterminal source coding systems,” IEEE Transactions

on Information Theory, vol. IT-26, no. 3, pp. 277–288, May 1980.

[2] A. B. Wagner and V. Anantharam, “An improved outer bound for

multiterminal source coding,” IEEE Transactions on Information Theory,

vol. 54, no. 5, pp. 1919–1937, May 2008.

[3] T. Berger, Z. Zhang, and H. Viswanathan, "The CEO problem," IEEE Transactions on Information Theory, vol. 42, no. 3, pp. 887–902, May 1996.

[4] J. Chen, X. Zhang, T. Berger, and S. B. Wicker, "An upper bound on the sum-rate distortion function and its corresponding rate allocation schemes for the CEO problem," IEEE Journal on Selected Areas in Communications, vol. 22, no. 6, pp. 977–987, August 2004.

[5] Y. Oohama, “Gaussian multiterminal source coding,” IEEE Transactions

on Information Theory, vol. IT-43, no. 6, pp. 1912–1923, November

1997.

[6] A. A. El Gamal and T. M. Cover, “Achievable rates for multiple

descriptions,” IEEE Transactions on Information Theory, vol. IT-28,

no. 6, pp. 851–857, November 1982.

[7] S. C. Draper and G. W. Wornell, “Side information aware coding

strategies for sensor networks,” IEEE Journal on Selected Areas in

Communications, vol. 22, no. 6, pp. 966–976, August 2004.

[8] A. D. Wyner and J. Ziv, “The rate-distortion function for source coding

with side information at the decoder,” IEEE Transactions on Information

Theory, vol. IT-22, no. 1, pp. 1–10, January 1976.

[9] Y. Oohama, "Rate-distortion theory for Gaussian multiterminal source coding systems with several side informations at the decoder," IEEE Transactions on Information Theory, vol. 51, no. 7, pp. 2577–2593, July 2005.

[10] S.-Y. R. Li, R. W. Yeung, and N. Cai, “Linear network coding,”

IEEE Transactions on Information Theory, vol. 49, no. 2, pp. 371–381,

February 2003.

[11] J. M. Walsh and S. Ramanan, “Coding perspectives for collaborative

distributed estimation over networks,” proof of inner bound. [Online].

Available: http://www.ece.drexel.edu/walsh/web/listOfPubs.html

[12] D. N. C. Tse and S. V. Hanly, "Multiaccess fading channels, part I: Polymatroid structure, optimal resource allocation and throughput capacities," IEEE Transactions on Information Theory, vol. 44, no. 7, pp. 2796–2815, November 1998.

[13] C. Heegard and T. Berger, “Rate distortion when side information may

be absent,” IEEE Transactions on Information Theory, vol. 31, no. 6,

pp. 727–734, November 1985.