Joint Source-Channel Coding with Correlated Interference

ABSTRACT: We study the joint source-channel coding problem of transmitting a discrete-time analog source over an additive white Gaussian noise (AWGN) channel with interference known at the transmitter. We consider the case when the source and the interference are correlated. We first derive an outer bound on the achievable distortion, and we then propose two joint source-channel coding schemes. The first scheme is the superposition of the uncoded signal and a digital part which is the concatenation of a Wyner-Ziv encoder and a dirty paper encoder. In the second scheme, the digital part is replaced by the hybrid digital and analog scheme proposed by Wilson et al. When the channel signal-to-noise ratio (SNR) is perfectly known at the transmitter, both proposed schemes are shown to provide identical performance which is substantially better than that of existing schemes. In the presence of an SNR mismatch, both proposed schemes are shown to be capable of graceful enhancement and graceful degradation. Interestingly, unlike the case when the source and interference are independent, neither of the two schemes outperforms the other universally. As an application of the proposed schemes, we provide both inner and outer bounds on the distortion region for the generalized cognitive radio channel.
arXiv:1009.0304v2 [cs.IT] 28 Feb 2011
Joint Source-Channel Coding with Correlated Interference
YuChih Huang and Krishna R. Narayanan
Department of Electrical and Computer Engineering
Texas A&M University
{jerry.yc.huang@gmail.com, krn@ece.tamu.edu}
Abstract
In this paper, we study the joint source-channel coding problem of transmitting a discrete-time analog source over an additive white Gaussian noise (AWGN) channel with interference known at the transmitter. We consider the case when the source and the interference are correlated. We first derive an outer bound on the achievable distortion, and we then propose two joint source-channel coding schemes that make use of the correlation between the source and the interference. The first scheme is the superposition of the uncoded signal and a digital part which is the concatenation of a Wyner-Ziv encoder and a dirty paper encoder. In the second scheme, the digital part is replaced by a hybrid digital and analog scheme, so that the proposed scheme can provide graceful degradation in the presence of signal-to-noise ratio (SNR) mismatch. Interestingly, unlike the independent interference setup, we show that neither of the two schemes outperforms the other universally in the presence of SNR mismatch. These coding schemes are further utilized to obtain an achievable distortion region for the generalized cognitive radio channel.
Index Terms
Distortion region, joint source-channel coding, cognitive radios.
I. INTRODUCTION AND PROBLEM STATEMENT
In this paper, we consider transmitting a length-n i.i.d. zero-mean Gaussian source V^n = (V(1), V(2), ..., V(n)) over n uses of an additive white Gaussian noise (AWGN) channel with noise Z^n ∼ N(0, N·I), in the presence of Gaussian interference S^n which is known at the transmitter, as shown in Fig. 1. Throughout the paper, we focus only on the bandwidth-matched case, i.e., the number of channel uses is equal to the source length. The transmitted signal X^n = (X(1), X(2), ..., X(n)) is subject to the power constraint

(1/n) Σ_{i=1}^{n} E[X(i)²] ≤ P,    (1)

where E[·] denotes expectation. The received signal Y^n is given by

Y^n = X^n + S^n + Z^n.    (2)
We are interested in the expected distortion between the source and the estimate V̂^n at the output of the decoder, given by

d = E[ d(V^n, g(f(V^n, S^n) + S^n + Z^n)) ],    (3)

where f and g are a pair of source-channel encoder and decoder, respectively, and d(·,·) is the mean squared error (MSE) distortion measure

d(v, v̂) = (1/n) Σ_{i=1}^{n} (v(i) − v̂(i))².    (4)

Here, lower-case letters represent realizations of the random variables denoted by the corresponding upper-case letters. As in [1], a distortion D is achievable under power constraint P if for every ε > 0 there exist a source-channel code and a sufficiently large n such that d ≤ D + ε.

Fig. 1. Joint source-channel coding with interference known at the transmitter.

When V and S are uncorrelated, it is known that an optimal quantizer followed by Costa's dirty paper coding (DPC) [2] is optimal, and the corresponding joint source-channel coding problem is fully discussed in [3]. However, different from the
typical writing on dirty paper problem, in this paper we consider the case where the source and the interference are correlated, with covariance matrix given in (5) below. Under this assumption, naive separate source and channel coding using DPC may not be a good candidate for encoding V^n in general: DPC tries to completely avoid the interference without a signal-to-noise ratio (SNR) penalty, and hence cannot take advantage of the correlation between the source and the interference. In this paper, we first
derive an outer bound on the achievable distortion region and then, we propose two joint sourcechannel coding schemes which
exploit the correlation between Vnand Sn, thereby outperforming the naive DPC scheme. The first scheme is a superposition
of the uncoded scheme and a digital part formed by a WynerZiv coding [4] followed by a DPC, which we refer to as a
superpositionbased scheme with digital DPC (or just the superpositionbased scheme). The second scheme is obtained by
replacing the digital part by a hybrid digital and analog (HDA) scheme given in [3] that has been shown to provide graceful
degradation under an SNR mismatch. We then analyze the performance of these two proposed schemes for SNR mismatch
cases. It is shown that both the HDA scheme and the superpositionbased digital scheme benefit from a higher SNR; however,
interestingly, their performances are different.
One interesting application of this problem is to derive an achievable distortion region for the generalized cognitive radio channels considered in [5] (and in [6]). This channel can be modeled as a typical two-user interference channel except that one of the users knows exactly what the other plans to transmit. We can regard the informed user's channel as the setup considered here and then analyze achievable distortion regions for several different cases.
The rest of the paper is organized as follows. In section II, we present prior work closely related to ours. The outer bound is given in section III, and the two proposed schemes are given in section IV. In section V, we analyze the performance of the proposed schemes under SNR mismatch. The proposed schemes are then extended to the generalized cognitive radio channels in section VI. Conclusions are given in section VII.
Λ_VS = [ σ_V²       ρσ_Vσ_S
         ρσ_Vσ_S    σ_S²    ].    (5)
II. RELATED WORKS ON JSCC WITH INTERFERENCE KNOWN AT TRANSMITTER
In [7], Lapidoth et al. consider the 2×1 multiple access channel in which two transmitters wish to communicate their sources, drawn from a bivariate Gaussian distribution, to a receiver interested in reconstructing both sources. There are some similarities between the work in [7] and ours. However, an important difference is that the transmitters are not allowed to cooperate with each other; i.e., for a given transmitter, the interference is not known.
In [8], Tian et al. consider transmitting a bivariate Gaussian source over a 1×2 Gaussian broadcast channel. In their setup, the source consists of two components V_1^n and V_2^n that are memoryless, stationary, and bivariate Gaussian distributed, and each receiver is only interested in one part of the source. They proposed an HDA scheme which performs optimally in terms of distortion region under all SNRs. At first glance, this problem is again similar to ours if we ignore receiver 2 and focus on the other receiver; the problem then reduces to communicating V_1^n with correlated side-information V_2^n given at the transmitter. A crucial difference is that this side-information does not appear in the received signal.

Joint source-channel coding for point-to-point communication over Gaussian channels has been widely discussed, e.g., in [3], [9], [10]. However, these works either do not consider interference ([9], [10]) or assume independence of the source and the interference ([3]). In [3], Wilson et al. proposed an HDA coding scheme for the typical writing on dirty paper problem in which the source is independent of the interference. This HDA scheme was originally proposed to perform well in the case of an SNR mismatch: the authors showed that it not only achieves the optimal distortion in the absence of SNR mismatch but also provides graceful degradation in the presence of SNR mismatch. In the following sections, we will discuss this scheme in detail and then propose a coding scheme based on it.

From now on, since all the random variables we consider are i.i.d. in time, i.e., V(i) is independent of V(j) for i ≠ j, we will drop the index i for convenience.
III. OUTER BOUNDS
A. Outer Bound 1
For comparison, we first present a genie-aided outer bound, derived in a way similar to the one in [11]: we assume that S is revealed to the decoder by a genie. Thus, we have

(1/2) log( σ_V²(1 − ρ²) / D_ob )
  (a)≤ I(V; V̂ | S)
  (b)≤ I(V; Y | S)
  = h(Y | S) − h(Y | S, V)
  = h(X + Z | S) − h(Z)
  (c)≤ h(X + Z) − h(Z)
  (d)≤ (1/2) log( 1 + P/N ),    (6)

where (a) follows from rate-distortion theory [1], (b) from the data processing inequality, (c) from the fact that conditioning reduces differential entropy, and (d) from the fact that the Gaussian density maximizes differential entropy. Therefore, we have the outer bound

D_ob,1 = σ_V²(1 − ρ²) / (1 + P/N).    (7)
Note that this outer bound is in general not tight for our setup since, in the presence of correlation, giving S to the decoder also offers a correlated version of the source that we wish to estimate. For example, in the case of ρ = 1, giving S to the decoder implies that the outer bound is D_ob = 0 no matter what the received signal Y is. On the other hand, if ρ = 0, the setup reduces to the one with uncorrelated interference, and we know that this outer bound is tight. We now present another outer bound that improves upon this one for some values of ρ.
B. Outer Bound 2
Since S and V are drawn from a jointly Gaussian distribution with the covariance matrix given in (5), we can write

S = ρ (σ_S/σ_V) V + N_ρ,    (8)

where N_ρ ∼ N(0, (1 − ρ²)σ_S²) is independent of V. Now, suppose a genie reveals only N_ρ to the decoder. We then have
(1/2) log( σ_V² / D_ob,2 ) = (1/2) log( var(V | N_ρ) / D_ob,2 )
  (a)≤ I(V; V̂ | N_ρ)
  (b)≤ I(V; Y | N_ρ)
  = h(Y | N_ρ) − h(Y | N_ρ, V)
  = h(X + ρ(σ_S/σ_V)V + Z | N_ρ) − h(Z)
  (c)≤ h(X + ρ(σ_S/σ_V)V + Z) − h(Z)
  (d)≤ (1/2) log( var(X + ρ(σ_S/σ_V)V + Z) / N )
  (e)≤ (1/2) log( 1 + (√P + ρσ_S)² / N ),    (9)

where (a)-(d) follow for the same reasons as in the previous outer bound, and (e) is due to the Cauchy-Schwarz inequality, the maximum occurring when X and V are collinear. Thus, we have

D_ob,2 = σ_V² / ( 1 + (√P + ρσ_S)² / N ).    (10)
Note that although the encoder knows the interference S exactly instead of just Nρ, the outer bound is valid since S is a
function of V and Nρ.
Remark 1: If ρ = 0, this outer bound reduces to the previous one and is tight. If ρ = 1, the genie actually reveals nothing to the decoder, and the setup reduces to the one considered in [12], in which the encoder is interested in revealing the interference to the decoder; for this case, we know that this outer bound is tight. However, this outer bound is in general optimistic except at the two extremes. This is because, in the derivation, we assume that we can simultaneously ignore N_ρ and use all the power to take advantage of the coherent part. Despite this, the outer bound still provides the insight that a good coding scheme should use a portion of the power to exploit the correlation and the remaining power to avoid N_ρ.
Further, it is natural to combine these two outer bounds as

D_ob = max{ D_ob,1, D_ob,2 }.    (11)
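As a quick numerical illustration, the combined bound (11) can be evaluated directly from (7) and (10). The sketch below uses hypothetical parameter values (not taken from the paper's figures):

```python
import math

def outer_bound(P, N, sigma_V, sigma_S, rho):
    """Combined outer bound D_ob = max{D_ob,1, D_ob,2} of (11)."""
    # Genie reveals S to the decoder, eq. (7).
    D_ob1 = sigma_V**2 * (1 - rho**2) / (1 + P / N)
    # Genie reveals only N_rho to the decoder, eq. (10).
    D_ob2 = sigma_V**2 / (1 + (math.sqrt(P) + rho * sigma_S)**2 / N)
    return max(D_ob1, D_ob2)

# At rho = 0 the two bounds coincide; as rho -> 1, bound 1 collapses to 0
# and bound 2 takes over.
for rho in (0.0, 0.3, 1.0):
    print(rho, outer_bound(10.0, 1.0, 1.0, 1.0, rho))
```

At ρ = 0 both expressions reduce to σ_V²/(1 + P/N), while at ρ = 1 only the second bound is informative, consistent with Remark 1 below.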
IV. PROPOSED SCHEMES
A. Uncoded Scheme
We first analyze the distortion of the uncoded scheme, in which the transmitted signal is simply a scaled version of the source:

X = √(P/σ_V²) V.    (12)

Thus, (2) becomes

Y = √(P/σ_V²) V + S + Z.    (13)

The receiver forms the linear MMSE estimate of V from Y as V̂ = βY, where

β = σ_V² ( √(P/σ_V²) + ρσ_S/σ_V ) / ( P + σ_S² + N + 2√(P/σ_V²) ρσ_Vσ_S ).    (14)

The corresponding distortion is then given by

D_unc = σ_V² [ 1 − β ( √(P/σ_V²) + ρσ_S/σ_V ) ].    (15)

Remark 2: If ρ = 1 and σ_V² = σ_S², the source and the interference are exactly the same, and the problem reduces to transmitting V over an AWGN channel Z with power constraint (√P + σ_V)². From [13], [14], we know that the uncoded scheme is optimal for this case. One can also think of this scenario as one in which the transmitter is only interested in revealing the channel state S to the receiver; in [12], the authors have shown that the pure amplification (uncoded) scheme is optimal for this problem. Therefore, we can expect that the uncoded scheme will eventually achieve the optimal distortion when ρ = 1.
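The closed form (14)-(15) is easy to sanity-check by simulation. Below is a minimal sketch (parameter values are hypothetical, not the paper's) comparing D_unc against the empirical MSE of the uncoded scheme:

```python
import math
import random

def beta_unc(P, N, sigma_V, sigma_S, rho):
    """LMMSE coefficient of eq. (14)."""
    c = math.sqrt(P) / sigma_V  # the scaling sqrt(P / sigma_V^2) of eq. (12)
    return (sigma_V**2 * (c + rho * sigma_S / sigma_V)
            / (P + sigma_S**2 + N + 2 * c * rho * sigma_V * sigma_S))

def uncoded_distortion(P, N, sigma_V, sigma_S, rho):
    """Closed-form distortion D_unc of eq. (15)."""
    c = math.sqrt(P) / sigma_V
    b = beta_unc(P, N, sigma_V, sigma_S, rho)
    return sigma_V**2 * (1 - b * (c + rho * sigma_S / sigma_V))

def uncoded_mse_monte_carlo(P, N, sigma_V, sigma_S, rho, trials=100_000, seed=0):
    """Empirical MSE of beta*Y for Y = sqrt(P/sigma_V^2) V + S + Z."""
    rng = random.Random(seed)
    c = math.sqrt(P) / sigma_V
    b = beta_unc(P, N, sigma_V, sigma_S, rho)
    total = 0.0
    for _ in range(trials):
        v = rng.gauss(0.0, sigma_V)
        # S = rho*(sigma_S/sigma_V)*V + N_rho, as in the decomposition (8).
        s = rho * sigma_S / sigma_V * v \
            + math.sqrt(1 - rho**2) * sigma_S * rng.gauss(0.0, 1.0)
        y = c * v + s + rng.gauss(0.0, math.sqrt(N))
        total += (v - b * y)**2
    return total / trials

theory = uncoded_distortion(10.0, 1.0, 1.0, 1.0, 0.3)
empirical = uncoded_mse_monte_carlo(10.0, 1.0, 1.0, 1.0, 0.3)
```

For these parameters the empirical MSE agrees with (15) to within Monte Carlo noise.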
B. Naive DPC Scheme
Another existing scheme is the concatenation of an optimal source code and DPC. The optimal source code quantizes the analog source at a rate arbitrarily close to the channel capacity (1/2) log(1 + P/N). The DPC then ignores the correlation between the source and the interference (this can be done by a randomization and de-randomization pair) and encodes the quantization output accordingly. Since DPC achieves the same rate as if there were no interference at all, the receiver can correctly decode these digital bits with high probability. By rate-distortion theory, the corresponding distortion is

D_DPC = σ_V² / (1 + P/N).    (16)
Remark 3: In the absence of correlation, i.e., ρ = 0, the problem reduces to the typical writing on dirty paper setup and it
is known that this scheme is optimal but the uncoded scheme is strictly suboptimal. Therefore, we can expect that when the
correlation is small, this naive DPC scheme will outperform the uncoded scheme.
C. Superposition-Based Scheme with Digital DPC

We now propose a superposition-based scheme which retains the advantages of the above two schemes. This scheme can be regarded as an extension of the coding scheme in [10] to the setup we consider. As shown in Fig. 2, the transmitted signal is the superposition of the analog part X_a with power P_a and the digital part X_d with power P − P_a. The motivation is to allocate some power to the analog part to make use of the interference, which is somewhat coherent with the source for large ρ, and to assign more power to the digital part to avoid the interference when ρ is small. The analog part is a scaled linear combination of the source and the interference:

X_a = √a ( γV + (1 − γ)S ),    (17)

where P_a ∈ [0, P], a = P_a/σ_a², γ ∈ [0, 1], and

σ_a² = γ²σ_V² + (1 − γ)²σ_S² + 2γ(1 − γ)ρσ_Vσ_S.    (18)
The received signal is given by

Y = X_d + X_a + S + Z
  = X_d + √a(γV + (1 − γ)S) + S + Z
  = X_d + √a γV + (1 + √a(1 − γ)) S + Z
  = X_d + S′ + Z,    (19)
Fig. 2. Superposition-based scheme.
where X_d is chosen to be orthogonal to S and V. The receiver first makes an estimate from Y alone as V′ = βY, with

β = ( √a(γσ_V² + (1 − γ)ρσ_Vσ_S) + ρσ_Vσ_S ) / ( P + N + σ_S² + 2√a((1 − γ)σ_S² + γρσ_Vσ_S) ).    (20)

The corresponding MSE is

D* = σ_V² [ 1 − β ( √a(γ + (1 − γ)ρσ_S/σ_V) + ρσ_S/σ_V ) ].    (21)

Thus, we can write V = V′ + W with W ∼ N(0, D*).

We now refine the estimate through the digital part, which is the concatenation of Wyner-Ziv coding and DPC. Since DPC achieves the same rate as if there were no interference at all, the encoder can use the remaining power P − P_a to reliably transmit the refining bits T at a rate arbitrarily close to

R = (1/2) log( 1 + (P − P_a)/N ).    (22)

The resulting distortion after refinement is then given by

D_sep = inf_{γ, P_a} D* / ( 1 + (P − P_a)/N ).    (23)
In Appendix A, for self-containedness, we briefly summarize the digital Wyner-Ziv scheme to illustrate how to achieve the above distortion.

It is worth noting that setting γ = 1 always gives the lowest distortion; i.e., superimposing S onto the transmitted signal is completely unnecessary. However, this is in general not true for the cognitive radio setup; we discuss this in detail in section VI.
Remark 4: Different from the setup considered in [10], in which the optimal distortion can be achieved by any power allocation between the coded and uncoded transmissions, in our setup the optimal distortion is in general achieved by a particular power allocation that is a function of ρ. For example, in the absence of correlation, i.e., when S is completely independent of V, one can simply set P_a = 0, and the scheme reduces to the naive DPC, which is optimal in this case. On the other hand, if ρ = 1, the optimal distortion is achieved by setting P_a = P. Moreover, for ρ > 0, it is beneficial to have a nonzero P_a to make use of the correlation between the source and the interference.
D. HDA Scheme
Now, let us focus on the HDA scheme shown in Fig. 3, obtained by replacing the digital part in Fig. 2 with the HDA scheme given in [3]. The analog signal remains the same as in (17), and the HDA output is denoted by X_h. Therefore, we have

Y = X_h + √a γV + (1 + √a(1 − γ)) S + Z
  = X_h + S′ + Z.    (24)

Again, the HDA scheme regards S′ as interference and V′, described previously, as side-information. The encoding and decoding procedures are similar to those in [3], but the coefficients need to be re-derived to fit our setup (the reader is referred to [3] for details).

Let the auxiliary random variable U be

U = X_h + αS′ + κV,    (25)

where X_h ∼ N(0, P_h) is independent of S′ and V, and P_h = P − P_a. The covariance matrix of S′ and V can be computed from (5).
Fig. 3. HDA scheme.
Codebook Generation: Generate a random i.i.d. codebook U with 2^{nR_1} codewords, and reveal this codebook to both transmitter and receiver.
Encoding: Given realizations s′ and v, find a u ∈ U such that (s′, v, u) is jointly typical. If such a u can be found, transmit x_h = u − αs′ − κv. Otherwise, an encoding failure is declared.
Decoding: The decoder looks for a û such that (y, v′, û) is jointly typical. A decoding failure is declared if none, or more than one, such û is found. It is shown in [3] that if n → ∞ and the condition described later is satisfied, the probability of û ≠ u tends to 0.
Estimation: After decoding u, the receiver forms a linear MMSE estimate of v from y and u. The distortion is then obtained as

D_hda = inf_{γ, P_a} ( σ_V² − Γᵀ Λ_UY⁻¹ Γ ),    (26)

where Λ_UY is the covariance matrix of (U, Y) and

Γ = [ E[VU], E[VY] ]ᵀ.    (27)
In the encoding step, to make sure the probability of encoding failure vanishes with increasing n, we require

R_1 > I(U; S′, V)
  = h(U) − h(U | S′, V)
  = h(U) − h(X_h + αS′ + κV | S′, V)
  (a)= h(U) − h(X_h)
  = (1/2) log( E[U²] / P_h ),    (28)

where (a) follows because X_h is independent of S′ and V.
Further, to guarantee the decodability of U in the decoding step, one requires

R_1 < I(U; Y, V′)
  = h(U) − h(U | Y, V′)
  = h(U) − h(U − αY − κV′ | Y, V′)
  (a)= h(U) − h(κW + (1 − α)X_h − αZ | Y),    (29)

where (a) follows from V′ = βY. By choosing

α = P_h / (P_h + N)    (30)

and

κ² = P_h² / ( (P_h + N) D* ),    (31)

one can verify that (28) and (29) are satisfied. Note that in (28) what we really need is R_1 ≥ I(U; S′, V) + ε, and in (29) it is R_1 ≤ I(U; Y, V′) − δ; however, since ε and δ can be made arbitrarily small, these are omitted for convenience and clarity.
Remark 5: It can be verified that the distortions in (23) and (26) are exactly the same. However, it has been shown in [3]
that the HDA scheme can provide graceful degradation in the SNR mismatch case.
Fig. 4. P/N vs. D, ρ = 0.3. (Axes: SNR (dB) vs. −10 log10(D); curves: Uncoded, Naive DPC, Superposition-based, HDA, Outer bound.)
Fig. 5. ρ vs. D with σ_V = σ_S = 1 and P/N = 10. (Axes: ρ vs. −10 log10(D); curves: Uncoded, Naive DPC, Superposition-based, HDA, Outer bound 1, Outer bound 2.)
E. Numerical Results

In Fig. 4, we plot the distortion (as −10 log10(D)) for the coding schemes and outer bounds described above as a function of SNR. In this figure, we set σ_V² = σ_S² = 1 and ρ = 0.3. As expected, the two proposed schemes have exactly the same performance. Moreover, for this case, these two schemes not only outperform the others but also closely approach the outer bound (the maximum of the two).

We then fix the SNR and plot the distortion as a function of ρ in Fig. 5. The parameters are set to σ_V² = σ_S² = 1, P = 10, and N = 1. As discussed in Remarks 2 and 3, the naive DPC scheme performs optimally when ρ = 0 and outperforms the uncoded scheme in the small-ρ regime. However, the uncoded scheme outperforms the naive DPC scheme in the large-ρ regime and achieves the optimum when ρ = 1. Further, it can be seen that the two proposed schemes perform exactly the same, and the achievable distortion region with the proposed scheme is larger than those achievable with the naive DPC and uncoded schemes. Although the proposed schemes perform close to the outer bound over a wide range of ρ, the outer bound and the inner bound do not coincide, leaving room for improvement in either the outer bound or the schemes.
V. PERFORMANCE ANALYSIS IN THE PRESENCE OF SNR MISMATCH
In this section, we study the distortion of the proposed schemes in the presence of SNR mismatch; i.e., instead of knowing the exact channel SNR, the transmitter knows only a lower bound on it. Specifically, we assume that the actual channel noise is Z_a ∼ N(0, N_a), but the transmitter only knows that N_a ≤ N, so it designs its coefficients for this N. In what follows, we analyze the performance of both proposed schemes under this assumption.
A. Superposition-Based Scheme with Digital DPC

Since the transmitter designs its coefficients for N, it aims to achieve the distortion D_sep given in (23). It first quantizes the source to T by Wyner-Ziv coding with side-information quality D* given in (21), and then encodes the quantization output by a DPC with rate

R = (1/2) log( 1 + (P − P̃_a)/N ),    (32)

where P̃_a is the power allotted to X_a such that the distortion in the absence of SNR mismatch is minimized, i.e.,

P̃_a = arg inf_{P_a} D* / ( 1 + (P − P_a)/N ).    (33)

At the receiver, since N_a ≤ N, the DPC decoder can correctly decode T with high probability. Moreover, the receiver forms the linear MMSE estimate of V from Y as V′_a = β_a Y, with

β_a = ( √a(γσ_V² + (1 − γ)ρσ_Vσ_S) + ρσ_Vσ_S ) / ( P + N_a + σ_S² + 2√a((1 − γ)σ_S² + γρσ_Vσ_S) ),    (34)

D*_a = σ_V² [ 1 − β_a ( √a(γ + (1 − γ)ρσ_S/σ_V) + ρσ_S/σ_V ) ].    (35)

Thus, the problem reduces to the Wyner-Ziv problem with mismatched side-information. In Appendix B, we show that for this problem, one can achieve

D_sep,mis = D* D*_a D_sep / ( D* D*_a + (D* − D*_a) D_sep ).    (36)
(36)
Unlike the typical separation-based scheme in [3], the proposed superposition-based scheme (whose digital part can be regarded as a separation-based scheme) can still take advantage of a better channel through the mismatched side-information; i.e., this scheme does not suffer from the pronounced "threshold effect".
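Equation (36) can equivalently be read as adding the precision gained from the refinement bits, 1/D_sep − 1/D*, to the precision 1/D*_a of the actual analog estimate: 1/D_sep,mis = 1/D_sep + 1/D*_a − 1/D*. A small sketch (with hypothetical γ and P_a, and illustrative parameters) showing the resulting graceful enhancement as N_a decreases:

```python
import math

def analog_quality(P, noise, sigma_V, sigma_S, rho, gamma, Pa):
    """Analog-estimate quality: D* of eq. (21) / D*_a of eq. (35) at a noise level."""
    sa2 = (gamma**2 * sigma_V**2 + (1 - gamma)**2 * sigma_S**2
           + 2 * gamma * (1 - gamma) * rho * sigma_V * sigma_S)
    sq = math.sqrt(Pa / sa2)
    EVY = sq * (gamma * sigma_V**2 + (1 - gamma) * rho * sigma_V * sigma_S) \
          + rho * sigma_V * sigma_S
    EY2 = (P + noise + sigma_S**2
           + 2 * sq * ((1 - gamma) * sigma_S**2 + gamma * rho * sigma_V * sigma_S))
    return sigma_V**2 - EVY**2 / EY2

def mismatch_distortion(P, N_design, N_actual, sigma_V, sigma_S, rho, gamma, Pa):
    """D_sep,mis of eq. (36); assumes N_actual <= N_design."""
    Dstar = analog_quality(P, N_design, sigma_V, sigma_S, rho, gamma, Pa)
    Dstar_a = analog_quality(P, N_actual, sigma_V, sigma_S, rho, gamma, Pa)
    Dsep = Dstar / (1 + (P - Pa) / N_design)   # design-point target, eq. (23)
    return (Dstar * Dstar_a * Dsep
            / (Dstar * Dstar_a + (Dstar - Dstar_a) * Dsep))
```

At N_actual = N_design the formula collapses to the design-point D_sep; for any lower actual noise it is strictly smaller, since D*_a < D*.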
B. HDA Scheme

Although it is shown in Appendix B that the performance of the HDA scheme is exactly the same as that of the digital Wyner-Ziv scheme under side-information mismatch, the HDA scheme cannot be reduced to the Wyner-Ziv problem with mismatched side-information as was done for the superposition-based scheme. This is because the HDA scheme still makes an estimate of V from U, which is a function of S. Fortunately, as shown in [3], the HDA scheme is capable of benefiting from an SNR better than the design value.

Similar to the superposition-based scheme, we design the coefficients for the channel parameter N. The HDA scheme regards V′ (with quality D*) as side-information and S′ as interference. It generates the auxiliary random variable U given by (25), with the coefficients in (30) and (31). Since N_a ≤ N, the receiver can correctly decode U with high probability. The receiver then forms the linear MMSE estimate as described in (26) and (27). Note that E[Y²] in Λ_UY should be modified appropriately to account for the actual noise variance N_a.
Remark 6: In [12], the optimal tradeoff between the achievable rate and the error in estimating the interference at the designed SNR is studied. In [3], the authors also studied a somewhat similar problem: they compared the distortions of the digital scheme and the HDA scheme in estimating the source V and the interference S as one moves away from the designed SNR. One important observation is that the HDA scheme outperforms the separation-based scheme in estimating the source, while the separation-based scheme is better if one is interested in estimating the interference. Here, since the effective interference S′ includes the uncoded signal √a V in part and the source is assumed to be correlated with the interference, estimating the source V is equivalent to estimating a part of S′. Thus, one can expect that if the chosen P_a and the correlation ρ are large enough, the benefit of using the HDA scheme to estimate the source may be less than that of adopting the superposition-based scheme to estimate a part of S′. Consequently, for sufficiently large P_a and ρ, the superposition-based scheme may be better than the HDA scheme in the presence of SNR mismatch.
C. Numerical Results

Now, we compare the performance of the above two schemes and a scheme that knows the actual SNR. The parameters are set to σ_V² = σ_S² = 1. We plot −10 log10(D) as we move away from the designed SNR for both small (ρ = 0.1) and large (ρ = 0.5) correlations. Two examples, for designed SNR = 0 dB and 10 dB, are given in Fig. 6 and Fig. 7, respectively.

In Fig. 6, we consider the case where the designed SNR is 0 dB, which is relatively small compared to the variance of the interference. For this case, we can see that which scheme performs better in the presence of SNR mismatch really depends on ρ. This can be explained by the observations made in Remark 6 and the power allocation strategy: here, the optimal power allocation P̃_a is proportional to ρ. For the ρ = 0.1 case, since the correlation is small and the assigned P̃_a is also small, the
Fig. 6. SNR mismatch case for small designed SNR. (Axes: SNR (dB) vs. −10 log10(D); curves: Superposition-based, HDA, and Actual SNR, each for ρ = 0.1 and ρ = 0.5.)
Fig. 7. SNR mismatch case for large designed SNR. (Axes: SNR (dB) vs. −10 log10(D); curves: Superposition-based, HDA, and Actual SNR, each for ρ = 0.1 and ρ = 0.5.)
HDA scheme is better than the superposition-based scheme. On the other hand, for the ρ = 0.5 case, we allot relatively large power P̃_a, so one may obtain a better estimate by using the superposition-based scheme to estimate a part of S′. This property is further discussed in Appendix C.

In Fig. 7, we design the coefficients for SNR = 10 dB, which can be regarded as a relatively large SNR compared to the variance of the interference. For this case, the optimal power allocations P̃_a for both ρ = 0.1 and ρ = 0.5 are relatively small. Therefore, the performance improvement provided by the HDA scheme is larger than that provided by the superposition-based scheme in both cases.

In Fig. 8, we plot the proposed schemes with different choices of P_a under the same channel parameters as in the previous figure for ρ = 0.1. We observe that, for both schemes, if we compromise optimality at the designed SNR, it is possible to obtain better distortion slopes than those obtained by setting P_a = P̃_a. In other words, we can obtain a family of achievable distortions under SNR mismatch by choosing P_a ∈ [0, P].
VI. JSCC FOR GENERALIZED COGNITIVE RADIO CHANNELS
There has been a lot of interest in cognitive radio since it was proposed in [15] as a means to more flexible communication devices and higher spectral efficiency. In a conventional cognitive radio setup, the lower-priority user (usually referred to as the secondary user) listens to the wireless channel and transmits only in spectrum not used by the higher-priority user (referred to as the primary user).

In [5], Devroye et al. studied generalized cognitive radio channels in which simultaneous transmission over the same time or frequency is allowed. This channel can be modeled as a typical two-user interference channel except that one of the users knows exactly what the other plans to transmit. The authors then provide inner and outer bounds on how much rate the two users
Fig. 8. Proposed schemes with different choices of Pa. (−10 log10(D) versus SNR in dB; curves: actual SNR, HDA and superposition-based schemes with Pa = Pa,opt and Pa = 0.9.)
can transmit simultaneously over such a generalized cognitive radio channel. Their achievable scheme is based on DPC and the Han-Kobayashi scheme [16].
In this section, we consider the same generalized cognitive radio channels as in [5] and focus on the case when both users have analog information V1 and V2. We are interested in the distortion region, which describes how much distortion the two users can achieve simultaneously. In particular, we consider the case where the two sources are correlated with covariance matrix given by
ΛV1V2 = [ σ²V1        ρσV1σV2 ]
        [ ρσV1σV2     σ²V2    ].   (37)

As mentioned before, we first look at the distortion of the secondary user only and regard it as the setup in Section II. An achievable distortion region is obtained by forcing the primary user to use the uncoded scheme and using the proposed schemes given in Section IV for the secondary user. In fact, since the primary user does not have any side-information, analog transmission seems to be an optimal choice. Further, notice that since we do not consider SNR mismatch here, it makes no difference which of the proposed schemes we use.
In what follows, we show that when the correlation is large, adopting the proposed scheme at the secondary user not only takes advantage of this correlation but also benefits the primary user. On the other hand, when ρ is small, the proposed scheme helps the secondary user to avoid the interference introduced by the primary user.
As shown in Fig. 9, in a generalized cognitive radio channel, two users wish to transmit their own sources to the corresponding receivers through an interference channel with direct channel gains 1 and cross channel gains h1 and h2, representing the real-valued channel gains from user 1 to user 2 and vice versa, respectively. The power constraints imposed on the outputs of users 1 and 2 are P1 and P2, respectively. Different from interference channels, in cognitive radio channels we assume that the secondary user knows V1 noncausally. Here, we also assume that the channel coefficient h1 is known by the secondary user. The received signals are given by

[ Y1 ]   [ 1   h2 ] [ X1 ]   [ Z1 ]
[ Y2 ] = [ h1   1 ] [ X2 ] + [ Z2 ],   (38)

where Zi ∼ N(0, Ni) for i ∈ {1, 2}.
Let the primary user simply transmit a scaled version of the uncoded source,

X1 = sqrt(P1/σ²V1) V1.   (39)

Therefore, the bottom channel in Fig. 9 reduces to the situation we considered in the previous sections with source V = V2 and interference S = h1X1. The covariance matrix becomes (5) with

σ²V = σ²V2,   (40)
σ²S = h1²P1.   (41)

The secondary user then encodes its source to X2 by the HDA scheme described previously in Section IV-D with power P2 and coefficients according to (30) and (31). With these coefficients, the corresponding distortion D2 is computed by (26) and
Fig. 9. Setup of cognitive radio channels. (Primary user: V1 → ENC 1 → X1, received as Y1 → DEC 1 → V̂1; secondary user: V2, with V1 as side-information, → ENC 2 → X2, received as Y2 → DEC 2 → V̂2; direct gains 1, cross gains h1 and h2.)
(27). At receiver 1, the received signal is

Y1 = X1 + h2X2 + Z1
   = (1 + (1 − γ)√a h1h2) X1 + h2Xh + h2√a γV2 + Z1.   (42)
Decoder 1 then forms a linear MMSE estimate from Y1 given by V̂1 = βY1, where

β = E[V1Y1] / E[Y1²]   (43)

with

E[V1Y1] = (1 + (1 − γ)√a h1h2) sqrt(P1σ²V1) + h2√a γρ σV1σV2,   (44)

E[Y1²] = (1 + (1 − γ)√a h1h2)² P1 + h2²Ph + a h2²γ²σ²V2 + 2√a h2γρ (1 + (1 − γ)√a h1h2) sqrt(P1σ²V2) + N1.   (45)

Therefore, the corresponding distortion is

D1 = σ²V1 − βE[V1Y1].   (46)
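The identity behind (46) — a linear MMSE estimate V̂ = βY with β = E[VY]/E[Y²] attains distortion σ²V − βE[VY] — can be checked by simulation. The sketch below uses a simplified stand-in channel with hypothetical gains (c, σu, σz are illustrative values, not the coefficients of (42)):

```python
import numpy as np

# Monte Carlo check of the scalar linear-MMSE identity used in (43)-(46):
# for zero-mean V observed through Y, the estimate beta*Y with
# beta = E[VY]/E[Y^2] attains distortion sigma_v^2 - beta*E[VY].
rng = np.random.default_rng(0)
n = 2_000_000

# Hypothetical observation model: Y = c*V + U + Z, all terms independent.
c, sigma_v, sigma_u, sigma_z = 1.3, 1.0, 0.7, 0.5
V = rng.normal(0.0, sigma_v, n)
Y = c * V + rng.normal(0.0, sigma_u, n) + rng.normal(0.0, sigma_z, n)

# Closed-form second moments.
EVY = c * sigma_v**2
EY2 = c**2 * sigma_v**2 + sigma_u**2 + sigma_z**2
beta = EVY / EY2
D_formula = sigma_v**2 - beta * EVY   # the form of (46)

# Empirical distortion of the estimate beta*Y.
D_mc = np.mean((V - beta * Y)**2)
print(D_formula, D_mc)
```

The two printed values agree up to Monte Carlo error, confirming that the closed-form distortion in (46) matches the actual mean-square error of the linear estimator.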
It can be verified that assigning γ = 1 leads to a suboptimal D1 in general. Thus, as mentioned before, one may want to assign a nonzero power to transmitting S in order to achieve a larger distortion region.
We can then optimize the power allocation for particular performance criteria. For instance, if one desires the minimum distortion for the secondary user, γ should be set to 1. However, if the target is the largest achievable distortion region under a total power constraint P1 + P2 = P, one should optimize over P1 ∈ [0, P], Pa ∈ [0, P − P1], and γ ∈ [0, 1]. We briefly discuss these examples below.
1. Greedy Case: We first consider the greedy case, where the secondary user focuses on reducing its own distortion. As mentioned before, the proposed scheme should always set γ = 1 in this case. For comparison, an outer bound on the distortion region for this case is given as follows. Suppose that a genie reveals V1 to decoder 2, and V2 to both encoder 1 and decoder 1. Similar to the derivation in Section III, one obtains

D1ob = σ²V1(1 − ρ²) / (1 + P1/N1),   (47)

D2ob = σ²V2(1 − ρ²) / (1 + P2/N2).   (48)
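Evaluating (47)-(48) along the power split previews the trade-off that the numerical examples plot. A minimal sketch with the parameter values used in those examples (σ²V1 = σ²V2 = N1 = N2 = 1, P = 2):

```python
import numpy as np

# Outer-bound pairs (47)-(48) as the power split P1 + P2 = P varies.
sigma2_v1 = sigma2_v2 = n1 = n2 = 1.0
P, rho = 2.0, 0.3

pairs = []
for p1 in np.linspace(0.0, P, 5):
    p2 = P - p1
    d1 = sigma2_v1 * (1 - rho**2) / (1 + p1 / n1)   # (47)
    d2 = sigma2_v2 * (1 - rho**2) / (1 + p2 / n2)   # (48)
    pairs.append((d1, d2))
    print(f"P1={p1:.1f}: D1ob={d1:.3f}, D2ob={d2:.3f}")
```

As P1 grows, D1ob decreases monotonically while D2ob increases, tracing the trade-off curve between the two corner points of the region.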
From now on, we only present outer bound 1, since for the numerical results considered in the following it is tighter than outer bound 2. However, one can also derive outer bound 2 for these cases and take the maximum of the two in a manner similar to Section III.
Numerical examples are given in Figs. 10 and 11, in which we set σ²V1 = σ²V2 = N1 = N2 = 1, h1 = h2 = 0.5, and the total power P = 2. The correlation between the sources is ρ = 0 and ρ = 0.3, respectively. In both examples, we do not optimize Ph and Pa with respect to particular criteria. Instead, we plot many choices of Ph and Pa that satisfy P2 = Ph + Pa.
In Fig. 10, we observe that the proposed scheme achieves the outer bound at the two corners in the absence of correlation. The left corner point can be achieved by assigning P2 = P, and the right corner point by setting P1 = P. For the
Fig. 10. Greedy case, ρ = 0. (D1 versus D2 with P = 2, N1 = N2 = 1; proposed scheme against the outer bound.)
Fig. 11. Greedy case, ρ = 0.3. (D1 versus D2 with P = 2, N1 = N2 = 1; proposed scheme against the outer bound.)
other points, the inner and outer bounds do not coincide. This may be because, in deriving the outer bound, the genie reveals too much information to the primary user, so the outer bound may not be tight (recall that for the ρ = 0 case, the outer bound for the secondary user is tight). Despite this, the inner bound is close to the outer bound. In Fig. 11, we give an example where ρ = 0.3. One can observe that, compared to the result in Fig. 10, the correlation helps both users in terms of distortion. Again, although the outer bound is not tight, the gap is reasonably small.
2. Non-Greedy Case: We now consider the case where the secondary user is willing to help the primary user, i.e., γ ∈ [0, 1]. For this case, the outer bounds must be modified to account for the fact that the secondary user uses a part of its power to transmit V1. For the primary user, suppose a genie reveals V2 and the HDA-encoded signal to both encoder 1 and decoder 1, i.e., Xh is also known at both sides. We have
(n/2) log( σ²V1(1 − ρ²) / D1ob ) ≤ I(V1; V̂1 | V2) ≤ I(V1; Y1 | V2)
  = h(X1 + h2X2 + Z1 | V2, Xh) − h(Z1)
  ≤ h( (1 + (1 − γ)√a h1h2) X1 + Z1 ) − h(Z1)
  = (n/2) log(1 + snr1),   (49)

where

snr1 = P1 (1 + (1 − γ)√a h1h2)² / N1.   (50)
Fig. 12. Non-greedy case, ρ = 0. (D1 versus D2 with P = 2, N1 = N2 = 1; proposed scheme against the outer bound.)
Similarly, we assume a genie gives V1 to decoder 2, so that we have

(n/2) log( σ²V2(1 − ρ²) / D2ob ) ≤ I(V2; V̂2 | V1) ≤ I(V2; Y2 | V1)
  = h(X2 + h1X1 + Z2 | V1) − h(Z2)
  ≤ h( Xh + γ√a V2 + Z2 ) − h(Z2)
  = (n/2) log(1 + snr2),   (51)

where

snr2 = (Ph + aγ²σ²V2) / N2.   (52)
Thus, for each choice of P1, Pa, and γ we have the outer bounds

˜D1ob = σ²V1(1 − ρ²) / (1 + snr1),   (53)

˜D2ob = σ²V2(1 − ρ²) / (1 + snr2).   (54)
The outer bound for this case is obtained numerically by taking the lower convex envelope over all pairs (˜D1ob, ˜D2ob).
The numerical results for ρ = 0 and ρ = 0.3 are given in Fig. 12 and Fig. 13, respectively. In both figures, all the parameters are set to the same values as in the previous two examples. We observe that if the secondary user is willing to help the primary user, the achievable distortion region is larger than that of the greedy case.
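The envelope computation can be sketched numerically: sweep the power split and γ, evaluate the bound pairs (53)-(54), and keep the undominated pairs. The mapping from (P1, γ) to (snr1, snr2) below uses hypothetical choices a = Ph = P2/2 as stand-ins for the paper's optimized coefficients, so this is an illustrative sketch only:

```python
import numpy as np

sigma2_v1 = sigma2_v2 = 1.0
n1 = n2 = 1.0
P, rho, h = 2.0, 0.3, 0.5          # h stands for h1 = h2 = 0.5

pts = []
for p1 in np.linspace(0.0, P, 21):
    p2 = P - p1
    a = p2 / 2.0                    # hypothetical analog power scale
    ph = p2 / 2.0                   # hypothetical digital power Ph
    for g in np.linspace(0.0, 1.0, 21):
        snr1 = p1 * (1 + (1 - g) * np.sqrt(a) * h * h)**2 / n1   # (50)
        snr2 = (ph + a * g**2 * sigma2_v2) / n2                   # (52)
        pts.append((sigma2_v1 * (1 - rho**2) / (1 + snr1),        # (53)
                    sigma2_v2 * (1 - rho**2) / (1 + snr2)))       # (54)
pts = np.array(pts)

# Keep only the Pareto-minimal (undominated) pairs; their lower convex
# envelope is the kind of outer-bound curve plotted in Figs. 12 and 13.
keep = [i for i, (d1, d2) in enumerate(pts)
        if not np.any((pts[:, 0] <= d1) & (pts[:, 1] <= d2) &
                      ((pts[:, 0] < d1) | (pts[:, 1] < d2)))]
frontier = pts[keep]
print(len(frontier), "frontier points")
```

Decreasing γ trades the secondary user's own distortion for a lower primary-user distortion, which is exactly why the non-greedy region is larger than the greedy one.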
3. Coexistence Conditions: In [6], coexistence conditions are introduced to understand the system-wise benefits of cognitive radio. The authors study the largest rate that the cognitive radio can achieve under the following coexistence constraints:
1. the presence of the cognitive radio should not create rate degradation for the primary user, and
2. the primary user does not need to use a more sophisticated decoder than it would use in the absence of the cognitive radio, i.e., a single-user decoder is enough.
Similar to this idea, we study the distortion of the secondary user under the modified coexistence constraints:
1. the presence of the cognitive radio should not increase the distortion of the primary user, and
2. the primary user uses a single-user decoder.
Let the power constraints be P1 and P2 for the primary and the secondary user, respectively, with P1 + P2 = P. In the absence of the cognitive radio, the distortion of the primary user is

D*1 = σ²V1 / (1 + P1/N1).   (55)
The outer bound for the secondary user under the coexistence conditions is given by

Dcoexist,ob = inf over {Pa, γ : ˜D1ob ≤ D*1} of ˜D2ob,   (56)
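The constrained infimum in (56) can be sketched as a grid search over γ for a fixed power split. As before, the snr expressions use hypothetical choices a = Ph = P2/2 standing in for the paper's optimized coefficients:

```python
import numpy as np

sigma2_v1, n1, n2, rho = 1.0, 1.0, 1.0, 0.3
P, p1 = 2.0, 1.0                 # illustrative fixed split P1 = P2 = 1
p2 = P - p1
d1_star = sigma2_v1 / (1 + p1 / n1)          # (55): distortion without CR

best = np.inf
for g in np.linspace(0.0, 1.0, 101):
    a = p2 / 2.0                              # hypothetical analog power scale
    ph = p2 / 2.0                             # hypothetical digital power Ph
    snr1 = p1 * (1 + (1 - g) * np.sqrt(a) * 0.25)**2 / n1   # (50), h1=h2=0.5
    snr2 = (ph + a * g**2) / n2                              # (52), sigma_V2^2=1
    d1ob = sigma2_v1 * (1 - rho**2) / (1 + snr1)             # (53)
    d2ob = (1 - rho**2) / (1 + snr2)                         # (54)
    if d1ob <= d1_star:                       # coexistence constraint
        best = min(best, d2ob)
print(best)
```

In this toy instance the constraint is slack even at γ = 1 (the correlation ρ already keeps ˜D1ob below D*1), so the secondary user can act greedily without violating coexistence; with smaller ρ the constraint becomes active and forces γ < 1.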
Fig. 13. Non-greedy case, ρ = 0.3. (D1 versus D2 with P = 2, N1 = N2 = 1; proposed scheme against the outer bound.)
Fig. 14. Coexistence case, ρ = 0 and ρ = 0.3. (D2 versus P2; proposed scheme and outer bound for each ρ.)
where ˜D1ob and ˜D2ob are given by (53) and (54), respectively. An example is shown in Fig. 14. All the parameters in this figure are the same as those in Figs. 10-13.
VII. CONCLUSIONS
In this paper, we have discussed the joint source-channel coding problem with interference known at the transmitter. In particular, we considered the case where the source and the interference are correlated with each other. Motivated by observations on the uncoded scheme and the naive DPC scheme, we proposed a superposition-based scheme with digital DPC and an HDA scheme, both of which adapt to ρ. The performance of these two schemes under SNR mismatch was also discussed. Unlike typical separation-based schemes, which suffer from a pronounced threshold effect in the presence of SNR mismatch, both proposed schemes can benefit from better side-information acquired at the decoder and thus degrade gracefully under SNR mismatch. However, the two schemes perform differently under an SNR mismatch, and which scheme is better depends on the designed SNR and ρ.
These two schemes were then applied to cognitive radio channels, and achievable distortion regions were discussed for different cases. To the best of our knowledge, this is the first joint source-channel coding scheme for cognitive radio channels. We have also provided outer bounds on these distortion regions. Although the outer bounds are not tight in general, the numerical results show that the gap between the inner bound and the outer bound is reasonably small.
APPENDIX A
DIGITAL WYNER-ZIV SCHEME
In this appendix, we summarize the digital Wyner-Ziv scheme for lossy source coding with side-information V′ (where V = V′ + W with W ∼ N(0, D∗)) at the receiver. As in the previous sections, we intentionally omit all the ε's and δ's for convenience and clarity.
When the side-information is available at both the encoder and the decoder, the least required rate RWZ for achieving a desired distortion D is [4]

RWZ = (1/2) log(D∗/D).   (57)

Let us set this rate arbitrarily close to the rate given in (22), the rate that the channel can support with arbitrarily small error probability. The best distortion one can achieve for this setup is then

D = D∗ / (1 + (P − Pa)/N).   (58)
This distortion can be achieved as follows [4].
1. Let T be the auxiliary random variable given by

T = αsep V + B,   (59)

where

αsep = sqrt( (D∗ − D)/D∗ )   (60)

and B ∼ N(0, D). Generate a length-n i.i.d. Gaussian codebook T of size 2^{nI(T;V)} and randomly assign the codewords to 2^{nR} bins with R chosen according to (22). For each source realization v, find a codeword t ∈ T such that (v, t) is jointly typical. If none or more than one is found, an encoding failure is declared.
2. For each chosen codeword, the encoder transmits the bin index of this codeword by DPC at the rate given in (22).
3. The decoder first decodes the bin index (decodability is guaranteed by the rate we chose) and then looks for a codeword t̂ in this bin such that (t̂, v′) is jointly typical. If none is found, a dummy codeword is selected. Note that as n → ∞, the probability that t̂ ≠ t vanishes. Therefore, we can assume that t̂ = t from now on.
4. Finally, the decoder forms the MMSE estimate from t and v′ as v̂ = v′ + ŵ with

ŵ = ( αsep D∗ / (α²sep D∗ + D) ) (t − αsep v′).   (61)

It can be verified that for this choice of αsep the required rate is equal to (57) and the corresponding distortion is

E[(V − V̂)²] = E[(W − Ŵ)²] = D∗ ( 1 − α²sep D∗ / (α²sep D∗ + D) ) = D.   (62)
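The estimator (61) and the distortion claim (62) can be verified by simulation. The sketch below idealizes away the codebook quantization (using T = αsepV + B directly, which is the asymptotic behavior the random-coding argument guarantees); D∗ and D are hypothetical values:

```python
import numpy as np

# Monte Carlo sketch of the Wyner-Ziv estimator (59)-(62): with
# alpha = sqrt((D_star - D)/D_star), estimating W from T - alpha*V'
# achieves distortion D.
rng = np.random.default_rng(1)
n = 1_000_000
D_star, D = 0.5, 0.2
alpha = np.sqrt((D_star - D) / D_star)   # (60)

Vp = rng.normal(0.0, 1.0, n)                      # side-information V'
W = rng.normal(0.0, np.sqrt(D_star), n)           # V = V' + W
V = Vp + W
T = alpha * V + rng.normal(0.0, np.sqrt(D), n)    # auxiliary variable (59)

# Decoder estimate (61); note alpha^2*D_star + D = D_star here.
W_hat = alpha * D_star / (alpha**2 * D_star + D) * (T - alpha * Vp)
V_hat = Vp + W_hat

mse = np.mean((V - V_hat)**2)
print(mse)
```

The printed MSE is close to D = 0.2, matching (62): since α²sepD∗ + D = D∗, the residual distortion is D∗(1 − α²sep) = D.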
APPENDIX B
WYNER-ZIV WITH MISMATCHED SIDE-INFORMATION
In this appendix, we calculate the expected distortion of the digital Wyner-Ziv scheme in the presence of a side-information mismatch. Specifically, we consider the Wyner-Ziv problem with an i.i.d. Gaussian source and the MSE distortion measure. Let the best achievable distortion in the absence of side-information mismatch be D. The encoder believes that the side-information is V′, with V = V′ + W and W ∼ N(0, D∗). However, the side-information turns out to be V′a, with V = V′a + Wa and Wa ∼ N(0, D∗a). Under the same rate, we want to calculate the actual distortion Da suffered by the decoder.
Since the encoder has been designed for the side-information V′, the auxiliary random variable is as in (59) with the coefficient given in (60).
Since the decoder knows the actual side-information V′a perfectly, it only has to estimate Wa. By the orthogonality principle, the MMSE estimate Ŵa can be obtained as

Ŵa = ( αsep D∗a / (α²sep D∗a + D) ) (T − αsep V′a).   (63)
Therefore, the estimate of the source is V̂ = V′a + Ŵa. The corresponding distortion is given by

Da = E[(V − V̂)²] = E[(Wa − Ŵa)²] = D∗ D∗a D / ( D∗ D∗a + (D∗ − D∗a) D ).   (64)
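The mismatched distortion (64) can likewise be checked by simulation, again idealizing away the codebook quantization; the values of D∗, D, and D∗a are hypothetical:

```python
import numpy as np

# Monte Carlo check of the mismatched-side-information distortion (64):
# the encoder designs alpha for W ~ N(0, D_star), but the true gap is
# Wa ~ N(0, D_star_a).
rng = np.random.default_rng(2)
n = 1_000_000
D_star, D, D_star_a = 0.5, 0.2, 0.3

alpha = np.sqrt((D_star - D) / D_star)   # encoder coefficient (60), fixed
Vpa = rng.normal(0.0, 1.0, n)            # actual side-information V'a
Wa = rng.normal(0.0, np.sqrt(D_star_a), n)
V = Vpa + Wa
T = alpha * V + rng.normal(0.0, np.sqrt(D), n)   # auxiliary variable (59)

# Decoder MMSE (63), matched to the actual statistics.
k = alpha * D_star_a / (alpha**2 * D_star_a + D)
V_hat = Vpa + k * (T - alpha * Vpa)

Da_mc = np.mean((V - V_hat)**2)
Da_formula = D_star * D_star_a * D / (D_star * D_star_a + (D_star - D_star_a) * D)
print(Da_mc, Da_formula)
```

Note that setting D∗a = D∗ in (64) recovers Da = D, consistent with the matched case of Appendix A.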