
arXiv:0905.2645v1 [cs.IT] 16 May 2009

Providing Secrecy with Lattice Codes

Xiang He, Aylin Yener

Wireless Communications and Networking Laboratory

Electrical Engineering Department

The Pennsylvania State University, University Park, PA 16802

xxh119@psu.edu yener@ee.psu.edu

Abstract—Recent results have shown that lattice codes can be

used to construct good channel codes, source codes and physical

layer network codes for Gaussian channels. On the other hand,

for Gaussian channels with secrecy constraints, efforts to date

rely on random codes. In this work, we provide a tool to bridge

these two areas so that the secrecy rate can be computed when

lattice codes are used. In particular, we address the problem of

bounding equivocation rates under nonlinear modulus operation

that is present in lattice encoders/decoders. The technique is

then demonstrated in two Gaussian channel examples: (1) a

Gaussian wiretap channel with a cooperative jammer, and (2)

a multi-hop line network from a source to a destination with

untrusted intermediate relay nodes from whom the information

needs to be kept secret. In both cases, lattice codes are used to

facilitate cooperative jamming. In the second case, interestingly,

we demonstrate that a non-vanishing positive secrecy rate is

achievable regardless of the number of hops.

I. INTRODUCTION

Information theoretic secrecy was first proposed by Shan-

non in [1]. In this classical model, Bob wants to send a

message to Alice, which needs to be kept secret from Eve.

Shannon’s notion of secrecy requires the average rate of

information leaked to Eve to be zero, with no assumption

made on the computational power of Eve. Wyner, in [2],

pointed out that, more often than not, the eavesdropper

(Eve) has a noisy copy of the signal transmitted from the

source, and building a useful secure communication system

per Shannon’s notion is possible [2]. Csiszar and Korner [3]

extended this to a more general channel model.

Numerous channel models have since been studied under

Shannon’s framework. The maximum reliable transmission

rate with secrecy is identified for several cases including

the Gaussian wiretap channel [4] and the MIMO wiretap

channel [5], [6], [7]. Sum secrecy capacity for a degraded

Gaussian multiple access wiretap channel is given in [8].

For other channels, upper bounds, lower bounds and some

asymptotic results on the secrecy capacity exist. For the

achievability part, Shannon’s random coding argument proves

to be effective in the majority of these works.

On the other hand, it is known that the random coding

argument may be insufficient to prove capacity theorems for

certain channels [9]. Instead, structured codes like lattice

codes are used. Using structured codes has two benefits. First,

it is relatively easy to analyze large networks under these

codes. For example, in [10], [11], the lattice code allows the

relaying scheme to be equivalent to a modulus sum operation,

making it easy to trace the signal over a multi-hop relay

network. Secondly, the structured nature of these codes makes

it possible to align unwanted interference, for example, for

the interference channel with more than two users [12], [13],

and the two way relay channel [10], [11].

A natural question is therefore whether structured codes

are useful for secure communication as well. In particular, in

this work, we are interested in answering two questions:

1) How do we bound the secrecy capacity when structured

codes are used?

2) Are there models where structured codes prove to be

useful in providing secrecy?

Relevant references in this line of thinking include [14]

and [15]. Reference [14] considers a binary additive two-

way wiretap channel where one terminal uses binary jamming

signals. Reference [15] examines a wiretap channel where the

eavesdropping channel is a modulus-Λ channel. Under the

proposed signaling scheme therein, the source uses a lattice

code to convey the secret message, and, the destination jams

the eavesdropper with a lattice code. The eavesdropper sees

the sum of these two codes, both taking value in a finite

group, where the sum is carried under the addition defined

over the group. It is known that if the jamming signal is

sampled from a uniform distribution over the group, then the

sum is independent from the message.

While these are encouraging steps in showing the impact of

structured jamming signals, as commented in [15], using this

technique in Gaussian channels is a non-trivial step. In the

Gaussian channel, also, the eavesdropper receives the sum of

the signal from the source and the jamming signal. However,

the addition is over real numbers rather than over a finite

group. The property of modulus sum is therefore lost and it

is difficult to measure how much information is leaked to the

eavesdropper.

Most lattice codes for power constrained transmission have

a similar structure to the one used in [15]. First, a lattice is

constructed, which should be a good channel code under the

noise/interference. Then, to meet the power constraint, the

lattice, or its shifted version, is intersected with a bounded

set, called the shaping set, to create a set of lattice points

with finite average power. The lattice is shifted to make sure

sufficiently many lattice points fall into the shaping set to

maintain the codebook size and hence the coding rate [16].

The decoder at the destination is called a lattice decoder if it

is only asked to find the most likely lattice point under the

received signals, and is not aware of the shaping set. Because of


the structured nature of the lattice, a lattice decoder has lower

complexity compared to the maximum likelihood decoder

where the knowledge of the shaping set is used. Also, under

the lattice decoder, the introduction of shaping set does not

pose any additional difficulty to the analysis of decoding

performance. Commonly used shaping sets include the sphere

[12] and the fundamental region of a lattice [17].

A key observation is that, from the viewpoint of an eaves-

dropper, the shaping set actually provides useful information,

since it reduces the set of lattice points the eavesdropper

needs to consider. The main aim of this work, therefore, is

to find a shaping set and lattice code construction under

which the information leaked to the eavesdropper can be

bounded. This shaping set, as we shall see, turns out to be the

fundamental region of a “coarse” lattice in a nested lattice

structure. Under this construction, we show that at most 1 bit

is leaked to the eavesdropper per channel use. This enables

us to lower bound the secrecy rate using a technique similar

to the genie bound from [18].

To demonstrate the utility of our approach, we then apply

our technique to two channel models: a Gaussian wiretap

channel with a cooperative jammer, and a multi-hop line

network, where a source can communicate with a destination only

through a chain of untrusted relays. In the second case,

we demonstrate that a non-vanishing positive secrecy rate

is achievable regardless of the number of hops.

The following notation is used throughout this work: We

use H to denote the entropy. εk is used to denote any

variable that goes to 0 when n goes to ∞. We define

C(x) = (1/2) log2(1 + x). ⌊a⌋ denotes the largest integer less than or equal to a.

II. THE REPRESENTATION THEOREM

In this section, we present a result about lattice codes

which will be useful in the sequel.

Let Λ denote a lattice in R^N [17], i.e., a set of points that forms a group under real vector addition. The modulus operation x mod Λ is defined as x mod Λ = x − arg min_{y∈Λ} d(x, y), where d(x, y) is the Euclidean distance between x and y. The fundamental region V of the lattice is defined as the set {x : x mod Λ = x}. It is possible that more than one lattice point has the same minimal distance to x. Such a tie is broken by properly assigning the boundary of V [17].
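As a concrete illustration (ours, not the paper's construction): for the integer lattice Λ = Z^N, the nearest lattice point to x is obtained by rounding each coordinate, so x mod Z^N can be computed componentwise. A minimal sketch:

```python
def mod_lattice(x):
    """Compute x mod Z^N for the integer lattice.

    The nearest point of Z^N to x is the componentwise rounding of x, so the
    result lies in the fundamental region [-0.5, 0.5)^N (up to tie-breaking
    on the boundary, which Python's round() resolves toward even integers).
    """
    return [xi - round(xi) for xi in x]

print(mod_lattice([0.3, -1.2, 2.75]))
```

For a general lattice the arg-min search is the closest-vector problem, which is hard in high dimension; the closed form above is special to Z^N.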

Let t_A and t_B be two points taken from V. For any set A, define 2A as 2A = {2x : x ∈ A}. Then we have:

    {t_A + t_B : t_A, t_B ∈ V} = 2V    (1)

Define A_x as A_x = {t_A + t_B + x : t_A, t_B ∈ V}. Then from (1), we have A_x = x + 2V. With this preparation, we are ready to prove the following representation theorem:

Theorem 1: There exists a random integer T, such that 1 ≤ T ≤ 2^N, and t_A + t_B is uniquely determined by {T, t_A + t_B mod Λ}.

Proof: By definition of the modulus Λ operation, we have

    t_A + t_B mod Λ = t_A + t_B + x,  x ∈ Λ    (2)

The theorem is equivalent to finding the number of possible x meeting equation (2) for a given t_A + t_B mod Λ.

To do that, we need to know a little more about the structure of the lattice Λ. Every point in a lattice, by definition, can be represented in the following form [19]: x = Σ_{i=1}^{N} a_i v_i, with v_i ∈ R^N and a_i ∈ Z. {a_i} are said to be the coordinates of the lattice point x under the basis {v_i}.

Based on this representation, we can define the following relation: Consider two points x, y ∈ Λ, with coordinates {a_i} and {b_i} respectively. Then we say x ∼ y if a_i = b_i mod 2, i = 1...N. It is easy to see that ∼ is an equivalence relation. Therefore, it defines a partition of Λ.

1) Depending on the values of a_i − b_i mod 2, there are 2^N sets in this partition.
2) The sub-lattice 2Λ is one set in the partition, whose members have even coordinates. The remaining 2^N − 1 sets are its cosets.

Let C_i denote any one of these cosets or 2Λ. Then C_i can be expressed as C_i = 2Λ + y_i, y_i ∈ Λ. It is easy to verify that the sets A_x = x + 2V, x ∈ C_i, form a partition of 2R^N + y_i, which equals R^N.

We proceed to use the two partitions derived above: Since C_i, i = 1...2^N, is a partition of Λ, (2) can be solved by considering the following 2^N equations:

    t_A + t_B mod Λ = t_A + t_B + x,  x ∈ C_i    (3)

From (1), this means t_A + t_B mod Λ ∈ x + 2V for some x ∈ C_i. Since the sets x + 2V, x ∈ C_i, form a partition of R^N, there is at most one x ∈ C_i that meets this requirement. This implies that for a given t_A + t_B mod Λ and a given coset C_i, (3) has at most one solution for x. Since there are 2^N such equations, (2) has at most 2^N solutions. Hence each t_A + t_B mod Λ corresponds to at most 2^N points of t_A + t_B.

Remark 1: Theorem 1 implies that the modulus operation loses at most one bit of information per dimension if t_A, t_B ∈ V.
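Theorem 1 can be checked numerically in the simplest case, N = 1 (our toy illustration, with Λ = Z and V = [−0.5, 0.5)): the sum t_A + t_B lies in 2V = [−1, 1), and its value mod Λ leaves at most 2^N = 2 candidate sums, so one extra bit (the index T) pins the sum down:

```python
import random

def mod_Z(x):
    # x mod Z: subtract the nearest integer
    return x - round(x)

random.seed(1)
for _ in range(1000):
    tA = random.uniform(-0.5, 0.5)
    tB = random.uniform(-0.5, 0.5)
    s = tA + tB                       # true sum, lies in 2V = [-1, 1)
    m = mod_Z(s)                      # what survives the modulus operation
    # candidate sums: points congruent to m mod Z that fall inside 2V
    candidates = [m + k for k in (-1, 0, 1) if -1 <= m + k < 1]
    assert len(candidates) <= 2                          # at most 2^N candidates, N = 1
    assert any(abs(c - s) < 1e-9 for c in candidates)    # the true sum is among them
print("t_A + t_B is determined by (T, t_A + t_B mod Z) with T in {1, 2}")
```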

The following crypto lemma is useful and is provided here

for completeness.

Lemma 1: [15] Let t_A, t_B be two independent random variables distributed over a compact abelian group. If t_B has a uniform distribution, then t_A + t_B is independent of t_A. Here + is the addition defined over the group.

In the remainder of the paper, (Λ, Λ1) denotes a nested lattice structure where Λ1 is the coarse lattice. Let V and V1 be their respective fundamental regions. We shall use a ⊕ b as shorthand for a + b mod Λ1. Then from Lemma 1, we have the following corollary:

Corollary 1: Let t_A ∈ Λ ∩ V1, t_B ∈ Λ ∩ V1, and let t_B be uniformly distributed over Λ ∩ V1. Let t_S = t_A ⊕ t_B. Then t_S is independent of t_A.
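Corollary 1 can be verified exhaustively for a small finite group. In the sketch below (an illustration of ours; Z_5 under modular addition stands in for the group Λ ∩ V1 with ⊕), an arbitrary distribution is chosen for t_A, t_B is uniform and independent, and exact rational arithmetic confirms that t_S = t_A ⊕ t_B is uniform and independent of t_A:

```python
from fractions import Fraction

q = 5
# arbitrary (non-uniform) distribution for tA; tB uniform over Z_q
pA = [Fraction(1, 15), Fraction(2, 15), Fraction(4, 15), Fraction(5, 15), Fraction(3, 15)]
pB = [Fraction(1, q)] * q

# joint distribution of (tA, tS) with tS = (tA + tB) mod q
joint = {(a, s): pA[a] * pB[(s - a) % q] for a in range(q) for s in range(q)}
pS = [sum(joint[(a, s)] for a in range(q)) for s in range(q)]

assert all(ps == Fraction(1, q) for ps in pS)          # tS is uniform
assert all(joint[(a, s)] == pA[a] * pS[s]
           for a in range(q) for s in range(q))        # tS independent of tA
print("tS = tA + tB mod q is uniform and independent of tA")
```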

III. WIRETAP CHANNEL WITH A COOPERATIVE JAMMER

In this section, we demonstrate the use of lattice codes

for secrecy in the simple model depicted in Figure 1. Nodes

S,D,E form a wiretap channel where S is the source node,

Page 3

Fig. 1. Wiretap Channel with a Cooperative Jammer, CJ (nodes S, D, E, CJ with unit channel gains; Z_1, Z_2 denote the channel noises)

D is the destination node, E is the eavesdropper. Let the

average power constraint of node S be P. Now suppose

that there is another transmitter CJ in the system, also

with power constraint P, as shown in Figure 1. We assume

that the interference caused by CJ to node D is either too weak or too strong, so that it can be ignored or removed, and

consequently there is no link between CJ and D. In this

model, node CJ may choose to help S by transmitting a

jamming signal to confuse the eavesdropper E. Below, we

derive the secrecy rate for this case when the jamming signal

is chosen from a lattice codebook.

A. Gaussian Noise

We first consider the case when Z1and Z2are independent

Gaussian random variables with zero mean and unit variance.

In this case, we have the following theorem:

Theorem 2: A secrecy rate of [C(P)−1]+is achievable.

Proof: The codebook is constructed as follows: Let

(Λ,Λ1) be a properly designed nested lattice structure in RN

as described in [17]. The codebook is all the lattice points

within the set Λ ∩ V1.

Let t_A^N be the lattice point transmitted by node S. Let d_A^N be the dithering noise, uniformly distributed over V1. The transmitted signal is given by t_A^N ⊕ d_A^N. The receiver receives the above signal corrupted by Gaussian noise and tries to decode t_A^N. Let the decoding result be t̂_A^N. Then, as shown in [17, Theorem 5], there exists a sequence of properly designed (Λ, Λ1) with increasing dimension, such that

    (1/N) log2 |Λ ∩ V1| < C(P)    (4)

    lim_{N→∞} (1/N) log2 |Λ ∩ V1| = C(P) = (1/2) log2(1 + P)    (5)

and lim_{N→∞} Pr(t_A^N ≠ t̂_A^N) = 0.

The cooperative jammer CJ uses the same codebook as node S. Let the lattice point transmitted by CJ be t_B^N and the dithering noise be d_B^N. The transmitted signal is given by t_B^N ⊕ d_B^N. As in [17], we assume that d_A^N is known by node S, the legitimate receiver node D, and the eavesdropper node E. d_B^N is known by node S and the eavesdropper node E. Hence, there is no common randomness between the legitimate communicating pair that is not known by the eavesdropper.

The signal received by the eavesdropper can then be represented as t_A^N ⊕ d_A^N + t_B^N ⊕ d_B^N + Z_2^N, where Z_2^N is the Gaussian channel noise over N channel uses. Then we have

    H(t_A^N | t_A^N ⊕ d_A^N + t_B^N ⊕ d_B^N + Z_2^N, d_A^N, d_B^N)    (6)
    ≥ H(t_A^N | t_A^N ⊕ d_A^N + t_B^N ⊕ d_B^N + Z_2^N, d_A^N, d_B^N, Z_2^N)    (7)
    = H(t_A^N | t_A^N ⊕ d_A^N + t_B^N ⊕ d_B^N, d_A^N, d_B^N)    (8)
    = H(t_A^N | t_A^N ⊕ d_A^N + t_B^N ⊕ d_B^N, d_A^N, d_B^N, T)    (9)
    = H(t_A^N | t_A^N ⊕ d_A^N ⊕ t_B^N ⊕ d_B^N, d_A^N, d_B^N, T)    (10)
    = H(t_A^N | t_A^N ⊕ t_B^N, T)    (11)
    = H(T | t_A^N ⊕ t_B^N, t_A^N) + H(t_A^N | t_A^N ⊕ t_B^N) − H(T | t_A^N ⊕ t_B^N)    (12)
    ≥ H(t_A^N | t_A^N ⊕ t_B^N) − H(T | t_A^N ⊕ t_B^N)    (13)
    = H(t_A^N) − H(T | t_A^N ⊕ t_B^N)    (14)
    ≥ H(t_A^N) − H(T)    (15)

In (9), we introduce the N-bit information T which, by Theorem 1, recovers t_A^N ⊕ d_A^N + t_B^N ⊕ d_B^N from t_A^N ⊕ d_A^N ⊕ t_B^N ⊕ d_B^N; since T is a function of the received sum, conditioning on it does not change the entropy. In (11), the dithers, which are known and independent of everything else, are removed from the conditioning. In (14), we use the fact that t_A^N is independent of t_A^N ⊕ t_B^N based on Corollary 1.

Let c = (1/N) I(t_A^N; t_A^N ⊕ d_A^N + t_B^N ⊕ d_B^N + Z_2^N, d_A^N, d_B^N). Then from (15), since H(T) ≤ N, we have c ≤ 1. Therefore, if the message is mapped one-to-one to t_A^N, then an equivocation rate of at least C(P) − 1 is achievable under a transmission rate of C(P) bits per channel use.

We note that to obtain perfect secrecy, some additional effort is required. First, we define a block of channel uses as the N channel uses required to transmit an N-dimensional lattice point. A perfect secrecy rate of C(P) − 1 can then be achieved by coding across multiple blocks: A codeword in this case is composed of Q components, each component being an N-dimensional lattice point sampled from a uniform distribution over V1 ∩ Λ in an i.i.d. fashion. The resulting codebook C contains 2^⌊NQR⌋ codewords with R < C(P). Like wiretap codes, the codebook is then randomly binned into several bins, where each bin contains 2^⌊NQc⌋ codewords. The secret message W is mapped to the bins. The actual transmitted codeword is chosen from that bin according to a uniform distribution.

Let Y_e^NQ denote the signals available to the eavesdropper: Y_e^NQ = {t_A^NQ ⊕ d_A^NQ + t_B^NQ ⊕ d_B^NQ + Z_2^NQ, d_A^NQ, d_B^NQ}. Then we have

    H(W | Y_e^NQ, C)    (16)
    = H(W | t_A^NQ, Y_e^NQ, C) + H(t_A^NQ | Y_e^NQ, C) − H(t_A^NQ | W, Y_e^NQ, C)    (17)
    ≥ H(t_A^NQ | Y_e^NQ, C) − NQε    (18)
    = H(t_A^NQ | C) − I(t_A^NQ; Y_e^NQ | C) − NQε    (19)
    ≥ H(t_A^NQ | C) − Σ_{q=1}^{Q} I(t_A^N; Y_e^N | C) − NQε    (20)
    = H(t_A^NQ | C) − QNc − NQε = QN(R − c) − NQε    (21)

In (18), we use Fano's inequality to bound the last term in (17). This is because the size of each bin is kept small enough that, given W, the eavesdropper can determine t_A^NQ from its received signal Y_e^NQ. Using the standard random coding argument and (21), it can then be shown that a secrecy rate of C(P) − c is achievable. Since c ≤ 1, this means a secrecy rate of at least C(P) − 1 bits per channel use is achievable.

Remark 2: It is interesting to compare the secrecy rate obtained here with that obtained by cooperative jamming with Gaussian noise [20]. The latter is given by C(P) − C(P/(P + 1)). Since lim_{P→∞} C(P/(P + 1)) = 0.5, there is at most a 0.5 bit per channel use loss in secrecy rate at high SNR from using a structured codebook as the jamming signal.

B. Non-Gaussian Noise

The performance analysis in [17] requires Gaussian noise.

This is not always the case, for example, in the presence

of interference, which is not necessarily Gaussian. For non-

Gaussian noise, in principle, the analysis in [16] can be used

instead. On the other hand, in [16], a sphere is used as the

shaping set, making it difficult to compute the equivocation

rate via Theorem 1. We show below, if the code rate R has

the form log2t,t ∈ Z+, then a scaled lattice tΛ of the fine

lattice Λ can be used for shaping instead.

Theorem 3: If Z1, Z2 are i.i.d. continuous random variables with differential entropy h(E) such that 2^{2h(E)} = 2πe, then a secrecy rate of [log2⌊√P⌋ − 1]^+ is achievable.

Proof: We need to show that there exists a fine lattice Λ that has good decoding performance [16, Theorem 6], and that Λ is close to a sphere in the sense that

    lim_{N→∞} h(S) = (1/2) log2(2πeP′)    (22)

where h(S) = (1/N) log2 |V|, |V| is the volume of the fundamental region of Λ, and P′ = (1/(N|V|)) ∫_{x∈V} ‖x‖² dx. It is shown in [21] that when a lattice is sampled from the lattice ensemble defined therein, it is close to a sphere in the sense of (22). The lattice ensemble is generally called construction A [16]; its generator matrices are all matrices of size K × N over the finite field GF(q), with q being a prime. The lattice sampled from the ensemble is "good" in probability when q, N → ∞ and K grows faster than log2 N [21, (25)-(28)]. Note that this property of "goodness" is invariant under scaling. Therefore, we can scale the lattice so that the volume of its fundamental region remains fixed as its dimension N → ∞. This gives us a sequence of lattice ensembles that meet the conditions of [13, Lemma 1]: (1) N → ∞; (2) q → ∞; (3) each lattice ensemble of a given dimension is balanced [16]. This means that when N → ∞, at least 3/4 of the lattice ensemble is good for channel coding [13, Lemma 1]. The lattice decoder will have a positive decoding error exponent as long as |V| > 2^{Nh(E)}.

Combined, this means there must exist a lattice Λ* that is close to a sphere and is a good channel code at the same time. Hence we have (1/N) log2 |V| → (1/2) log2(2πeP′) as N → ∞. Since we assume h(E) = (1/2) log2(2πe) and require |V| > 2^{Nh(E)}, this means that as long as P′ > 1, the decoding error will decrease exponentially as N → ∞.

Now pick the shaping set to be the fundamental region of tΛ*, t ∈ Z+. Then the code rate is R = log2(t) [17]. With the dithering and modulus operation from [17], the average power of the transmitted signal per dimension is t²P′. Note that the modulus operation at the destination, required in order to remove the dithering noise, may distort the additive channel noise. However, the decoding error event, defined as the noise pushing a lattice codeword into the set of typical noise sequences centered on a different lattice point [16], remains identical. Therefore, the decoding error exponent is the same. Hence we require P′ > 1 and t²P′ ≤ P. The largest possible t is ⌊√P⌋, with the rate being log2(⌊√P⌋). With similar arguments as in Theorem 2, we conclude that a secrecy rate of [log2(⌊√P⌋) − 1]^+ is achievable.
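The resulting rate is a quick computation. The helper below (our sketch; it treats the normalized second moment P′ as → 1, the limiting value used in the proof) picks the largest integer scaling t with t²P′ ≤ P and evaluates [log2⌊√P⌋ − 1]^+:

```python
from math import floor, log2, sqrt

def theorem3_rate(P):
    # largest integer t with t^2 * P' <= P, taking P' -> 1 as in the proof
    t = floor(sqrt(P))
    return max(log2(t) - 1.0, 0.0)   # [log2(floor(sqrt(P))) - 1]^+

for P in (4, 100, 1024):
    t = floor(sqrt(P))
    assert t * t <= P                # average power constraint met in the limit
    print(P, theorem3_rate(P))
```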

IV. MULTI-HOP LINE NETWORK WITH UNTRUSTED

RELAYS

A. System Model

In this section, we examine a more complicated communi-

cation scenario, as shown in Figure 2. The source has to com-

municate over K −1 hops (K ≥ 3) to reach the destination.

Yet the intermediate relaying nodes are untrusted and need to

be prevented from decoding the source information. Under

this model, we will show that, using Theorem 1, with lattice

codes for source transmission and jamming signals and an

appropriate transmission schedule, an end-to-end secrecy rate

that is independent of the number of untrusted relay nodes

is achievable. We assume nodes cannot receive and transmit

signals simultaneously. We assume that each node can only

communicate with its two neighbors, one on each side. Let Y_i and X_i be the received and transmitted signals of the ith node, respectively. They are related as Y_i = X_{i−1} + X_{i+1} + Z_i, where the Z_i are zero mean Gaussian random variables with unit variance, independent of each other. Each node has

the same average power constraint:

    (1/n) Σ_{k=1}^{n} E[X_i(k)²] ≤ P̄

where n is the total number of channel uses. The channel gains are normalized for simplicity.

Fig. 2. A Line Network with 3 Un-trusted Relays

We consider the case where there is an eavesdropper residing at each relay node and these eavesdroppers are not cooperating. This also addresses the scenario where there is one eavesdropper, but the eavesdropper may appear at any one relay node that is unknown a priori. In either case, we need secrecy from all relays, and the secrecy constraints for the K relay nodes are expressed as

    lim_{n→∞} (1/n) H(W | Y_i^n) = lim_{n→∞} (1/n) H(W),  i = 1...K.

B. Signaling Scheme

Because all nodes are half duplex, a schedule is necessary to control when a node should talk. The node schedule is best represented by the acyclic directed graph shown in Figure 3. The columns in Figure 3 indicate the nodes and the rows indicate the phases. The length of a phase is the number of channel uses required to transmit a


lattice point, which equals the dimension of the lattice. A

node in a row has an outgoing edge if it transmits during

a phase. The node in that row has an incoming edge if it

can hear signals during the previous phase. It is understood,

though not shown in the figure, that the signal received by

the node is a superposition of the signals over all incoming

edges corrupted by the additive Gaussian noise.

A number of consecutive phases is called one block, as

shown in Figure 3. The boundary of a block is shown by the

dotted line in Figure 3. The data transmission is carried over

M blocks.

Fig. 3. One Block of Channel Uses (edge labels such as J_0, t_0 + J_1, t_0 + t_1 + J_2 indicate the lattice points carried over the corresponding edges)

Again the nested lattice code (Λ,Λ1) from [10] is used

within each block. The codebook is constructed in the same

fashion as in Section III.

1) The Source Node: The input to the channel by the source has the form t^N ⊕ J^N ⊕ d^N. Here d^N is the dithering noise, which is uniformly distributed over V1. t^N and J^N are determined as follows: If it is the first time the source node transmits during this block, t^N is the origin and J^N is picked from the lattice points in Λ ∩ V1 under a uniform distribution. Otherwise, t^N is picked by the encoder, and J^N is the lattice point decoded from the jamming signal the source received during the previous phase. This design is not essential, but it brings some uniformity to the form of the received signals and simplifies the exposition.

2) The Relay Node: As this signal propagates toward the destination, each relay node, when its turn comes, sends a jamming signal of the form t_k^N ⊕ d_k^N, k = 2...K−1, where K is the number of nodes. The subscript k denotes the index of the node transmitting this signal. If this is the first time the relay transmits during this block, then t_k^N is drawn from a uniform distribution over Λ ∩ V1, and all previously received signals are ignored. Otherwise, t_k^N is computed from the signal it received during the previous phase. This will be clarified in the sequel. d_k^N again is the dithering noise uniformly distributed over V1.

The signal received by the relay within a block falls into one of the following three cases. Let z^N denote the Gaussian channel noise.

1) If this is the first time the relay receives signals during this block, the signal has the form (t_A^N ⊕ d_A^N) + z^N. It only contains interference from its left neighbor.
2) If this is the last time the relay receives signals during this block, the signal has the form (t_B^N ⊕ d_B^N) + z^N. It only contains interference from its right neighbor.
3) Otherwise it has the form y_k^N = (t_A^N ⊕ d_A^N) + (t_B^N ⊕ d_B^N) + z^N.

Here t_A^N, t_B^N are lattice points, and d_A^N, d_B^N are dithering noises. Following reference [10], if the lattice is properly designed and the cardinality of the set Λ ∩ V1 is properly chosen, then for case (3), the relay, with the knowledge of d_A^N and d_B^N, will be able to decode t_A^N ⊕ t_B^N. For cases (1) and (2), the relay will be able to decode t_A^N and t_B^N respectively. Otherwise, we say that a decoding error has occurred at the relay node.

The transmitted signal at the relay node is then computed as follows:

    x^N = t_A^N ⊕ t_B^N ⊕ (−x′^N) ⊕ d_C^N    (23)

Here x′^N is the lattice point contained in the jamming signal transmitted by this relay node during the previous phase, and − is the inverse operation defined over the group V1 ∩ Λ. t_A^N ⊕ t_B^N is decoded from the signal the relay received during the previous phase.

In Figure 3, we labeled the lattice points transmitted over some edges. For clarity we omitted the superscript N. The + signs in the figure are all modulus operations. The reason why we have (−x′^N) in (23) is now apparent: it leads to a simple expression for the signal as it propagates from the relay to the destination.

3) The Destination: As shown in Figure 3, the destination behaves identically to a relay node when it computes its jamming signal.

It is also clear from Figure 3 that the destination will be able to decode the data from the source. This is because the lattice point contained in the signal received by the destination has the form t^N ⊕ J^N, where t^N is the lattice point determined by the transmitted data, and J^N is the lattice point in the jamming signal known by the destination.

C. A Lower Bound to the Secrecy Rate

Suppose the source transmits Q+1 times within a block.

Then each relay node receives Q+2 batches of signals within

the block. An example with Q = 2 is shown in Figure 3.

Given the inputs from the source of the current block, the


signals received by the relay node are independent from

the signals it received during any other block. Therefore,

if a block of channel uses is viewed as one meta-channel

use, with the source input as the channel input and the

signal received by the relay as the channel output, then the

effective channel is memoryless.

Fig. 4. Notations for Lattice Points contained in Signals, Q = 2

Each relay node has the

following side information regarding the source inputs within

one block:

1) Q + 2 batches of received signals.

2) All the dithering noises {di}.

3) Signals transmitted from the relay node during this

block. Note that only the first batch of signals it trans-

mitted may provide information because all subsequent

transmitted signals are computed from received signals

and dithering noises.

Let W be the secret message transmitted over M blocks. Following the notation in Figure 4, the equivocation with respect to the relay node is given by:

    H2 = (1/(NM)) H(W | (t_{A1}^{NM} ⊕ d_{α1}^{NM}) + z_1^{NM}, d_{α1}^{NM},
         (x_{Ai}^{NM} ⊕ d_{αi}^{NM}) + (t_{D(i−1)}^{NM} ⊕ d_{β(i−1)}^{NM}) + z_i^{NM}, d_{αi}^{NM}, d_{β(i−1)}^{NM}, i = 2...Q+1,
         (t_{D(Q+1)}^{NM} ⊕ d_{β(Q+1)}^{NM}) + z_{Q+1}^{NM}, d_{β(Q+1)}^{NM}, t_{B1}^{NM}, d_{b1}^{NM})    (24)

Define the block error probability as

    P̄e = Pr(∃ i ∈ {2...Q+1} s.t. x_{Ai}^N is in error, or t_{D(i−1)}^N is in error, or t_{D(Q+1)}^N is in error)    (25)

where x_{Ai}^N is the part of x_{Ai}^{NM} that is within one block. Similar notations are used for t_{D(i−1)}^N and t_{D(Q+1)}^N. Given the signaling scheme presented in Section IV-B and [17, Theorem 2], the probability of decoding error at each relay node goes to zero as N → ∞. Let Pe(i,k) be the probability of decoding error at relay node i during phase k. Then P̄e is related to Pe(i,k) as P̄e ≤ 1 − Π_{i,k} (1 − Pe(i,k)), where the subscript of the product ranges over the indices of all the relay nodes and the indices of the phases in this block.

For any given block length Q, we have lim_{N→∞} P̄e = 0. Note that P̄e is just a function of N and Q. Because there are only a finite number of relay nodes, this convergence is uniform over all relay nodes.

Let the equivocation under error free decoding be

¯H2=

1

NMH(W|(xNM

⊕ dNM

αi ,dNM

(¯tNM

A1 ⊕ dNM

D(i−1)⊕ dNM

α1) + zNM

1

,dNM

α1

(¯ xNM

Ai

dNM

αi ) + (¯tNM

β(i−1)) + zNM

i

,

β(i−1),i = 2...Q + 1

D(Q+1)⊕ dNM

where ¯ xNM

Ai

equals the value xNM

decoding. ¯tNM

fashion. Then we have the following lemma:

Lemma 2: For a given Q,¯H2+ε2≥ H2≥¯H2−ε1where

ε1,2→ 0 as N,M → ∞.

Proof: Let cj,ˆ cjdenote the part of signals received by

the relay node within the jth block. More specifically, they

have the following form:

β(Q+1)) + zNM

Q+1,dNM

β(Q+1),tNM

B1,dNM

b1 )

(26)

Ai

takes with error free

D(i−1)and ¯tNM

D(Q+1)are defined in a similar

ˆ cj= {(xN

(tN

cj= {(¯ xN

(¯tN

Ai(j) ⊕ dN

αi(j))+

β(i−1)(j)) + zN

Ai(j) ⊕ dN

D(i−1)(j) ⊕ dN

In this notation, we exclude the first and the last batch of

received signals. The first batch of received signals does

not undergo any decoding operation. For the last batch of

received signals we have the following notation:

D(i−1)(j) ⊕ dN

i(j),i = 2...Q + 1}

(27)

αi(j))+

β(i−1)(j)) + zN

i(j),i = 2...Q + 1}

(28)

ˆfj= (tN

fj= (¯tN

D(Q+1)(j) ⊕ dN

D(Q+1)(j) ⊕ dN

β(Q+1)(j)) + zN

β(Q+1)(j)) + zN

Q+1(j)

(29)

Q+1(j)

(30)

The block index (j) will be omitted in the following discus-

sion for clarity.

We first prove that cj− ˆ cjis a discrete random variable

with a finite support. According to the notation of (28), cj−ˆ cj

has Q components. Each component can be expressed as

?¯ xN

(¯tN

Ai⊕ dN

D(i−1)⊕ dN

αi

?−?xN

β(i−1)) − (tN

Ai⊕ dN

αi

D(i−1)⊕ dN

?+

β(i−1))

(31)

For the first line of (31) we have

?¯ xN

Ai+ dN

=¯ xN

Ai⊕ dN

αi

?−?xN

αi+ xN

Ai+ xN

Ai⊕ dN

1−?xN

1− xN

αi

?

(32)

(33)

(34)

=¯ xN

Ai+ dN

αi+ xN

2

?

Ai− xN

belong to the coarse lattice Λ1. Applying

Theorem 1, we note that xN

possible solutions. ¯ xN

values. Let R =

22N(R+1)possible values. Similarly, we can prove that the

second line of (31) has at most 22N(R+1)possible values as

well. Therefore cj− ˆ cjtakes at most 24NQ(R+1)possible

values. Therefore H?cj− ˆ cj?≤ 4NQ(R + 1). Similarly, it

2

where xN

1,xN

2

1 and xN

Aieach take ?V1∩Λ? possible

Nlog2?V1∩ Λ?. Then (32) takes at most

2 each has at most 2N

Aiand xN

1

Page 7

can be shown that f −ˆf has at most 2N(R + 1) solutions.

This means that

H(cj− ˆ cj,fj−ˆfj) ≤ (4Q + 2)N(R + 1)

(35)

Let c = {cj}, ˆ c = {ˆ cj}, f = {fj} andˆf = {ˆfj} j = 1...M.

Let b denote the remaining conditioning terms in H2. Let Ej

denote the random variable cj?= ˆ cjor fj?=ˆfj. Then with

probability¯Pe that Ej= 1. Otherwise Ej= 0. Let W be

the message transmitted over the M blocks. Then we have

$H(W|b, \hat c, \hat f)$

$\ge H(W|b, c, \hat c, f, \hat f)$   (36)

$= H(W|b, c, f, c - \hat c, f - \hat f)$   (37)

$= H(W|b, c, f) + H(c - \hat c, f - \hat f \mid W, b, c, f) - H(c - \hat c, f - \hat f \mid b, c, f)$   (38)

$\ge H(W|b, c, f) - H(c - \hat c, f - \hat f)$   (39)

$\ge H(W|b, c, f) - \sum_{j=1}^{M} H(c_j - \hat c_j, f_j - \hat f_j)$   (40)

$= H(W|b, c, f) - \sum_{j=1}^{M} H(c_j - \hat c_j, f_j - \hat f_j, E_j)$   (41)

$\ge H(W|b, c, f) - \sum_{j=1}^{M} H(E_j) - \sum_{j=1}^{M} \Pr(E_j = 1) H(c_j - \hat c_j, f_j - \hat f_j)$   (42)

$\ge H(W|b, c, f) - M - M \bar P_e (4Q+2) N (R+1)$   (43)

Dividing both sides by $NM$, letting $N, M \to \infty$, and setting $\varepsilon_1 = 1/N + \bar P_e (4Q+2)(R+1)$, we get $H_2 \ge \bar H_2 - \varepsilon_1$. Similarly, we can prove $\bar H_2 \ge H_2 - \varepsilon_2$.
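The step from (41) to (43) rests on the decomposition $H(X, E) = H(E) + H(X|E)$, where $E$ is the error indicator, $H(X|E=0) = 0$, and $H(X|E=1)$ is bounded by the log of the support size. A small numeric sanity check of this identity, using a toy distribution rather than the paper's variables:

```python
import math

def H(dist):
    # Shannon entropy in bits of a dict {value: probability}
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

p_err = 0.1
# X = 0 with probability 1 - p_err (no error); otherwise X is uniform
# over 4 nonzero error values.
probs = {0: 1 - p_err}
probs.update({x: p_err / 4 for x in (-2, -1, 1, 2)})

h_e = H({0: 1 - p_err, 1: p_err})  # entropy of the error indicator E
h_x_given_e1 = math.log2(4)        # X uniform over 4 values when E = 1

# E is a function of X, so H(X) = H(X, E) = H(E) + H(X|E),
# and H(X|E) = Pr(E=1) * H(X|E=1) since H(X|E=0) = 0.
assert abs(H(probs) - (h_e + p_err * h_x_given_e1)) < 1e-12
print(H(probs))
```

This is exactly the shape of the bound in (42)-(43): one bit per block for $H(E_j)$, plus $\bar P_e$ times the support-size bound from (35).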

Remark 3: Lemma 2 says that if a particular equivocation value is achievable with regard to one relay node when all the other relay nodes perform error-free decoding, then the same equivocation value is achievable when the other relay nodes perform decode-and-forward, which is error free only in an asymptotic sense.

Lemma 3: $\bar H_2$ is the same for all relay nodes.

Proof: The lemma follows because the relay nodes receive statistically equivalent signals if there are no decoding errors.

For the $k$th relay node, as shown by the edge labels in Figure 3, the conditioning terms of $\bar H_2$ in (26) are related to the lattice points $t^{NM}_j$ as follows:

$x^{NM}_{A1} = J^{NM}_{k-2}$   (44)

$\bar x^{NM}_{A2} = t^{NM}_0 \oplus J^{NM}_{k-1}$   (45)

$\bar x^{NM}_{A3} = t^{NM}_0 \oplus t^{NM}_1 \oplus J^{NM}_{k}$   (46)

$\ldots$

$\bar x^{NM}_{A(Q+1)} = t^{NM}_0 \oplus t^{NM}_1 \oplus \ldots \oplus t^{NM}_{Q-1} \oplus J^{NM}_{k+Q-2}$   (47)

$\bar t^{NM}_{D1} = J^{NM}_{k}$   (48)

$\bar t^{NM}_{D2} = t^{NM}_0 \oplus J^{NM}_{k+1}$   (49)

$\bar t^{NM}_{D3} = t^{NM}_0 \oplus t^{NM}_1 \oplus J^{NM}_{k+2}$   (50)

$\ldots$

$\bar t^{NM}_{D(Q+1)} = t^{NM}_0 \oplus t^{NM}_1 \oplus \ldots \oplus t^{NM}_{Q-1} \oplus J^{NM}_{k+Q}$   (51)

$\bar t^{NM}_{B1} = J^{NM}_{k-1}$   (52)

Given the lattice points $t^{NM}_j$ transmitted by the source, the joint distribution of the side information is the same for any relay node. Hence we have the lemma.
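The statistical equivalence in Lemma 3 can be checked numerically on a toy modulo alphabet: each relay's side information is a tuple of mod-sums in which every jamming signal appears exactly once, and the jamming signals are i.i.d. uniform, so the tuple's joint distribution does not depend on the relay index $k$. The alphabet size, source points, and number of jamming signals below are illustrative assumptions:

```python
import itertools
from collections import Counter

M = 4           # toy alphabet, standing in for V1 ∩ Λ
t0, t1 = 1, 3   # fixed source lattice points (arbitrary choices)

def side_info_dist(k, num_j=8):
    # Exhaustively enumerate all i.i.d. uniform jamming realizations and
    # tabulate the joint distribution of relay k's side information,
    # mirroring the structure of (44)-(46).
    dist = Counter()
    for J in itertools.product(range(M), repeat=num_j):
        tup = (J[k - 2] % M,
               (t0 + J[k - 1]) % M,
               (t0 + t1 + J[k]) % M)
        dist[tup] += 1
    return dist

# Different relay indices see identically distributed side information.
assert side_info_dist(2) == side_info_dist(5)
print("side information distribution is independent of k")
```

Each sum involves a fresh uniform pad, so the tuple is uniform over $M^3$ values regardless of $k$, which is the one-time-pad structure the lemma exploits.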

With these preparations, we are now ready to present the following achievable rate.

Theorem 4: For any $\varepsilon > 0$, a secrecy rate of at least $0.5(C(2\bar P - 0.5) - 1) - \varepsilon$ bits per channel use is achievable regardless of the number of hops.
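The rate in Theorem 4 can be evaluated numerically. The sketch below uses the finite-$Q$ rate expression from the end of the proof together with the AWGN capacity function $C(x) = 0.5\log_2(1+x)$; the power value $\bar P = 10$ is an arbitrary illustrative choice:

```python
import math

def C(x):
    # AWGN capacity function, 0.5 * log2(1 + x)
    return 0.5 * math.log2(1 + x)

Pbar = 10.0  # average power per channel use (illustrative value)
limit = 0.5 * (C(2 * Pbar - 0.5) - 1)  # Theorem 4 rate

for Q in (1, 10, 100, 1000):
    frac = (Q + 1) / (2 * Q + 3)       # fraction of active phases
    rate = frac * (C(Pbar / frac - 0.5) - 1)
    print(Q, rate)

print("limit:", limit)
```

As $Q$ grows, the finite-$Q$ rate $\frac{Q+1}{2Q+3}\left(C\left(\frac{2Q+3}{Q+1}\bar P - 0.5\right) - 1\right)$ approaches the stated limit, and notably does not vanish with the number of hops.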

Proof: According to Lemma 3, it suffices to design the coding scheme based on one relay node. We focus on one block of channel uses as shown in Figure 3. Let $V(j)$ denote all the side information available to the relay node within the $j$th block. We start by lower bounding $H(t^{NQ}_0 | V(j))$ under ideal error-free decoding, where $t^{NQ}_0$ are the lattice points picked by the encoder at the source node, as described in Section IV-B, within this block. $H(t^{NQ}_0 | V(j))$ equals

$H(t^{NQ}_0 \mid (\bar x^N_{Ai} \oplus d^N_{\alpha i}) + (\bar t^N_{D(i-1)} \oplus d^N_{\beta(i-1)}) + z^N_i,\; d^N_{\alpha i}, d^N_{\beta(i-1)},\; i = 2 \ldots Q+1,\; t^N_{B1}, d^N_{b1})$   (53)

Comparing (53) with the conditioning terms in (26), we see that we have removed the first and the last batch of received signals during a block from the conditioning terms, because they are independent from everything else. The last batch of received signals contains the lattice point of the most recent jamming signal observable by the relay node. Its independence follows from Lemma 1.

We then assume that the eavesdropper residing at the relay node knows the channel noise. This means (53) can be lower bounded by:

$H(t^{NQ}_0 \mid (\bar x^N_{Ai} \oplus d^N_{\alpha i}) + (\bar t^N_{D(i-1)} \oplus d^N_{\beta(i-1)}),\; d^N_{\alpha i}, d^N_{\beta(i-1)},\; i = 2 \ldots Q+1,\; t^N_{B1}, d^N_{b1})$   (54)

Next, we invoke Theorem 1. Equation (54) can be lower bounded by:

$H(t^{NQ}_0 \mid \bar x^N_{Ai} \oplus d^N_{\alpha i} \oplus \bar t^N_{D(i-1)} \oplus d^N_{\beta(i-1)},\; T_i,\; d^N_{\alpha i}, d^N_{\beta(i-1)},\; i = 2 \ldots Q+1,\; t^N_{B1}, d^N_{b1})$   (55)

where $T_i$ can be represented with $N$ bits. Using a similar argument as in (9)-(13), (55) is lower bounded by:

$H(t^{NQ}_0 \mid \bar x^N_{Ai} \oplus d^N_{\alpha i} \oplus \bar t^N_{D(i-1)} \oplus d^N_{\beta(i-1)},\; d^N_{\alpha i}, d^N_{\beta(i-1)},\; i = 2 \ldots Q+1,\; t^N_{B1}, d^N_{b1}) - H(T_i, i = 2 \ldots Q+1)$   (56)

$= H(t^{NQ}_0 \mid \bar x^N_{Ai} \oplus \bar t^N_{D(i-1)},\; i = 2 \ldots Q+1,\; t^N_{B1}) - H(T_i, i = 2 \ldots Q+1)$   (57)

It turns out that the conditioning variables in the first term of (57) are all independent from $t^{NQ}_0$. This is because $\bar t^N_{D(i-1)}$ contains $J^N_{i-2+k}$, which is a new lattice point not contained in any previous $\bar t^N_{D(j-1)}$ or $\bar x^N_{Aj}$, $j < i$. The new lattice point is uniformly distributed over $V_1 \cap \Lambda$. Therefore, from Lemma 1, $\bar x^N_{Ai} \oplus \bar t^N_{D(i-1)}$ is independent from $t^{NQ}_0$. Therefore (57) equals

$H(t^{NQ}_0) - H(T_i, i = 2 \ldots Q+1)$   (58)

Define

$c = \frac{1}{NQ} I(t^{NQ}_0; V(j))$   (59)

Then from (58), we have $c \in (0, 1)$.

To achieve perfect secrecy, a similar argument of coding across different blocks as the one in Section III can be used. A codebook with rate $R$ and size $2^{\lfloor MNQR \rfloor}$ that spans $M$ blocks is constructed as follows: each codeword is a length-$MQ$ sequence, and each component of the sequence is an $N$-dimensional lattice point sampled in an i.i.d. fashion from the uniform distribution over $V_1 \cap \Lambda$. The codebook is then randomly binned into several bins, each containing $2^{\lfloor MNQc \rfloor}$ codewords, with $c$ given by (59). Denote the codebook by $\mathcal{C}$.

The transmitted codeword is determined as follows: Consider a message set $\{W\}$ whose size equals the number of bins. Messages are mapped to bins in a one-to-one fashion. The actual transmitted codeword is then selected from the bin according to a uniform distribution. Let this codeword be $u^{MNQ}$. Let $V = \{V(j), j = 1 \ldots M\}$. Then we have:

$H(W|V, \mathcal{C})$   (60)

$= H(W|u^{MNQ}, V, \mathcal{C}) + H(u^{MNQ}|V, \mathcal{C}) - H(u^{MNQ}|W, V, \mathcal{C})$   (61)

$\ge H(u^{MNQ}|V, \mathcal{C}) - MNQ\varepsilon$   (62)

$= H(u^{MNQ}|\mathcal{C}) - I(u^{MNQ}; V \mid \mathcal{C}) - MNQ\varepsilon$   (63)

$\ge H(u^{MNQ}|\mathcal{C}) - \sum_{j=1}^{M} I(u^{MNQ}(j); V(j)) - MNQ\varepsilon$   (64)

$= H(u^{MNQ}|\mathcal{C}) - MNQc - MNQ\varepsilon$   (65)

(62) follows from Fano's inequality, since the size of each bin is picked according to the rate of information leaked to the eavesdropper under the same input distribution used to sample the codebook. (64) follows from $\mathcal{C} \to u^{MNQ} \to V$ being a Markov chain. Dividing (60) and (65) by $MNQ$ and letting $M \to \infty$, we have $\varepsilon \to 0$ and $\lim_{M\to\infty} \frac{1}{MNQ} H(W|V,\mathcal{C}) = \lim_{M\to\infty} \frac{1}{MNQ} H(W)$. Therefore a secrecy rate of $R - c$ bits per channel use is achieved. According to [10], $R$ can be arbitrarily close to $C(P - 0.5)$ by making $N \to \infty$, where $P$ is the average power per channel use spent to transmit a lattice point. For a given node, during $2Q+3$ phases, it is active in $Q+1$ phases. Since $c \in [0,1]$, a secrecy rate of $\frac{Q+1}{2Q+3}\left(C\left(\frac{2Q+3}{Q+1}\bar P - 0.5\right) - 1\right)$ is then achievable by letting $M \to \infty$. Taking the limit $Q \to \infty$, we have the theorem.

V. CONCLUSION

Lattice codes were recently shown to be a useful technique for proving information theoretic results. In this work, we showed that lattice codes are also useful for proving secrecy results. This was done by showing that the equivocation rate can be bounded if the shaping set and the "fine" lattice form a nested lattice structure. With this new tool, we computed the secrecy rate for two models: (1) a wiretap channel with a cooperative jammer, and (2) a multi-hop line network with untrusted relays. For the second model, we have shown that a coding scheme can be designed to support a non-vanishing secrecy rate regardless of the number of hops.

REFERENCES

[1] C. E. Shannon. Communication Theory of Secrecy Systems. Bell

System Technical Journal, 28(4):656–715, 1949.

[2] A. D. Wyner. The Wire-tap Channel. Bell System Technical Journal,

54(8):1355–1387, 1975.

[3] I. Csiszar and J. Korner. Broadcast Channels with Confidential

Messages. IEEE Transactions on Information Theory, 24(3):339–348,

1978.

[4] S. Leung-Yan-Cheong and M. Hellman. The Gaussian Wire-tap Channel. IEEE Transactions on Information Theory, 24(4):451–456, 1978.

[5] A. Khisti and G. Wornell. Secure Transmission with Multiple Anten-

nas: The MISOME Wiretap Channel. Submitted to IEEE Transactions

on Information Theory, 2007.

[6] S. Shafiee, N. Liu, and S. Ulukus. Towards the Secrecy Capacity of

the Gaussian MIMO Wire-tap Channel: The 2-2-1 Channel. Submitted

to IEEE Transactions on Information Theory, 2007.

[7] F. Oggier and B. Hassibi. The Secrecy Capacity of the MIMO Wiretap

Channel. IEEE International Symposium on Information Theory, 2008.

[8] E. Tekin and A. Yener. The Gaussian Multiple Access Wire-tap

Channel. IEEE Transactions on Information Theory, 54(12):5747–5755,

December 2008.

[9] B. Nazer and M. Gastpar. The Case for Structured Random Codes

in Network Capacity Theorems. European Transactions on Telecom-

munications, Special Issue on New Directions in Information Theory,

19(4):455–474, 2008.

[10] K. Narayanan, M.P. Wilson, and A. Sprintson. Joint Physical Layer

Coding and Network Coding for Bi-Directional Relaying. Allerton

Conference on Communication, Control, and Computing, 2007.

[11] W. Nam, S.-Y. Chung, and Y. H. Lee. Capacity Bounds for Two-way Relay Channels. International Zurich Seminar on Communications, 2008.

[12] G. Bresler, A. Parekh, and D. Tse. The Approximate Capacity of the Many-to-one and One-to-many Gaussian Interference Channels. Allerton Conference on Communication, Control, and Computing, 2007.

[13] S. Sridharan, A. Jafarian, S. Vishwanath, and S. A. Jafar. Capacity of

Symmetric K-User Gaussian Very Strong Interference Channels. IEEE

Global Telecommunication Conf., November 2008.

[14] E. Tekin and A. Yener. Achievable Rates for Two-Way Wire-Tap

Channels. International Symposium on Information Theory, June 2007.

[15] L. Lai, H. El Gamal, and H.V. Poor. The Wiretap Channel with Feed-

back: Encryption over the Channel. IEEE Transactions on Information

Theory, 54(11):5059–5067, November 2008.

[16] H. A. Loeliger. Averaging bounds for lattices and linear codes.

IEEE Transactions on Information Theory, 43(6):1767–1773, November

1997.

[17] U. Erez and R. Zamir. Achieving 1/2 log (1+ SNR) on the AWGN

Channel with Lattice Encoding and Decoding. IEEE Transactions on

Information Theory, 50(10):2293–2314, October 2004.

[18] S.A. Jafar. Capacity with Causal and Non-Causal Side Information - A Unified View. IEEE Transactions on Information Theory,

52(12):5468–5475, December 2006.

[19] J.H. Conway and N.J.A. Sloane. Sphere Packings, Lattices and Groups.

Springer, 1999.

[20] E. Tekin and A. Yener. The General Gaussian Multiple Access

and Two-Way Wire-Tap Channels: Achievable Rates and Cooperative

Jamming.

IEEE Transactions on Information Theory, 54(6):2735–

2751, June 2008.

[21] U. Erez, S. Litsyn, and R. Zamir. Lattices Which Are Good for (Almost) Everything. IEEE Transactions on Information Theory, 51(10):3401–3416, 2005.
