
arXiv:0905.2645v1 [cs.IT] 16 May 2009

Providing Secrecy with Lattice Codes

Xiang He, Aylin Yener

Wireless Communications and Networking Laboratory

Electrical Engineering Department

The Pennsylvania State University, University Park, PA 16802

xxh119@psu.edu yener@ee.psu.edu

Abstract—Recent results have shown that lattice codes can be

used to construct good channel codes, source codes and physical

layer network codes for Gaussian channels. On the other hand,

for Gaussian channels with secrecy constraints, efforts to date

rely on random codes. In this work, we provide a tool to bridge

these two areas so that the secrecy rate can be computed when

lattice codes are used. In particular, we address the problem of

bounding equivocation rates under nonlinear modulus operation

that is present in lattice encoders/decoders. The technique is

then demonstrated in two Gaussian channel examples: (1) a

Gaussian wiretap channel with a cooperative jammer, and (2)

a multi-hop line network from a source to a destination with

untrusted intermediate relay nodes from whom the information

needs to be kept secret. In both cases, lattice codes are used to

facilitate cooperative jamming. In the second case, interestingly,

we demonstrate that a non-vanishing positive secrecy rate is

achievable regardless of the number of hops.

I. INTRODUCTION

Information theoretic secrecy was first proposed by Shan-

non in [1]. In this classical model, Bob wants to send a

message to Alice, which needs to be kept secret from Eve.

Shannon’s notion of secrecy requires the average rate of

information leaked to Eve to be zero, with no assumption

made on the computational power of Eve. Wyner, in [2],

pointed out that, more often than not, the eavesdropper

(Eve) has a noisy copy of the signal transmitted from the

source, and building a useful secure communication system

per Shannon’s notion is possible [2]. Csiszar and Korner [3]

extended this to a more general channel model.

Numerous channel models have since been studied under

Shannon’s framework. The maximum reliable transmission

rate with secrecy is identified for several cases including

the Gaussian wiretap channel [4] and the MIMO wiretap

channel [5], [6], [7]. Sum secrecy capacity for a degraded

Gaussian multiple access wiretap channel is given in [8].

For other channels, upper bounds, lower bounds and some

asymptotic results on the secrecy capacity exist. For the

achievability part, Shannon’s random coding argument proves

to be effective in the majority of these works.

On the other hand, it is known that the random coding

argument may be insufficient to prove capacity theorems for

certain channels [9]. Instead, structured codes like lattice

codes are used. Using structured codes has two benefits. First,

it is relatively easy to analyze large networks under these

codes. For example, in [10], [11], the lattice code allows the

relaying scheme to be equivalent to a modulus sum operation,

making it easy to trace the signal over a multi-hop relay

network. Secondly, the structured nature of these codes makes

it possible to align unwanted interference, for example, for

the interference channel with more than two users [12], [13],

and the two way relay channel [10], [11].

A natural question is therefore whether structured codes

are useful for secure communication as well. In particular, in

this work, we are interested in answering two questions:

1) How do we bound the secrecy capacity when structured

codes are used?

2) Are there models where structured codes prove to be

useful in providing secrecy?

Relevant references in this line of thinking include [14]

and [15]. Reference [14] considers a binary additive two-

way wiretap channel where one terminal uses binary jamming

signals. Reference [15] examines a wiretap channel where the

eavesdropping channel is a modulus-Λ channel. Under the

proposed signaling scheme therein, the source uses a lattice

code to convey the secret message, and, the destination jams

the eavesdropper with a lattice code. The eavesdropper sees

the sum of these two codes, both taking value in a finite

group, where the sum is carried under the addition defined

over the group. It is known that if the jamming signal is

sampled from a uniform distribution over the group, then the

sum is independent from the message.

While these are encouraging steps in showing the impact of

structured jamming signals, as commented in [15], using this

technique in Gaussian channels is a non-trivial step. In the Gaussian channel, the eavesdropper also receives the sum of the signal from the source and the jamming signal. However,

the addition is over real numbers rather than over a finite

group. The property of modulus sum is therefore lost and it

is difficult to measure how much information is leaked to the

eavesdropper.

Most lattice codes for power constrained transmission have

a similar structure to the one used in [15]. First, a lattice is

constructed, which should be a good channel code under the

noise/interference. Then, to meet the power constraint, the

lattice, or its shifted version, is intersected with a bounded

set, called the shaping set, to create a set of lattice points

with finite average power. The lattice is shifted to make sure

sufficiently many lattice points fall into the shaping set to

maintain the codebook size and hence the coding rate [16].

The decoder at the destination is called a lattice decoder if it is only asked to find the most likely lattice point given the received signal, and is not aware of the shaping set. Because of


the structured nature of the lattice, a lattice decoder has lower complexity compared to the maximum likelihood decoder, where the knowledge of the shaping set is used. Also, under the lattice decoder, the introduction of the shaping set does not pose any additional difficulty to the analysis of decoding performance. Commonly used shaping sets include the sphere [12] and the fundamental region of a lattice [17].

A key observation is that, from the viewpoint of an eaves-

dropper, the shaping set actually provides useful information,

since it reduces the set of lattice points the eavesdropper

needs to consider. The main aim of this work, therefore, is

to find a shaping set and lattice code construction under

which the information leaked to the eavesdropper can be

bounded. This shaping set, as we shall see, turns out to be the

fundamental region of a “coarse” lattice in a nested lattice

structure. Under this construction, we show that at most 1 bit

is leaked to the eavesdropper per channel use. This enables

us to lower bound the secrecy rate using a technique similar

to the genie bound from [18].

To demonstrate the utility of our approach, we then apply

our technique to two channel models: a Gaussian wiretap

channel with a cooperative jammer, and a multi-hop line

network, where a source can communicate with a destination only

through a chain of untrusted relays. In the second case,

we demonstrate that a non-vanishing positive secrecy rate

is achievable regardless of the number of hops.

The following notation is used throughout this work: We use H to denote the entropy. ε_k is used to denote any variable that goes to 0 when n goes to ∞. We define C(x) = (1/2) log2(1 + x). ⌊a⌋ denotes the largest integer less than or equal to a.

II. THE REPRESENTATION THEOREM

In this section, we present a result about lattice codes

which will be useful in the sequel.

Let Λ denote a lattice in R^N [17], i.e., a set of points which forms a group under real vector addition. The modulus operation x mod Λ is defined as x mod Λ = x − arg min_{y∈Λ} d(x, y), where d(x, y) is the Euclidean distance between x and y. The fundamental region V of the lattice is defined as the set {x : x mod Λ = 0}. It is possible that more than one lattice point has the same minimal distance to x. Such a tie is broken by properly assigning the boundary of V [17].
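To make the definition concrete, the modulus operation can be evaluated numerically for a toy lattice; the brute-force nearest-point search below is our illustration (not part of the paper's construction) and works for any small generator basis near the origin:

```python
import itertools
import math

def mod_lattice(x, basis, coeff_range=range(-4, 5)):
    """Compute x mod Lambda by subtracting the nearest lattice point.

    basis: list of generator vectors of the lattice.
    Brute-force search over a small coefficient box; sufficient when
    x lies near the origin. (Toy sketch, not the paper's construction.)"""
    best_d, best_y = None, None
    for coeffs in itertools.product(coeff_range, repeat=len(basis)):
        y = [sum(c * b[i] for c, b in zip(coeffs, basis)) for i in range(len(x))]
        d = math.dist(x, y)
        if best_d is None or d < best_d:
            best_d, best_y = d, y
    return [xi - yi for xi, yi in zip(x, best_y)]

# Lambda = 2Z^2: the remainder always lands in the fundamental region [-1, 1)^2
print(mod_lattice([2.7, -3.1], [[2.0, 0.0], [0.0, 2.0]]))
```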

Let t_A and t_B be two points taken from V. For any set A, define 2A as 2A = {2x : x ∈ A}. Then we have:

{t_A + t_B : t_A, t_B ∈ V} = 2V    (1)

Define A_x as A_x = {t_A + t_B + x : t_A, t_B ∈ V}. Then from (1), we have A_x = x + 2V. With this preparation, we are ready to prove the following representation theorem:

Theorem 1: There exists a random integer T, such that 1 ≤ T ≤ 2^N, and t_A + t_B is uniquely determined by {T, t_A + t_B mod Λ}.

Proof: By definition of the modulus-Λ operation, we have

t_A + t_B mod Λ = t_A + t_B + x,  x ∈ Λ    (2)

The theorem is equivalent to finding the number of possible x meeting equation (2) for a given t_A + t_B mod Λ.

To do that, we need to know a little more about the structure of the lattice Λ. Every point in a lattice, by definition, can be represented in the following form [19]: x = Σ_{i=1}^{N} a_i v_i, with v_i ∈ R^N and a_i ∈ Z. {a_i} are said to be the coordinates of the lattice point x under the basis {v_i}.

Based on this representation, we can define the following relation: Consider two points x, y ∈ Λ, with coordinates {a_i} and {b_i} respectively. Then we say x ∼ y if a_i = b_i mod 2, i = 1...N. It is easy to see that ∼ is an equivalence relation. Therefore, it defines a partition over Λ.

1) Depending on the values of a_i − b_i mod 2, there are 2^N sets in this partition.

2) The sub-lattice 2Λ is one set in the partition, whose members have even coordinates. The remaining 2^N − 1 sets are its cosets.

Let C_i denote any one of these cosets or 2Λ. Then C_i can be expressed as C_i = 2Λ + y_i, y_i ∈ Λ. It is easy to verify that {A_x = x + 2V : x ∈ C_i} is a partition of 2R^N + y_i, which equals R^N.

We proceed to use the two partitions derived above: Since C_i, i = 1...2^N, is a partition of Λ, (2) can be solved by considering the following 2^N equations:

t_A + t_B mod Λ = t_A + t_B + x,  x ∈ C_i    (3)

From (1), this means t_A + t_B mod Λ ∈ x + 2V for some x ∈ C_i. Since {x + 2V : x ∈ C_i} is a partition of R^N, there is at most one x ∈ C_i that meets this requirement. This implies that, for a given t_A + t_B mod Λ and a given coset C_i, (3) has only one solution for x. Since there are 2^N such equations, (2) has at most 2^N solutions. Hence each t_A + t_B mod Λ corresponds to at most 2^N points of t_A + t_B.

Remark 1: Theorem 1 implies that the modulus operation loses at most one bit of information per dimension if t_A, t_B ∈ V.
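A quick numeric sanity check (ours, using the integer lattice Z² rather than the paper's nested construction) confirms the 2^N preimage count behind Theorem 1:

```python
import itertools

# Sanity check of Theorem 1 for the integer lattice Z^2, whose fundamental
# region is V = [-0.5, 0.5)^2: every value of (tA + tB) mod Z^2 is shared by
# at most 2^N = 4 integer shifts x, so an index T with 1 <= T <= 2^N pins
# down the real sum tA + tB exactly.
N = 2
grid = [round(-0.5 + 0.1 * i, 1) for i in range(10)]   # sample points of V
shifts = {}                                            # mod value -> integer shifts seen
for tA in itertools.product(grid, repeat=N):
    for tB in itertools.product(grid, repeat=N):
        s = [a + b for a, b in zip(tA, tB)]            # s lies in 2V = [-1, 1)^N
        x = [round(si) for si in s]                    # nearest point of Z^N
        m = tuple(round(si - xi, 6) for si, xi in zip(s, x))   # s mod Z^N
        shifts.setdefault(m, set()).add(tuple(x))
print(max(len(v) for v in shifts.values()))            # at most 2^N = 4
```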

The following crypto lemma is useful and is provided here

for completeness.

Lemma 1: [15] Let t_A, t_B be two independent random variables distributed over a compact abelian group, where t_B has a uniform distribution. Then t_A + t_B is independent of t_A. Here + is the addition defined over the group.

In the remainder of the paper, (Λ, Λ1) denotes a nested lattice structure where Λ1 is the coarse lattice. Let V and V1 be their respective fundamental regions. We shall use a ⊕ b as short for a + b mod Λ1. Then from Lemma 1, we have the following corollary:

Corollary 1: Let t_A ∈ Λ ∩ V1, t_B ∈ Λ ∩ V1, and let t_B be uniformly distributed over Λ ∩ V1. Let t_S = t_A ⊕ t_B. Then t_S is independent of t_A.
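The crypto lemma can be checked exhaustively on a small cyclic group; the group Z_7 below is our hypothetical stand-in for the finite group (Λ ∩ V1, ⊕):

```python
from collections import Counter

# Illustration of the crypto lemma (Lemma 1) over Z_7, standing in for the
# group (Lambda ∩ V1, ⊕): when tB is uniform over the group, the distribution
# of tA ⊕ tB is uniform no matter which tA was chosen, so tA ⊕ tB carries
# no information about tA.
q = 7
for tA in range(q):
    dist = Counter((tA + tB) % q for tB in range(q))
    assert all(dist[s] == 1 for s in range(q))  # same uniform profile for every tA
print("tA ⊕ tB reveals nothing about tA")
```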


III. WIRETAP CHANNEL WITH A COOPERATIVE JAMMER

In this section, we demonstrate the use of lattice codes

for secrecy in the simple model depicted in Figure 1. Nodes

S,D,E form a wiretap channel where S is the source node,


Fig. 1. Wiretap Channel with a Cooperative Jammer, CJ (nodes S, D, E, and CJ; unit channel gains; noises Z1 and Z2)

D is the destination node, E is the eavesdropper. Let the

average power constraint of node S be P. Now suppose

that there is another transmitter CJ in the system, also

with power constraint P, as shown in Figure 1. We assume

that the interference caused by CJ to node D is either too weak or too strong, so that it can be ignored or removed, and

consequently there is no link between CJ and D. In this

model, node CJ may choose to help S by transmitting a

jamming signal to confuse the eavesdropper E. Below, we

derive the secrecy rate for this case when the jamming signal

is chosen from a lattice codebook.

A. Gaussian Noise

We first consider the case when Z1 and Z2 are independent

Gaussian random variables with zero mean and unit variance.

In this case, we have the following theorem:

Theorem 2: A secrecy rate of [C(P) − 1]+ is achievable.

Proof: The codebook is constructed as follows: Let (Λ, Λ1) be a properly designed nested lattice structure in R^N as described in [17]. The codebook is the set of all lattice points within Λ ∩ V1.

Let t_A^N be the lattice point transmitted by node S. Let d_A^N be the dithering noise, uniformly distributed over V1. The transmitted signal is given by t_A^N ⊕ d_A^N. The receiver receives the above signal corrupted by Gaussian noise and tries to decode t_A^N. Let the decoding result be t̂_A^N. Then, as shown in [17, Theorem 5], there exists a sequence of properly designed (Λ, Λ1) with increasing dimension, such that

lim_{N→∞} (1/N) log2 |Λ ∩ V1| < C(P)    (4)

C(P) = (1/2) log2(1 + P)    (5)

and lim_{N→∞} Pr(t_A^N ≠ t̂_A^N) = 0.

The cooperative jammer CJ uses the same codebook as node S. Let the lattice point transmitted by CJ be t_B^N and the dithering noise be d_B^N. The transmitted signal is given by t_B^N ⊕ d_B^N. As in [17], we assume that d_A^N is known by node S, the legitimate receiver node D and the eavesdropper node E, and that d_B^N is known by node S and the eavesdropper node E. Hence, there is no common randomness between the legitimate communicating pair that is not known by the eavesdropper.

Then the signal received by the eavesdropper can be represented as t_A^N ⊕ d_A^N + t_B^N ⊕ d_B^N + Z_2^N, where Z_2^N is the Gaussian channel noise over N channel uses. Then we have

H(t_A^N | t_A^N ⊕ d_A^N + t_B^N ⊕ d_B^N + Z_2^N, d_A^N, d_B^N)    (6)
≥ H(t_A^N | t_A^N ⊕ d_A^N + t_B^N ⊕ d_B^N + Z_2^N, d_A^N, d_B^N, Z_2^N)    (7)
= H(t_A^N | t_A^N ⊕ d_A^N + t_B^N ⊕ d_B^N, d_A^N, d_B^N)    (8)
= H(t_A^N | t_A^N ⊕ d_A^N ⊕ t_B^N ⊕ d_B^N, d_A^N, d_B^N, T)    (9)
= H(t_A^N | t_A^N ⊕ t_B^N, d_A^N, d_B^N, T)    (10)
= H(t_A^N | t_A^N ⊕ t_B^N, T)    (11)
= H(T | t_A^N ⊕ t_B^N, t_A^N) + H(t_A^N | t_A^N ⊕ t_B^N) − H(T | t_A^N ⊕ t_B^N)    (12)
≥ H(t_A^N | t_A^N ⊕ t_B^N) − H(T | t_A^N ⊕ t_B^N)    (13)
= H(t_A^N) − H(T | t_A^N ⊕ t_B^N)    (14)
≥ H(t_A^N) − H(T)    (15)

In (9), we introduce the N-bit information T that helps to recover t_A^N ⊕ d_A^N + t_B^N ⊕ d_B^N from t_A^N ⊕ d_A^N ⊕ t_B^N ⊕ d_B^N. In (14), we use the fact that t_A^N is independent from t_A^N ⊕ t_B^N based on Corollary 1.

Let c = (1/N) I(t_A^N; t_A^N ⊕ d_A^N + t_B^N ⊕ d_B^N + Z_2^N, d_A^N, d_B^N). Then from (15), since H(T) ≤ N, we have c ≤ 1. Therefore, if the message is mapped one-to-one to t_A^N, then an equivocation rate of at least C(P) − 1 is achievable under a transmission rate of C(P) bits per channel use.

We note that to obtain perfect secrecy, some additional effort is required. First, we define a block of channel uses as the N channel uses required to transmit an N-dimensional lattice point. A perfect secrecy rate of C(P) − 1 can then be achieved by coding across multiple blocks: A codeword in this case is composed of Q components, each component being an N-dimensional lattice point sampled from a uniform distribution over V1 ∩ Λ in an i.i.d. fashion. The resulting codebook C contains 2^⌊NQR⌋ codewords with R < C(P). Like wiretap codes, the codebook is then randomly binned into several bins, where each bin contains 2^⌊NQc⌋ codewords. The secret message W is mapped to the bins. The actual transmitted codeword is chosen from that bin according to a uniform distribution.

Let Y_e^{NQ} denote the signals available to the eavesdropper: Y_e^{NQ} = {t_A^{NQ} ⊕ d_A^{NQ} + t_B^{NQ} ⊕ d_B^{NQ} + Z^{NQ}, d_A^{NQ}, d_B^{NQ}}. Then we have

H(W | Y_e^{NQ}, C)    (16)
= H(W | t_A^{NQ}, Y_e^{NQ}, C) + H(t_A^{NQ} | Y_e^{NQ}, C) − H(t_A^{NQ} | W, Y_e^{NQ}, C)    (17)
≥ H(t_A^{NQ} | Y_e^{NQ}, C) − NQε    (18)
= H(t_A^{NQ} | C) − I(t_A^{NQ}; Y_e^{NQ} | C) − NQε    (19)
≥ H(t_A^{NQ} | C) − Σ_{q=1}^{Q} I(t_A^N; Y_e^N | C) − NQε    (20)
= H(t_A^{NQ} | C) − QNc − NQε = QN(R − c) − NQε    (21)

In (18), we use Fano's inequality to bound the last term in (17). This is because the size of each bin is kept small enough such that, given W, the eavesdropper can determine t_A^{NQ} from its received signal Y_e^{NQ}. Using the standard random coding argument and (21), it can then be shown that a secrecy rate of C(P) − c is achievable. Since c ≤ 1, this means a secrecy rate of at least C(P) − 1 bits per channel use is achievable.
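The binning step above can be sketched with a toy deterministic binning (our simplification of the random binning argument, with hypothetical sizes): the message indexes a bin, and the transmitted codeword is drawn uniformly inside that bin.

```python
import random

# Toy wiretap binning (our sketch, not the paper's random binning): the
# codebook of 2^R_total_bits codewords is split into bins of 2^leak_bits
# codewords each; the secret message is the bin index, and the transmitter
# picks a codeword uniformly inside the message's bin.
R_total_bits = 10          # log2(codebook size), stands in for NQ*R
leak_bits = 3              # stands in for NQ*c (at most NQ, since c <= 1)
bin_size = 2 ** leak_bits
num_bins = 2 ** (R_total_bits - leak_bits)   # message count ~ secrecy rate R - c

def encode(message):
    assert 0 <= message < num_bins
    return message * bin_size + random.randrange(bin_size)  # codeword index

def bin_of(codeword):
    return codeword // bin_size            # the message is recoverable from the codeword

m = 77
assert bin_of(encode(m)) == m
print(num_bins, "messages,", bin_size, "codewords per bin")
```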

Remark 2: It is interesting to compare the secrecy rate obtained here with that obtained by cooperative jamming with Gaussian noise [20]. The latter is given by C(P) − C(P/(P+1)). Since lim_{P→∞} C(P/(P+1)) = 0.5, there is at most 0.5 bit per channel use of loss in secrecy rate at high SNR by using a structured codebook as the jamming signal.
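The limit in Remark 2 is easy to check numerically (our sketch; C is the capacity function defined in Section I):

```python
import math

# Numeric check of Remark 2: the gap between Gaussian cooperative jamming,
# C(P) - C(P/(P+1)), and the lattice scheme's rate, C(P) - 1, approaches
# 0.5 bit per channel use as P grows.
C = lambda x: 0.5 * math.log2(1 + x)
for P in (10, 100, 1000, 10**6):
    gaussian = C(P) - C(P / (P + 1))
    lattice = C(P) - 1
    print(P, round(gaussian - lattice, 4))   # gap = 1 - C(P/(P+1)) -> 0.5
```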

B. Non-Gaussian Noise

The performance analysis in [17] requires Gaussian noise.

This is not always the case, for example, in the presence

of interference, which is not necessarily Gaussian. For non-

Gaussian noise, in principle, the analysis in [16] can be used instead. On the other hand, in [16], a sphere is used as the shaping set, making it difficult to compute the equivocation rate via Theorem 1. We show below that, if the code rate R has the form log2 t, t ∈ Z+, then a scaled lattice tΛ of the fine lattice Λ can be used for shaping instead.

Theorem 3: If Z1, Z2 are i.i.d. continuous random variables with differential entropy h(E), such that 2^{2h(E)} = 2πe, then a secrecy rate of [log2⌊√P⌋ − 1]+ is achievable.

Proof: We need to show that there exists a fine lattice Λ that has good decoding performance [16, Theorem 6], and that Λ is close to a sphere in the sense that

lim_{N→∞} h(S) = (1/2) log2(2πeP′)    (22)

where h(S) = (1/N) log2 |V|, |V| is the volume of the fundamental region of Λ, and P′ = (1/(N|V|)) ∫_{x∈V} ‖x‖² dx. It is shown in [21] that when a lattice is sampled from the lattice ensemble defined therein, it is close to a sphere in the sense of (22). The lattice ensemble is generally called construction A [16]; its generator matrices are all matrices of size K × N over the finite field GF(q), with q being a prime. The lattice sampled from the ensemble is "good" in probability when q, N → ∞ and K grows faster than log2 N [21, (25)-(28)]. Note that this property of "goodness" is invariant under scaling. Therefore, we can scale the lattice so that the volume of its fundamental region remains fixed when its dimension N → ∞. This gives us a sequence of lattice ensembles that meet the conditions of [13, Lemma 1]: (1) N → ∞; (2) q → ∞; (3) each lattice ensemble of a given dimension is balanced [16]. This means that when N → ∞, at least 3/4 of the lattice ensemble is good for channel coding [13, Lemma 1]. The lattice decoder will have a positive decoding error exponent as long as |V| > 2^{Nh(E)}.

Combined, this means there must exist a lattice Λ* that is close to a sphere and is a good channel code at the same time. Hence we have (1/N) log2 |V| → (1/2) log2(2πeP′) as N → ∞. Since we assume h(E) = (1/2) log2(2πe) and require |V| > 2^{Nh(E)}, this means that as long as P′ > 1, the decoding error will decrease exponentially when N → ∞.

Now pick the shaping set to be the fundamental region of tΛ*, t ∈ Z+. Then the code rate is R = log2(t) [17]. With the dithering and modulus operation from [17], the average power of the transmitted signal per dimension is t²P′. Note that the modulus operation at the destination, required in order to remove the dithering noise, may distort the additive channel noise. However, the decoding error event, defined as the noise pushing a lattice codeword into the set of typical noise sequences centered on a different lattice point [16], remains identical. Therefore, the decoding error exponent is the same. Hence we require P′ > 1 and t²P′ ≤ P. The largest possible t is ⌊√P⌋, with the rate being log2(⌊√P⌋). With similar arguments as in Theorem 2, we conclude that a secrecy rate of [log2(⌊√P⌋) − 1]+ is achievable.
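As a numeric illustration (ours), the rate of Theorem 3 can be compared with the [C(P) − 1]+ rate of Theorem 2 for a few power levels:

```python
import math

# Comparison of the non-Gaussian-noise rate of Theorem 3,
# [log2 floor(sqrt(P)) - 1]^+, with Theorem 2's [C(P) - 1]^+.
# Since log2 floor(sqrt(P)) <= 0.5*log2(P) < C(P), Theorem 3's
# rate is always the smaller of the two.
for P in (16, 100, 1024):
    r3 = max(math.log2(math.isqrt(P)) - 1, 0)       # Theorem 3
    r2 = max(0.5 * math.log2(1 + P) - 1, 0)         # Theorem 2
    print(P, round(r3, 3), round(r2, 3))
```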

IV. MULTI-HOP LINE NETWORK WITH UNTRUSTED

RELAYS

A. System Model

In this section, we examine a more complicated communi-

cation scenario, as shown in Figure 2. The source has to com-

municate over K −1 hops (K ≥ 3) to reach the destination.

Yet the intermediate relaying nodes are untrusted and need to

be prevented from decoding the source information. Under

this model, we will show that, using Theorem 1, with lattice

codes for source transmission and jamming signals and an

appropriate transmission schedule, an end-to-end secrecy rate

that is independent of the number of untrusted relay nodes

is achievable. We assume nodes cannot receive and transmit

signals simultaneously. We assume that each node can only

communicate with its two neighbors, one on each side. Let Y_i and X_i be the received and transmitted signals of the ith node, respectively. They are related as Y_i = X_{i−1} + X_{i+1} + Z_i, where the Z_i are zero-mean Gaussian random variables with unit variance, independent of each other. Each node has

Fig. 2. A Line Network with 3 Untrusted Relays (S → 1 → 2 → 3 → D)

the same average power constraint: (1/n) Σ_{k=1}^{n} E[X_i(k)²] ≤ P̄, where n is the total number of channel uses. The channel gains are normalized for simplicity.

We consider the case where there is an eavesdropper residing at each relay node and these eavesdroppers are not cooperating. This also addresses the scenario where there is one eavesdropper, but the eavesdropper may appear at any one relay node that is unknown a priori. In either case, we need secrecy from all relays, and the secrecy constraints for the K relay nodes are expressed as

lim_{n→∞} (1/n) H(W | Y_i^n) = lim_{n→∞} (1/n) H(W),  i = 1...K.

B. Signaling Scheme

Because all nodes are half duplex, a schedule is necessary to control when a node should talk. The node schedule is best represented by the acyclic directed graph shown in Figure 3. The columns in Figure 3 indicate the nodes and the rows indicate the phases. The length of a phase is the number of channel uses required to transmit a lattice point, which equals the dimension of the lattice. A node in a row has an outgoing edge if it transmits during a phase. The node in that row has an incoming edge if it can hear signals during the previous phase. It is understood, though not shown in the figure, that the signal received by a node is the superposition of the signals over all incoming edges, corrupted by the additive Gaussian noise.

A number of consecutive phases is called one block, as

shown in Figure 3. The boundary of a block is shown by the

dotted line in Figure 3. The data transmission is carried over

M blocks.

Fig. 3. One Block of Channel Uses (edge labels show the propagated lattice points, e.g., J_k, t_0 + J_k, t_0 + t_1 + J_k; the + signs denote modulus sums)

Again the nested lattice code (Λ,Λ1) from [10] is used

within each block. The codebook is constructed in the same

fashion as in Section III.

1) The Source Node: The input to the channel by the source has the form t^N ⊕ J^N ⊕ d^N. Here d^N is the dithering noise, which is uniformly distributed over V1. t^N and J^N are determined as follows: If it is the first time the source node transmits during this block, t^N is the origin and J^N is picked from the lattice points in Λ ∩ V1 under a uniform distribution. Otherwise, t^N is picked by the encoder, and J^N is the lattice point decoded from the jamming signal the source received during the previous phase. This design is not essential, but it brings some uniformity to the form of the received signals and simplifies the explanation.

2) The Relay Node: As this signal propagates toward the destination, each relay node, when it is its turn, sends a jamming signal in the form of t_k^N + d_k^N mod Λ, k = 2...K−1, where K is the number of nodes. Subscript k denotes the index of the node which transmits this signal. If this is the first time the relay transmits during this block, then t_k^N is drawn from a uniform distribution over Λ ∩ V1, and all previously received signals are ignored. Otherwise, t_k^N is computed from the signal it received during the previous phase. This will be clarified in the sequel. d_k^N again is the dithering noise, uniformly distributed over V1.

The signal received by the relay within a block can be categorized into the following three cases. Let z^N denote the Gaussian channel noise.

1) If this is the first time the relay receives signals during this block, then it has the form (t_A^N ⊕ d_A^N) + z^N. It only contains interference from its left neighbor.

2) If this is the last time the relay receives signals during this block, then it has the form (t_B^N ⊕ d_B^N) + z^N. It only contains interference from its right neighbor.

3) Otherwise it has the form y_k^N = (t_A^N ⊕ d_A^N) + (t_B^N ⊕ d_B^N) + z^N.

Here t_A^N, t_B^N are lattice points, and d_A^N, d_B^N are dithering noises. Following reference [10], if the lattice is properly designed and the cardinality of the set Λ ∩ V1 is properly chosen, then for case (3), the relay, with the knowledge of d_A^N and d_B^N, will be able to decode t_A^N ⊕ t_B^N. For cases (1) and (2), the relay will be able to decode t_A^N and t_B^N, respectively. Otherwise, we say that a decoding error has occurred at the relay node.

The transmitted signal at the relay node is then computed as follows:

x^N = t_A^N ⊕ t_B^N ⊕ (−x′^N) ⊕ d_C^N    (23)

Here x′^N is the lattice point contained in the jamming signal transmitted by this relay node during the previous phase. − is the inverse operation defined over the group V1 ∩ Λ. t_A^N ⊕ t_B^N is decoded from the signal the relay received during the previous phase.

In Figure 3, we have labeled the lattice points transmitted over some edges. For clarity, we omitted the superscript N. The + signs in the figure are all modulus operations. The reason why we have (−x′^N) in (23) is now apparent: it leads to a simple expression for the signal as it propagates from the relay to the destination.

3) The Destination: As shown in Figure 3, the destination behaves identically to a relay node when it computes its jamming signal.

It is also clear from Figure 3 that the destination will be able to decode the data from the source. This is because the lattice point contained in the signal received by the destination has the form t^N ⊕ J^N, where t^N is the lattice point determined by the transmitted data, and J^N is the lattice point in the jamming signal known by the destination.
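The bookkeeping behind the relay update (23) can be sketched over a cyclic group standing in for (Λ ∩ V1, ⊕). The toy below (our simplification, not the full schedule of Figure 3) assumes each relay's previous jamming point x′ is exactly the one currently masking the message, which matches the edge labels t_0 + J_k in the figure:

```python
import random

# Toy walk-through of relay update (23) over Z_q, standing in for the group
# (Lambda ∩ V1, ⊕): a relay that previously jammed with lattice point x' and
# now decodes the mod-sum tA ⊕ tB forwards tA ⊕ tB ⊖ x'. The source point t0
# therefore stays masked by exactly one fresh jamming point at every hop.
q = 101
t0 = random.randrange(q)                        # source lattice point (message)
jams = [random.randrange(q) for _ in range(4)]  # jamming points J1..J4

signal = (t0 + jams[0]) % q                     # source sends t0 ⊕ J1
for k in range(1, len(jams)):
    x_prev = jams[k - 1]                        # this relay's previous jamming point
    tA, tB = signal, jams[k]                    # decoded inputs: left signal, right jam
    signal = (tA + tB - x_prev) % q             # relay update (23): now t0 ⊕ J_{k+1}
    # every hop only ever handles t0 masked by a jamming point it cannot strip

# the destination knows its own jamming point J4 and recovers t0
assert (signal - jams[-1]) % q == t0
print("destination recovers t0; every hop saw t0 ⊕ J_k only")
```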

C. A Lower Bound to the Secrecy Rate

Suppose the source transmits Q+1 times within a block.

Then each relay node receives Q+2 batches of signals within

the block. An example with Q = 2 is shown in Figure 3.

Given the inputs from the source of the current block, the