
An Analytical Method for Performance

Evaluation of Binary Linear Block Codes

Ali Abedi and Amir K. Khandani

Coding & Signal Transmission Laboratory

Department of Electrical & Computer Engineering

University of Waterloo

Waterloo, Ontario, Canada, N2L 3G1

Technical Report UW-E&CE#2003-1

February 8, 2003


An Analytical Method for Performance

Evaluation of Binary Linear Block

Codes

Ali Abedi and Amir K. Khandani

Coding & Signal Transmission Laboratory (www.cst.uwaterloo.ca)

Dept. of Elec. and Comp. Eng., University of Waterloo

Waterloo, ON, Canada, N2L 3G1

Tel: 519-8848552, Fax: 519-8884338

e-mail: {ali, khandani}@cst.uwaterloo.ca

Abstract

An analytical method for performance evaluation of binary linear block codes using an

Additive White Gaussian Noise (AWGN) channel model with Binary Phase Shift Keying

(BPSK) modulation is presented. We focus on the Probability Density Function (pdf) of the

bit Log-Likelihood Ratio (LLR) which is expressed in terms of the Gram-Charlier series

expansion. This expansion requires knowledge of the statistical moments of the bit LLR. We

introduce an analytical method for calculating these moments. This is based on some recursive

calculations involving certain weight enumerating functions of the code. Numerical results are provided for several examples and demonstrate close agreement with simulation results.

Index Terms

Additive White Gaussian Noise Channel, Binary Phase Shift Keying, Bit Decoding, Bit

Error Probability, Block Codes, Log-Likelihood Ratio, Weight Distribution.

This work is financially supported by Natural Sciences and Engineering Research Council of Canada (NSERC) and

by Communications and Information Technology Ontario (CITO). An earlier version [1] of this work was presented at ISIT 2002.


I. INTRODUCTION

In the application of channel codes, one of the most important problems is to develop

an efficient decoding algorithm for a given code. The class of Maximum Likelihood (ML) decoding algorithms is designed to find a valid code-word with the maximum likelihood value. ML algorithms are known to minimize the Frame Error Rate (FER) under the mild condition that the code-words occur with equal probability.

Another class of decoding algorithms, known as bit decoding, computes the probability of the individual bits and decides on the corresponding bit values independently of each other. The straightforward approach to bit decoding is based on summing up

the probabilities of different code-words according to the value of their component in

a given bit position of interest. Reference [2] provides an efficient method (known as

BCJR) to compute the bit probabilities of a given code using its trellis diagram. There

are some special methods for bit decoding based on the coset decomposition principle [3], sectionalized trellis diagrams [4], and the dual code [5], [6].

Maximum Likelihood decoding algorithms have been the subject of numerous research activities, while bit decoding algorithms received much less attention in the past. More recently, bit decoding algorithms have attracted increasing attention, mainly because they deliver bit reliability information. This reliability information has been used effectively in a variety of applications, including Turbo decoding.

In 1993, a new class of channel codes, called Turbo-codes, was announced [7]. These codes have an astonishing performance and at the same time allow for a simple iterative decoding method using the reliability information produced by a bit decoding algorithm. Due to the importance of Turbo-codes, there has been growing interest among communications researchers in bit decoding algorithms.

The analytical performance evaluation of symbol-by-symbol decoders is considered a hard task in [8], [9]. Although there is a method for calculating the exact performance (in the sense of expected Hamming distortion) of Viterbi decoding of convolutional codes over Binary Symmetric Channels [10], there has been no method for performance evaluation of bit decoding in general. Some asymptotic expressions are derived in [11] for the bit error probability of binary linear block codes over the Additive White Gaussian Noise (AWGN) channel with bit decoding. The bit error probabilities of convolutional codes over the AWGN channel are considered in [9] with ML decoding. An upper bound is presented in [12] for the performance of finite-delay symbol-by-symbol decoding of trellis codes over discrete memoryless channels.

In this article, we employ the Gram-Charlier series expansion to find the Probability Density Function (pdf) of the bit LLR. This expansion has been used in several other communications applications, including calculation of the pdf of a sum of Log-Normal variates [13], evaluation of the error probability in PAM (Pulse Amplitude Modulation) digital data transmission systems with correlated symbols in the presence of inter-symbol interference and additive noise [14], computing nearly Gaussian distributions [15], and computation of the error probability of an equal-gain combiner with partially coherent fading signals [16]. Reference [17] presents a method for computing an unknown pdf using infinite series (also refer to [18]). Reference [19] computes moments of phase noise and uses the maximum entropy criterion [20] to find the pdf.

This paper is organized as follows. In section II, the model used to analyze the problem is presented, together with all notation and assumptions. Computing the pdf of the bit LLR using the Gram-Charlier expansion is presented in section III. This is an orthogonal series expansion of a given pdf which requires knowledge of the moments of the corresponding random variable. An analytical method for computing the moments of

the bit LLR using Taylor expansion is proposed in section IV. It is shown in section V that

we can compute the coefficients of Taylor expansion of the bit LLR recursively. We also

present a closed form expression for computing the bit error probability in section VI. In

section VII, the convergence issue of this approximation is discussed. Numerical results

are provided in section VIII which demonstrate a close agreement between our analytical

method and simulation. We conclude in section IX.


II. MODELING

Assume that a binary linear code $\mathcal{C}$ with code-words of length $N$ is given. We use the notation $c^i = (c^i_1, c^i_2, \ldots, c^i_N)$ to refer to the $i$th code-word and its elements. We partition the code into a sub-code $\mathcal{C}^0_k$ and its coset $\mathcal{C}^1_k$ according to the value of the $k$th bit position of its code-words, i.e.,

$$\forall c^i \in \mathcal{C}: \quad c^i_k = 0 \Longrightarrow c^i \in \mathcal{C}^0_k, \qquad c^i_k = 1 \Longrightarrow c^i \in \mathcal{C}^1_k, \tag{1}$$

$$\mathcal{C}^0_k \cup \mathcal{C}^1_k = \mathcal{C}, \qquad \mathcal{C}^0_k \cap \mathcal{C}^1_k = \emptyset. \tag{2}$$

We define the following operator on the code book,

$$c^i \oplus c^j = \text{bit-wise binary addition of two code-words.} \tag{3}$$

Note that the sub-code $\mathcal{C}^0_k$ is closed under binary addition.
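As an illustration of the partition (1)-(2) and the closure property (3), the sketch below enumerates a toy (3,2) single parity check code (our choice, not one of the paper's examples) and splits it on a bit position; Python is used for these sketches and all names are ours.

```python
# Sketch: partition a small binary linear code by the value of bit k,
# as in equations (1)-(2). The (3,2) single parity check code is a toy
# illustrative choice, not one of the paper's examples.
from itertools import product

def codewords(G):
    """All codewords of the binary linear code with generator matrix G."""
    k = len(G)
    words = set()
    for msg in product([0, 1], repeat=k):
        word = tuple(sum(m * g for m, g in zip(msg, col)) % 2
                     for col in zip(*G))
        words.add(word)
    return sorted(words)

G = [[1, 0, 1],   # generator matrix of the (3,2) single parity check code
     [0, 1, 1]]
C = codewords(G)
bit = 0  # the bit position k (0-indexed here)
C0 = [c for c in C if c[bit] == 0]  # sub-code C^0_k
C1 = [c for c in C if c[bit] == 1]  # coset  C^1_k

# C^0_k is closed under bit-wise addition (3); C^1_k is its coset.
xor = lambda a, b: tuple((x + y) % 2 for x, y in zip(a, b))
assert all(xor(a, b) in C0 for a in C0 for b in C0)
print(C0, C1)
```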

The dot product of two vectors $a = (a_1, a_2, \ldots, a_N)$ and $b = (b_1, b_2, \ldots, b_N)$ is defined as,

$$a.b = \sum_{l=1}^{N} a_l b_l. \tag{4}$$

The modulation scheme used here is Binary Phase Shift Keying (BPSK), which is defined as the mapping $M$,

$$M: c \longrightarrow m(c), \qquad 0 \longrightarrow m(0) = -1, \tag{5}$$

$$\qquad\qquad\qquad\qquad\quad 1 \longrightarrow m(1) = 1. \tag{6}$$

Note that modulating a code-word as mentioned above results in a vector of constant square norm,

$$\forall c \in \mathcal{C}: \quad \|m(c)\|^2 = m(c).m(c) = \sum_{l=1}^{N} m^2(c_l) = N. \tag{7}$$

We use the notation $\omega(c)$ to refer to the Hamming weight of a code-word $c$, which is equal to the number of ones in $c$. It follows,


$$-\mathbf{1}.m(c) = N - 2\omega(c), \tag{8}$$

where $-\mathbf{1} = (-1, -1, \ldots, -1)$ denotes the modulated all-zero code-word.

Modulating a code-word $\tilde{c} = (\tilde{c}_1, \tilde{c}_2, \ldots, \tilde{c}_N)$ using BPSK and sending it through an AWGN channel, we receive $x = m(\tilde{c}) + n$, where $n = (n_1, n_2, \ldots, n_N)$ is a vector of independent, identically distributed zero-mean Gaussian elements with variance $\sigma^2$. Note that for an AWGN channel, we have,

$$p(x|\tilde{c}) = \frac{1}{(\sqrt{2\pi}\sigma)^N} \exp\left(-\frac{\|x - m(\tilde{c})\|^2}{2\sigma^2}\right). \tag{9}$$

A common tool to express the bit probabilities in bit decoding algorithms is the so-called Log-Likelihood Ratio (LLR). The LLR of the $k$th bit position is defined by the following equation,

$$LLR(k) = \log\frac{P(\tilde{c}_k = 1|x)}{P(\tilde{c}_k = 0|x)}, \tag{10}$$

where $\tilde{c}_k$ is the value of the $k$th bit in the transmitted code-word and $\log$ stands for the natural logarithm. Assuming,

$$P(\tilde{c}_k = 0) = P(\tilde{c}_k = 1) = \frac{1}{2}, \tag{11}$$

and using (9), it follows,

$$LLR(k) = \log\frac{p(x|\tilde{c}_k = 1)}{p(x|\tilde{c}_k = 0)} \tag{12}$$

$$= \log\frac{\sum_{c^i \in \mathcal{C}^1_k} \exp\left(-\frac{\|x - m(c^i)\|^2}{2\sigma^2}\right)}{\sum_{c^i \in \mathcal{C}^0_k} \exp\left(-\frac{\|x - m(c^i)\|^2}{2\sigma^2}\right)}. \tag{13}$$

Using (7), it follows,


$$LLR(k) = \log\frac{\sum_{c^i \in \mathcal{C}^1_k} \exp\left(\frac{x.m(c^i)}{\sigma^2}\right)}{\sum_{c^i \in \mathcal{C}^0_k} \exp\left(\frac{x.m(c^i)}{\sigma^2}\right)} \tag{14}$$

$$= \log\frac{\sum_{c^i \in \mathcal{C}^1_k} \exp\left(\frac{n.m(c^i) + m(\tilde{c}).m(c^i)}{\sigma^2}\right)}{\sum_{c^i \in \mathcal{C}^0_k} \exp\left(\frac{n.m(c^i) + m(\tilde{c}).m(c^i)}{\sigma^2}\right)}. \tag{15}$$

Given the value of the bit LLR, decision on the value of bit k is made by comparing

the LLR(k) with a threshold of zero. We are interested in studying the probabilistic

behavior of the LLR(k) as a function of the Gaussian random vector n.

Using the following theorems from [21]¹, we can simplify our analysis.

Theorem 1: The probability distribution of LLR(k) is not affected by the choice of the transmitted code-word $\tilde{c}$, as long as the value of the $k$th bit remains unchanged.

Theorem 2: The probability distributions of LLR(k) for the value of bit $k$ equal to 0 or 1 are reflections of one another through the origin.

Theorem 3: The probability distribution of LLR(k) is not affected by the choice of the bit position $k$ for the class of cyclic codes.

Using theorems 1 and 2, without loss of generality, we assume for convenience that the all-zero code-word $\tilde{c} = (0, 0, \ldots, 0)$ is transmitted in all the following discussions. This means that $m(\tilde{c}) = -\mathbf{1} = (-1, -1, \ldots, -1)$ is the transmitted modulated code-word.

In this case, equation (15) reduces to,

$$LLR(k) = \log\frac{\sum_{c^i \in \mathcal{C}^1_k} \exp\left(\frac{n.m(c^i) - \mathbf{1}.m(c^i)}{\sigma^2}\right)}{\sum_{c^i \in \mathcal{C}^0_k} \exp\left(\frac{n.m(c^i) - \mathbf{1}.m(c^i)}{\sigma^2}\right)}. \tag{16}$$

Using (8), and noting that the common factor $e^{N/\sigma^2}$ cancels between numerator and denominator, we obtain,

¹For the sake of brevity, the proofs are not given here. The reader is referred to [21] for the proofs.


$$LLR(k) = \log\frac{\sum_{c^i \in \mathcal{C}^1_k} \exp\left(\frac{n.m(c^i) - 2\omega(c^i)}{\sigma^2}\right)}{\sum_{c^i \in \mathcal{C}^0_k} \exp\left(\frac{n.m(c^i) - 2\omega(c^i)}{\sigma^2}\right)}. \tag{17}$$

In the following, for convenience of notation, the index $k$ indicating the bit position is dropped; the sets $\mathcal{C}^1$ and $\mathcal{C}^0$ stand for $\mathcal{C}^1_k$ and $\mathcal{C}^0_k$. We use the notation $H(n)$ to refer to the LLR expression given in (17), i.e.,

$$H(n) = \log\frac{\sum_{c^i \in \mathcal{C}^1} \exp\left(\frac{n.m(c^i) - 2\omega(c^i)}{\sigma^2}\right)}{\sum_{c^i \in \mathcal{C}^0} \exp\left(\frac{n.m(c^i) - 2\omega(c^i)}{\sigma^2}\right)}. \tag{18}$$
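For small codes, $H(n)$ can be evaluated by direct enumeration of the code-words. The sketch below is a minimal transcription of (18), assuming the toy (3,2) single parity check code partitioned on bit 0; all function names are ours.

```python
# Sketch of the bit LLR H(n) in (18), computed by direct enumeration of
# the code-words (feasible only for small codes). Names are ours.
import math

def m(bit):            # BPSK mapping (5)-(6): 0 -> -1, 1 -> +1
    return 2 * bit - 1

def H(n, C1, C0, sigma):
    """log of the ratio of the two sums in (18)."""
    def term(c):
        dot = sum(ni * m(ci) for ni, ci in zip(n, c))   # n . m(c^i)
        return math.exp((dot - 2 * sum(c)) / sigma**2)  # sum(c) = weight
    return math.log(sum(term(c) for c in C1) / sum(term(c) for c in C0))

# (3,2) single parity check code, partitioned on bit 0 (our toy example)
C0 = [(0, 0, 0), (0, 1, 1)]
C1 = [(1, 0, 1), (1, 1, 0)]
# With no noise and the all-zero word transmitted, the LLR favors bit 0:
print(H((0.0, 0.0, 0.0), C1, C0, sigma=1.0))  # negative
```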

III. GRAM-CHARLIER EXPANSION OF pdf

One common method for representing a function is to use an expansion on an orthogonal basis suitable for that function. As the pdf of the bit LLR is approximately Gaussian [7], [22], [23], an appropriate basis is the normal Gaussian pdf and its derivatives, which form an orthogonal basis. There are a variety of equivalent formulations for this expansion [15], [24]-[26]; we follow the notation used in [15].

Consider a random variable $Y$, which is normalized to have zero mean and unit variance. One can expand the pdf of $Y$, namely $f_Y(y)$, using the following formula, called the Gram-Charlier series expansion,

$$f_Y(y) \simeq \frac{1}{\sqrt{2\pi}} e^{-\frac{y^2}{2}} \sum_{i=0}^{\infty} \alpha_i T_i(y), \tag{19}$$

where $T_i(y)$ is the Hermite polynomial of order $i$, defined as,

$$T_i(y) = \sum_{j=0}^{\lfloor i/2 \rfloor} \frac{(-1)^j \, i!}{2^j (i-2j)! \, j!} \, y^{i-2j}, \tag{20}$$

and,

$$\alpha_i = \sum_{j=0}^{\lfloor i/2 \rfloor} \frac{(-1)^j}{2^j (i-2j)! \, j!} \, \mu_{i-2j}, \tag{21}$$


where,

$$\mu_j = \int_{-\infty}^{+\infty} y^j f_Y(y) \, dy. \tag{22}$$

This is a commonly used method for approximating an unknown pdf. The only unknown

components in (21) are the moments, µj. We propose an analytical method using Taylor

series expansion to compute the moments of the bit LLR in the next section.
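Equations (20)-(21) translate directly into code. The sketch below computes $T_i(y)$ and $\alpha_i$ from a list of raw moments; as a sanity check, the moments of a standardized Gaussian should give $\alpha_0 = 1$ and $\alpha_i = 0$ for $i \geq 1$, so that (19) collapses to the normal pdf.

```python
# Direct transcription of (20)-(21): probabilists' Hermite polynomials
# and the Gram-Charlier coefficients from raw moments mu[0..i].
from math import factorial

def T(i, y):
    """Hermite polynomial of order i, equation (20)."""
    return sum((-1)**j * factorial(i) / (2**j * factorial(i - 2*j) * factorial(j))
               * y**(i - 2*j) for j in range(i // 2 + 1))

def alpha(i, mu):
    """Gram-Charlier coefficient alpha_i, equation (21)."""
    return sum((-1)**j / (2**j * factorial(i - 2*j) * factorial(j))
               * mu[i - 2*j] for j in range(i // 2 + 1))

# For a standardized Gaussian the raw moments are 1, 0, 1, 0, 3, ...
mu = [1, 0, 1, 0, 3]
print([round(alpha(i, mu), 12) for i in range(5)])  # [1.0, 0.0, 0.0, 0.0, 0.0]
```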

IV. COMPUTING MOMENTS

Applying the definition of the $m$th order ($m > 2$) moment to the bit LLR results in,

$$\mu_m = E\left[\left(\frac{H(n) - E[H(n)]}{\sqrt{\mathrm{var}[H(n)]}}\right)^m\right] \tag{23}$$

$$= \frac{1}{\mathrm{var}^{m/2}[H(n)]} \sum_{i=0}^{m} (-1)^i \binom{m}{i} E[H^{m-i}(n)] \, E^i[H(n)], \tag{24}$$

where $E[\cdot]$ stands for expectation and $\mathrm{var}[\cdot]$ denotes variance. Note that to compute (24), one needs $E[H^j(n)]$, $j = 1, \ldots, m$.

To compute $E[H^j(n)]$, we take advantage of a method similar to the so-called Delta method [27] and average the Taylor series expansion of $H^j(n)$. We use the Taylor series expansion of $H(n)$ in conjunction with the multinomial theorem [15] to find an expansion for $H^j(n)$,

$$H^j(n) = \left(\sum_{i=0}^{\infty} \frac{1}{i!} (n.\nabla)^i H(\mathbf{0})\right)^j. \tag{25}$$

An alternative approach is to directly expand $H^j(n)$; note that the derivatives of $H^j(n)$ are functions of the derivatives of $H(n)$.

It easily follows that calculating $E[H^j(n)]$ using the Taylor series expansion involves computing terms of the form,

$$\frac{\partial^L H(\mathbf{0})}{\partial n_1^{l_1} \partial n_2^{l_2} \ldots \partial n_N^{l_N}} \, E[n_1^{l_1}] E[n_2^{l_2}] \ldots E[n_N^{l_N}], \tag{26}$$

where $L$ and the $l_i$, $i = 1, 2, \ldots, N$, are even and satisfy,

$$l_1 + l_2 + \ldots + l_N = L. \tag{27}$$


Note that for a Gaussian random variable $n$ and an integer $l$, we have,

$$E[n^l] = \begin{cases} \dfrac{l! \, \sigma^l}{2^{l/2} (l/2)!}, & l \text{ even}, \\[2mm] 0, & l \text{ odd}. \end{cases} \tag{28}$$

The number of solutions to equation (27) can be obtained using the method described in [28] for partitioning an integer. Each solution corresponds to one partial derivative.

V. TAYLOR EXPANSION OF LLR

The Taylor series expansion of $H(n)$ around the zero vector $\mathbf{0} = (0, 0, \ldots, 0)$ is,

$$H(n) = H(\mathbf{0}) + n.\nabla H(\mathbf{0}) + \frac{1}{2!} (n.\nabla)^2 H(\mathbf{0}) + \ldots \tag{29}$$

$$= H(\mathbf{0}) + \sum_{q_1=1}^{N} \frac{\partial H(\mathbf{0})}{\partial n_{q_1}} n_{q_1} + \frac{1}{2} \sum_{q_1=1}^{N} \sum_{q_2=1}^{N} \frac{\partial^2 H(\mathbf{0})}{\partial n_{q_1} \partial n_{q_2}} n_{q_1} n_{q_2} + \ldots \tag{30}$$

We continue with the calculation of the different terms in the above equation. For simplicity, we write (18) as $H(n) = \log A(n) - \log B(n)$, where,

$$A(n) = \sum_{c^i \in \mathcal{C}^1} \exp\left(\frac{n.m(c^i) - 2\omega(c^i)}{\sigma^2}\right), \tag{31}$$

and $B(n)$ has a similar formula (with $\mathcal{C}^0$ in place of $\mathcal{C}^1$). We only consider $\log A(n)$ hereafter in this section; the same approach applies to $\log B(n)$.

$$\log A(n) = \log A(\mathbf{0}) + \sum_{q_1=1}^{N} \frac{\partial \log A(\mathbf{0})}{\partial n_{q_1}} n_{q_1} + \frac{1}{2} \sum_{q_1=1}^{N} \sum_{q_2=1}^{N} \frac{\partial^2 \log A(\mathbf{0})}{\partial n_{q_1} \partial n_{q_2}} n_{q_1} n_{q_2} + \ldots \tag{32}$$

To simplify the subsequent derivations, the following functions are defined,

$$F_{\{q_1,..,q_j\}}(n) = \frac{\partial^j A(n)}{\partial n_{q_1} \partial n_{q_2} \ldots \partial n_{q_j}} = \sigma^{-2j} \sum_{c^i \in \mathcal{C}^1} M^i_{\{q_1,..,q_j\}} \exp\left(\frac{n.m(c^i) - 2\omega(c^i)}{\sigma^2}\right), \quad j \geq 1, \tag{33}$$


where $\{q_1,..,q_j\}$ is a set containing $j$ bit positions different from $k$, and,

$$M^i_{\{q_1,..,q_j\}} = \prod_{l=1}^{j} m(c^i_{q_l}), \quad j \geq 1, \tag{34}$$

where $m(c^i_{q_l}) = \pm 1$ is the modulated value of the $q_l$th bit of code-word $c^i$, $q_l \in \{q_1,..,q_j\}$. It is clear that $M^i_{\{q_1,..,q_j\}} = \pm 1$ as well. We define,

$$R_{\{q_1,..,q_j\}}(n) = A^{-1}(n) F_{\{q_1,..,q_j\}}(n), \quad j \geq 1, \tag{35}$$

where $A(n)$ and $F_{\{q_1,..,q_j\}}(n)$ are given in (31) and (33), respectively.

The functions $A(n)$, $F_{\{q_1,..,q_j\}}(n)$, and $R_{\{q_1,..,q_j\}}(n)$ defined in (31), (33), and (35) reduce to special weight distribution functions when $n = \mathbf{0}$,

$$A(\mathbf{0}) = A(Z) = \sum_{w=0}^{N} a(w) Z^w, \tag{36}$$

where $Z = \exp(-\frac{2}{\sigma^2})$ and $a(w)$ is the number of code-words with Hamming weight $w$ in $\mathcal{C}^1$.

$$F_{\{q_1,..,q_j\}}(\mathbf{0}) = F_{\{q_1,..,q_j\}}(Z) = \sigma^{-2j} \sum_{w=0}^{N} \left[f^{+}_{\{q_1,..,q_j\}}(w) - f^{-}_{\{q_1,..,q_j\}}(w)\right] Z^w, \quad j \geq 1, \tag{37}$$

where $f^{\pm}_{\{q_1,..,q_j\}}(w)$ is the number of code-words $c^i \in \mathcal{C}^1$ with Hamming weight $w$ and $M^i_{\{q_1,..,q_j\}} = \pm 1$.

$$R_{\{q_1,..,q_j\}}(\mathbf{0}) = R_{\{q_1,..,q_j\}}(Z) = A^{-1}(Z) F_{\{q_1,..,q_j\}}(Z), \quad j \geq 1. \tag{38}$$

We can compute $F_{\{q_1,..,q_j\}}(\mathbf{0})$ using the trellis diagram of the code. This is achieved by constructing a new trellis diagram, augmenting each state into two states according to the values of $M^i_{\{q_1,..,q_{j_0}\}}$, where $j_0 = 1,..,j$.
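The weight distribution functions (36)-(37) can also be obtained by brute-force enumeration for small codes (the trellis-based computation described above is what one would use in practice). A sketch for the toy (3,2) single parity check code, with the coset taken on bit 0; the names are ours.

```python
# The weight enumerators (36)-(37) for a toy code, computed by enumeration.
# a(w): code-words of C^1 with weight w; f: those with M^i_{q1..qj} = +/-1.
import math

C1 = [(1, 0, 1), (1, 1, 0)]   # coset of the (3,2) parity code, bit 0 = 1
m = lambda bit: 2 * bit - 1   # BPSK mapping (5)-(6)

def a(w):
    return sum(1 for c in C1 if sum(c) == w)

def f(w, positions, sign):
    """Number of c in C^1 with weight w and prod of m(c_q) equal to sign."""
    return sum(1 for c in C1
               if sum(c) == w and math.prod(m(c[q]) for q in positions) == sign)

sigma = 1.0
Z = math.exp(-2 / sigma**2)
A = sum(a(w) * Z**w for w in range(4))                               # (36)
F = sigma**-2 * sum((f(w, [1], +1) - f(w, [1], -1)) * Z**w
                    for w in range(4))                               # (37)
print(A, F)
```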

To simplify (32), it easily follows that,

$$\frac{\partial \log A(n)}{\partial n_{q_1}} = A^{-1}(n) F_{\{q_1\}}(n) = R_{\{q_1\}}(n), \tag{39}$$

$$\frac{\partial \log A(\mathbf{0})}{\partial n_{q_1}} = R_{\{q_1\}}(Z). \tag{40}$$

Replacing (39) and (40) in (32), we have,


$$\log A(n) = \log A(Z) + \sum_{q_1=1}^{N} R_{\{q_1\}}(Z) n_{q_1} + \frac{1}{2} \sum_{q_1=1}^{N} \sum_{q_2=1}^{N} \frac{\partial R_{\{q_1\}}(\mathbf{0})}{\partial n_{q_2}} n_{q_1} n_{q_2} + \ldots \tag{41}$$

To compute (41), one needs derivatives of R{q1}(n), which can be calculated using

the following theorem.

Theorem 4: For any $q_i$ representing a bit position other than $k$, we have,

$$\frac{\partial R_{\{q_1,..,q_j\}}(n)}{\partial n_{q_i}} = \begin{cases} \sigma^{-4} R_{\{q_1,..,q_{i-1},q_{i+1},..,q_j\}}(n) - R_{\{q_1,..,q_j\}}(n) R_{\{q_i\}}(n), & \text{if } q_i \in \{q_1,..,q_j\}, \\ R_{\{q_1,..,q_j,q_i\}}(n) - R_{\{q_1,..,q_j\}}(n) R_{\{q_i\}}(n), & \text{otherwise}. \end{cases} \tag{42}$$

Proof: For proof refer to Appendix A.

Another theorem, which simplifies the calculation of even-order derivatives, is presented next.

Theorem 5: We have,

$$\frac{\partial^2 R_{\{q_1,..,q_j\}}(n)}{\partial n_{q_i}^2} = -2 R_{\{q_i\}}(n) \frac{\partial R_{\{q_1,..,q_j\}}(n)}{\partial n_{q_i}}. \tag{43}$$

Proof: For proof refer to Appendix A.

Referring to (42), one can easily see that the coefficients of the expansion (41) are polynomials in $R_{\{q_1,..,q_j\}}(\mathbf{0})$ for different values of $j$, and hence polynomials in the special weight distribution functions defined in (38). The above theorems and results enable us to compute all the derivatives required in the Taylor series expansion of $H(n) = \log A(n) - \log B(n)$.

VI. COMPUTING PROBABILITY OF ERROR

The bit error performance follows by a simple integration of the resulting pdf. We

present a closed form formula for computing this integral in this section.

Using theorem 2, we have,

$$P(e|\tilde{c}_k = 0) = P(e|\tilde{c}_k = 1), \tag{44}$$


where the event $e$ corresponds to bit $k$ being in error. Using assumption (11), we can write,

$$P(e) = P(e|\tilde{c}_k = 0) P(\tilde{c}_k = 0) + P(e|\tilde{c}_k = 1) P(\tilde{c}_k = 1) = P(e|\tilde{c}_k = 0). \tag{45}$$

Hence, computation of the bit error probability involves calculating an integral of the following form,

$$P(e) = \int_{a}^{\infty} f_Y(y) \, dy, \tag{46}$$

where $y$ is the bit LLR normalized to have zero mean and unit variance and $a = -E[y]/\sigma_y$. Substituting $f_Y(y)$ with its Gram-Charlier expansion results in,

$$P(e) \simeq \int_{a}^{\infty} \frac{1}{\sqrt{2\pi}} e^{-\frac{y^2}{2}} \sum_{i=0}^{\infty} \alpha_i T_i(y) \, dy. \tag{47}$$

Noting that $\alpha_0 = 1$ and $T_0(y) = 1$, we have,

$$P(e) \simeq \int_{a}^{\infty} \frac{1}{\sqrt{2\pi}} e^{-\frac{y^2}{2}} \, dy + \int_{a}^{\infty} \frac{1}{\sqrt{2\pi}} e^{-\frac{y^2}{2}} \sum_{i=1}^{\infty} \alpha_i T_i(y) \, dy \tag{48}$$

$$= Q(a) + \int_{a}^{\infty} \frac{1}{\sqrt{2\pi}} e^{-\frac{y^2}{2}} \sum_{i=1}^{\infty} \alpha_i T_i(y) \, dy. \tag{49}$$

Changing the order of integration and summation and using the following property,

$$e^{-\frac{y^2}{2}} T_i(y) = -\frac{d}{dy}\left[e^{-\frac{y^2}{2}} T_{i-1}(y)\right], \quad i \geq 1, \tag{50}$$

we can write,

$$P(e) \simeq Q(a) - \frac{1}{\sqrt{2\pi}} \sum_{i=1}^{\infty} \alpha_i \int_{a}^{\infty} d\left[e^{-\frac{y^2}{2}} T_{i-1}(y)\right] \tag{51}$$

$$= Q(a) - \frac{1}{\sqrt{2\pi}} \sum_{i=1}^{\infty} \alpha_i \left[e^{-\frac{y^2}{2}} T_{i-1}(y)\right]_{a}^{\infty} \tag{52}$$

$$= Q(a) + \frac{1}{\sqrt{2\pi}} e^{-\frac{a^2}{2}} \sum_{i=1}^{\infty} \alpha_i T_{i-1}(a). \tag{53}$$

This results in a closed form expression for computing probability of error.
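The closed form (53), truncated to finitely many terms, is a one-line computation once the coefficients are known. A sketch (names ours); with all correction terms zero it reduces to $Q(a)$, as expected for a Gaussian LLR.

```python
# The closed-form error probability (53), truncated at len(alphas) terms.
# Q is the Gaussian tail function; T is the Hermite polynomial of (20).
from math import factorial, erfc, exp, sqrt, pi

def Q(a):
    return 0.5 * erfc(a / sqrt(2))

def T(i, y):  # Hermite polynomial, equation (20)
    return sum((-1)**j * factorial(i) / (2**j * factorial(i - 2*j) * factorial(j))
               * y**(i - 2*j) for j in range(i // 2 + 1))

def Pe(a, alphas):
    """Q(a) + (1/sqrt(2*pi)) e^{-a^2/2} sum_{i>=1} alpha_i T_{i-1}(a)."""
    corr = sum(alphas[i] * T(i - 1, a) for i in range(1, len(alphas)))
    return Q(a) + exp(-a**2 / 2) / sqrt(2 * pi) * corr

# With a Gaussian LLR all correction terms vanish and Pe reduces to Q(a):
print(Pe(0.0, [1, 0, 0, 0, 0]))  # 0.5
```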


VII. CONVERGENCE PROPERTIES

Convergence properties of the Gram-Charlier expansion are investigated in [24], [29], [30]. It is proved in [31] that the expansion converges if the expanded function satisfies the following condition,

$$\int_{-\infty}^{+\infty} f_Y(y) \, e^{y^2/4} \, dy < \infty. \tag{54}$$

Reference [13] mentions that this expansion has good asymptotic behavior as defined in [32]; in other words, a few terms give a close approximation.

General properties of Hermite polynomials are discussed in [33], where it is shown that this class of polynomials forms an orthogonal basis spanning the interval $(-\infty, +\infty)$. Therefore, the pdf of the bit LLR can be approximated arbitrarily closely, in the mean square sense, using this orthogonal basis, i.e.,

$$\lim_{l \to \infty} \int_{-\infty}^{+\infty} \epsilon_l^2(y) \, dy \to 0, \tag{55}$$

where $\epsilon_l(y)$ is the truncation error, defined as,

$$\epsilon_l(y) = f_Y(y) - \frac{1}{\sqrt{2\pi}} e^{-y^2/2} \sum_{i=0}^{l} \alpha_i T_i(y). \tag{56}$$

If $f_Y(y)$ is piecewise continuous in the interval $(-\infty, +\infty)$, this expansion converges to $f_Y(y)$ at each point of $(-\infty, +\infty)$ at which $f_Y(y)$ is continuous. At points where $f_Y(y)$ has a jump discontinuity, the series converges to $(f_Y(y^+) + f_Y(y^-))/2$ [34]. The pdf of the bit LLR appears to be a continuous function, although there is no straightforward proof of this. In the following, we show that the error in the computation of the bit error probability converges to zero regardless of whether $f_Y(y)$ is continuous.

In practice, computation of the error probability is performed by integrating $f_Y(y)$ from $a$ to $b$ instead of from $a$ to $\infty$, where $a = -E[y]/\sigma_y$ and $b$ is a large finite value.

Using the Cauchy-Schwarz inequality [35],

$$\left|\int_{-\infty}^{+\infty} f(y) g(y) \, dy\right|^2 \leq \int_{-\infty}^{+\infty} |f(y)|^2 \, dy \int_{-\infty}^{+\infty} |g(y)|^2 \, dy, \tag{57}$$


for the case of $f(y) = \epsilon_l(y)$ and,

$$g(y) = \begin{cases} 1, & a < y < b, \\ 0, & \text{otherwise}, \end{cases} \tag{58}$$

we have,

$$\left|\int_{a}^{b} \epsilon_l(y) \, dy\right|^2 \leq (b - a) \int_{-\infty}^{+\infty} \epsilon_l^2(y) \, dy. \tag{59}$$

Applying (55) to (59) results in,

$$\lim_{l \to \infty} \int_{a}^{b} \epsilon_l(y) \, dy \to 0. \tag{60}$$

Hence, the error in the computation of the error probability can be made as small as desired by increasing the number of terms, $l$.

VIII. NUMERICAL RESULTS

In this section, some examples are provided which show a close agreement between the analytical method and simulation results.

As a first example, we used a (15,11,3) cyclic code and evaluated its performance using the proposed method. The order of the Gram-Charlier expansion is 10. The comparison between the analytically calculated BER and the one obtained from simulation is shown in Figure 1. From theorem 3, we know that in the case of cyclic codes, the computed pdf is not affected by the choice of the bit position.

Another example is a (12,11,2) single parity check code. The order of the Gram-Charlier expansion is again 10. The comparison between the analytically calculated BER and the one obtained from simulation is shown in Figure 2.

The last example is the binary extended (24,12,8) Golay code. Its performance is shown in Figure 3. The bit error rate is calculated using the Gram-Charlier series with 14 terms.
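A comparison of the kind shown in Figures 1-3 can be reproduced in miniature by Monte-Carlo simulation of exact bit decoding. The sketch below uses a toy (3,2) single parity check code at a fixed noise level rather than the paper's codes and Eb/N0 sweep; it is a plausibility check, not the paper's experiment, and all names are ours.

```python
# Monte-Carlo estimate of the bit error rate of exact bit decoding (14)
# for a toy (3,2) parity code over AWGN, all-zero code-word transmitted.
import math, random

C0 = [(0, 0, 0), (0, 1, 1)]   # sub-code / coset for bit 0
C1 = [(1, 0, 1), (1, 1, 0)]
m = lambda b: 2 * b - 1       # BPSK mapping

def llr(x, sigma):
    s = lambda Cs: sum(math.exp(sum(xi * m(ci) for xi, ci in zip(x, c))
                                / sigma**2) for c in Cs)
    return math.log(s(C1) / s(C0))

random.seed(1)
sigma, trials, errors = 0.7, 20000, 0
for _ in range(trials):
    x = [-1 + random.gauss(0, sigma) for _ in range(3)]  # all-zero word sent
    errors += llr(x, sigma) > 0                          # error if LLR > 0
ber = errors / trials
print(ber)  # estimated BER, well below 0.5 at this noise level
```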


IX. CONCLUDING REMARKS

A method is presented for calculating the bit error probability of binary linear block codes over the AWGN channel, using special weight enumerating functions of the code. A summary of the proposed method is as follows: starting with the calculation of the special weight distribution functions defined in (38), proceed with the Taylor series of the LLR as indicated in (29). Averaging this expansion gives the moments of the bit LLR, which can be used to compute the coefficients of the Gram-Charlier series using (21). The closed form expression (53) can then be used to find the bit error probability. All these steps are shown in Figure 4.

We are currently working on extending this method to the performance evaluation of Turbo-codes. Some existing approaches provide bounds on the performance of Turbo-codes, such as the upper bound derived in [36].

Acknowledgments

The authors would like to thank M. H. Baligh, A. Heunis, and M. Thompson for their

helpful discussions and comments.


APPENDIX

A. Proofs of theorems

Theorem 4:

Proof: Using (35), one can write,

$$\frac{\partial R_{\{q_1,..,q_j\}}(n)}{\partial n_{q_i}} = \frac{\partial}{\partial n_{q_i}}\left[A^{-1}(n) F_{\{q_1,..,q_j\}}(n)\right] \tag{61}$$

$$= A^{-1}(n) \frac{\partial F_{\{q_1,..,q_j\}}(n)}{\partial n_{q_i}} + \frac{\partial A^{-1}(n)}{\partial n_{q_i}} F_{\{q_1,..,q_j\}}(n) \tag{62}$$

$$= A^{-1}(n) \frac{\partial F_{\{q_1,..,q_j\}}(n)}{\partial n_{q_i}} - A^{-2}(n) \frac{\partial A(n)}{\partial n_{q_i}} F_{\{q_1,..,q_j\}}(n) \tag{63}$$

$$= A^{-1}(n) \frac{\partial F_{\{q_1,..,q_j\}}(n)}{\partial n_{q_i}} - A^{-2}(n) F_{\{q_i\}}(n) F_{\{q_1,..,q_j\}}(n) \tag{64}$$

$$= A^{-1}(n) \frac{\partial F_{\{q_1,..,q_j\}}(n)}{\partial n_{q_i}} - \left[A^{-1}(n) F_{\{q_1,..,q_j\}}(n)\right]\left[A^{-1}(n) F_{\{q_i\}}(n)\right] \tag{65}$$

$$= A^{-1}(n) \frac{\partial F_{\{q_1,..,q_j\}}(n)}{\partial n_{q_i}} - R_{\{q_1,..,q_j\}}(n) R_{\{q_i\}}(n). \tag{66}$$

Using (33) and noting that $m^2(c^i_{q_i}) = 1$, we have,

$$\frac{\partial F_{\{q_1,..,q_j\}}(n)}{\partial n_{q_i}} = \begin{cases} \sigma^{-4} F_{\{q_1,..,q_{i-1},q_{i+1},..,q_j\}}(n), & \text{if } q_i \in \{q_1,..,q_j\}, \\ F_{\{q_1,..,q_j,q_i\}}(n), & \text{otherwise}. \end{cases} \tag{67}$$

Substituting (67) in (66) and using (35) completes the proof.


Theorem 5:

Proof: We consider two different cases. If $q_i \in \{q_1,..,q_j\}$, using (42) one can write,

$$\frac{\partial^2 R_{\{q_1,..,q_j\}}(n)}{\partial n_{q_i}^2} = \frac{\partial}{\partial n_{q_i}}\left[\sigma^{-4} R_{\{q_1,..,q_{i-1},q_{i+1},..,q_j\}}(n) - R_{\{q_1,..,q_j\}}(n) R_{\{q_i\}}(n)\right] \tag{68}$$

$$= \sigma^{-4} \frac{\partial}{\partial n_{q_i}}\left[R_{\{q_1,..,q_{i-1},q_{i+1},..,q_j\}}(n)\right] - \frac{\partial}{\partial n_{q_i}}\left[R_{\{q_1,..,q_j\}}(n) R_{\{q_i\}}(n)\right] \tag{69}$$

$$= \sigma^{-4}\left[R_{\{q_1,..,q_j\}}(n) - R_{\{q_1,..,q_{i-1},q_{i+1},..,q_j\}}(n) R_{\{q_i\}}(n)\right] - \frac{\partial}{\partial n_{q_i}}\left[R_{\{q_1,..,q_j\}}(n)\right] R_{\{q_i\}}(n) - R_{\{q_1,..,q_j\}}(n) \frac{\partial}{\partial n_{q_i}}\left[R_{\{q_i\}}(n)\right] \tag{70}$$

$$= \sigma^{-4}\left[R_{\{q_1,..,q_j\}}(n) - R_{\{q_1,..,q_{i-1},q_{i+1},..,q_j\}}(n) R_{\{q_i\}}(n)\right] - \left[\sigma^{-4} R_{\{q_1,..,q_{i-1},q_{i+1},..,q_j\}}(n) - R_{\{q_1,..,q_j\}}(n) R_{\{q_i\}}(n)\right] R_{\{q_i\}}(n) - R_{\{q_1,..,q_j\}}(n)\left[\sigma^{-4} - R^2_{\{q_i\}}(n)\right] \tag{71}$$

$$= -2\sigma^{-4} R_{\{q_1,..,q_{i-1},q_{i+1},..,q_j\}}(n) R_{\{q_i\}}(n) + 2 R_{\{q_1,..,q_j\}}(n) R^2_{\{q_i\}}(n) \tag{72}$$

$$= -2 R_{\{q_i\}}(n)\left[\sigma^{-4} R_{\{q_1,..,q_{i-1},q_{i+1},..,q_j\}}(n) - R_{\{q_1,..,q_j\}}(n) R_{\{q_i\}}(n)\right] = -2 R_{\{q_i\}}(n) \frac{\partial R_{\{q_1,..,q_j\}}(n)}{\partial n_{q_i}}. \tag{73}$$


For the other case, where $q_i \notin \{q_1,..,q_j\}$, we have,

$$\frac{\partial^2 R_{\{q_1,..,q_j\}}(n)}{\partial n_{q_i}^2} = \frac{\partial}{\partial n_{q_i}}\left[R_{\{q_1,..,q_j,q_i\}}(n) - R_{\{q_1,..,q_j\}}(n) R_{\{q_i\}}(n)\right] \tag{74}$$

$$= \frac{\partial}{\partial n_{q_i}}\left[R_{\{q_1,..,q_j,q_i\}}(n)\right] - \frac{\partial}{\partial n_{q_i}}\left[R_{\{q_1,..,q_j\}}(n) R_{\{q_i\}}(n)\right] \tag{75}$$

$$= \sigma^{-4} R_{\{q_1,..,q_j\}}(n) - R_{\{q_1,..,q_j,q_i\}}(n) R_{\{q_i\}}(n) - \frac{\partial}{\partial n_{q_i}}\left[R_{\{q_1,..,q_j\}}(n)\right] R_{\{q_i\}}(n) - R_{\{q_1,..,q_j\}}(n) \frac{\partial}{\partial n_{q_i}}\left[R_{\{q_i\}}(n)\right] \tag{76}$$

$$= \sigma^{-4} R_{\{q_1,..,q_j\}}(n) - R_{\{q_1,..,q_j,q_i\}}(n) R_{\{q_i\}}(n) - \left[R_{\{q_1,..,q_j,q_i\}}(n) - R_{\{q_1,..,q_j\}}(n) R_{\{q_i\}}(n)\right] R_{\{q_i\}}(n) - R_{\{q_1,..,q_j\}}(n)\left[\sigma^{-4} - R^2_{\{q_i\}}(n)\right] \tag{77}$$

$$= -2 R_{\{q_1,..,q_j,q_i\}}(n) R_{\{q_i\}}(n) + 2 R_{\{q_1,..,q_j\}}(n) R^2_{\{q_i\}}(n) \tag{78}$$

$$= -2 R_{\{q_i\}}(n)\left[R_{\{q_1,..,q_j,q_i\}}(n) - R_{\{q_1,..,q_j\}}(n) R_{\{q_i\}}(n)\right] = -2 R_{\{q_i\}}(n) \frac{\partial R_{\{q_1,..,q_j\}}(n)}{\partial n_{q_i}}. \tag{79}$$

It can be seen from (73) and (79) that both cases lead to the same expression as (43), which completes the proof.

REFERENCES

[1] A. Abedi, A. K. Khandani, “An Analytical Method for Performance Analysis of Binary Linear Block Codes,”

Proceedings of the IEEE International Symposium on Information Theory (ISIT 2002), Lausanne, Switzerland,

pp. 403, July 2002. (available from www.cst.uwaterloo.ca)

[2] L. R. Bahl, J. Cocke, F. Jelinek, J. Raviv, “Optimal Decoding of Linear Codes for Minimizing Symbol Error

Rate,” IEEE Transactions on Information Theory, vol. 20, pp. 284-287, March 1974.

[3] L. Ping, K. L. Yeung, “Symbol-by-Symbol Decoding of the Golay Code and Iterative Decoding of Concatenated

Golay Codes,” IEEE Transactions on Information Theory, vol. 45, no. 7, pp. 2558-2562, November 1999.

[4] Y. Liu, S. Lin, M. P. C. Fossorier, “MAP Algorithms for Decoding Linear Block Codes Based on Sectionalized

Trellis Diagrams,” IEEE Transactions on Communications, vol. 48, no. 4, pp. 577-586, April 2000.

[5] S. Riedel, “Symbol-by-Symbol MAP Decoding Algorithm For High-Rate Convolutional Codes That Use

Reciprocal Dual Codes,” IEEE Journal on Selected Areas in Communications, vol. 16, no. 2, pp. 175-185,

February 1998.

[6] C. R. P. Hartmann, L. D. Rudolph, “An Optimum Symbol-by-Symbol Decoding Rule for Linear Codes,” IEEE

Transactions on Information Theory, vol. 22, no. 5, pp. 514-517, September 1976.


[7] C. Berrou, A. Glavieux, P. Thitimajshima, “Near Shannon Limit Error-Correcting Coding and Decoding: Turbo-

Codes (1),” Proceedings of IEEE International Conference on Communications, Geneva, Switzerland, pp. 1064-

1070, May 1993.

[8] J. G. Proakis, Digital Communication, 2nd ed., New York: McGraw-Hill, 1989.

[9] S. S. Pietrobon, “On the Probability of Error of Convolutional Codes,” IEEE Transactions on Information Theory,

vol. 42, no. 5, pp. 1562-1568, September 1996.

[10] M. R. Best, M. V. Burnashev, Y. Levy, A. Rabinovich, P. C. Fishburn, A. R. Calderbank, and D. J. Costello, Jr.,

“On a technique to calculate the exact performance of a convolutional code,” IEEE Transactions on Information

Theory, vol. 41, no. 2, pp. 441-447, March 1995.

[11] C. R. P. Hartmann, L. D. Rudolph, K. G. Mehrotra, “Asymptotic Performance of Optimum Bit-by-Bit Decoding

for the White Gaussian Channel,” IEEE Transactions on Information Theory, vol. 23, no. 4, pp. 520-522, July

1977.

[12] E. Baccarelli, R. Cusani, G. D. Blasio, “Performance Bound and Trellis-Code Design Criterion for Discrete

Memoryless Channels and Finite-Delay Symbol-by-Symbol Decoding,” IEEE Transactions on Communications,

vol. 45, no. 10, pp. 1192-1199, October 1997.

[13] D. C. Schleher, “Generalized Gram-Charlier Series with Application to the Sum of Log-Normal Variates,” IEEE

Transactions on Information Theory, vol. 23, no. 2, pp. 275-280, March 1977.

[14] G. L. Cariolaro, S. G. Pupolin, “Considerations on Error Probability in Correlated-Symbol Systems,” IEEE

Transactions on Communications, vol. 25, no. 4, pp. 462-467, April 1977.

[15] S. Blinnikov, R. Moessner, “Expansions for Nearly Gaussian Distributions,” Astronomy and Astrophysics

Supplement Series, vol. 130, no. 1, pp. 193-205, May 1998.

[16] M. A. Najib, V. K. Prabhu, “Analysis of Equal-Gain Diversity with Partially Coherent Fading Signals,” IEEE

Transactions on Vehicular Technology, vol. 49, no. 3, pp. 783-791, May 2000.

[17] N. C. Beaulieu, “An Infinite Series for the Computation of the Complementary Probability Distribution of a

Sum of Independent Random Variables and Its Application to the Sum of Rayleigh Random Variables,” IEEE

Transactions on Communications, vol. 38, no. 9, pp. 1463-1474, September 1990.

[18] C. Tellambura, A. Annamalai, “Further Results on the Beaulieu Series,” IEEE Transactions on Communications,

vol. 48, no. 11, pp. 1774-1777, November 2000.

[19] I. T. Monroy, G. Hooghiemstra, “On a Recursive Formula for the Moments of Phase Noise,” IEEE Transactions

on Communications, vol. 48, no. 6, pp. 917-920, June 2000.

[20] M. Kavehrad, M. Joseph, “Maximum Entropy and the Method of moments in Performance Evaluation of Digital

Communications Systems,” IEEE Transactions on Communications, vol. 34, pp. 1183-1189, December 1986.

[21] A. Abedi, P. Chaudhari, A. K. Khandani, “On Some Properties of Bit Decoding Algorithms,” Proceedings of the

Canadian Workshop on Information Theory (CWIT 2001), Vancouver, Canada, pp. 106-109, June 2001 (available

from www.cst.uwaterloo.ca)

[22] H. El-Gamal, A. R. Hammons Jr., “Analyzing the Turbo Decoder Using the Gaussian Approximation”, IEEE

Transactions on Information Theory, vol. 47, no. 2, pp. 671-686, February 2001.


[23] D. Divsalar, S. Dolinar, F. Pollara, “Iterative Turbo Decoder Analysis Based on Density Evolution,” IEEE Journal

on Selected Areas in Communications, vol. 19, no. 5, pp. 891-907, May 2001.

[24] R.S. Freedman, “On Gram-Charlier Approximations,” IEEE Transactions on Communications, vol. 29, no. 2, pp.

122-125, February 1981.

[25] M. Abramowitz, Handbook of Mathematical Functions, Washington, U.S. Dept. of Commerce, 1958.

[26] J. F. Kenney, E. S. Keeping, Mathematics of Statistics, Part 2, Second edition, Princeton, NJ: Van Nostrand, 1951.

[27] A. W. van der Vaart, Asymptotic Statistics, Cambridge University Press, 1998.

[28] I. Niven, Mathematics of Choice, The Mathematical Association of America, 1965.

[29] H. Cramer, Mathematical Methods of Statistics, Princeton University Press, 1957.

[30] H. Cramer, Random Variables and Probability Distributions, 3rd edition, Cambridge University Press, London, 1970.

[31] H. Cramer, "On Some Classes of Series Used in Mathematical Statistics," Proc. of the 6th Scand. Congress of Mathematicians, Copenhagen, pp. 399-425, 1925.

[32] E. T. Whittaker, G. N. Watson, A Course of Modern Analysis, 4th edition, Cambridge University Press, London, 1962.

[33] G. Szego, Orthogonal Polynomials, Providence, RI: American Mathematical Society, 1967.

[34] P. V. O'Neil, Advanced Engineering Mathematics, 4th edition, International Thomson Publishing, 1995.

[35] I. S. Gradshteyn, I. M. Ryzhik, Table of Integrals, Series, and Products, 5th edition, Academic Press, 1994.

[36] T. M. Duman, M. Salehi, “New Performance Bounds for Turbo Codes,” IEEE Transactions on Communications,

vol. 46, no. 6, pp. 717-723, June 1998.

[Figure: Bit Error Rate vs. Eb/N0 (dB); curves: Experimental, Analytical.]
Fig. 1. Comparison between analytical and experimental BER for the (15,11,3) cyclic code.

[Figure: Bit Error Rate vs. Eb/N0 (dB); curves: Experimental, Analytical.]
Fig. 2. Comparison between analytical and experimental BER for the (12,11,2) single parity check code.

[Figure: Bit Error Rate vs. Eb/N0 (dB); curves: Experimental, Analytical.]
Fig. 3. Comparison between analytical and experimental BER for the binary extended (24,12,8) Golay code.

[Flow chart: START → Compute Special Weight Distributions defined in (38) → Find Taylor Expansion of bit LLR using (29) → Compute Moments using (24) → Compute Coefficients of Gram-Charlier series using (21) → Calculate BER using (53) → END.]
Fig. 4. Flow chart of the analytical method for performance evaluation of binary linear block codes.