
An Analytical Method for Performance

Evaluation of Binary Linear Block Codes

Ali Abedi and Amir K. Khandani

Coding & Signal Transmission Laboratory

Department of Electrical & Computer Engineering

University of Waterloo

Waterloo, Ontario, Canada, N2L 3G1

Technical Report UW-E&CE#2003-1

February 8, 2003


Coding & Signal Transmission Laboratory (www.cst.uwaterloo.ca)
Dept. of Elec. and Comp. Eng., University of Waterloo
Waterloo, ON, Canada, N2L 3G1
Tel: 519-884-8552, Fax: 519-888-4338
e-mail: {ali, khandani}@cst.uwaterloo.ca

Abstract

An analytical method for performance evaluation of binary linear block codes using an Additive White Gaussian Noise (AWGN) channel model with Binary Phase Shift Keying (BPSK) modulation is presented. We focus on the Probability Density Function (pdf) of the bit Log-Likelihood Ratio (LLR), which is expressed in terms of the Gram-Charlier series expansion. This expansion requires knowledge of the statistical moments of the bit LLR, and we introduce an analytical method for calculating these moments, based on recursive calculations involving certain weight enumerating functions of the code. Numerical results are provided for several example codes and demonstrate close agreement with simulation results.

Index Terms

Additive White Gaussian Noise Channel, Binary Phase Shift Keying, Bit Decoding, Bit Error Probability, Block Codes, Log-Likelihood Ratio, Weight Distribution.

This work was financially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) and by Communications and Information Technology Ontario (CITO). An earlier version [1] of this work was presented at ISIT 2002.


I. INTRODUCTION

In the application of channel codes, one of the most important problems is to develop an efficient decoding algorithm for a given code. The class of Maximum Likelihood (ML) decoding algorithms is designed to find a valid code-word with the maximum likelihood value. ML algorithms are known to minimize the Frame Error Rate (FER) under the mild condition that the code-words occur with equal probability.

Another class of decoding algorithms, known as bit decoding, computes the probability of each individual bit and decides on the corresponding bit values independently of each other. The straightforward approach to bit decoding is to sum the probabilities of the different code-words according to the value of their component in a given bit position of interest. Reference [2] provides an efficient method (known as BCJR) to compute the bit probabilities of a given code using its trellis diagram. There are also special methods for bit decoding based on the coset decomposition principle [3], sectionalized trellis diagrams [4], and the dual code [5], [6].

Maximum Likelihood decoding algorithms have been the subject of numerous research activities, while bit decoding algorithms received much less attention in the past. More recently, bit decoding algorithms have attracted increasing attention, mainly because they deliver bit reliability information. This reliability information has been used effectively in a variety of applications, including Turbo decoding.

In 1993, a new class of channel codes, called Turbo-codes, was announced [7]; these codes achieve astonishing performance and at the same time allow a simple iterative decoding method that uses the reliability information produced by a bit decoding algorithm. Due to the importance of Turbo-codes, there has been growing interest among communication researchers in bit decoding algorithms.

The analytical performance evaluation of symbol-by-symbol decoders is considered a hard task in [8], [9]. Although there is a method for calculating the exact performance (in the sense of expected Hamming distortion) of Viterbi decoding of convolutional codes over Binary Symmetric Channels [10], there has been no general method for the performance evaluation of bit decoding. Some asymptotic expressions are derived in [11] for the bit error probability of binary linear block codes over the Additive White Gaussian Noise (AWGN) channel with bit decoding. The bit error probabilities of convolutional codes over the AWGN channel are considered in [9] under ML decoding. An upper bound on the performance of finite-delay symbol-by-symbol decoding of trellis codes over discrete memoryless channels is presented in [12].

In this article, we employ the Gram-Charlier series expansion to find the Probability Density Function (pdf) of the bit LLR. This expansion has been used in several other communications applications, including the calculation of the pdf of a sum of Log-Normal variates [13], the evaluation of the error probability in PAM (Pulse Amplitude Modulation) digital data transmission systems with correlated symbols in the presence of inter-symbol interference and additive noise [14], computing nearly Gaussian distributions [15], and the computation of the error probability of an equal-gain combiner with partially coherent fading signals [16]. Reference [17] presents a method for computing an unknown pdf using infinite series (see also [18]). Reference [19] computes the moments of phase noise and uses the maximum entropy criterion [20] to find the pdf.
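Since reconstructing a pdf from moments via the Gram-Charlier A series is central to the approach, a minimal numerical sketch may help fix ideas. It rebuilds the pdf of a standardized (zero-mean, unit-variance) random variable from its raw moments; the function name and the truncation scheme are ours, not the paper's.

```python
from numpy.polynomial.hermite_e import herme2poly
from math import factorial, sqrt, pi, exp

def gram_charlier_pdf(raw_moments, x, order):
    """Approximate the pdf of a standardized random variable at point x
    from its raw moments raw_moments[n] = E[X^n], using the Gram-Charlier
    A series truncated after `order` terms."""
    phi = exp(-x * x / 2) / sqrt(2 * pi)          # standard normal weight
    total = 0.0
    for n in range(order + 1):
        pc = herme2poly([0.0] * n + [1.0])        # power coeffs of He_n
        # c_n = E[He_n(X)] / n!, computed from the raw moments
        c_n = sum(a * raw_moments[k] for k, a in enumerate(pc)) / factorial(n)
        he_n_x = sum(a * x ** k for k, a in enumerate(pc))
        total += c_n * he_n_x
    return phi * total

# Sanity check: feeding in the moments of a standard normal (1, 0, 1, 0, 3)
# must reproduce the normal density itself, since all correction terms vanish.
assert abs(gram_charlier_pdf([1, 0, 1, 0, 3], 0.5, 4)
           - exp(-0.125) / sqrt(2 * pi)) < 1e-12
```

The correction coefficients are linear combinations of the moments, which is why the expansion requires only the moments of the bit LLR rather than its full distribution.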

This paper is organized as follows. In section II, the model used to analyze the problem is presented; all notation and assumptions are introduced there. Computing the pdf of the bit LLR using the Gram-Charlier expansion is presented in section III; this is an orthogonal series expansion of a given pdf which requires knowledge of the moments of the corresponding random variable. An analytical method for computing the moments of the bit LLR using the Taylor expansion is proposed in section IV. It is shown in section V that the coefficients of the Taylor expansion of the bit LLR can be computed recursively. We also present a closed-form expression for computing the bit error probability in section VI. The convergence of this approximation is discussed in section VII. Numerical results, which demonstrate close agreement between our analytical method and simulation, are provided in section VIII. We conclude in section IX.


II. MODELING

Assume that a binary linear code $C$ with code-words of length $N$ is given. We use the notation $c^i = (c^i_1, c^i_2, \ldots, c^i_N)$ to refer to the $i$-th code-word and its elements. We partition the code into a sub-code $C^0_k$ and its coset $C^1_k$ according to the value of the $k$-th bit position of its code-words, i.e.,

\[
\forall\, c^i \in C: \quad c^i_k = 0 \Longrightarrow c^i \in C^0_k, \qquad c^i_k = 1 \Longrightarrow c^i \in C^1_k, \tag{1}
\]

\[
C^0_k \cup C^1_k = C, \qquad C^0_k \cap C^1_k = \emptyset. \tag{2}
\]

We define the following operator on the code book:

\[
c^i \oplus c^j = \text{bit-wise binary addition of two code-words.} \tag{3}
\]

Note that the sub-code $C^0_k$ is closed under binary addition.
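As a concrete illustration of the partition (1)-(2) and of the closure of $C^0_k$ under the addition (3), a minimal Python sketch (the example code book and function names are ours, chosen purely for illustration):

```python
# Even-weight code of length 3: a small binary linear code used as the
# code book C in this illustration.
C = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def partition(code, k):
    """Split `code` into the sub-code C0_k (k-th bit = 0) and its
    coset C1_k (k-th bit = 1); k is 0-indexed here."""
    C0 = [c for c in code if c[k] == 0]
    C1 = [c for c in code if c[k] == 1]
    return C0, C1

def xor(a, b):
    """Bit-wise binary addition of two code-words, as in (3)."""
    return tuple(x ^ y for x, y in zip(a, b))

C0, C1 = partition(C, 0)
# The two parts cover the code and are disjoint, as in (2) ...
assert sorted(C0 + C1) == sorted(C) and not set(C0) & set(C1)
# ... and the sub-code C0_k is closed under bit-wise addition.
assert all(xor(a, b) in C0 for a in C0 for b in C0)
```

The closure holds because the code is linear: the sum of two code-words agreeing in bit $k$ has a zero in bit $k$, so it stays in $C^0_k$.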

The dot product of two vectors $a = (a_1, a_2, \ldots, a_N)$ and $b = (b_1, b_2, \ldots, b_N)$ is defined as,

\[
a \cdot b = \sum_{l=1}^{N} a_l b_l. \tag{4}
\]

The modulation scheme used here is Binary Phase Shift Keying (BPSK), which is defined as the mapping $M$,

\[
M: c \longrightarrow m(c), \qquad 0 \longrightarrow m(0) = -1, \tag{5}
\]

\[
1 \longrightarrow m(1) = 1. \tag{6}
\]

Note that modulating a code-word as mentioned above results in a vector of constant square norm,

\[
\forall\, c \in C: \quad \| m(c) \|^2 = m(c) \cdot m(c) = \sum_{l=1}^{N} m^2(c_l) = N. \tag{7}
\]
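The mapping (5)-(6) and the constant-norm property (7) are easy to check numerically; a short Python sketch (function names are ours):

```python
def modulate(c):
    """BPSK mapping M of (5)-(6): bit 0 -> -1, bit 1 -> +1."""
    return [2 * b - 1 for b in c]

def dot(a, b):
    """Dot product of two vectors, as in (4)."""
    return sum(x * y for x, y in zip(a, b))

c = (1, 0, 1, 1, 0, 0, 1)      # any binary word of length N = 7
m = modulate(c)                # [1, -1, 1, 1, -1, -1, 1]
# The squared norm equals N for every word, as in (7), since each
# component of m is +1 or -1 and contributes exactly 1 to the sum.
assert dot(m, m) == len(c)
```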

We use the notation $\omega(c)$ to refer to the Hamming weight of a code-word $c$, which is equal to the number of ones in $c$. It follows,