Analysis and Design of Binary Message-Passing Decoders
Gottfried Lechner, Troels Pedersen, and Gerhard Kramer
Abstract—Binary message-passing decoders for low-density parity-check (LDPC) codes are studied by using extrinsic information transfer (EXIT) charts. The channel delivers hard or soft decisions and the variable node decoder performs all computations in the L-value domain. A hard decision channel results in the well-known Gallager B algorithm, and increasing the output alphabet from hard decisions to two bits yields a gain of more than 1.0 dB in the required signal-to-noise ratio when using optimized codes. The code optimization requires adapting the mixing property of EXIT functions to the case of binary message-passing decoders. Finally, it is shown that errors on cycles consisting only of degree two and three variable nodes cannot be corrected and a necessary and sufficient condition for the existence of a cycle-free subgraph is derived.
Index Terms—extrinsic information transfer charts, Gallager B algorithm, irregular codes, low-density parity-check codes, message-passing decoding
I. INTRODUCTION
Gallager introduced low-density parity-check (LDPC) codes [1], [2] and also presented message-passing decoding algorithms that exchange only binary messages between the variable and check nodes. These algorithms are referred to as Gallager A and Gallager B [3] depending on how the variable-to-check node messages are computed. The algorithms have small memory requirements and low complexity implementations, especially of the check node decoder, and they have found practical use in high-speed applications, e.g. optical transmission systems [4]. However, the complexity advantages come at the cost of a significant loss in performance.
In this work, we use extrinsic information transfer (EXIT) charts [1], [2], [5], [6] to analyze and design binary message-passing algorithms. Interestingly, in contrast to non-binary message passing where they are an approximation, EXIT charts are exact for the case of binary messages (and infinite-length code ensembles) since the mutual information describes the probability densities of the messages precisely. Furthermore, the EXIT functions for binary message-passing algorithms can be derived analytically [1], [2].
Binary message-passing algorithms were studied in [7], [8], where the authors showed that optimum algorithms must satisfy certain symmetry and isotropy conditions. In contrast to majority-based decision rules, we assume that the variable node decoder converts all incoming messages to L-values [9], performs decoding in the L-value domain and applies a hard decision to the result. Note that for these algorithms, there always exist majority decision rules that can be derived in a straightforward way, as shown in Section V-B. This general approach assures that the symmetry and isotropy conditions are satisfied, and the algorithms can be extended to systems where the channel provides more information than hard decisions, while the variable and check node decoders still exchange only binary messages. This reduces the gap between optimum decoding and binary message-passing decoding, while the complexity is kept low.

Manuscript submitted to the IEEE Transactions on Communications, April 2010. Gottfried Lechner is with the Institute for Telecommunications Research, University of South Australia (Email: gottfried.lechner@unisa.edu.au). Troels Pedersen is with the Department of Electronic Systems, Aalborg University, Denmark (Email: troels@es.aau.dk). Gerhard Kramer was with Bell Labs, Alcatel-Lucent, Murray Hill, NJ. He is now with the University of Southern California, Los Angeles, CA (Email: gkramer@usc.edu). Parts of this work have been presented at the IEEE International Symposium on Information Theory (ISIT) 2007 and at the Australian Communications Theory Workshop (AusCTW) 2010.
Our main contributions are as follows:
• We derive a framework that allows binary message-passing algorithms to incorporate various quantization schemes for the channel messages. Increasing the number of quantization bits from one to two leads to a significantly improved decoding performance.
• We identify certain structures of the factor graph which cannot be corrected by binary message-passing decoders. Therefore, in addition to the stability condition, the degree distribution has to satisfy another constraint in order to avoid error floors. An important consequence is that regular LDPC codes with variable node degree three cannot be decoded without an error floor.
The rest of this paper is organized as follows. In Section II, we introduce basics and definitions which are used in Section III to derive the EXIT functions of the variable and check node decoders. In Section IV, we show how the EXIT functions can be used to optimize the code and derive constraints on the degree distributions. Section V considers practical aspects of binary message-passing decoders and simulation results are presented in Section VI.
II. PRELIMINARIES
For binary message-passing decoders, the extrinsic channel [6] of the variable and check node decoders is represented as a binary symmetric channel (BSC) with input X and output Y, both with alphabet {+1, −1}. Let ε denote the crossover probability of the BSC, which we assume to be less than or equal to 0.5. Since there is a one-to-one relation between mutual information I(X;Y) and crossover probability for the BSC, we can equivalently describe these channels using

I(X;Y) = 1 − h_b(ε)   (1)
arXiv:1004.4020v1 [cs.IT] 22 Apr 2010
where h_b(·) denotes the binary entropy function

h_b(ε) = −ε log₂(ε) − (1 − ε) log₂(1 − ε).

The variable node decoder converts all messages to L-values using

L(y) = log( Pr[X = +1 | Y = y] / Pr[X = −1 | Y = y] )   (2)
     = log( Pr[Y = y | X = +1] / Pr[Y = y | X = −1] ),   (3)

where y is a realization of Y and we assumed that Pr[X = +1] = Pr[X = −1] = 1/2. Defining the reliability associated with a BSC as

D = log((1 − ε)/ε) ≥ 0   (4)

allows the L-value to be expressed as

L(y) = y · D.   (5)

Throughout the paper, random variables are denoted by uppercase letters and their realizations are denoted by lowercase letters. The indices v, c, a, and e stand for variable node decoder, check node decoder, a-priori and extrinsic, respectively.
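The quantities in (1)–(5) are simple to evaluate. The following sketch collects them for later reuse (Python; the paper specifies no implementation, so all function names here are ours):

```python
import math

def hb(eps: float) -> float:
    """Binary entropy function h_b(eps) in bits, with h_b(0) = h_b(1) = 0."""
    if eps <= 0.0 or eps >= 1.0:
        return 0.0
    return -eps * math.log2(eps) - (1.0 - eps) * math.log2(1.0 - eps)

def bsc_mutual_information(eps: float) -> float:
    """Eq. (1): I(X;Y) = 1 - h_b(eps) for a BSC with crossover probability eps."""
    return 1.0 - hb(eps)

def reliability(eps: float) -> float:
    """Eq. (4): reliability D = log((1 - eps)/eps); D >= 0 for eps <= 1/2."""
    return math.log((1.0 - eps) / eps)

def l_value(y: int, eps: float) -> float:
    """Eq. (5): the L-value of a BSC output y in {+1, -1} is L(y) = y * D."""
    return y * reliability(eps)
```

For ε = 0.11, for instance, the mutual information is close to 0.5 bits.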
III. EXIT FUNCTIONS OF COMPONENT DECODERS
A. Check Node Decoder
A check node of degree d_c of a binary message-passing algorithm computes the output message of each of its edges as the product of the other d_c − 1 edge inputs when using the alphabet {+1, −1}. Let ε_ac denote the average bit error probability at the input of the check nodes. The corresponding a-priori crossover probability of the extrinsic channel is therefore also ε_ac. We define I_ac = 1 − h_b(ε_ac) so that ε_ac = h_b^{−1}(1 − I_ac), where h_b^{−1}(x) takes on values in the interval [0, 1/2]. The crossover probability at the check node output is [2, Lemma 4.1]

ε_ec = f_c(ε_ac; d_c) = (1 − (1 − 2ε_ac)^{d_c−1}) / 2   (6)

where f_c is the EXIT function of a check node of degree d_c. Using (6) and (1), we define I_ec = 1 − h_b(ε_ec). The inverse of the EXIT function in (6) reads

ε_ac = f_c^{−1}(ε_ec; d_c) = (1 − (1 − 2ε_ec)^{1/(d_c−1)}) / 2.   (7)
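Equations (6) and (7) transcribe directly (Python sketch; names ours):

```python
def f_c(eps_ac: float, dc: int) -> float:
    """Eq. (6): check node EXIT function, crossover probability at the output."""
    return (1.0 - (1.0 - 2.0 * eps_ac) ** (dc - 1)) / 2.0

def f_c_inv(eps_ec: float, dc: int) -> float:
    """Eq. (7): inverse of the check node EXIT function."""
    return (1.0 - (1.0 - 2.0 * eps_ec) ** (1.0 / (dc - 1))) / 2.0
```

The two functions are inverses of each other on [0, 1/2]; the inverse is what the adaptive estimation in Section V relies on.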
B. Variable Node Decoder
For a variable node of degree d_v, every outgoing message L_ev,j along edge j of the variable node is given by

L_ev,j = L_ch + Σ_{i=1; i≠j}^{d_v} L_av,i,   j = 1, ..., d_v   (8)

where L_ch is the L-value from the channel (see (9) below) and L_av,i is the L-value from the check nodes along edge i of the variable node. To perform this summation, all messages must be converted to L-values using (4) and (5), where we assume that the variable node decoder knows the parameters of the communication and extrinsic channels. We show in Section VI how the decoder can be implemented without this knowledge.
In the following, we assume the communications channel is an additive white Gaussian noise channel with binary input (BIAWGN) and noise variance σ_n², so the received values y are converted to L-values as

L_ch = (2/σ_n²) · y   (9)

before being quantized. Conditioned on X = +1, the unquantized L-values are Gaussian random variables with variance σ_ch² = 4/σ_n² and mean μ_ch = σ_ch²/2 [10]. In the following, we derive the EXIT functions for three different quantization schemes.
1) Hard Decision Channel: Consider the case where the receiver performs hard decisions. Then the decoder's communication channel can be modeled as a BSC with crossover probability

ε_ch = ∫_{−∞}^{0} g(l) dl   (10)

where g(l) = p_{L|X}(l | +1) is the conditional transition probability of the actual communication channel. Any symmetric channel with binary input is completely described by g(l), which is therefore sufficient to analyze the decoding behavior.
Let D_ch and D_av denote the reliabilities of the decoder's communication and extrinsic channels, respectively (see [6, Fig. 2 and Fig. 3]). The variable node decoder computes the outgoing message on edge j by using (8) with L_ch = y · D_ch. The outgoing message transmitted to the check node decoder is the sign of L_ev,j. To compute the error probability of the message, we consider two cases. First, assume that the channel message is in error. This error is corrected if the sum over the L_av,i in (8) can overcome the incorrect sign of L_ch, i.e., if

−D_ch − n_c D_av + (d_v − 1 − n_c) D_av ≥ 0   (11)

where n_c is the number of erroneous messages from the check nodes. An equivalent condition is n_c ≤ t where

t = ⌊(D_av (d_v − 1) − D_ch) / (2 D_av)⌋.   (12)

Similarly, if the channel message is correct, then n_c has to be less than or equal to

t̄ = ⌊(D_av (d_v − 1) + D_ch) / (2 D_av)⌋   (13)

to result in a correct outgoing message. Combining these two cases yields the error probability of the outgoing messages of the variable node decoder

ε_ev = f_v(ε_av; d_v, ε_ch) = 1 − ε_ch B(t; d_v − 1, ε_av) − (1 − ε_ch) B(t̄; d_v − 1, ε_av)   (14)

where

B(k; n, p) = Σ_{i=0}^{k} C(n, i) p^i (1 − p)^{n−i}   (15)
Fig. 1. Quantization scheme for channel messages (boundaries ζ0, ζ1, ζ2 of |Lch| define the quantization indices w = 0, 1, 2).
denotes the binomial cumulative distribution (C(n, i) is the binomial coefficient). The EXIT function (14) serves as a lower bound on EXIT functions for (appropriately designed) soft-decision detectors, see Section III-C below.
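Equations (12)–(15) translate directly into code. In this sketch (Python; names ours) the convention B(k; n, p) = 0 for k < 0 covers the case where D_ch exceeds D_av(d_v − 1), so that no number of correct check messages can overturn an erroneous channel message:

```python
import math

def binom_cdf(k: int, n: int, p: float) -> float:
    """Eq. (15): binomial cumulative distribution B(k; n, p); empty sum (k < 0) is 0."""
    return sum(math.comb(n, i) * p**i * (1.0 - p) ** (n - i) for i in range(k + 1))

def f_v_hard(eps_av: float, dv: int, eps_ch: float) -> float:
    """Eq. (14): variable node EXIT function for a hard-decision (BSC) channel."""
    d_av = math.log((1.0 - eps_av) / eps_av)   # a-priori reliability, eq. (4)
    d_ch = math.log((1.0 - eps_ch) / eps_ch)   # channel reliability, eq. (4)
    t = math.floor((d_av * (dv - 1) - d_ch) / (2.0 * d_av))      # eq. (12)
    t_bar = math.floor((d_av * (dv - 1) + d_ch) / (2.0 * d_av))  # eq. (13)
    return (1.0
            - eps_ch * binom_cdf(t, dv - 1, eps_av)
            - (1.0 - eps_ch) * binom_cdf(t_bar, dv - 1, eps_av))
```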
2) Soft Decision Channel: In the limit of no quantization of the output of a BIAWGN channel, the crossover probability at the output of the variable node is

ε_ev = Pr[L_ev,j ≤ 0 | X = +1]
     = Pr[ L_ch + Σ_{i=1; i≠j}^{d_v} L_av,i ≤ 0 | X = +1 ]
     = Σ_{z=0}^{d_v−1} Pr[N_c = z] · Pr[ −L_ch + μ_ch ≥ D_av (d_v − 1 − 2z) + μ_ch | X = +1 ]
     = Σ_{z=0}^{d_v−1} b(z; d_v − 1, ε_av) Q( (D_av (d_v − 1 − 2z) + μ_ch) / σ_ch ),   (16)

where N_c is a random variable representing the number of erroneous messages from the check nodes, where

Q(φ) = (1/√(2π)) ∫_{φ}^{∞} e^{−ψ²/2} dψ   (17)

is the familiar Q function, and where

b(k; n, p) = C(n, k) p^k (1 − p)^{n−k}   (18)

denotes the binomial probability mass function. The EXIT function (16) serves as an upper bound on the EXIT function of any quantization scheme, see Section III-C below.
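A sketch of (16)–(18) (Python; names ours), using the identity Q(φ) = erfc(φ/√2)/2:

```python
import math

def binom_pmf(k: int, n: int, p: float) -> float:
    """Eq. (18): binomial probability mass function b(k; n, p)."""
    return math.comb(n, k) * p**k * (1.0 - p) ** (n - k)

def q_func(phi: float) -> float:
    """Eq. (17): Gaussian tail function Q(phi)."""
    return 0.5 * math.erfc(phi / math.sqrt(2.0))

def f_v_soft(eps_av: float, dv: int, sigma_n: float) -> float:
    """Eq. (16): variable node EXIT function for an unquantized BIAWGN channel."""
    sigma_ch = 2.0 / sigma_n            # std of the channel L-values
    mu_ch = sigma_ch**2 / 2.0           # mean of the channel L-values given X = +1
    d_av = math.log((1.0 - eps_av) / eps_av)  # a-priori reliability
    return sum(
        binom_pmf(z, dv - 1, eps_av)
        * q_func((d_av * (dv - 1 - 2 * z) + mu_ch) / sigma_ch)
        for z in range(dv)   # z = number of erroneous check messages
    )
```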
3) Output Alphabets Larger than Binary: Suppose a quantizer provides the sign sign(L_ch) and a quantization index w, w = 0, ..., W, of the magnitude |L_ch|. The boundary points of the quantizer are defined by the vector ζ = [ζ_0, ..., ζ_W] where 0 ≤ ζ_0 < ζ_1 < ··· < ζ_W. Such a quantization scheme is depicted in Figure 1.
Following [11], [12], this channel quantization scheme can be decomposed into (W + 1) BSCs. Subchannel w is used with probability p_w, and has crossover probability ε_ch,w and reliability D_ch,w. For w > 0 we have

p_w = ∫_{ζ_{w−1}}^{ζ_w} g(l) dl + ∫_{−ζ_w}^{−ζ_{w−1}} g(l) dl,   (19)

ε_ch,w = (1/p_w) ∫_{−ζ_w}^{−ζ_{w−1}} g(l) dl,   and   (20)

D_ch,w = log((1 − ε_ch,w) / ε_ch,w).   (21)

We define subchannel zero (w = 0) as a BSC with crossover probability 0.5 [11], [12]. The parameters for subchannel zero are

p_0 = ∫_{−ζ_0}^{ζ_0} g(l) dl,   (22)

ε_ch,0 = 1/2,   and   (23)

D_ch,0 = 0.   (24)

The EXIT function of the overall channel is given by the expectation of the EXIT functions of the subchannels [11], [12]

ε_ev = Σ_{w=0}^{W} p_w ε_ev,w   (25)

where ε_ev,w is given by (14) with ε_ch = ε_ch,w.

Fig. 2. Binary message-passing EXIT functions of check nodes with d_c = 6 and variable nodes with d_v = 4, ζ_1 = 1.90 and σ = 0.67 for BSC, BSQC and soft decision channel (axes: I_av, I_ec versus I_ev, I_ac).
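For the BIAWGN channel assumed here, the integrals of g(l) in (19)–(24) are differences of Gaussian CDF values, so the subchannel decomposition can be computed in closed form (Python sketch; names ours):

```python
import math

def phi_cdf(x: float) -> float:
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def subchannel_params(zeta, sigma_n):
    """Eqs. (19)-(24) for a BIAWGN channel: decompose the quantized channel
    into BSC subchannels w = 0, ..., W; returns the lists (p_w, eps_ch_w)."""
    sigma_ch = 2.0 / sigma_n
    mu_ch = sigma_ch**2 / 2.0

    def mass(a: float, b: float) -> float:
        # integral of g(l) = N(l; mu_ch, sigma_ch^2) over [a, b]
        return phi_cdf((b - mu_ch) / sigma_ch) - phi_cdf((a - mu_ch) / sigma_ch)

    # subchannel zero, eqs. (22)-(24): |L_ch| < zeta_0 is a BSC with eps = 1/2
    probs = [mass(-zeta[0], zeta[0])]
    eps = [0.5]
    for w in range(1, len(zeta)):
        p_pos = mass(zeta[w - 1], zeta[w])       # correct-sign magnitude bin
        p_neg = mass(-zeta[w], -zeta[w - 1])     # wrong-sign magnitude bin
        probs.append(p_pos + p_neg)              # eq. (19)
        eps.append(p_neg / (p_pos + p_neg))      # eq. (20)
    return probs, eps
```

The returned pairs (p_w, ε_ch,w) are exactly what (25) needs, together with (14), to evaluate the overall EXIT function.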
C. Examples
Figure 2 shows examples of the EXIT functions for check and variable nodes for a regular LDPC code with variable node degree d_v = 4 and check node degree d_c = 6. In Figure 2, we use I_av = 1 − h_b(ε_av), I_ev = 1 − h_b(ε_ev) and similarly for I_ac and I_ec. The EXIT function for the hard decision channel (BSC) changes its behavior at certain values of I_av. These values correspond to a change of the majority decision rule of the Gallager B algorithm [3]. The EXIT function for the soft decision channel is an upper bound for all quantization schemes. As an example for larger output alphabets, we consider a binary symmetric quaternary output channel (BSQC) where the output of the channel takes on values from {−D_ch,2, −D_ch,1, +D_ch,1, +D_ch,2}. This channel output can be represented by a quantization using

ζ = [0, ζ_1, ∞].   (26)

Notice that for this case (22) gives p_0 = 0. In Figure 2, the EXIT function is shown for ζ_1 = 1.90.
IV. IRREGULAR CODES
In this section we consider irregular LDPC codes, their EXIT functions, and constraints that have to be satisfied to avoid error floors. The results will be used in Section VI to optimize the degree distribution of the variable nodes. We focus on check-regular codes, i.e. all check nodes have the same degree d_c, but the analysis extends in a simple way to check-irregular codes.
A. Mixing of EXIT Functions
We first prove the following theorem, which is stated in a different context in [13]. Recall that g(l) = p_{L|X}(l | +1) is the conditional probability density of a channel from X to L.

Theorem 1. Consider a collection of channels {g_i(l)}_i that satisfy g_i(−l) = e^{−l} g_i(l) for all i. We then have

Σ_i λ_i I(X_i; L_i) = I(X; L)   (27)

where

p_{L|X}(l | +1) = Σ_i λ_i p_{L_i|X_i}(l | +1).   (28)

In other words, for 0 ≤ λ_i ≤ 1 and Σ_i λ_i = 1, the average EXIT function of the channel collection equals the EXIT function of the "averaged" channel.

Proof: Since g(l) = p_{L|X}(l | +1) = p_{L|X}(−l | −1), we obtain (note the integral limits)

I(X; L) = h(L) − h(L | X)
        = ∫_0^∞ −[g(l) + g(−l)] log₂((g(l) + g(−l))/2) + g(l) log₂ g(l) + g(−l) log₂ g(−l) dl.   (29)

Using the symmetry condition g(−l) = e^{−l} g(l), this simplifies to

I(X; L) = ∫_0^∞ g(l) [ e^{−l} log₂(2/(1 + e^l)) + log₂(2/(1 + e^{−l})) ] dl   (30)

which is a linear operation on g(l). Hence the order of summation and computation of mutual information may be swapped.
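Because the claim hinges on (30) being linear in g(l), it can be checked numerically. The sketch below (Python; names ours) evaluates (30) by the trapezoidal rule for densities N(μ, 2μ), which satisfy the symmetry condition, so that the mutual information of an equal mixture of two such channels can be compared against the average of their mutual informations:

```python
import math

def symmetric_gauss(mu: float):
    """Density N(mu, 2*mu) of BIAWGN L-values; it satisfies g(-l) = e^{-l} g(l)."""
    var = 2.0 * mu
    return lambda l: math.exp(-((l - mu) ** 2) / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def mutual_info(g, hi: float = 40.0, n: int = 20000) -> float:
    """Eq. (30): I(X;L) as an integral over l >= 0, by the trapezoidal rule."""
    h = hi / n
    acc = 0.0
    for k in range(n + 1):
        l = k * h
        kernel = (math.exp(-l) * (1.0 - math.log2(1.0 + math.exp(l)))
                  + (1.0 - math.log2(1.0 + math.exp(-l))))
        weight = 0.5 if k in (0, n) else 1.0
        acc += weight * g(l) * kernel * h
    return acc
```

Since the quadrature is itself linear in g, the mixture identity (27)–(28) holds to machine precision in this sketch.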
Theorem 1 implies that the EXIT function of a mixture of codes can be computed as the average of the EXIT functions of the component codes as long as the channels to the L-values satisfy the property g(−l) = e^{−l} g(l). In the case of binary message-passing decoders, however, the outgoing L-values of the variable node are quantized to {+D_ev, −D_ev}. This nonlinear operation prohibits the exchange of averaging and the computation of the mutual information. On the other hand, the mixture EXIT function can still be computed by averaging over the crossover probabilities instead of the EXIT functions [1], [2], [14], i.e., we have

ε_ev = Σ_i λ_i f_v(ε_av; i, ε_ch)   (31)

where λ_i denotes the fraction of edges connected to a variable node of degree i [3] and f_v(·) is given by (14). Furthermore, we can formulate the successful decoding constraint in terms of crossover probabilities

ε_ev = Σ_i λ_i f_v(ε; i, ε_ch) < f_c^{−1}(ε)   (32)

for all ε ∈ (0, 0.5). Expressing the design rate R of a code as [3]

R = 1 − (1/d_c) / (Σ_i λ_i / i)   (33)

leads to a linear program for maximizing the design rate:

maximize    Σ_i λ_i / i
subject to  Σ_i λ_i f_v(ε; i, ε_ch) < f_c^{−1}(ε)   for all ε ∈ (0, 0.5),
            Σ_i λ_i = 1,
            0 ≤ λ_i ≤ 1.   (34)

In practice, the first constraint is evaluated for a fine grid of discrete ε. This linear program enables the efficient optimization of the variable node degree distribution.
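The constraint structure of (34) can be illustrated with a feasibility check over a discrete grid of ε (Python sketch; names ours). A full implementation would hand these same constraints, which are linear in the λ_i, to an LP solver and maximize Σ_i λ_i / i:

```python
import math

def f_c_inv(eps_ec, dc):
    """Eq. (7): inverse check node EXIT function."""
    return (1.0 - (1.0 - 2.0 * eps_ec) ** (1.0 / (dc - 1))) / 2.0

def binom_cdf(k, n, p):
    """Eq. (15): binomial CDF; empty sum (k < 0) is 0."""
    return sum(math.comb(n, i) * p**i * (1.0 - p) ** (n - i) for i in range(k + 1))

def f_v(eps_av, dv, eps_ch):
    """Eq. (14): hard-decision variable node EXIT function."""
    d_av = math.log((1.0 - eps_av) / eps_av)
    d_ch = math.log((1.0 - eps_ch) / eps_ch)
    t = math.floor((d_av * (dv - 1) - d_ch) / (2.0 * d_av))
    t_bar = math.floor((d_av * (dv - 1) + d_ch) / (2.0 * d_av))
    return (1.0 - eps_ch * binom_cdf(t, dv - 1, eps_av)
            - (1.0 - eps_ch) * binom_cdf(t_bar, dv - 1, eps_av))

def is_feasible(lam, dc, eps_ch, grid=None):
    """Check the decoding constraint (32) on a grid of eps values.

    lam maps variable node degree i to the edge fraction lambda_i (summing to 1)."""
    if grid is None:
        grid = [j / 100.0 for j in range(1, 50)]   # eps in (0, 0.5)
    return all(
        sum(l * f_v(e, i, eps_ch) for i, l in lam.items()) < f_c_inv(e, dc)
        for e in grid
    )
```

For example, with this decoder model a (3,6)-regular code (λ_3 = 1) is feasible for ε_ch = 0.02 but not for ε_ch = 0.08.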
B. Stability
It has been shown in [3] that if the degree distribution of an LDPC code satisfies a stability condition, the decoder converges to zero error when starting from a sufficiently small error probability. The stability condition for the binary message-passing decoder is given in the following theorem.

Theorem 2. An irregular LDPC code satisfies the stability condition under binary message-passing decoding (using quantized or unquantized channel messages) if and only if the variable node degree distribution satisfies

(λ_2 + 2 ε_ch λ_3)(d_c − 1) < 1,   (35)

where ε_ch denotes the error probability of a hard decision of the channel messages.

Proof: Let the superscript (ℓ) denote the error probabilities at iteration ℓ. Furthermore, let ε_ac^{(ℓ)} = ε_ev^{(ℓ)} and ε_av^{(ℓ+1)} = ε_ec^{(ℓ)} denote the error probabilities from the variable to check nodes and from the check to variable nodes, respectively.
According to (25), the EXIT function of the variable node decoder with quantized channel messages is the expectation of the EXIT functions of the subchannels. Combining (6), (14) and (25) leads to

ε_ev^{(ℓ+1)} = Σ_{w=0}^{W} p_w f_v(ε_av^{(ℓ+1)}; d_v, ε_ch,w)
            = Σ_{w=0}^{W} p_w f_v(f_c(ε_ev^{(ℓ)}; d_c); d_v, ε_ch,w).   (36)

For small error rates, it suffices to consider only the first order Taylor series expansion over one iteration [3]. Stability implies that the error probability decreases over one iteration for sufficiently small error probabilities, i.e., we have

lim_{ε_ev^{(ℓ)} → 0} ∂ε_ev^{(ℓ+1)} / ∂ε_ev^{(ℓ)}
  = lim_{ε_ev^{(ℓ)} → 0} [ Σ_{w=0}^{W} p_w ∂f_v(η; d_v, ε_ch,w)/∂η |_{η = f_c(ε_ev^{(ℓ)}; d_c)} ] · [ ∂f_c(η; d_c)/∂η |_{η = ε_ev^{(ℓ)}} ]
  = [ Σ_{w=0}^{W} p_w lim_{η → 0} ∂f_v(η; d_v, ε_ch,w)/∂η ] · [ lim_{η → 0} ∂f_c(η; d_c)/∂η ]
  < 1,   (37)

where the final step follows because f_c(ε_ev^{(ℓ)}; d_c) → 0 as ε_ev^{(ℓ)} → 0. For the check node (6), we have

lim_{η → 0} ∂f_c(η; d_c)/∂η = d_c − 1.   (38)

For the variable node decoder we start with the binomial cumulative distribution

lim_{p → 0} ∂B(k; n, p)/∂p = { −n, k = 0;  0, k > 0 }.   (39)

Using this result with (14) and (25) and the fact that for ε_av → 0 the reliability of the channel message D_ch is small compared to the reliability of the a-priori message D_av, for regular LDPC codes we get

Σ_{w=0}^{W} p_w lim_{η → 0} ∂f_v(η; d_v, ε_ch,w)/∂η = { 1, d_v = 2;  2 Σ_{w=0}^{W} p_w ε_ch,w, d_v = 3;  0, d_v ≥ 4 }.   (40)

The expectation Σ_{w=0}^{W} p_w ε_ch,w of the error probabilities of all subchannels equals the error probability of a hard decision of the channel messages, ε_ch. Since every binary input symmetric output channel can be decomposed into BSCs [11], [12], (40) is also valid for unquantized channel messages by setting ε_ch = Q(σ_ch/2). For irregular LDPC codes (31) leads to

Σ_i λ_i Σ_{w=0}^{W} p_w lim_{η → 0} ∂f_v(η; i, ε_ch,w)/∂η = λ_2 + 2 ε_ch λ_3.   (41)

Combining (38) and (41) with (37) yields the theorem.

Fig. 3. Cycle formed by variable nodes of degree two and three (variable nodes V1, V2, V3 and check nodes C1, C2, C3).
We note that this stability condition is the same as for Gallager's original algorithm B (with hard decisions) as derived in [7, Eq. (11)], i.e. a finer quantization of the channel messages does not change the stability condition. The stability condition imposes a linear constraint on λ_2 and λ_3 and can hence be incorporated in the linear program (34) for code optimization. For regular LDPC codes with d_v = 2, the left side of (35) is d_c − 1, which cannot be less than one. Therefore, such codes exhibit an error floor with binary message-passing decoding. Regular codes with d_v = 3 satisfy the stability condition if 2 ε_ch (d_c − 1) < 1. However, we show in the next section that these codes also exhibit an error floor with binary message-passing decoding.
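The stability condition (35) is a one-line check (Python; names ours):

```python
def is_stable(lam2: float, lam3: float, dc: int, eps_ch: float) -> bool:
    """Eq. (35): (lambda_2 + 2 * eps_ch * lambda_3) * (dc - 1) < 1."""
    return (lam2 + 2.0 * eps_ch * lam3) * (dc - 1) < 1.0
```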
C. Effect of Cycles
As shown in [15], cycles are unavoidable in the factor graph of finite length LDPC codes. Some cycles lead to error floors because they lead to poor properties of the code itself, e.g. cycles of length g that include only variable nodes of degree two lead to a minimum distance d_min ≤ g/2. For binary message-passing decoders, we identify cycles that lead to error floors that are caused by the decoding algorithm and not by the properties of the code.

Theorem 3. Consider a cycle formed by variable nodes of degree two and three only and assume that the channel messages associated with the nodes in the cycle are in error. If all other incoming messages at the check nodes of the cycle are correct, then the variable nodes forming the cycle cannot be corrected by the binary message-passing decoder.
Proof: An example of a cycle of interest is shown in Figure 3, where the left edges correspond to the channel messages, which are all assumed to be in error. In the first iteration the variable nodes send out their received messages. Since every check node in the cycle is connected twice to the set of erroneous variable nodes, the outgoing messages from the check nodes are also in error.
In the following iteration, the outgoing messages at the variable nodes are computed according to (8). For degree two variable nodes (V1 and V2), the extrinsic L-value is the sum of the channel L-value and the L-value of the other incoming message. Since both messages are in error, the outgoing message is also in error. For variable nodes of degree three, one message is not involved in the cycle (as shown for V3 in Figure 3). Even if this message is correct, the extrinsic L-value is the sum of the channel L-value and two L-values from the check nodes with the same magnitude but different signs. Therefore, the outgoing message is the sign of the channel L-value, which is in error. Since this leads to the same state as after the first iteration, the decoder is not able to correct the errors of the variable nodes in the cycle.
If the factor graph contains cycles consisting of variable nodes of degree two and three only, there is a nonzero probability that the involved channel messages are in error, leading to a decoding failure. Similar to stopping sets [16], this situation leads to error floors. According to Theorem 3, to avoid an error floor, the factor graph of an LDPC code must not have such cycles. Such a graph exists if the following condition is satisfied.

Theorem 4. A factor graph with no cycles of variable nodes of degree two and three exists if and only if

3λ_2 + 4λ_3 ≤ (6/d_c) (1 − 1/((1 − R) N)) < 6/d_c.   (42)
Proof: Let Λ_i denote the fraction of variable nodes of degree i, i.e. the node perspective of the variable node degree distribution [3]. Furthermore, let N and M denote the number of variable and check nodes, respectively. The maximum number of nodes in the subgraph containing only degree two and three variable nodes is

Λ_2 N + Λ_3 N + M.   (43)

This subgraph is cycle-free only if the number of edges E_t in the subgraph is at most one less than the number of nodes

E_t ≤ Λ_2 N + Λ_3 N + M − 1.   (44)

Furthermore, such a subgraph exists if (44) is satisfied. Since E_t = 2Λ_2 N + 3Λ_3 N, the bound (44) is

Λ_2 + 2Λ_3 ≤ (M − 1)/N.   (45)

Using

Λ_i = (λ_i / i) / (Σ_j λ_j / j)   (46)

to convert from node perspective Λ_i to edge perspective λ_i, and expressing the design rate R of the LDPC code as [3]

R = 1 − (1/d_c) / (Σ_j λ_j / j) = 1 − M/N   (47)

leads to the theorem.
An important consequence of Theorems 3 and 4 is that regular LDPC codes with d_v < 4 cannot be decoded without an error floor using binary message-passing decoders. To see this, observe that for regular codes with d_v = 2 or d_v = 3, Theorem 4 is satisfied only if d_c ≤ 2 and d_c ≤ 1.5, respectively. Both cases are not possible for codes of positive rate. Therefore, although regular LDPC codes with d_v = 3 are attractive from the point of view of a cycle-free analysis [8], they have limited use for binary message-passing. To demonstrate this fact we show simulation results of such a code in Section VI.
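A direct check of condition (42) for a candidate degree distribution (Python; names ours):

```python
def cycle_free_subgraph_exists(lam2: float, lam3: float, dc: int, rate: float, n: int) -> bool:
    """Eq. (42): 3*lambda_2 + 4*lambda_3 <= (6/dc) * (1 - 1/((1 - rate) * n))."""
    return 3.0 * lam2 + 4.0 * lam3 <= (6.0 / dc) * (1.0 - 1.0 / ((1.0 - rate) * n))
```

For a regular d_v = 3 code (λ_3 = 1) the left side is 4, which exceeds 6/d_c for every d_c ≥ 2, matching the discussion above.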
V. IMPLEMENTATION ASPECTS
A. Estimation of the A-Priori Channel
In Section III, we assumed that the variable node knows the parameters of the extrinsic channel for every iteration, i.e. it knows the crossover probabilities of the messages from the check nodes to the variable nodes. Suppose we represent these crossover probabilities as a sequence of numbers indexed by the iteration number. In [17] we showed how this sequence can be predicted in advance from the trajectory of the decoder in the EXIT chart. Another possibility is to determine a sequence of crossover probabilities by simulations. We remark that for finite-length codes, such an approach can lead to better results than an asymptotic prediction from the EXIT chart. In this section, we present an adaptive method which is based on the fraction of unsatisfied check nodes [18] and does not require a precomputed sequence of crossover probabilities.
Estimating the error probability based on the extrinsic messages of the check nodes is not possible, because these messages depend on the transmitted symbols which are not known to the decoder. However, the decoder knows that all d_c symbols involved in a parity-check equation sum up to zero, and it can therefore determine the extrinsic error probability ε_ec from the number of unsatisfied check nodes, denoted by M_e. Denote the fraction of unsatisfied check nodes as

ε_s = M_e / M.   (48)

This quantity can be seen as the extrinsic error probability of a check node of degree d_c + 1. Using (7) with ε_s leads to an estimate of the crossover probability at the input of the check node

ε_ac ≈ (1 − (1 − 2ε_s)^{1/d_c}) / 2.   (49)

The crossover probability at the output of the check node follows from (6) and (49)

ε_ec = (1 − (1 − 2ε_ac)^{d_c−1}) / 2 ≈ (1 − (1 − 2ε_s)^{(d_c−1)/d_c}) / 2.   (50)

This estimate is then used in (4) to compute the reliability of the extrinsic channel.
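Equations (48)–(50) in code (Python; names ours):

```python
def estimate_eps_ec(num_unsatisfied: int, m: int, dc: int) -> float:
    """Eqs. (48)-(50): estimate the check-to-variable crossover probability
    from the fraction of unsatisfied check nodes."""
    eps_s = num_unsatisfied / m                                   # eq. (48)
    eps_ac = (1.0 - (1.0 - 2.0 * eps_s) ** (1.0 / dc)) / 2.0      # eq. (49)
    return (1.0 - (1.0 - 2.0 * eps_ac) ** (dc - 1)) / 2.0         # eq. (50)
```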
Fig. 4. Thresholds of optimized codes for soft channel information (Rsoft) and hard decision channel (RBSC), together with the BIAWGN and BSC capacities (Es/N0 threshold [dB] versus code rate/capacity).
B. Majority Decision Rules
The concept of converting all incoming messages to L-values, performing decoding in the L-value domain and sending the hard decision from variable to check nodes is useful for analyzing binary message-passing decoders. In practice, however, these operations are often replaced by majority decision rules for reasons of complexity. Consider a variable node of degree d_v. Depending on ε_av, a minimum number of t messages (see (12)) from the check nodes have to disagree with the channel message in order to change the outgoing message of that variable node. This allows the derivation of a majority decision rule that is parametrized by ε_av. Since ε_av = ε_ec, we can use the fraction of unsatisfied check nodes to adapt the majority decision rule over the iterations. For channels with a larger output alphabet than binary, a majority decision rule has to be defined for every subchannel (see (19)–(21)).
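As a small illustration (Python; names ours), the threshold t of (12) can be recomputed in every iteration from the current estimate of ε_av, which is how the adaptive majority rule described above changes over the iterations:

```python
import math

def decision_threshold(eps_av: float, eps_ch: float, dv: int) -> int:
    """Eq. (12): with an erroneous channel message, the outgoing message is
    corrected iff at most t of the dv - 1 incoming check messages are in error."""
    d_av = math.log((1.0 - eps_av) / eps_av)
    d_ch = math.log((1.0 - eps_ch) / eps_ch)
    return math.floor((d_av * (dv - 1) - d_ch) / (2.0 * d_av))
```

Feeding the estimate from (50) into eps_av updates the rule without any precomputed schedule.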
VI. NUMERICAL RESULTS
Using linear programming, we optimized codes using (34) with (35) and (42) and compared them with the capacity of the BIAWGN and BSC. For the optimization we set the maximum variable node degree to d_v,max = 100 and performed the optimization for check node degrees in the range between 2 and 1000. The decoding thresholds of these codes in terms of the required signal-to-noise ratio are shown in Figure 4. Note that cycles of the kind described in Theorem 3 could occur if a random interleaver were used. However, we constructed our codes using the progressive edge-growth (PEG) algorithm [19], [20], which ensures that such cycles do not exist.
Observe that the gap to capacity decreases as the rate increases. This makes binary message-passing decoders attractive for applications which require a high code rate. For a rate of 0.9, the best code using soft channel information is within approximately 0.5 dB of capacity. Note that the soft-decision and hard-decision channel EXIT curves serve as upper and lower bounds, respectively, for all quantization schemes.
TABLE I
THRESHOLDS FOR CODE OF RATE 0.5 AND THE LDPC CODE USED IN [4, SEC. I.6].

            |       Rate 0.5       |      Rate 0.9375
            | BSC    BSQC   Soft   | BSC    BSQC   Soft
Eb/N0 [dB]  | 3.67   2.69   2.30   | 6.08   5.12   5.02
dc          | 15     14     12     | 112    112    112
ζ1          | -      1.95   -      | -      2.34   -
We performed bit error rate simulations for codes of rate R = 0.5 for the BSC, BSQC and the soft information channel. The thresholds of these codes using the associated quantization schemes are shown in Table I, where ζ1 is the quantization interval (see Section III-C) that leads to the best threshold. The bit error rate simulation results are shown in Figure 5 using codes of length N = 10^4. Also shown in this figure are the results for a regular LDPC code with variable node degree d_v = 3 and check node degree d_c = 6 transmitted over a channel with soft outputs. As expected from Section IV-C, this code shows an error floor.
Fig. 5. Bit error rate simulations for optimized and regular codes of rate R = 0.5 (bit error rate versus Eb/N0 [dB]; curves: soft, BSC, BSQC, regular).
The system with hard channel decisions (BSC) corresponds to the Gallager B algorithm. Observe that by adding one more bit for the channel messages and quantizing them according to a BSQC, the performance of this algorithm can be improved by 1.0 dB with only a small increase in decoding complexity. A finer quantization of the channel messages will not result in a significant gain, since the gap to the unquantized system is only approximately 0.5 dB. We remark that predicting the sequence of crossover probabilities by using the trajectory in the EXIT chart [17] gives a similar performance.
To demonstrate the performance of binary message-passing decoders for high code rates, we perform bit error rate simulations of a code used in [4, Section I.6]. This code of rate R = 0.9375 is a regular LDPC code with d_v = 7, d_c = 112 and is used for an optical transmission system for high data rates up to 40 Gbps. The slopes which define the code (for details see [4, Section I.6.2]) are not defined in the standard and have been chosen to be the first seven prime numbers,
i.e., 2, 3, 5, 7, 11, 13, 17. In [4] it is assumed that the decoder observes hard decisions from the channel. Using our analysis, the thresholds for channels with larger output alphabets are shown in Table I, assuming that the unquantized channel can be modeled as a BIAWGN channel. The corresponding bit error rate simulations are shown in Figure 6. Increasing the number of quantization levels of the channel messages by one bit leads to an improvement of approximately 1.0 dB.
Fig. 6. Bit error rate simulations for the LDPC code of rate R = 0.9375 used in [4, Sec. I.6] (bit error rate versus Eb/N0 [dB]; curves: soft, BSC, BSQC).
VII. CONCLUSIONS
We analyzed binary message-passing decoders using EXIT charts. For channels which deliver hard decisions, this analysis led to an algorithm that is equivalent to Gallager's decoding algorithm B. We extended these results to channels with larger output alphabets, including channels providing soft information. We found that increasing the channel output alphabet size by only one bit significantly lowers the decoding threshold. We described why the mixing property of EXIT functions does not apply to binary message-passing algorithms if one uses mutual information, and presented a modified mixing method based on error probabilities in order to optimize codes. Finally, we showed that errors on cycles consisting only of degree two and three variable nodes cannot be corrected, and we derived a condition on the variable node degree distribution that guarantees the existence of a cycle-free subgraph for these nodes.
REFERENCES
[1] R. Gallager, “Low density parity check codes,” IRE Transactions on Information Theory, vol. IT-8, pp. 21–28, Jan. 1962.
[2] ——, Low Density Parity Check Codes, ser. Research Monograph Series. Cambridge, Mass.: MIT Press, 1963, no. 21.
[3] T. Richardson and R. Urbanke, “The capacity of low-density parity-check codes under message-passing decoding,” IEEE Transactions on Information Theory, vol. 47, pp. 599–618, Feb. 2001.
[4] ITU-T G.975.1, “Forward error correction for high bit-rate DWDM submarine systems,” 2004.
[5] S. ten Brink, “Convergence of iterative decoding,” Electronics Letters, vol. 35, no. 10, pp. 806–808, May 1999.
[6] A. Ashikhmin, G. Kramer, and S. ten Brink, “Extrinsic information transfer functions: model and erasure channel properties,” IEEE Transactions on Information Theory, vol. 50, no. 11, pp. 2657–2673, Nov. 2004.
[7] M. Ardakani and F. Kschischang, “Properties of optimum binary message-passing decoders,” IEEE Transactions on Information Theory, vol. 51, no. 10, pp. 3658–3665, Oct. 2005.
[8] M. Ardakani, “Efficient analysis, design and decoding of low-density parity-check codes,” Ph.D. dissertation, University of Toronto, 2004.
[9] J. Hagenauer, E. Offer, and L. Papke, “Iterative decoding of binary block and convolutional codes,” IEEE Transactions on Information Theory, vol. 42, no. 2, pp. 429–445, Mar. 1996.
[10] T. Richardson, A. Shokrollahi, and R. Urbanke, “Design of capacity-approaching irregular low-density parity-check codes,” IEEE Transactions on Information Theory, vol. 47, pp. 619–637, Feb. 2001.
[11] I. Land, “Reliability information in channel decoding – practical aspects and information theoretical bounds,” Ph.D. dissertation, University of Kiel, Germany, 2005.
[12] I. Land and J. Huber, “Information combining,” Foundations and Trends in Communications and Information Theory, vol. 3, no. 3, 2006.
[13] M. Tuechler and J. Hagenauer, “EXIT charts of irregular codes,” in 2002 Conference on Information Sciences and Systems, Princeton University, 2002.
[14] M. Ardakani and F. Kschischang, “Designing irregular LDPC codes using EXIT charts based on message error rate,” in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Lausanne, Switzerland, 2002.
[15] T. Etzion, A. Trachtenberg, and A. Vardy, “Which codes have cycle-free Tanner graphs?” IEEE Transactions on Information Theory, vol. 45, no. 6, pp. 2173–2180, Sep. 1999.
[16] T. Richardson and R. Urbanke, Modern Coding Theory. Cambridge University Press, 2008.
[17] G. Lechner, T. Pedersen, and G. Kramer, “EXIT chart analysis of binary message-passing decoders,” in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Nice, France, 2007.
[18] G. Yue and X. Wang, “A new binary iterative LDPC decoding algorithm,” in Proc. Int. Symp. on Turbo Codes & Rel. Topics, Lausanne, Switzerland, 2008.
[19] X.-Y. Hu, E. Eleftheriou, and D. Arnold, “Regular and irregular progressive edge-growth Tanner graphs,” IEEE Transactions on Information Theory, vol. 51, no. 1, pp. 386–398, Jan. 2005.
[20] X.-Y. Hu, “Software for PEG code construction.” [Online]. Available: http://www.inference.phy.cam.ac.uk