# A Markov chain model for Edge Memories in stochastic decoding of LDPC codes

**ABSTRACT** Stochastic decoding is a recently proposed method for decoding Low-Density Parity-Check (LDPC) codes. Stochastic decoding is, however, sensitive to the switching activity of stochastic bits, which can result in a latching problem. Using Edge Memories (EMs) has been proposed as a method to counter the latching problem in stochastic decoding. In this paper, we introduce a Markov chain model for EMs and study state transitions over decoding cycles. The proposed method can be used to determine the convergence and the required number of decoding cycles in stochastic decoding. Moreover, it can help to study the behavior of decoding process and to estimate the decoding time.



Index Terms—Stochastic decoding, Low-Density Parity-Check (LDPC) codes.

I. INTRODUCTION

Low-Density Parity-Check (LDPC) codes [1] are error-

correcting codes with decoding performance close to the

Shannon capacity limit [2]. These codes have been considered

in several digital communication standards, such as WiMAX

(IEEE 802.16) [3] and Digital Video Broadcast (DVB)

satellite communications [4]. LDPC codes are often decoded

by using a form of iterative belief propagation, such as the

Sum-Product Algorithm (SPA), which can be graphically

represented by a bipartite Tanner graph [5] with two distinct

groups of nodes: variable-nodes and check-nodes. LDPC

codes are decoded by passing messages iteratively between

variable-nodes and check-nodes over the edges of the Tanner

graph. The SPA requires passing probabilities or log-

likelihood values between nodes through parallel connections

with many paths. This increases the chip area needed for

connections as well as the energy consumption. As a result,

high-speed implementations of LDPC decoders have been a

subject of active research in recent years.

Stochastic computation was introduced in the 1960s as a

method to design low-precision digital circuits [6]. In

stochastic computing, probabilities are encoded by random

sequences of bits. Each bit in these sequences is equal to 1

with the probability to be encoded. This method has been used

for iterative decoding of some error-correcting codes. Its

advantage is that the decoding operation can be realized using

very simple circuits working at high-speed.

The early stochastic LDPC decoders [7][8] are sensitive to

the level of random switching activity within the Tanner graph.

Several methods have been proposed to improve the performance of stochastic decoding. Using Edge Memories (EMs) is one way to avoid the latching problem [9], which is a major shortcoming of stochastic decoding. In this paper, we present a

model to analyze the performance of EMs. This model can be

used to determine the convergence and to gain a better

understanding of the behavior of the decoding process. The rest

of the paper is organized as follows. Section II provides an

overview of stochastic computation and the stochastic decoding

method. Section III describes the analysis of a Markov chain

model for EMs. The simulation results are given in Section IV.

Section V concludes the paper.

II. STOCHASTIC COMPUTATION AND LDPC DECODING

In stochastic computation, the probabilities are transformed to streams of stochastic bits using Bernoulli sequences. A sequence of N bits of which j bits are equal to 1 represents a probability value of j/N. For instance, a length-10 sequence with 8 bits equal to 1 represents a probability of 0.8. The transformation between a probability and a stochastic stream is not unique: different stochastic streams can represent a given probability. Stochastic representation and computation can be applied to the probability operations in the Tanner graph to replace the probabilities passed between nodes. Thus, the complex probability operations at the variable-nodes and check-nodes can be performed by simple circuits.

Here, we use a degree-3 node to show how the variable-nodes and the check-nodes exchange stochastic bits in stochastic decoding. Let $P_a = P(a=1)$ and $P_b = P(b=1)$ be the probabilities of the two inputs, $a$ and $b$, of the variable-node. The output probability of the variable-node, $P_c$, can be computed as

$$P_c = \frac{P_a P_b}{P_a P_b + (1-P_a)(1-P_b)} \tag{1}$$

Similarly, at the check-node we have

$$P_c = P_a (1-P_b) + (1-P_a) P_b \tag{2}$$
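The relations (1) and (2) can be verified empirically on random bit streams. The sketch below (our own illustrative code, not from the paper; all function names are ours) models the variable-node as a unit that outputs the common bit when its two inputs agree and otherwise holds its previous output, and the check-node as an XOR gate, following the structures of [7]:

```python
import random

def bernoulli_stream(p, n, rng):
    """Stochastic stream: each bit is 1 with probability p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def variable_node(bits_a, bits_b):
    """Degree-3 variable node: output the common bit when the inputs
    agree, otherwise hold the previous output (initialized to 0)."""
    out, prev = [], 0
    for a, b in zip(bits_a, bits_b):
        if a == b:
            prev = a
        out.append(prev)
    return out

def check_node(bits_a, bits_b):
    """Check node: XOR of the incoming stochastic bits."""
    return [a ^ b for a, b in zip(bits_a, bits_b)]

rng = random.Random(0)
n = 200_000
pa, pb = 0.8, 0.6
A = bernoulli_stream(pa, n, rng)
B = bernoulli_stream(pb, n, rng)

p_vn = sum(variable_node(A, B)) / n
p_cn = sum(check_node(A, B)) / n

# Eq. (1): Pa*Pb / (Pa*Pb + (1-Pa)*(1-Pb))
print(p_vn, pa * pb / (pa * pb + (1 - pa) * (1 - pb)))
# Eq. (2): Pa*(1-Pb) + (1-Pa)*Pb
print(p_cn, pa * (1 - pb) + (1 - pa) * pb)
```

With pa = 0.8 and pb = 0.6, the empirical estimates approach the theoretical values 0.857 from (1) and 0.44 from (2).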

Kuo-Lun Huang†, Vincent Gaudet‡, and Masoud Salehi†

†Department of Electrical and Computer Engineering, Northeastern University, Boston, Massachusetts
‡Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario

khuang@ece.neu.edu, vcgaudet@uwaterloo.ca, salehi@ece.neu.edu


Fig. 1. The structure of (a) a stochastic variable-node and (b) a stochastic check-node.

Figure 1 shows the logic-gate structure of a variable-node

and a check-node in stochastic decoding [7].

In stochastic decoding, each decoding round is called a

Decoding Cycle (DC) [9], which does not directly correspond

to the iterations in SPA decoding. In each DC, a variable-node

receives one bit from the stochastic stream corresponding to

the channel probability, and the extrinsic bits from the check-

nodes. Then, each variable-node propagates its outgoing 1-bit

messages to the connected check-nodes. Check-nodes check

the parities and send their 1-bit messages back to the variable-

nodes. After completing this exchange of bits between

variable-nodes and check-nodes, variable-nodes load in the

next bit from the stochastic streams, and start the next

decoding cycle.

A stochastic decoder is very sensitive to the level of random switching activity. Based on the structure of

the variable-node, the latching problem occurs when the input

bits of the variable-node are unequal for several decoding

cycles. Under this condition, the output bits get stuck in the

same value and the variable-node is locked into the hold state.

A re-randomization mechanism is required to prevent the

latching problem. In [9], Edge Memories (EMs) were

introduced to avoid the latching problem.

Edge Memories (EMs)

Edge Memories [9] are L-bit shift registers assigned to the

outgoing edges of variable-nodes. When input bits of the

variable-node are in agreement, the EM is updated with the

input bit. In contrast, the EM is not updated when the variable-node is in the hold state. One stochastic bit is randomly

picked from the EM and passed through the edge as the

outgoing bit. Thus, EMs contain only the bits which are not

produced in hold state. With this updating scheme, EMs reduce

the chance of latching in the LDPC decoding.
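A minimal sketch of this EM mechanism (our own illustrative code, following the description above; the class name and the channel-based initialization are ours):

```python
import random

class EdgeMemory:
    """L-bit shift register on an outgoing edge of a variable node."""
    def __init__(self, length, init_bit_prob, rng):
        self.rng = rng
        # Initialize from the channel probability, as in Section III.
        self.bits = [1 if rng.random() < init_bit_prob else 0
                     for _ in range(length)]

    def update(self, new_bit):
        """Shift in a regenerative bit; the oldest bit is abandoned.
        Only called when the variable-node inputs agree (no hold)."""
        self.bits.pop(0)
        self.bits.append(new_bit)

    def output(self):
        """Randomly pick one stored bit as the outgoing edge bit."""
        return self.rng.choice(self.bits)

rng = random.Random(1)
em = EdgeMemory(length=4, init_bit_prob=0.12, rng=rng)
for _ in range(100):   # feed agreeing 0-bits, as when decoding converges
    em.update(0)
print(em.bits, em.output())
```

Because only regenerative (non-hold) bits enter the register, the randomly picked output bit re-randomizes the edge and breaks the latching condition.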

III. A MARKOV CHAIN MODEL FOR EMs

In order to understand the behavior of the decoding process, we present a Markov chain model for EMs in a stochastic LDPC decoder. First, we define the state of EMs based on the

number of 1s in EMs. State 0 denotes the case where the bits

in the EM are all-zeros. State 1 means there is only one 1 in

the shift register, and state L stands for the all-ones state.

Thus, there will be (L + 1) states for an EM of length L.

Fig. 2. A Markov chain model for a length-L Edge Memory.

Under the infinite codeword length and cycle-free assumption,

we conclude that all EMs are identical and independent in the

stochastic decoder. Figure 2 shows a Markov chain model for

an EM.

Due to the updating scheme used here, only one bit of an EM is updated at a time. Except for state 0 and state L, all other states have three transition options: staying in the same state, or moving to one of the two neighboring states. When the updated bit is 1 and the abandoned bit is 0, the current state, state i, will transit to state (i + 1). Conversely, if the updated bit is 0 and the shifted-out bit is 1, the state will transit to state (i − 1). If the variable-node is locked in the hold state, or the updated bit and the abandoned bit of the EM are the same, the EM will stay in the same state.

The state probabilities of this model evolve as follows. First, we initialize the EM with the probability received from the channel: the initial probability of each state corresponds to the channel probability. In each DC, the probabilities of the states are updated based on the input probability of the variable-node and the channel probability. Thus, the probabilities of the states at time t can be represented as

$$\mathbf{S}_t = \begin{bmatrix} P(S_0^t) \\ P(S_1^t) \\ \vdots \\ P(S_L^t) \end{bmatrix} \tag{3}$$

and

$$
\begin{aligned}
P(S_i^t) ={}& P(S_{i-1}^{t-1})\,\frac{L-i+1}{L}\,P_{ch}P_{in,t}^{\,d_v-1} \\
&+ P(S_i^{t-1})\left[1-\frac{L-i}{L}\,P_{ch}P_{in,t}^{\,d_v-1}-\frac{i}{L}\,(1-P_{ch})(1-P_{in,t})^{d_v-1}\right] \\
&+ P(S_{i+1}^{t-1})\,\frac{i+1}{L}\,(1-P_{ch})(1-P_{in,t})^{d_v-1}
\end{aligned}
\tag{4}
$$

for $0 \le i \le L$, $t > 0$, where terms involving $S_{-1}$ or $S_{L+1}$ are taken to be zero. Here $P_{ch}$ is the probability obtained from the channel, $P_{in,t}$ is the input probability of the variable-node at time $t$, and $d_v$ is the degree of the variable-node.

The first term on the right-hand side of (4) is the probability of a state transition from state $(i-1)$ to state $i$, the second term represents the probability of staying in the same state, and the last term is the probability of moving from state $(i+1)$ to state $i$.
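The transition probabilities in (4) form a tridiagonal matrix. The following sketch (our own illustrative code; states are ordered 0..L, the reverse of the ordering used in Table 1) assembles the matrix for given channel and input probabilities. The sample values p_ch ≈ 0.1193 and p_in ≈ 0.21 are our estimates read off the t = 1 matrix in Table 1:

```python
def em_transition_matrix(L, p_ch, p_in, dv):
    """Row i = P(next state | current state i); state = number of 1s."""
    p1 = p_ch * p_in ** (dv - 1)              # all inputs agree on 1
    p0 = (1 - p_ch) * (1 - p_in) ** (dv - 1)  # all inputs agree on 0
    M = [[0.0] * (L + 1) for _ in range(L + 1)]
    for i in range(L + 1):
        up   = ((L - i) / L) * p1   # shift in a 1, a 0 leaves: i -> i+1
        down = (i / L) * p0         # shift in a 0, a 1 leaves: i -> i-1
        if i < L:
            M[i][i + 1] = up
        if i > 0:
            M[i][i - 1] = down
        M[i][i] = 1.0 - up - down   # hold, or in/out bits identical
    return M

M = em_transition_matrix(L=4, p_ch=0.1193, p_in=0.21, dv=2)
print([round(x, 4) for x in M[0]])   # row for state 0
```

Each row sums to one, and the matrix is tridiagonal because the EM state can change by at most one per decoding cycle.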


The output probability of the variable-node is

$$P_{out,t} = P(B_{out,t}=1) = \mathbf{S}_t^{T}\, P(B_{out,t}=1 \mid \mathbf{S}) \tag{5}$$

where $B_{out,t}$ is the outgoing stochastic bit from the variable-node to the check-node at time $t$. We also have

$$P(B_{out,t}=1 \mid \mathbf{S}) = \begin{bmatrix} P(B_{out,t}=1 \mid S_0) \\ P(B_{out,t}=1 \mid S_1) \\ \vdots \\ P(B_{out,t}=1 \mid S_L) \end{bmatrix} \tag{6}$$

where

$$P(B_{out,t}=1 \mid S_i) = \frac{i}{L}\left[1 - P_{ch}P_{in,t}^{\,d_v-1} - (1-P_{ch})(1-P_{in,t})^{d_v-1}\right] + P_{ch}P_{in,t}^{\,d_v-1} \tag{7}$$

for $0 \le i \le L$, $t > 0$.

The first term in (7) is the probability that the outgoing bit

from the variable-node is 1 when all the incoming bits are in

disagreement and the bit randomly picked from the EM is 1.

The second term is the probability that the output is 1 when all

the input bits of the variable-node are 1s.

Because in stochastic decoding the parity check is done at the check-nodes, the input probability at the variable-nodes is equal to

$$P_{in,t} = \frac{1 - \left(1 - 2P_{out,t-1}\right)^{d_c-1}}{2} \tag{8}$$

where $d_c$ is the degree of the check-node, and $t > 0$.

IV. SIMULATION RESULTS

In this section, we simulate the Markov chain model for the EM in stochastic decoding of a (dv,dc) = (2,3) LDPC code. In these simulations, the all-zeros codeword is transmitted over an AWGN channel using BPSK modulation. We have selected the length of the EMs to be 4, and the simulations are carried out at an SNR of 3 dB. The state transition matrices at different decoding cycles are shown in Table 1.

After approximately 30 decoding cycles, the state transition matrix approaches its steady state. In the stationary distribution, the transition probability from state 0 to itself is equal to 1, indicating that the edge memories converge to state 0, which is in agreement with the transmission of the all-zeros codeword.

The stationarity of the Markov chain model is also observed from the limit

Tab. 1. The transition matrices of a length-4 Edge Memory for a (dv,dc)=(2,3) LDPC code at different decoding cycles. Rows are the current states and columns the next states, both ordered S4, S3, S2, S1, S0.

At 1 DC (t = 1):

|             | S4,t   | S3,t   | S2,t   | S1,t   | S0,t   |
|-------------|--------|--------|--------|--------|--------|
| S4,t−1      | 0.3042 | 0.6958 | 0      | 0      | 0      |
| S3,t−1      | 0.0062 | 0.4719 | 0.5219 | 0      | 0      |
| S2,t−1      | 0      | 0.0125 | 0.6396 | 0.3479 | 0      |
| S1,t−1      | 0      | 0      | 0.0187 | 0.8073 | 0.1740 |
| S0,t−1      | 0      | 0      | 0      | 0.0250 | 0.9750 |

At 10 DCs (t = 10):

|             | S4,t   | S3,t   | S2,t   | S1,t   | S0,t   |
|-------------|--------|--------|--------|--------|--------|
| S4,t−1      | 0.1286 | 0.8714 | 0      | 0      | 0      |
| S3,t−1      | 0.0003 | 0.3462 | 0.6535 | 0      | 0      |
| S2,t−1      | 0      | 0.0006 | 0.5637 | 0.4357 | 0      |
| S1,t−1      | 0      | 0      | 0.0010 | 0.7812 | 0.2178 |
| S0,t−1      | 0      | 0      | 0      | 0.0013 | 0.9987 |

At 20 DCs (t = 20):

|             | S4,t   | S3,t   | S2,t   | S1,t   | S0,t   |
|-------------|--------|--------|--------|--------|--------|
| S4,t−1      | 0.1200 | 0.8800 | 0      | 0      | 0      |
| S3,t−1      | 0.0000 | 0.3400 | 0.6600 | 0      | 0      |
| S2,t−1      | 0      | 0.0000 | 0.5600 | 0.4400 | 0      |
| S1,t−1      | 0      | 0      | 0.0001 | 0.7799 | 0.2200 |
| S0,t−1      | 0      | 0      | 0      | 0.0001 | 0.9999 |

At 30 DCs (t = 30):

|             | S4,t   | S3,t   | S2,t   | S1,t   | S0,t   |
|-------------|--------|--------|--------|--------|--------|
| S4,t−1      | 0.1193 | 0.8807 | 0      | 0      | 0      |
| S3,t−1      | 0.0000 | 0.3395 | 0.6605 | 0      | 0      |
| S2,t−1      | 0      | 0.0000 | 0.5596 | 0.4404 | 0      |
| S1,t−1      | 0      | 0      | 0.0000 | 0.7798 | 0.2202 |
| S0,t−1      | 0      | 0      | 0      | 0.0000 | 1.0000 |

Fig. 3. Input probability (Pin) and output probability (Pout) of the variable-node for a (dv,dc)=(2,3) LDPC code at SNR = 3 dB over 30 decoding cycles; both probabilities fall from about 0.2 toward zero.


$$\lim_{n\to\infty} M^{n} = \begin{bmatrix} 0 & \cdots & 0 & 1 \\ \vdots & \ddots & \vdots & \vdots \\ 0 & \cdots & 0 & 1 \end{bmatrix} \tag{9}$$

where, under the column ordering of Table 1, the nonzero column corresponds to state 0.

Figure 3 shows the input and output probabilities of the variable-node for 30 DCs. These probabilities converge to zero because the all-zeros codeword is sent over the AWGN channel. When the output probability of the variable-node converges to zero (or one), the outgoing stochastic bit from the variable-node has a high probability of being 0 (or 1). Once the state of the Edge Memories converges, the output stochastic bit can be determined. Thus, the codeword can be determined and the decoding process can be terminated. This is helpful in determining the maximum number of decoding cycles of the stochastic decoder and in estimating the decoding time.
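The limiting behavior in (9) can be checked numerically. The sketch below (our own code) raises the steady-state transition matrix of Table 1 (t = 30) to a high power; the matrix is reindexed so that row/column 0 corresponds to EM state 0, so every row converges to (1, 0, 0, 0, 0):

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(M, n):
    """Matrix power by binary (square-and-multiply) exponentiation."""
    R = [[float(i == j) for j in range(len(M))] for i in range(len(M))]
    while n:
        if n & 1:
            R = mat_mul(R, M)
        M = mat_mul(M, M)
        n >>= 1
    return R

# Steady-state transition matrix from Table 1 (t = 30), reordered so
# that index 0 is EM state 0 (all-zeros register):
M = [
    [1.0000, 0.0000, 0,      0,      0     ],
    [0.2202, 0.7798, 0.0000, 0,      0     ],
    [0,      0.4404, 0.5596, 0.0000, 0     ],
    [0,      0,      0.6605, 0.3395, 0.0000],
    [0,      0,      0,      0.8807, 0.1193],
]
limit = mat_pow(M, 200)
print([round(x, 4) for x in limit[4]])   # -> [1.0, 0.0, 0.0, 0.0, 0.0]
```

State 0 is absorbing in the steady-state matrix, so every row of the power converges to the stationary distribution concentrated on state 0, matching (9).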

V. CONCLUSIONS

We presented a Markov chain model for Edge Memories in

stochastic LDPC decoding and determined the state transition

matrices of this model. This model can be employed to study the convergence conditions and the behavior of the decoding process, and it is helpful in estimating the required number of decoding cycles in stochastic decoding of LDPC codes.

REFERENCES

[1] R. G. Gallager, Low-Density Parity-Check Codes. Cambridge, MA: MIT Press, 1963.

[2] D. J. C. MacKay, and R. M. Neal, “Near Shannon limit performance of

low density parity check codes,” Electronics Letters, Vol. 33, No.6, Mar.

13, 1997, pp. 457-458.

[3] The IEEE 802.16 Working Group [Online].

Available: http://www.ieee802.org/16/

[4] The Digital Video Broadcast Standard [Online].

Available: http://www.dvb.org

[5] R. Tanner, “A Recursive Approach to Low Complexity Codes,” IEEE

Trans. Information Theory, Vol. 27, No. 5, Sep. 1981, pp. 533-547.

[6] B. Gaines, “Advances in Information Systems Science,” New York:

Plenum, 1969, pp. 37-172.

[7] V. C. Gaudet, and A. C. Rapley, “Iterative Decoding using Stochastic

Computation,” Electronics Letters, Vol. 39, No. 3, Feb. 6, 2003, pp.

299-301.

[8] W. J. Gross, V. C. Gaudet, and A. Milner, “Stochastic Implementation

of LDPC Decoders,” Conf. on Signals, Systems, and Computers 2005,

pp.713-717.

[9] S. S. Tehrani, W. J. Gross, and S. Mannor, “Stochastic Decoding of

LDPC codes,” IEEE Communications Letters, Vol. 10, No. 10, Oct.

2006, pp. 716-718.

[10] C. Winstead, V. C. Gaudet, A. Rapley, and C. Schlegel, “Stochastic

iterative decoders,” IEEE Int. Symp. on Information Theory 2005,

pp.1116.

[11] S. S. Tehrani, S. Mannor, and W. J. Gross, “Fully Parallel Stochastic

LDPC Decoders,” IEEE Trans. Signal Process., Vol. 56, No. 11, Nov.

2008, pp. 5692-5703.

[12] S. S. Tehrani, A. Naderi, G. A. Kamendje, S. Mannor, and W. J. Gross, “Tracking Forecast Memories in stochastic decoders,” IEEE Int. Conf. on Acoustics, Speech and Signal Process., 2009, p. 561.