
Belief propagation decoding assisted on-the-fly Gaussian elimination for short LT codes


Abstract

Belief propagation (BP) decoding has been widely used for decoding Luby transform (LT) codes, which perform very well for a large number of input symbols. In reality, however, small numbers of input symbols are often encountered. In this paper, an efficient BP-decoding-assisted on-the-fly Gaussian elimination (OFG) decoding process is proposed. Our algorithm exploits XOR operations to obtain a packet of degree one when the ripple is empty, which yields a small overhead. Simulation results show that the proposed algorithm gives an overhead improvement of about 0.25 or more over the conventional BP algorithm. The complexity of the proposed algorithm is notably reduced relative to that of OFG, especially for \(k = 150\)–500, while keeping the overhead nearly the same as that of OFG.
Cluster Comput (2016) 19:309–314
DOI 10.1007/s10586-015-0522-0
Hoyoung Cheong¹ · Jonwon Eun¹ · Hyuncheol Kim² · Kuinam J. Kim³
Received: 22 October 2015 / Revised: 14 December 2015 / Accepted: 19 December 2015 / Published online: 1 February 2016
© Springer Science+Business Media New York 2016
Keywords: OFG decoding · Complexity · Triangularization · Gaussian elimination
Corresponding author: Kuinam J. Kim
1. Department of Information Communication, Namseoul University, Cheonan, Korea
2. Department of Computer Science, Namseoul University, Cheonan 331-707, Korea
3. Department of Convergence Security, Kyonggi University, Suwon, Korea
1 Introduction

Fountain codes are a promising solution for multicasting reliable information with low complexity over the binary erasure channel (BEC). Luby transform (LT) codes, the first practical implementation of Fountain codes, approach capacity as the code length k increases [1]. However, in reality, short messages are often encountered. In video streaming applications, for example, the number of frames in a group of pictures can be the message length, which is typically a small value. Under this configuration of a small or limited number of symbols for coding, mathematical analysis and simulation results have shown that LT codes can perform poorly [2,3]. There are several decoding algorithms for LT codes, the most widely used being the belief propagation (BP) decoding algorithm. The complexity of BP is very low and its decoding speed is fast, but for a small value of k it requires a large overhead. GE and GE-like algorithms, such as on-the-fly Gaussian elimination (OFG) or incremental Gaussian elimination (IG) [4], show noticeably better overhead performance for small values of k [5].
In the binary erasure channel, the BP decoding process can be significantly simplified, since all received symbols are completely reliable. BP first finds degree-1 received symbols and moves them into the ripple. Symbols in the ripple are processed one by one to decode other symbols of degree greater than one. The ripple therefore plays an important role in successfully decoding LT codes. When there are no degree-1 packets in the ripple, the BP decoder declares a decoding failure. When a new packet is received after a decoding failure, the 1s in the decoded positions of the corresponding equation are canceled and BP decoding is reattempted. This is why the BP decoder incurs a large overhead for small values of k [5]. BP decoding has been widely used for decoding LT codes, which perform very well for a large number of input symbols.
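The ripple-driven peeling process described above can be sketched as follows. This is a minimal illustrative sketch; the function name and packet representation are hypothetical, not taken from the paper.

```python
# Minimal sketch of BP (peeling) decoding of an LT code over the binary
# erasure channel. Each received packet is (indices, value): the set of
# input-symbol indices it covers and the XOR of those symbols (ints here).

def bp_decode(packets, k):
    eqs = [[set(idx), val] for idx, val in packets]
    decoded = {}
    # The ripple: packets already reduced to degree one.
    ripple = [eq for eq in eqs if len(eq[0]) == 1]
    while ripple and len(decoded) < k:
        eq = ripple.pop()
        if len(eq[0]) != 1:
            continue  # reduced further after it was queued
        (i,) = eq[0]
        if i in decoded:
            continue
        decoded[i] = eq[1]
        # Cancel the recovered symbol from every packet that covers it;
        # a packet that drops to degree one joins the ripple.
        for other in eqs:
            if i in other[0]:
                other[0].discard(i)
                other[1] ^= decoded[i]
                if len(other[0]) == 1:
                    ripple.append(other)
    return decoded  # partial result means the ripple emptied (failure)
```

With packets ({0}, 5), ({0, 1}, 12), ({1, 2}, 5) and k = 3, the sketch recovers {0: 5, 1: 9, 2: 12}. When the ripple empties before all k symbols are recovered, conventional BP declares failure; this is exactly the point at which the proposed algorithm instead XORs suitable packets to synthesize a new degree-one packet.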
... In the case of small numbers of input symbols, as in our research, the second type of treatment for decoding LT codes over an AWGN channel was proposed. Cheong et al. [12] utilised a BP decoding algorithm assisted by on-the-fly Gaussian elimination (OFG). The study tries to overcome the problem of missing degree-one coded symbols by XORing certain coded symbols to reproduce a degree-one coded symbol. ...
... It is obvious that, when dealing with LT codes in an AWGN channel, the aforementioned studies [9][10][11][12][13][14] try to overcome the problem of error propagation by adapting the decoding matrix of the LT code to fit that of LDPC codes, in order to use efficient iterative soft decoding. But this efficient decoding entails large complexity due to its iterative nature. ...
Luby transform (LT) and Raptor codes are an effective solution for distributing bulky data files in broadcasting scenarios. For an erasure channel, like the internet communication system, these codes are used as standard codes for many applications. In this paper, we propose an LT code using deterministic degree generators. The degree values are generated sequentially with a repetition period (R_p); these degrees decide the number of data packets (or symbols) that have to be combined to form the coded packets. The data packets are truncated into segments of length (R_p) and chosen serially. We exploit this deterministic encoding method for a short-length LT code over an additive white Gaussian noise (AWGN) channel and decode it using a new soft decoding algorithm based on maximum-likelihood probabilities. The decoding process does not need any matrix variation, which decreases the decoding complexity, one of the important performance factors for such code lengths. The simulation results show the superiority of this encoding approach over that of an LT code generated using the robust soliton distribution (RSD) and decoded by belief propagation (BP) assisted by Gaussian elimination (GE), which is one of the best decoding treatments for short-length LT codes.
... The idea is to merge high-degree output packets with some low-degree ones to avoid decoding blockage. Modifications of the belief propagation algorithm that handle an empty ripple for short-length blocks are presented in [19]–[25]. From these previous works, it is evident that short-length data files need an adjustment of the degrees of the encoded packets as well as of the way the data packets that form an encoded packet are selected. ...
This paper introduces a Simulink model design for a modified fountain code. The code is a new version of the traditional Luby transform (LT) codes. The design constructs the blocks required for generating the generator matrix of a limited-degree-hopping-segment Luby transform (LDHS-LT) code. This code is especially designed for short-length data files, which have attracted great interest in wireless sensor networks. It generates the degrees in a predetermined sequence rather than randomly, and partitions the data file into segments. The data packets are selected serially according to the integers produced by the degree and segment generators. The code is tested using a Monte Carlo simulation approach against conventional code generation using the robust soliton distribution (RSD) for degree generation, and the simulation results confirm better performance on all tested parameters.
... We also showed that the computational complexity of our method is less than that of Gaussian elimination, while its overhead is as low as that of Gaussian elimination. In [6], the authors proposed a BP decoding algorithm assisted by on-the-fly Gaussian elimination for short LT codes. The proposed algorithm starts with the conventional BP process in the first decoding phase. ...
Rateless codes such as Luby transform (LT) codes are representative rate-adaptive solutions for improving the frequency efficiency of transmitting data over a time-varying wireless channel. These codes use belief propagation (BP) decoding, which searches for encoded packets of degree 1 among the large number of received encoded packets. This method fails if no encoded packets of degree 1 exist. However, in reality, small numbers of input packets are often encountered. In this paper, we propose a decoding algorithm that increases the performance of these codes for a small number of source packets. After a failure of BP decoding, to recover the remaining source packets, we use a method that keeps decoding using the encoded packets whose degrees are not one. This continues until a degree-1 packet emerges in one of the iterations of the decoding process. The algorithm then switches back to traditional BP decoding.
... With this approach, the nodes are always half full, as they are filled in the order of sorting and splitting so that none of the new entries go to old nodes. The advantage of this approach is that the generation time is quite fast [12][13][14][15][16][17]. ...
Data processing in the Socialist Republic of Vietnam (Vietnam, hereunder) is at an early stage, and a variety of problems need to be solved. In the Vietnamese banking and financial sectors, where managing and storing customer data and transaction histories are being emphasized as never before, the volume of data to be secured on a daily basis is increasing explosively due to rapid economic development, so the relevant authorities are seeking an efficient and reliable way to manage it. Being a widely known variation of the B-tree, the B+-tree is considered the most adequate tree-type data structure for bulk data. Nevertheless, as it is quite time-consuming to construct a B+-tree for massive data, the authors propose a Hadoop framework-based parallel B+-tree system to deal with the problem. The system is largely divided into three phases: first, data are partitioned and distributed evenly such that each partition has almost the same amount of data; second, local B+-trees are constructed in parallel; finally, the small-scale B+-trees are integrated into a complete B+-tree covering the entire data set. The authors expect that the proposed system will offer efficient index structuring while reducing data processing time.
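The three phases can be sketched as follows, with plain sorted blocks standing in for the local B+-trees. This is a hypothetical toy sketch under that simplification; the real system builds true B+-trees on a Hadoop cluster.

```python
# Toy sketch of the partition -> local build -> merge pipeline.
# Sorted fixed-size leaf blocks stand in for the local B+-trees.

def partition(records, num_parts):
    """Phase 1: range-partition records into near-equal-sized partitions."""
    records = sorted(records)
    size = -(-len(records) // num_parts)  # ceiling division
    return [records[i:i + size] for i in range(0, len(records), size)]

def build_local_index(part):
    """Phase 2: build a local index; leaves are fixed-size sorted blocks."""
    leaf = 4  # toy fan-out
    return [part[i:i + leaf] for i in range(0, len(part), leaf)]

def merge_indexes(local_indexes):
    """Phase 3: concatenate local indexes; since partitions cover disjoint
    key ranges, the merged leaf level is already globally sorted."""
    return [leaf for idx in local_indexes for leaf in idx]

keys = list(range(20, 0, -1))  # 20 unsorted keys
leaves = merge_indexes([build_local_index(p) for p in partition(keys, 3)])
# leaves now hold the keys 1..20 in globally sorted order
```

Range partitioning (rather than hashing) is what makes the final merge trivial: each local index covers a disjoint key interval, so no interleaving is needed.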
... It is clear from Fig. 3 that an LT code with RSD employing the proposed method outperforms the conventional BP-RSD and BP-MBRSD methods for all rates applied. For the BER performance, our proposed method achieved a score similar to that of BP-GE-RSD [13]. For regular BP, to recover the k source symbols from any N encoding symbols with a probability of 1 − δ, an average of ...
Luby transform (LT) codes were the first practical rateless erasure codes proposed in the literature. The performance of these codes, which are iteratively decoded using belief propagation algorithms, depends on the degree distribution used to generate the coded symbols. The existence of degree-one coded symbols is essential for starting and continuing the decoding process. The absence of a degree-one coded symbol at any instant of the iterative decoding operation results in decoding failure. To alleviate this problem, we propose a method, used in the absence of a degree-one coded symbol, to overcome a stuck decoding operation and allow its continuation. The simulation results show that the proposed approach provides better performance than a conventional LT code and a memory-based robust-soliton-distributed LT code, as well as a Gaussian elimination assisted LT code, particularly for short data lengths.
In the Socialist Republic of Vietnam, applying Big data to process any kind of data is still a challenge, especially in the banking sector. Until now, only one bank has applied Big data to develop a focused, consistent data warehouse system that can provide invaluable support to executives making immediate decisions as well as planning long-term strategies; however, it still cannot solve every specific problem. Large amounts of traditional data continue to increase significantly, and while the B-tree is considered the standard data structure for managing and organizing this kind of data, the B+-tree is its most well-known variation and is very suitable for bulk-loading techniques when the data are available in advance. However, it usually takes a long time to construct a B+-tree for a huge volume of data. In this paper, we propose a parallel B+-tree construction scheme based on a Hadoop framework for transaction log data. The proposed scheme divides the data into partitions, builds local B+-trees in parallel, and merges them to construct a B+-tree that covers the whole data set. While generating the partitions, it considers the data distribution so that each partition has a nearly equal amount of data. Therefore the proposed scheme gives an efficient index structure while reducing the construction time.
Luby transform (LT) codes provide an efficient way to transfer information over erasure channels. Past research has shown that LT codes can perform well for a large number of input symbols. However, mathematical analysis and simulation results have revealed that the packet overhead for LT decoders can be as large as 100% when the number of input symbols is small. Designing an efficient decoder to handle a small number of symbols becomes an imminent research issue. In this paper, we make an observation that LT decoders often fail to recover all the input symbols, while LT encoders have a high probability of producing a full-rank coefficient matrix. Motivated by this observation, we propose a novel decoding algorithm called LT-W, in which we incorporate the use of the Wiedemann solver into LT decoding to extend the decodability of LT codes. Extensive experiments show that our proposed method reduces the packet overhead significantly and yet preserves the efficiency of the original LT decoding process.
The erasure correction performance of Luby transform (LT) code ensembles over higher-order Galois fields is analysed under optimal, i.e. maximum likelihood (ML), erasure decoding. We provide the complete set of four bounds on the erasure probability after decoding, at both word and symbol level. In particular, the upper bounds are extremely close to the simulated residual erasure rates after decoding and can thus be used for code design instead of time-consuming simulations.
We propose an improved algorithm for decoding LT codes using Gaussian elimination. Our algorithm performs useful processing at each coded packet arrival, thus distributing the decoding work over the reception of all packets and obtaining a shorter actual decoding time. Furthermore, using a swap heuristic, the decoding matrix is kept sparse, decreasing the cost of both the triangularization and back-substitution steps.
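The distributed-work idea can be illustrated with a minimal on-the-fly Gaussian elimination over GF(2). This is a sketch under an assumed packet layout (index set plus XOR value); the class and method names are hypothetical, and the swap heuristic that keeps the matrix sparse is omitted here.

```python
# Illustrative on-the-fly Gaussian elimination over GF(2): each arriving
# coded packet is immediately reduced against the pivots found so far,
# so elimination work is spread across the whole reception period.

class OnTheFlyGE:
    def __init__(self, k):
        self.k = k
        self.pivots = {}  # pivot index -> (index set, XOR value)

    def receive(self, indices, value):
        """Reduce one packet on arrival; keep it if it yields a new pivot."""
        indices = set(indices)
        while indices:
            p = min(indices)
            if p not in self.pivots:
                self.pivots[p] = (indices, value)
                return
            p_idx, p_val = self.pivots[p]
            indices = indices ^ p_idx  # row addition over GF(2)
            value ^= p_val
        # packet reduced to the zero row: linearly dependent, discard

    def solve(self):
        """Back-substitution once k pivots exist; returns the k symbols."""
        if len(self.pivots) < self.k:
            return None  # not yet full rank
        out = [0] * self.k
        for p in sorted(self.pivots, reverse=True):
            idxs, val = self.pivots[p]
            for j in idxs:
                if j != p:
                    val ^= out[j]  # j > p, already solved
            out[p] = val
        return out
```

Because each stored row's pivot is its minimum index, back-substitution from the largest pivot downward resolves every row; the per-packet reduction is what gives the "on-the-fly" behaviour.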
The proliferation of applications that must reliably distribute large, rich content to a vast number of autonomous receivers motivates the design of new multicast and broadcast protocols. We describe an ideal, fully scalable protocol for these applications that we call a digital fountain. A digital fountain allows any number of heterogeneous receivers to acquire content with optimal efficiency at times of their choosing. Moreover, no feedback channels are needed to ensure reliable delivery, even in the face of high loss rates. We develop a protocol that closely approximates a digital fountain using two new classes of erasure codes that for large block sizes are orders of magnitude faster than standard erasure codes. We provide performance measurements that demonstrate the feasibility of our approach and discuss the design, implementation, and performance of an experimental system.
An (m, n, b, r)-erasure-resilient coding scheme consists of an encoding algorithm and a decoding algorithm with the following properties: the encoding algorithm produces a set of n packets, each containing b bits, from a message of m packets containing b bits; the decoding algorithm is able to recover the message from any set of r packets. Erasure-resilient codes have been used to protect real-time traffic sent through packet-based networks against packet losses. In this paper we describe an erasure-resilient coding scheme that is based on a version of Reed-Solomon codes and has the property that r = m. Both the encoding and decoding algorithms run in quadratic time and have been customized to give the first real-time implementations of Priority Encoding Transmission (PET) [2],[1] for medium-quality video transmission on Sun SPARCstation 20 workstations.
Luby transform (LT) codes are often employed during best-effort packet transfers to offer rateless erasure protection. Efficient as they are, these randomized codes with a small number of input symbols often pose an inevitable performance trade-off between the decoding failure rates in their waterfall and error-floor regions. In order to surmount this trade-off, we propose a new encoding strategy that requires a portion of the high-degree output symbols of an LT code to abandon the conventional equally-probable input selection strategy and instead connect themselves to some specially selected low-degree input symbols. Our simulation results show that short-length LT codes with merely 10³ input symbols yield significantly lower symbol/block failure rates at a small reception overhead when they employ the proposed strategy. These performance-enhanced rateless codes have potential applications in real-time multimedia broadcasting.
Block-level cloud storage (BLCS) offers to users and applications the access to persistent block storage devices (virtual disks) that can be directly accessed and used as if they were raw physical disks. In this paper we devise ENIGMA, an architecture for the back-end of BLCS systems able to provide adequate levels of access and transfer performance, availability, integrity, and confidentiality, for the data it stores. ENIGMA exploits LT rateless codes to store fragments of sectors on storage nodes organized in clusters. We quantitatively evaluate how the various ENIGMA system parameters affect the performance, availability, integrity, and confidentiality of virtual disks. These evaluations are carried out by using both analytical modeling (for availability, integrity, and confidentiality) and discrete event simulation (for performance), and by considering a set of realistic operational scenarios. Our results indicate that it is possible to simultaneously achieve all the objectives set forth for BLCS systems by using ENIGMA, and that a careful choice of the various system parameters is crucial to achieve a good compromise among them. Moreover, they also show that LT coding-based BLCS systems outperform traditional BLCS systems in all the aspects mentioned before.
In this letter, we investigate an efficient Gaussian elimination decoding scheme for Raptor codes used over the binary erasure channel. It is shown that the proposed incremental Gaussian elimination decoding significantly improves the decoding time compared with the usual Gaussian elimination decoding while maintaining the same decoding performance.
Bloemer, J., Kalfane, M., Karpinski, M., Karp, R., Luby, M., Zuckerman, D.: An XOR-based erasure-resilient coding scheme. In: ICSI TR-95-048 (1995)