Article

Low-Density Parity-Check Codes

Authors:
R. G. Gallager

Abstract

A low-density parity-check code is a code specified by a parity-check matrix with the following properties: each column contains a small fixed number j ≥ 3 of 1's and each row contains a small fixed number k > j of 1's. The typical minimum distance of these codes increases linearly with block length for a fixed rate and fixed j. When used with maximum likelihood decoding on a sufficiently quiet binary-input symmetric channel, the typical probability of decoding error decreases exponentially with block length for a fixed rate and fixed j. A simple but nonoptimum decoding scheme operating directly from the channel a posteriori probabilities is described. Both the equipment complexity and the data-handling capacity in bits per second of this decoder increase approximately linearly with block length. For j > 3 and a sufficiently low rate, the probability of error using this decoder on a binary symmetric channel is shown to decrease at least exponentially with a root of the block length. Some experimental results show that the actual probability of decoding error is much smaller than this theoretical bound.
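As an illustration of the structure the abstract describes, the following Python sketch builds a small (j, k)-regular parity-check matrix in the spirit of Gallager's original construction (j stacked bands, each a random column permutation of a band with k consecutive 1's per row). The parameters and construction details below are illustrative only, not the paper's experiments.

import numpy as np

def gallager_ldpc(n, j, k, seed=0):
    """Build a (j, k)-regular parity-check matrix in the style of Gallager's
    construction: j stacked bands, each a random column permutation of a base
    band with k consecutive 1's per row. Illustrative sketch; n % k must be 0."""
    assert n % k == 0
    rng = np.random.default_rng(seed)
    base = np.zeros((n // k, n), dtype=int)
    for r in range(n // k):
        base[r, r * k:(r + 1) * k] = 1          # k consecutive 1's per row
    bands = [base] + [base[:, rng.permutation(n)] for _ in range(j - 1)]
    return np.vstack(bands)

H = gallager_ldpc(n=20, j=3, k=4)
print(H.shape)                  # (15, 20): j*n/k rows, n columns
print(H.sum(axis=0))            # every column weight is j = 3
print(H.sum(axis=1))            # every row weight is k = 4
print(1 - H.shape[0] / H.shape[1])   # 1 - j/k = 0.25; the true rate can be higher
                                     # because the rows need not be independent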


... Low-density parity-check (LDPC) codes [1] have been widely used in communication systems such as 5G New Radio (NR) [2] and IEEE 802.11 (WiFi) [3]. It was demonstrated by D. MacKay and R. Neal in 1996 [4] that LDPC codes can approach the Shannon limit. ...
... An LDPC code is a linear block code with a sparse parity check matrix [1], which can be represented by a bipartite graph with variable nodes and check nodes. The parity check matrix of a binary (N, K) LDPC code C of rate R = K/N is a 0-1 matrix of size (N − K) × N, in which each column of the matrix corresponds to a variable node and each row corresponds to a check node. ...
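To make the bipartite-graph view in the excerpt concrete, here is a minimal Python sketch that derives the variable-node and check-node adjacencies directly from a parity-check matrix: columns map to variable nodes, rows to check nodes. The matrix H below is a toy placeholder, far too small to be sparse in any meaningful sense.

import numpy as np

def tanner_graph(H):
    """Bipartite (Tanner) graph view of H: column i -> variable node v_i,
    row m -> check node c_m, with an edge wherever H[m, i] = 1."""
    checks_of_var = {i: np.flatnonzero(H[:, i]).tolist() for i in range(H.shape[1])}
    vars_of_check = {m: np.flatnonzero(H[m, :]).tolist() for m in range(H.shape[0])}
    return checks_of_var, vars_of_check

# Toy (N - K) x N = 3 x 6 matrix, i.e. a rate-1/2 code if H has full rank.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
checks_of_var, vars_of_check = tanner_graph(H)
print(checks_of_var[1])   # variable node v_1 participates in checks 0 and 1
print(vars_of_check[0])   # check node c_0 connects variable nodes 0, 1, 3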
... Moreover, better optimization algorithms for finding well-performing scheduling sequences and the theoretically optimal value of scheduling sequences remain to be further studied. [Table notes: (1) the reduction ratio shows the reduction of the average NMP under the corresponding scheduling policies compared with the average NMP under LBP; (2) the BLER (×10⁻³) shows the block error rate under the corresponding scheduling policies, where the maximum number of iterations is set to 5 and the results are displayed after multiplying by 10³.] ...
Preprint
In this study, an optimization model for offline scheduling policy of low-density parity-check (LDPC) codes is proposed to improve the decoding efficiency of the belief propagation (BP). The optimization model uses the number of messages passed (NMP) as a metric to evaluate complexity, and two metrics, average entropy (AE), and gap to maximum a posteriori (GAP), to evaluate BP decoding performance. Based on this model, an algorithm is proposed to optimize the scheduling sequence for reduced decoding complexity and superior performance compared to layered BP. Furthermore, this proposed algorithm does not add the extra complexity of determining the scheduling sequence to the decoding process.
... However, even if the optimal (or near-optimal) input distribution is found, random coding using this distribution may not be appealing in practice. Instead, it is more practical to use binary codes like low-density parity-check (LDPC) codes [19] or polar codes [20], [21]. Motivated by this, it is interesting to investigate how closely the capacity can be approached using binary codes, especially codes for binary input and binary output (BIBO) channels. ...
... Definition 3 (VBC with erasures: VBC¯): VBC¯ is a channel with inputs X_i^b, i = 0, . . ., N − 1, and outputs Y_i^c as defined in (19). The VBC with erasures defined above is used to approximate the IM/DD channel in (1), as a channel with a vector binary input, output (which could be erased), and noise. ...
... , Y_i^c(n)), i = 0, . . ., N − 1, whose symbols are 0, 1, or ξ (erasure) (19). These signals are then decoded into m̂_i. ...
Preprint
Full-text available
The paper provides a new perspective on peak- and average-constrained Gaussian channels. Such channels model optical wireless communication (OWC) systems which employ intensity-modulation with direct detection (IM/DD). First, the paper proposes a new, capacity-preserving vector binary channel (VBC) model, consisting of dependent binary noisy bit-pipes. Then, to simplify coding over this VBC, the paper proposes coding schemes with varying levels of complexity, building on the capacity of binary-symmetric channels (BSC) and channels with state. The achievable rates are compared to capacity and capacity bounds, showing that coding for the BSC with state over the VBC achieves rates close to capacity at moderate to high signal-to-noise ratio (SNR), whereas simpler schemes achieve lower rates at lower complexity. The presented coding schemes are realizable using capacity-achieving codes for binary-input channels, such as polar codes. Numerical results are provided to validate the theoretical results and demonstrate the applicability of the proposed schemes.
... For instance, involutions have been used frequently in block cipher designs, in AES [13], Khazad [4], Anubis [3] and PRINCE [7]. Furthermore, low-cycle permutations (such as involutions) have been also used to construct Bent functions over finite fields [12,15] and to design codes [15]. In [8], behaviors of permutations of an affine equivalent class have been analyzed with respect to some cryptanalytic attacks, and it is shown that low-cycle permutations (such as involutions) are nice candidates against these attacks. ...
... By plugging Eqs. (14), (15), (16) and (17) into Eq. (21), one can arrive at ...
Article
$n$-cycle permutations with small $n$ have the advantage that their compositional inverses are efficient in terms of implementation. They can also be used in constructing Bent functions and designing codes. Since the AGW Criterion was proposed, the permuting property of several forms of polynomials has been studied. In this paper, characterizations of several types of $n$-cycle permutations are investigated. Three criteria for $n$-cycle permutations of the form $xh(\lambda(x))$, $h(\psi(x)) \varphi(x)+g(\psi(x))$ and $g\left( x^{q^i} -x +\delta \right) +bx$ with general $n$ are provided. We demonstrate these criteria by providing explicit constructions. For the form $x^rh(x^s)$, several new explicit triple-cycle permutations are also provided. Finally, we also consider triple-cycle permutations of the form $x^t + c\tr_{q^m/q}(x^s)$ and provide one explicit construction. Many of our constructions are new with respect to both the $n$-cycle property and the permutation property.
... In particular, the channel coding research was accelerated by the implementation of turbo iterative decoding [109], which strongly approaches Shannon's limit in terms of performance. The Low Density Parity Check (LDPC) codes developed by Gallager in 1962 [110], and rediscovered by MacKay and Neal towards the end of the 1990s [111], [112], performed well and attracted attention. Erdal Arikan brought further progress in 2008 with the introduction of polar codes [113]. ...
... [Figure: a timeline of channel coding milestones, including the Shannon capacity theorem, Hamming codes [106], classic convolutional codes [114], LDPC codes (Gallager) [110], the Viterbi algorithm [107], the decoding algorithm of Bahl et al. [115], classic trellis-coded modulation (Ungerboeck) [116], the Max-Log MAP (maximum a posteriori) algorithm (Koch) [117], space-time trellis codes (Tarokh) [118], the rediscovery of LDPC codes as a powerful code class [111], [112], the Berlekamp-Massey algorithm [119], [120], the Chase algorithm [121], linear block codes (Wolf) [122], the soft-input soft-output Chase algorithm (Pyndiah) [123], space-time block coding (Alamouti) [124], polar codes [113], classic concatenated codes (Forney) [108], turbo codes (Berrou et al.) [109], and turbo trellis-coded modulation (Robertson) [125]; the received signal is modeled as r(t) = s(t) + n(t).] ...
... In 2012, Leeson and Higgins were the first to propose error correction channel codes in DBMC systems using Hamming codes [202], [203]; these seminal studies were the starting point for Hamming codes in DBMC systems. In further research, comparing Hamming codes, Cyclic Reed Muller codes [237], and Euclidean Geometry Low Density Parity Check (EGLDPC) codes [110], Lu et al. [133] found that Hamming codes are well suited for DBMC systems with long distances [65]. Cheong demonstrated in 2020 that Hamming codes using soft values achieve a significantly lower BER than hard-valued Hamming codes [207]. ...
Article
Full-text available
Diffusion-based molecular nanonetworks exploit the diffusion of molecules, e.g., in free space or in blood vessels, for the purpose of communication. This article comprehensively surveys coding approaches for communication in diffusion-based molecular nanonetworks. In particular, all three main purposes of coding for communication, namely source coding, channel coding, and network coding, are covered. We organize the survey of the channel coding approaches according to the different channel codes, including linear block codes, convolutional codes, and inter-symbol interference (ISI) mitigation codes. The network coding studies are categorized into duplex network coding, physical-layer network coding, multi-hop nanonetwork coding, performance improvements of network-coded nanosystems, and network coding in mobile nanonetworks. We also present a comprehensive set of future research directions for the still nascent area of coding for diffusion-based molecular nanonetworks; specifically, we outline research imperatives for each of the three main coding purposes, i.e., for source, channel, and network coding, as well as for overarching research goals.
... VI] as a class of error correcting codes based on sparse matrices for nonuniform sources. They can be seen as multi-edge-type low-density parity-check (LDPC) codes [2], [3] defined by an extended bipartite graph with a set of variable nodes (VNs) associated with the source bits, and the remaining VNs associated with the codeword bits. LDPC code constructions closely related to MN codes were proposed in [4], [5] for joint source and channel coding. ...
... The same sequence is made available to the decoder. Considering either (2) or (3), and owing to the symmetry of the biAWGN channel, we observe that the presence of the scrambler is irrelevant to the analysis of the error probability, since the addition of z at the transmitter side can be compensated at the decoder by first computing b = zG, and then flipping the sign of the observations y_i for all i ∈ supp(b). The model of Figure 2 admits an equivalent model, provided in Figure 3: here, an i.i.d. ...
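The descrambling step described in this excerpt can be sketched in a few lines of Python: compute b = zG over GF(2) and flip the sign of every observation y_i with i ∈ supp(b). The generator matrix G, scrambling word z, and observations y below are arbitrary placeholders, not the construction of the cited preprint.

import numpy as np

rng = np.random.default_rng(1)
K, N = 3, 6
G = rng.integers(0, 2, size=(K, N))       # toy generator matrix (placeholder)
z = rng.integers(0, 2, size=K)            # scrambling word known to the decoder
y = rng.normal(size=N)                    # biAWGN channel observations

b = (z @ G) % 2                           # b = zG over GF(2)
y_descrambled = np.where(b == 1, -y, y)   # flip the sign of y_i for i in supp(b)
print(b, y_descrambled)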
Preprint
A class of rate-adaptive protograph MacKay-Neal (MN) codes is introduced and analyzed. The code construction employs an outer distribution matcher (DM) to adapt the rate of the scheme. The DM is coupled with an inner protograph-based low-density parity-check (LDPC) code, whose base matrix is optimized via density evolution analysis to approach the Shannon limit of the binary-input additive white Gaussian noise (biAWGN) channel over a given range of code rates. The density evolution analysis is complemented by finite-length simulations, and by a study of the error floor performance.
... V concludes this paper. Figure 2 shows an SRTR BPMR channel model, where a 3571-bit input sequence u_k ∈ {±1} with bit period T_x is encoded by a low-density parity-check (LDPC) code [15] and divided into two sequences, i.e., {x_{k,l}} and {x_{k,l+1}} for the k-th data bit on the l-th track, before being stored onto a staggered array medium as illustrated in Fig. 3. In this work, an LDPC code is obtained from a modified array code (MAC) with parameter (j, k, p) = (5, 39, 72). ...
... The SOVA detector and the LDPC decoder iteratively exchange soft information for N_G = 3 iterations before outputting the estimated user bits {û_k}. Note that the LDPC decoder is implemented based on a message-passing algorithm [15] with three iterations. ...
Article
Full-text available
One of the key problems facing the designer of data storage devices is how to handle the exponential rise in demand for information storage. To enhance the storage capacity of bit-patterned magnetic recording (BPMR) that stores one data bit in a single magnetic island, the spacing among bit islands, i.e., bit period and track pitch, must be reduced. However, the interference effects from both the across- and along-track directions are unavoidably increased, which leads to performance degradation. This paper proposes to utilize the proper spacing arrangement of magnetic islands for a single-reader two-track reading scheme to increase the staggered BPMR system’s storage capacity. We also study how the system performs for different distances between magnetic islands when track misregistration and media noise are present. Simulation results reveal that the staggered BPMR system with an appropriate magnetic island spacing can provide better performance than the traditional island placement, where the spacing between bit islands in the across- and along-track directions is the same.
... The 8 bits of the data word are in the remaining positions. Each parity bit is calculated as follows: P1 = XOR of bits (3, 5, 7, 9, 11); P2 = XOR of bits (3, 6, 7, 10, 11); P4 = XOR of bits (5, 6, 7, 12); P8 = XOR of bits (9, 10, 11, 12). Each parity bit is set so that the total number of 1's in the checked positions, including the parity bit, is always even. The 8-bit data word is written into the memory together with the 4 parity bits as a 12-bit composite word. ...
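A short Python sketch of the parity computation quoted above, assuming 1-based bit positions with parity bits at positions 1, 2, 4, and 8 and the 8 data bits in the remaining positions; the example data word is arbitrary.

from functools import reduce
from operator import xor

def hamming_12_8_encode(data_bits):
    """Encode 8 data bits into a 12-bit word with even parity, as described
    in the excerpt: parity bits at (1-based) positions 1, 2, 4, 8 and data
    bits filling positions 3, 5, 6, 7, 9, 10, 11, 12. Illustrative sketch."""
    assert len(data_bits) == 8
    word = [0] * 13                        # index 0 unused; positions 1..12
    data_positions = [3, 5, 6, 7, 9, 10, 11, 12]
    for pos, bit in zip(data_positions, data_bits):
        word[pos] = bit
    coverage = {1: [3, 5, 7, 9, 11],
                2: [3, 6, 7, 10, 11],
                4: [5, 6, 7, 12],
                8: [9, 10, 11, 12]}
    for p, covered in coverage.items():
        word[p] = reduce(xor, (word[i] for i in covered))   # even overall parity
    return word[1:]                        # the 12-bit composite word

print(hamming_12_8_encode([1, 0, 1, 1, 0, 0, 1, 0]))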
Article
Full-text available
Low-density parity-check (LDPC) codes have been shown to have good error-correcting performance approaching Shannon’s limit. Good error-correcting performance enables efficient and reliable communication. However, an LDPC decoding algorithm needs to be executed efficiently to meet the cost, time, power, and bandwidth requirements of target applications. Quasi-cyclic low-density parity-check (QC-LDPC) codes are an important subclass of LDPC codes that are known as one of the most effective error-controlling methods. Quasi-cyclic codes are known to possess some degree of regularity. Many important communication standards such as DVB-S2 and 802.16e use these codes. The proposed Optimized Min-Sum decoding algorithm performs very close to Sum-Product decoding while preserving the main features of Min-Sum decoding, that is, low complexity and independence with respect to noise variance estimation errors. The proposed decoder is well suited to VLSI implementation and will be implemented on a Xilinx FPGA family.
... A class of linear error-correcting codes, LDPC codes, was introduced in 1962 by Gallager [20]. An LDPC code is characterized by a sparse parity-check matrix H. WiMAX and DVB-S2 adopted LDPC codes. ...
... It is regular when H has an equal number of ones in each column, w_c, and an equal number of ones in each row, w_r [22]. An example of regular binary LDPC codes [21] is the original Gallager codes [20]. Although the H matrix is generally very large, the density of its nonzero elements is very low [12]. ...
Article
Full-text available
There are various challenges in underwater acoustic (UWA) communication; however, the bit error rate (BER) is considered the main challenge, as it significantly affects UWA communication. In this paper, different coding schemes such as convolutional, turbo, low-density parity-check (LDPC), and polar coding are investigated over a t-distribution noise channel, and binary phase-shift keying (BPSK) modulation with a code rate of 1/2 is considered in the evaluation and analyses. The evaluation of these channel coding schemes is performed in terms of BER, computational complexity, and latency. The results show that polar coding outperforms the other channel coding schemes in UWA communication, as it has a lower BER and lower computational complexity.
... It is known that low-density parity-check (LDPC) codes have capacity-approaching performance as channel codes [1,2]. As a consequence, LDPC codes have been widely used in modern communication standards and in industrial applications. ...
... where map_0[i], map_1[i] and q[i] represent the i-th element of map_0, map_1 and q, respectively. In Algorithm 1, the variables q and map are initialized from lines 1 to 6. ...
Article
Full-text available
It is challenging to design an efficient lossy compression scheme for complicated sources based on block codes, especially to approach the theoretical distortion-rate limit. In this paper, a lossy compression scheme is proposed for Gaussian and Laplacian sources. In this scheme, a new route using “transformation-quantization” was designed to replace the conventional “quantization-compression”. The proposed scheme utilizes neural networks for transformation and lossy protograph low-density parity-check codes for quantization. To ensure the system’s feasibility, some problems existing in the neural networks were resolved, including parameter updating and the propagation optimization. Simulation results demonstrated good distortion-rate performance.
... Different approaches exist to update the variable nodes, and to the best of our knowledge, no optimized codes or methods for the particular case of QKD have been studied or analyzed. Therefore, we selected one of the most prominent solutions, Gallager's algorithm [35], to demonstrate feasibility and applied it to (non-optimized) codes available for BP algorithms. ...
... • Finally, the algorithm computes which variable nodes have to be flipped. Here, we used Gallager's algorithm B [35] in our implementation, which basically compares the counts computed in step two against a threshold value to decide which bits are flipped. Although the threshold value for comparison is public, this comparison has to be carried out obliviously to protect the variable node state as well as the bit flip information. ...
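For intuition, here is a simplified hard-decision bit-flipping sketch in the spirit of Gallager's algorithm B as described above: each bit counts its unsatisfied checks and is flipped when that count reaches a threshold. The full algorithm B exchanges extrinsic per-edge messages, which this sketch omits; the matrix, threshold, and received word are toy placeholders, and nothing here models the oblivious comparison discussed in the excerpt.

import numpy as np

def bit_flip_decode(H, r, threshold=2, max_iters=20):
    """Simplified hard-decision bit flipping: flip every bit involved in
    at least `threshold` unsatisfied checks, then repeat."""
    x = r.copy()
    for _ in range(max_iters):
        syndrome = (H @ x) % 2
        if not syndrome.any():
            break                                  # all checks satisfied
        unsat_counts = H.T @ syndrome              # per-bit count of unsatisfied checks
        x = (x + (unsat_counts >= threshold)) % 2  # flip the offending bits
    return x

# Toy example: 3 x 6 parity-check matrix, all-zero codeword, one flipped bit.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
r = np.array([0, 0, 1, 0, 0, 0])                  # single channel error on bit 2
print(bit_flip_decode(H, r))                      # recovers the all-zero codeword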
Article
Full-text available
Quantum key distribution (QKD) has been researched for almost four decades and is currently making its way to commercial applications. However, deployment of the technology at scale is challenging because of the very particular nature of QKD and its physical limitations. Among other issues, QKD is computationally intensive in the post-processing phase, and devices are therefore complex and power hungry, which leads to problems in certain application scenarios. In this work, we study the possibility to offload computationally intensive parts in the QKD post-processing stack in a secure way to untrusted hardware. We show how error correction can be securely offloaded for discrete-variable QKD to a single untrusted server and that the same method cannot be used for long-distance continuous-variable QKD. Furthermore, we analyze possibilities for multi-server protocols to be used for error correction and privacy amplification. Even in cases where it is not possible to offload to an external server, being able to delegate computation to untrusted hardware components on the device itself could improve the cost and certification effort for device manufacturers.
... A (0, 1)-matrix is sparse if many of its elements are zero. There are two broad types of sparse matrices, structured and unstructured (see [17, Chapter 3]); they have many properties and are applied to different areas of mathematics, such as the theory of error-detecting and error-correcting codes, see [12], [11] and [13]. A matrix is of binomial order if its order is a product of two binomial numbers. ...
Preprint
In this article a family of recursive and self-similar matrices is constructed. It is shown that the Plücker matrix of the Isotropic Grassmannian variety is a direct sum of this class of matrices.
... Therefore, it can be realized relatively easily with a physical circuit. Furthermore, since Gallager first proposed LDPC codes in the 1960s [30], this class of classical codes has shown good performance approaching the channel capacity [31][32][33][34][35]. Subsequently, its quantum versions have been investigated [22,23]. However, the achievements in this field have been explored far less than their classical counterparts. ...
Article
Full-text available
An effective construction method for long-length quantum codes has important applications in fields based on large-scale data. With the rapid development of quantum computing, how to construct this class of quantum codes has become one of the key research fields in quantum information theory. Motivated by the block jacket matrix and its circulant permutation, we proposed a construction method for quantum quasi-cyclic (QC) codes with two classical codes. This simplifies the coding process for long-length quantum error-correction code (QECC) using number decomposition. The obtained code length N can achieve O(n²) if an appropriate prime number n is taken. Furthermore, with a suitable parameter in the construction method, the obtained codes have four cycles in their generator matrices and show good performance for low-density codes.
... Let v^(n) be the vector defined by a vector v and the sequence (s_1, s_2, . . ., s_n) through equation (1). Then the size of the DRS algorithm output for v^(n) depends only on the values n_1 and n_2. Proof: The proof is given in Appendix Section B1. Let a K × N matrix M = [u_1, u_2, . . .
Preprint
Full-text available
In this paper, we leverage polar codes and the well-established channel polarization to design capacity-achieving codes with a certain constraint on the weights of all the columns in the generator matrix (GM) while having a low-complexity decoding algorithm. We first show that given a binary-input memoryless symmetric (BMS) channel $W$ and a constant $s \in (0, 1]$, there exists a polarization kernel such that the corresponding polar code is capacity-achieving with the \textit{rate of polarization} $s/2$, and the GM column weights being bounded from above by $N^s$. To improve the sparsity versus error rate trade-off, we devise a column-splitting algorithm and two coding schemes for BEC and then for general BMS channels. The \textit{polar-based} codes generated by the two schemes inherit several fundamental properties of polar codes with the original $2 \times 2$ kernel including the decay in error probability, decoding complexity, and the capacity-achieving property. Furthermore, they demonstrate the additional property that their GM column weights are bounded from above sublinearly in $N$, while the original polar codes have some column weights that are linear in $N$. In particular, for any BEC and $\beta <0.5$, the existence of a sequence of capacity-achieving polar-based codes where all the GM column weights are bounded from above by $N^\lambda$ with $\lambda \approx 0.585$, and with the error probability bounded by $O(2^{-N^{\beta}} )$ under a decoder with complexity $O(N\log N)$, is shown. The existence of similar capacity-achieving polar-based codes with the same decoding complexity is shown for any BMS channel and $\beta <0.5$ with $\lambda \approx 0.631$.
... Unfortunately, interactive protocols require a high number of message exchanges [19,20], and worse still, they do not guarantee the complete elimination of errors. QKD also uses other reconciliation techniques developed in the field of telecommunication technologies, among which LDPC [21,22] stands out; however, its computational complexity is very demanding and it requires transmitting redundant information [23]. Consider the following two scenarios: ...
Article
Full-text available
In this work, we introduce a new method for the establishment of a symmetric secret key through the reconciliation process in QKD systems that, we claim, is immune to the error rate of the quantum channel and, therefore, has an efficiency of 100% since it does not present losses during the distillation of secret keys. Furthermore, the secret rate is scaled to the square of the number of pulses on the destination side. The method only requires a single data exchange from Bob over the classic channel. We affirmed that our results constitute a milestone in the field of QKD and error correction methods at a crucial moment in the development of classical and quantum cryptanalytic algorithms. We believe that the properties of our method can be evaluated directly since it does not require the use of complex formal-theoretical techniques. For this purpose, we provide a detailed description of the reconciliation algorithm. The strength of the method against PNS and IR attacks is discussed. Furthermore, we define a method to analyze the security of the reconciliation approach based on frames that are binary arrays of 2×2. As a result, we came to the conclusion that the conjugate approach can no longer be considered secure, while we came up with a way to increase the secret gain of the method with measured bits.
... To correct DNA channel errors, a modified concatenated watermark code scheme and a modified concatenated marker code scheme were proposed in [10,11], respectively. The former uses watermark codes as inner codes and low-density parity-check (LDPC) codes [12] as outer codes. The latter employs the marker codes as inner codes and LDPC codes as outer codes. ...
Article
Full-text available
Due to the rapid growth in the global volume of data, deoxyribonucleic acid (DNA) data storage has emerged. Error correction in DNA data storage is a key part of this storage technology. In this paper, an improved marker code scheme is proposed to correct insertion, deletion, and substitution errors in deoxyribonucleic acid (DNA) data storage. To correct synchronization (i.e., insertion and deletion) errors, a novel base-symbol-based synchronization algorithm is proposed and used. In the improved scheme, the marker bits are encoded as the information part of the LDPC code, and then mapped into marker bases to correct the synchronization errors. Thus marker bits not only assist in regaining synchronization, but also play a role in LDPC decoding to improve decoding performance. An improved low-complexity normalized min-sum (INMS) algorithm is proposed to correct residual substitution errors after regaining synchronization. The simulation results demonstrate that the improved scheme provides a substantial performance improvement over the concatenated marker code scheme and concatenated watermark code scheme. At the same time, the complexity of the INMS algorithm was reduced, while its bit error rate (BER) performance was approximate to that of the belief propagation (BP) algorithm.
... To further enhance the quality of these LLRs, we also utilize the MLP-based LLR estimator [3] to reproduce the improved version of the LLRs, λ''_k, which will be sent to the LDPC decoder to produce the estimated user bit, û_k, for the 1st global iteration (N_G = 1). Note that the LDPC decoder is implemented based on the message passing algorithm [15] with N_LDPC internal iterations. For the next global iteration, the LLR sequence {λ''_k} is fed back to the rate-3/5 soft encoder [3,14] to produce the soft information for the 1-D m-SOVA [14] detector. ...
Article
Full-text available
To expand the areal density of hard disk drives, bit-patterned magnetic recording (BPMR) using a patterned medium instead of a granular medium as employed in perpendicular magnetic recording is attracting much attention as the next-generation recording technology. To further increase the storage capacity of BPMR, multi-layer magnetic recording can be combined with BPMR. Therefore, this paper considers double-layer magnetic recording with a single-reader/two-track reading technique for the staggered BPMR system, which is performed together with a rate-3/5 modulation code. This paper proposes to utilize a multilayer perceptron decoder to decode and estimate the log-likelihood ratio value of the recorded bit sequence that is obtained from the equalized channel. Simulation results show that at a bit-error rate of 10⁻⁴, the proposed system with a double-layer recording medium can achieve an improvement gain of about 1.2 dB compared to that with a regular single-layer recording medium, even in the presence of media noise.
... By introducing randomness and sparsity in coding and propagating soft messages based on factor graphs in decoding, advanced probabilistic codes can approach or even achieve the Shannon limit. Among them, the most representative ECCs are Turbo codes [375], low-density parity-check (LDPC) codes [376], and polar codes [377], which are the standard codes for 4G data channels, 5G data channels, and 5G control channels, respectively. Though their de-facto decoding algorithms and implementations are different [303], they are all derived based on Bayes' theorem and competitive for 6G ultra-high speed and ultra-low power consumption requirements, which impel a unified decoding framework for complex and variable scenarios in 6G communication systems. ...
Preprint
Full-text available
Fifth generation (5G) mobile communication systems have entered the stage of commercial development, providing users with new services and improved user experiences as well as offering a host of novel opportunities to various industries. However, 5G still faces many challenges. To address these challenges, international industrial, academic, and standards organizations have commenced research on sixth generation (6G) wireless communication systems. A series of white papers and survey papers have been published, which aim to define 6G in terms of requirements, application scenarios, key technologies, etc. Although ITU-R has been working on the 6G vision and it is expected to reach a consensus on what 6G will be by mid-2023, the related global discussions are still wide open and the existing literature has identified numerous open issues. This paper first provides a comprehensive portrayal of the 6G vision, technical requirements, and application scenarios, covering the current common understanding of 6G. Then, a critical appraisal of the 6G network architecture and key technologies is presented. Furthermore, existing testbeds and advanced 6G verification platforms are detailed for the first time. In addition, future research directions and open challenges are identified for stimulating the on-going global debate. Finally, lessons learned to date concerning 6G networks are discussed.
... Path loss models the average power attenuation in V2X channels; it is deterministic and is a function of the three-dimensional (3D) distance d between the two vehicles in meters and the carrier frequency f_c in GHz. The determination of large-scale modelling for V2X has been of great concern. In [47], the authors studied the vehicle-induced path loss for millimetre-wave V2V communication. ...
Article
Full-text available
Channel coding is a fundamental procedure in wireless telecommunication systems and has a strong impact on the data transmission quality. This effect becomes more important when the transmission must be characterised by low latency and low bit error rate, as in the case of vehicle-to-everything (V2X) services. Thus, V2X services must use powerful and efficient coding schemes. In this paper, we thoroughly examine the performance of the most important channel coding schemes in V2X services. More specifically, the impact of the use of 4th-Generation Long-Term Evolution (4G-LTE) turbo codes, 5th-Generation New Radio (5G-NR) polar codes and low-density parity-check codes (LDPC) in V2X communication systems is researched. For this purpose, we employ stochastic propagation models that simulate the cases of line of sight (LOS), non-line of sight (NLOS) and line of sight with vehicle blockage (NLOSv) communication. Different communication scenarios are investigated in urban and highway environments using the 3rd-Generation Partnership Project (3GPP) parameters for the stochastic models. Based on these propagation models, we investigate the performance of the communication channels in terms of bit error rate (BER) and frame error rate (FER) performance for different levels of signal to noise ratio (SNR) for all the aforementioned coding schemes and three small V2X-compatible data frames. Our analysis shows that turbo-based coding schemes have better BER and FER performance than 5G coding schemes for the vast majority of the considered simulation scenarios. This fact, combined with the low-complexity requirements of turbo schemes for small data frames, makes them more suitable for small-frame 5G V2X services.
... Let us finally discuss decoding quantum codes using belief propagation; a standard approach for decoding classical codes in linear time [27]. In such decoders, prior information about the error model is updated by passing messages in the form of probability distributions, that correspond to the local error model, between adjacent code variables and nodes of the syndrome. ...
Article
Full-text available
Decoding algorithms are essential to fault-tolerant quantum-computing architectures. In this perspective we explore decoding algorithms for the surface code; a prototypical quantum low-density parity-check code that underlies many of the leading efforts to demonstrate scalable quantum computing. Central to our discussion is the minimum-weight perfect-matching decoder. The decoder works by exploiting underlying structure that arises due to materialised symmetries among surface-code stabilizer elements. By concentrating on these symmetries, we begin to address the question of how a minimum-weight perfect-matching decoder might be generalised for other families of codes. We approach this question first by investigating examples of matching decoders for other codes. These include decoding algorithms that have been specialised to correct for noise models that demonstrate a particular structure or bias with respect to certain codes. In addition to this, we propose a systematic way of constructing a minimum-weight perfect-matching decoder for codes with certain characteristic properties. The properties we make use of are common among topological codes. We discuss the broader applicability of the proposal, and we suggest some questions we can address that may show us how to design a generalised matching decoder for arbitrary stabilizer codes.
... The hard-input constraint due to the optical technology, which greatly reduces the information diversity in the decoding process, involves another algorithmic choice. The Gallager-B algorithm was initially proposed in [40] to process hard input values. However, its decoding performance is relatively poor due to the binary values used to represent the exchanged messages. ...
... In our scenario, the LDPC-encoded data passes through the UL-SCH and PUSCH channels, respectively. LDPC is a linear error-correcting code defined by a sparse parity-check matrix [32], and UL-SCH and PUSCH are the transport and physical channels, respectively. Then, the demodulation reference signal (DM-RS) is added to the NR grid. ...
Preprint
Offloading computationally heavy tasks from an unmanned aerial vehicle (UAV) to a remote server helps improve the battery life and can help reduce resource requirements. Deep learning based state-of-the-art computer vision tasks, such as object segmentation and object detection, are computationally heavy algorithms, requiring large memory and computing power. Many UAVs are using (pretrained) off-the-shelf versions of such algorithms. Offloading such power-hungry algorithms to a remote server could help UAVs save power significantly. However, deep learning based algorithms are susceptible to noise, and a wireless communication system, by its nature, introduces noise to the original signal. When the signal represents an image, noise affects the image. There has not been much work studying the effect of the noise introduced by the communication system on pretrained deep networks. In this work, we first analyze how reliable it is to offload deep learning based computer vision tasks (including both object segmentation and detection) by focusing on the effect of various parameters of a 5G wireless communication system on the transmitted image and demonstrate how the introduced noise of the used 5G wireless communication system reduces the performance of the offloaded deep learning task. Then solutions are introduced to eliminate (or reduce) the negative effect of the noise. The proposed framework starts with introducing many classical techniques as alternative solutions first, and then introduces a novel deep learning based solution to denoise the given noisy input image. The performance of various denoising algorithms on offloading both object segmentation and object detection tasks are compared. Our proposed deep transformer-based denoiser algorithm (NR-Net) yields the state-of-the-art results on reducing the negative effect of the noise in our experiments.
... It first computes the error vector used to create c_0 by e = Decode(c_0 h_0, h_0, h_1). Here Decode is a kind of bit-flipping decoder [Gal62]. The choice of decoder is a trade-off between efficiency and failure probability. ...
Article
Full-text available
Well before large-scale quantum computers will be available, traditional cryptosystems must be transitioned to post-quantum (PQ) secure schemes. The NIST PQC competition aims to standardize suitable cryptographic schemes. Candidates are evaluated not only on their formal security strengths, but are also judged based on their resistance against side-channel attacks. Although round 3 candidates have already been intensively vetted with regard to such attacks, one important attack vector has hitherto been missed: PQ schemes often rely on rejection sampling techniques to obtain pseudorandomness from a specific distribution. In this paper, we reveal that rejection sampling routines that are seeded with secret-dependent information and leak timing information result in practical key recovery attacks in the code-based key encapsulation mechanisms HQC and BIKE. Both HQC and BIKE have been selected as alternate candidates in the third round of the NIST competition, which puts them on track for getting standardized separately from the finalists. They have already been specifically hardened with constant-time decoders to avoid side-channel attacks. However, in this paper, we show novel timing vulnerabilities in both schemes: (1) Our secret key recovery attack on HQC requires only approx. 866,000 idealized decapsulation timing oracle queries in the 128-bit security setting. It is structurally different from previously identified attacks on the scheme: Previously, exploitable side-channel leakages have been identified in the BCH decoder of a previously submitted HQC version, in the ciphertext check as well as in the pseudorandom function of the Fujisaki-Okamoto transformation. In contrast, our attack uses the fact that the rejection sampling routine invoked during the deterministic re-encryption of the decapsulation leaks secret-dependent timing information, which can be efficiently exploited to recover the secret key when HQC is instantiated with the (now constant-time) BCH decoder, as well as with the RMRS decoder of the current submission. (2) From the timing information of the constant weight word sampler in the BIKE decapsulation, we demonstrate how to distinguish whether the decoding step is successful or not, and how this distinguisher is then used in the framework of the GJS attack to derive the distance spectrum of the secret key, using 5.8 × 10⁷ idealized timing oracle queries. We provide details and analyses of the fully implemented attacks, as well as a discussion on possible countermeasures and their limits.
... The computer user expects that the computer system is reliable, with accurate computing and no errors in communication and data management. Therefore, to prevent errors in the computing system, several error detection and error correction techniques have been proposed, such as checksums, forward error correction, and parity bits [62,63,64]. However, this desire for perfection might be abstract, and the cost of these additional techniques is too high and sometimes not affordable. ...
Thesis
Over the years, the System-on-Chip (SoC) has evolved from a single processor in a chip to multi/many processors in chips containing billions of transistors. With the evolution of SoC, new research topics have arisen on the interconnect between processors in a chip. The Network-on-Chip (NoC) has been proposed as a solution for more dynamic communication links to connect a large number of Intellectual Property (IP) blocks. Secondly, to overcome the drawbacks of the electrical NoC, the optical communication link has been proposed as a promising solution. This type of NoC provides low latency and high bandwidth, but it suffers from low power efficiency. In this thesis, we address this topic and aim to develop techniques to manage laser power consumption. To address this challenge, we exploit the approximation concept and apply it to the ONoC to propose two types of communications: approximate and accurate communications. Our proposal is applied to Floating-Point (FP) numbers by using low-power optical signals for the LSBs, at the cost of a higher error rate. These approximate optical signals allow a drastic reduction in the laser power consumption. In parallel, to ensure the communication accuracy on the MSBs, high-power signals are retained for the corresponding laser power levels. Simulation results demonstrate that a reduction of up to 42% in laser power can be obtained for the Streamcluster application with a limited degradation at the application level. Furthermore, we propose to manage the communications according to the distance between source and destination cores. However, this fine-grain distance-aware management could be too costly, so we propose a low-overhead distance-aware technique based on only two distance classes, Short/Long. The results of our evaluation show a drastic laser power reduction of 20%, for example, for the Streamcluster application.
Article
We evaluate the burst-error performance of the regular low-density parity-check (LDPC) code and the irregular LDPC code that has been considered for ITU-T’s 50G-PON standard via experimental measurements in FPGA. By using intra codeword interleaving and parity-check matrix rearrangement, we demonstrate that the BER performance can be improved under ∼44-ns-duration burst errors for 50-Gb/s upstream signals.
Article
Full-text available
Since their rediscovery in the early 1990s, low-density parity-check (LDPC) codes have become the most popular error-correcting codes owing to their excellent performance. An LDPC code is a linear block code that has a sparse parity-check matrix. Cycles in this matrix, particularly short cycles, degrade the performance of such a code. Hence, several methods for counting short cycles in LDPC codes have been proposed, such as Fan's method to detect 4-cycles, 6-cycles, 8-cycles, and 10-cycles. Unfortunately, this method fails to count all 6-cycles, i.e., it ignores numerous 6-cycles, in some given parity-check matrices. In this paper, an improvement of this algorithm is presented that detects all 6-cycles in LDPC codes, as well as in general bipartite graphs. Simulations confirm that the improved method offers the exact number of 6-cycles, and it succeeds in detecting those ignored by Fan's method.
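For intuition about cycle counting, the following Python sketch counts 4-cycles directly from a parity-check matrix: two columns that share 1's in t ≥ 2 rows contribute C(t, 2) 4-cycles. Counting 6-cycles, as in Fan's method and the improvement described above, requires tracking longer closed walks and is not attempted in this toy example.

import numpy as np
from math import comb

def count_4_cycles(H):
    """Count 4-cycles in the Tanner graph of H: every pair of columns that
    overlaps in t >= 2 rows contributes C(t, 2) distinct 4-cycles."""
    overlaps = H.T @ H            # overlaps[i, j] = # rows where columns i and j both have a 1
    n = H.shape[1]
    return sum(comb(int(overlaps[i, j]), 2)
               for i in range(n) for j in range(i + 1, n))

# Toy matrix: columns 0 and 1 share rows 0 and 1, giving exactly one 4-cycle.
H = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 0, 1, 1]])
print(count_4_cycles(H))          # 1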
Article
We study the problem of spectrum-blind sampling, that is, sampling signals with sparse spectra whose frequency support is unknown. The minimum sampling rate for this class of signals has been established as twice the measure of its frequency support; however, constructive sampling schemes that achieve this minimum rate are not known to exist, to the best of our knowledge. We propose a novel constructive sampling framework by leveraging a mix of tools from modern coding theory, which has been largely untapped in the field of sampling. We make interesting connections between the fundamental problem of spectrum-blind sampling, and that of designing erasure-correcting codes based on sparse graphs, which have both theoretically and practically revolutionized the design of modern communication systems. Our key idea is to cleverly exploit, rather than avoid, the resulting aliasing artifacts induced by subsampling, which introduce linear mixing of spectral components in the form of parity constraints for sparse-graph codes. We achieve this by subsampling the input signal after filtering it using a carefully designed ‘sparse-graph coded filter-bank’ structure, where the pass-band on/off patterns of the filters are designed to match the parity-check constraints of a sparse-graph code. We show that the signal reconstruction under this sampling scheme is equivalent to the fast message-passing based “peeling” decoding of sparse-graph codes for reliable transmission over erasure channels. Most importantly, we further show that the achievable sampling rate is determined by the rate of the sparse-graph codes used in the filter bank. As a result, based on insights derived from the design of capacity-achieving sparse-graph codes (such as Low Density Parity Check codes), we can simultaneously approach the minimum sampling rate for spectrum-blind sampling and low computational complexity based on fast peeling-based decoding with operations per unit of time scaling linearly with the sampling rate.
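The "peeling" decoding that the abstract refers to can be sketched for the plain erasure-channel setting as follows: repeatedly find a parity check with exactly one erased participant and solve for that bit. The filter-bank and sampling machinery of the paper is not modeled; the parity-check matrix and received word below are toy placeholders.

import numpy as np

def peel_erasures(H, y):
    """Peeling decoder over an erasure channel: y holds 0/1 for known bits
    and None for erasures.  Each round, any check with a single erased
    neighbour determines that bit by even parity; repeat until stuck or done."""
    y = list(y)
    progress = True
    while progress and any(b is None for b in y):
        progress = False
        for row in H:
            idx = np.flatnonzero(row)
            erased = [i for i in idx if y[i] is None]
            if len(erased) == 1:                       # degree-one check: solvable
                known_sum = sum(y[i] for i in idx if y[i] is not None) % 2
                y[erased[0]] = known_sum               # parity of the check must be 0
                progress = True
    return y

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
# Transmit the all-zero codeword; bits 1 and 5 are erased by the channel.
print(peel_erasures(H, [0, None, 0, 0, 0, None]))      # [0, 0, 0, 0, 0, 0]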
Article
In the matter of channel coding and spectral efficiency, up to the invention of turbo codes, 3 dB or more stood between what the theory promised and what real systems were able to offer. This gap has now been removed, allowing communication systems to be designed with quasi-optimum performance. Ten years after the first publication on this new technique, turbo codes have commenced their practical service.
Article
Full-text available
Due to higher integration densities, technology scaling, and variation in parameters, performance failures may occur in any application. Memory applications are also prone to single-event upsets and transient errors, which may lead to malfunctions. This paper proposes a novel error detection and correction method using EG-LDPC codes. This is useful because majority-logic decoding can be implemented serially with simple hardware but requires a large decoding time; for memory applications, this increases the memory access time. The method detects whether a word has errors in the first iterations of majority-logic decoding, and when there are no errors the decoding ends without completing the rest of the iterations. Also, errors affecting more than five bits were detected with a probability very close to one. The probability of undetected errors was also found to decrease as the code block length increased. For a billion error patterns only a few errors (or sometimes none) were undetected, which may be sufficient for some applications. Errors commonly occur in Flash memory while employing LDPC decoding. The SRMMU actually suggests using the VTVI design by introducing the Context Number register; however, a PTPI or VTPI design that complies with the SRMMU standard could also be implemented. The VTVI design with a physical write buffer and a combined I/D Cache TLB is the simplest design to implement. This will give error correction in a minimum cyclic period using the LDPC method.
Article
The error correction of information reconciliation affects the performance of continuous-variable quantum key distribution (CV-QKD). Polar codes can be strictly proven to reach the Shannon limit. However, due to insufficient polarization at finite code lengths, some subchannels are neither completely noise-free nor completely noisy. In this paper, an intermediate-channel low-density parity-check code concatenated polar code (IC-LDPC Polar code)-based reconciliation for CV-QKD is proposed to address the above shortcomings. The experimental results show that the reconciliation efficiency of the IC-LDPC Polar code can be over 98% when the signal-to-noise ratio is from −13.2 dB to −20.8 dB, the secret keys can be extracted, and the minimum frame error rate (FER) is 0.19. Therefore, the proposed scheme can improve the reconciliation efficiency and reduce the FER at a very low signal-to-noise ratio range, and it is more useful for a practical long-distance CV-QKD system.
Article
Full-text available
This paper discusses the results of simulations relating to the performance of turbo codes, low-density parity-check (LDPC) codes, and polar codes over an additive white Gaussian noise (AWGN) channel in the presence of inter-symbol interference, denoting the disturbances that alter the original signal. To eliminate the negative effects of inter-symbol interference (ISI), an equalizer was used at the receiver. In practice, two types of equalizers were used: zero forcing (ZF) and minimum mean square error (MMSE), considering the case of perfect channel estimation and the case of estimation using the least squares algorithm. The performance measure used was the bit error rate as a function of the signal-to-noise ratio; in this sense, the MMSE equalizer offered a higher performance than the ZF equalizer. The aspect of channel equalization considered here is not novel, but there have been very few works that dealt with equalization in the context of the use of turbo codes, and especially LDPC codes and polar codes, for channel coding.
Article
The success of deep learning has encouraged its application to decoding error-correcting codes, e.g., LDPC decoding. In this paper, we propose a model-driven deep learning method for normalized min-sum (NMS) low-density parity-check (LDPC) decoding, namely the neural NMS (NNMS) LDPC decoding network. By unfolding the iterative decoding progress between check nodes (CNs) and variable nodes (VNs) into a feed-forward propagation network, we can harvest the benefits of both model-driven deep learning and the conventional normalized min-sum (NMS) LDPC decoding method. In addition, we propose a shared-parameter NNMS with LeakyReLU and a 12-bit quantizer (SNNMS-LR-Q), which reduces the number of required multipliers and correction factors by sharing parameters and increases the nonlinear fitting ability by adding LeakyReLU. By utilizing the 12-bit quantizer, we can improve the robustness. Thorough experiments with different code lengths, code rates, channel conditions, and check matrices are implemented to demonstrate the advantages and robustness of our proposed networks. The BER performance of the proposed NNMS is 1.5 dB better than that of the NMS, using fewer iterations. Meanwhile, the SNNMS-LR-Q outperforms the NNMS regarding BER performance and efficiency.
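For reference, a small Python sketch of the conventional normalized min-sum check-node update that the proposed network unfolds: each outgoing message is the product of the signs of the other incoming LLRs times the minimum of their magnitudes, scaled by a normalization factor alpha. The learned, layer-dependent parameters of the NNMS/SNNMS-LR-Q networks are not modeled here; alpha = 0.8 is an arbitrary illustrative value.

import numpy as np

def nms_check_node_update(llrs_in, alpha=0.8):
    """Normalized min-sum check-node update.  llrs_in are the LLR messages
    arriving at one check node; the i-th output excludes the i-th input
    (extrinsic rule): sign product times minimum magnitude, scaled by alpha."""
    llrs_in = np.asarray(llrs_in, dtype=float)
    out = np.empty_like(llrs_in)
    for i in range(len(llrs_in)):
        others = np.delete(llrs_in, i)
        sign = np.prod(np.sign(others))
        out[i] = alpha * sign * np.min(np.abs(others))
    return out

print(nms_check_node_update([2.0, -0.5, 1.5]))   # [-0.4, 1.2, -0.4]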
Conference Paper
Orthogonal frequency division multiplexing (OFDM) is a well-known scheme for high data rate wireless transmission. OFDM may be combined with antenna arrays at the transmitter and receiver to increase the diversity gain and to enhance the system capacity on time-variant and frequency-selective channels, resulting in a multiple-input multiple-output (MIMO) configuration. This paper surveys various physical layer research challenges in MIMO-OFDM system design, including physical channel measurements and modeling, analog beamforming techniques using adaptive antenna arrays, space-time techniques for MIMO-OFDM, error control coding techniques, OFDM preamble and packet design, and signal processing algorithms used for performing time and frequency synchronization, channel estimation, and channel tracking in MIMO-OFDM systems. Finally, the paper considers a software radio implementation of MIMO-OFDM.
Article
The 5G wireless network will take place after 4G and will create many new issues. Some problems, such as communication with a low BER and overall performance, might become severe issues. In this thesis, coding methods are proposed in order to decrease the signal loss during the transmission of data. Also, the LDPC system is explained in order to obtain good results with a lower bit error rate under 5G standards, by comparing it with systems such as LDPC, convolutional, and turbo code systems. Finally, a framework is designed which combines LDPC codes with polar codes in order to improve information transmission efficiency.
Article
Full-text available
In this work, a novel decoding algorithm named “Reliability Ratio Weighted Bit Flipping–Sum Product” (RRWBFSP) is proposed for regular LDPC codes. “Sum Product” [4] and “Reliability Ratio Weighted Bit Flipping” [6] are two separate methods that are combined in the new algorithm. The simulations show the novel algorithm to exceed the Sum-Product decoding algorithm by 0.34 dB. In addition, when compared to the Sum-Product, the RRWBFSP approach has about the same computational complexity. Thus, LDPC codes, of which the “Double-Orthogonal Convolutional Recursive” (RCDO) subfamily is envisioned for use in electronic hard disks and mobile terminals, can be easily iteratively decoded. This would have the effect of prolonging the life of the batteries and consequently reducing the ecological footprint of discarded batteries.
Conference Paper
Full-text available
Low-Density Parity-Check (LDPC) codes have been widely used for Forward Error Correction (FEC) in wireless networks, since they can approach the capacity of wireless links with light-weight encoding complexity. Although LoRa networks have been deployed for many applications, they still adopt a simple FEC code, Hamming codes, which provides limited FEC capacity, causing unreliable data transmissions and high energy consumption of LoRa nodes. To close this gap, this paper develops \ourSystem, which realizes LDPC coding in LoRa networks. Three challenges are addressed. 1) LoRa employs Chirp Spreading Spectrum (CSS) modulation, which only provides a hard demodulation result without any soft information; however, LDPC needs the Log Likelihood Ratio (LLR) of every received bit for decoding. We develop an LLR extractor for CSS modulation in LoRa. 2) Some erroneous bits may have high LLRs (i.e., wrongly confident in their correctness), which significantly impact the LDPC decoding efficiency. We use symbol-level information to fine-tune the LLRs of some bits for improving the LDPC decoding efficiency. 3) Soft Belief Propagation (SBP) is normally used as the LDPC decoding algorithm. It involves heavy iterative computation, resulting in a long decoding latency, which makes the gateway unable to send an acknowledgment on time. We leverage the recent advance of graph neural networks for fast belief propagation in LDPC decoding. Extensive simulations on a large-scale synthetic dataset and in-field experiments reveal that \ourSystem can extend the lifetime of default LoRa by 86.9%, and reduce the decoding latency of the SBP algorithm by 58.09×.
Preprint
We study the dynamic behavior of frameless ALOHA, both in terms of throughput and age of information (AoI). In particular, differently from previous studies, our analysis accounts for the fact that the number of terminals contending the channel may vary over time, as a function of the duration of the previous contention period. The stability of the protocol is analyzed via a drift analysis, which allows us to determine the presence of stable and unstable equilibrium points. We also provide an exact characterization of the AoI performance, through which we determine the impact of some key protocol parameters, such as the maximum length of the contention period, on the average AoI. Specifically, we show that configurations of parameters that maximize the throughput may result in a degradation of the AoI performance.
Article
A class of binary signaling alphabets called “group alphabets” is described. The alphabets are generalizations of Hamming's error correcting codes and possess the following special features: (1) all letters are treated alike in transmission; (2) the encoding is simple to instrument; (3) maximum likelihood detection is relatively simple to instrument; and (4) in certain practical cases there exist no better alphabets. A compilation is given of group alphabets of length equal to or less than 10 binary digits.
Article
In this paper we will develop certain extensions and refinements of coding theory for noisy communication channels. First, a refinement of the argument based on “random” coding will be used to obtain an upper bound on the probability of error for an optimal code in the memoryless finite discrete channel. Next, an equation is obtained for the capacity of a finite state channel when the state can be calculated at both transmitting and receiving terminals. An analysis is also made of the more complex case where the state is calculable at the transmitting point but not necessarily at the receiving point.
Article
A generalization of Hamming's single error correcting codes is given along with a simple maximum likelihood detection scheme. For small redundancy these alphabets are unexcelled. The Reed-Muller alphabets are described as parity check alphabets and a new detection scheme is presented for them.
Article
In the customary methods of transmitting binary data the receiver, as a result of the decision process made on each transmitted pulse, prints out one of two symbols. Schemes are considered here in which the receiver prints out one of three symbols (single-null zone reception) or one of four symbols (double-null zone reception). These extra symbols permit the receiver to indicate when the a posteriori probabilities of the two transmitted states are nearly equal. Single-null zone reception is shown to be capable, under optimum conditions, of achieving about one-half of the improvement in information rate theoretically attainable by increasing the number of receiver levels without limit. Double-null reception, which splits the null zone and thereby retains polarity information, offers only a slight additional increase in rate. It affords a significant advantage over single-null reception, though, because it is much less sensitive to variations in null level.
Coding for two noisy channels
  • P Elias
P. Elias, "Coding for two noisy channels," in Information Theory (C. Cherry, Ed.), 3rd London Symposium, September 1955. London, England: Butterworths Scientific Publications, 1956.