Article

The Art of Combining Distance-Enhancing Constrained Codes with Parity-Check Codes for Data Storage Channels


Abstract

A general and systematic code design methodology is proposed to efficiently combine constrained codes with parity-check (PC) codes for data storage channels. The proposed constrained PC code includes two component codes: the normal constrained (NC) code and the parity-related constrained (PRC) code. The NC code can be any distance-enhancing constrained code, such as the maximum transition run (MTR) code or the repeated minimum transition runlength (RMTR) code. The PRC code can be any linear binary PC code. The constrained PC codes can be designed in either non-return-to-zero-inverse (NRZI) format or non-return-to-zero (NRZ) format. The rates of the designed codes are only a few tenths of a percent below the theoretical maximum. The proposed code design method makes soft information available to the PC decoder and thereby facilitates soft decoding of PC codes. Furthermore, since errors are corrected equally well over the entire constrained PC codeword, error propagation due to parity bits is avoided. Efficient finite-state encoding methods are proposed to design capacity-approaching constrained codes and constrained PC codes with the RMTR or MTR constraint. The generality and efficiency of the proposed code design methodology are shown by various code design examples for both magnetic and optical recording channels.
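To make the two formats concrete: in NRZI notation a 1 marks a transition, the NRZ waveform follows by 1/(1+D) precoding, and a linear PC code adds parity over the constrained bits. The sketch below is illustrative only; it is not the paper's NC/PRC construction, and all names are ours.

```python
# Illustrative sketch only -- not the paper's NC/PRC construction.
def nrzi_to_nrz(nrzi_bits, init=0):
    """1/(1+D) precoding: each NRZI '1' toggles the NRZ level."""
    out, level = [], init
    for a in nrzi_bits:
        level ^= a
        out.append(level)
    return out

def even_parity(bits):
    """Single parity bit making the number of 1s even
    (the simplest linear binary PC code)."""
    p = 0
    for b in bits:
        p ^= b
    return p

# The same constrained word in both recording formats.
word = [0, 1, 0, 0, 1, 1, 0]   # NRZI format
nrz = nrzi_to_nrz(word)        # NRZ format: [0, 1, 1, 1, 0, 1, 1]
p = even_parity(word)          # parity bit: 1
```

Whether the parity is computed over the NRZI or the NRZ representation is exactly the design choice the abstract's NRZI/NRZ distinction refers to.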

Book
Full-text available
Preface to the Second Edition

About five years after the publication of the first edition, it was felt that an update of this text would be inescapable, as so many relevant publications, including patents and survey papers, have been published. The author's principal aim in writing the second edition is to add the newly published coding methods and discuss them in the context of the prior art. As a result, about 150 new references, including many patents and patent applications, most of them less than five years old, have been added to the former list of references. Fortunately, the US Patent Office now follows the European Patent Office in publishing a patent application eighteen months after its first filing, and this policy clearly adds to the rapid access to this important part of the technical literature.

I am grateful to many readers who have helped me correct (clerical) errors in the first edition, and also to those who brought new and exciting material to my attention. I have tried to correct every error that I found or that was brought to my attention by attentive readers, and have seriously tried to avoid introducing new errors in the second edition.

China is becoming a major player in the construction, design, and basic research of electronic storage systems. A Chinese translation of the first edition was published in early 2004. The author is indebted to Prof. Xu, Tsinghua University, Beijing, for taking the initiative for this Chinese version, and also to Mr. Zhijun Lei, Tsinghua University, for undertaking the arduous task of translating this book from English to Chinese. Clearly, this translation makes it possible that a billion more people will now have access to it.

Kees A. Schouhamer Immink
Rotterdam, November 2004
Article
Full-text available
Runlength-limited (RLL) codes, generically designated as (d, k) RLL codes, have been widely and successfully applied in modern magnetic and optical recording systems. The design of codes for optical recording is essentially the design of combined dc-free and runlength limited (DCRLL) codes. We will discuss the development of very efficient DCRLL codes, which can be used in upcoming generations of high-density optical recording products.
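In NRZI notation, a (d, k) RLL sequence has at least d and at most k 0s between consecutive 1s. A minimal constraint checker (our own sketch, not taken from the paper):

```python
def satisfies_dk(bits, d, k):
    """Check the (d, k) RLL constraint in NRZI notation: at least d and
    at most k zeros between consecutive 1s, and no zero-run longer
    than k anywhere in the sequence."""
    run = 0          # length of the current run of 0s
    seen_one = False
    for b in bits:
        if b == 1:
            if seen_one and run < d:
                return False   # transitions too close together
            run = 0
            seen_one = True
        else:
            run += 1
            if run > k:
                return False   # too long without a transition
    return True
```

For example, `satisfies_dk([1, 0, 0, 1, 0, 0, 0, 1], 2, 7)` holds, while `[1, 1, 0]` violates d = 1.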
Conference Paper
Full-text available
Efficient implementations of the sum-product algorithm (SPA) are presented for decoding low-density parity-check (LDPC) codes using log-likelihood ratios (LLRs) as messages between symbol and parity-check nodes. Various reduced-complexity derivatives of the LLR-SPA are proposed. Both serial and parallel implementations are investigated, leading to trellis and tree topologies, respectively. Furthermore, by exploiting the inherent robustness of LLRs, it is shown via simulations that coarse quantization tables are sufficient to implement complex core operations with negligible or no loss in performance. The unified treatment of decoding techniques for LDPC codes presented here provides flexibility in selecting the appropriate design point for high-speed applications from a performance, latency, and computational complexity perspective.
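The core check-node operation of the LLR-SPA, and its min-sum simplification, can be sketched in the standard textbook form (shown here for illustration; the correction terms are exactly what coarse lookup tables approximate):

```python
import math

def box_plus(l1, l2):
    """Exact pairwise check-node update (Jacobian form):
    sign(l1)*sign(l2)*min(|l1|,|l2|) plus two correction terms."""
    s = math.copysign(1.0, l1) * math.copysign(1.0, l2)
    return (s * min(abs(l1), abs(l2))
            + math.log1p(math.exp(-abs(l1 + l2)))
            - math.log1p(math.exp(-abs(l1 - l2))))

def min_sum(l1, l2):
    """Reduced-complexity approximation: drop the correction terms."""
    return (math.copysign(1.0, l1) * math.copysign(1.0, l2)
            * min(abs(l1), abs(l2)))
```

A check node of degree greater than three applies the operation pairwise, which is where the serial (trellis) versus parallel (tree) topologies of the abstract come from.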
Article
Full-text available
During the past few years, significant progress has been made in defining high-capacity constraints which prohibit specified differences between constrained sequences, thus ensuring that the minimum distance between them is larger than for the uncoded system. However, different constraints which avoid the same prescribed set of differences may have different capacities, and codes into such constraints may have different encoder/decoder complexity and different performance on more realistic channel models. These issues, which have to be considered in applications of distance-enhancing codes, are discussed here. We define several distance-enhancing constraints which support the design of high-rate codes. We also define weak constraints, for which the minimum distance between sequences may be the same as for the uncoded system but the number of pairs of sequences at the minimum distance is smaller. These constraints support the design of even higher-rate codes. We discuss the implementation issues of both types of constraints as well as their performance on the ideal channel and on channels with colored noise and intertrack interference.
Article
Full-text available
Ideas which have origins in C. E. Shannon's work in information theory have arisen independently in a mathematical discipline called symbolic dynamics. These ideas have been refined and developed in recent years to a point where they yield general algorithms for constructing practical coding schemes with engineering applications. In this work we prove an extension of a coding theorem of B. Marcus and trace a line of mathematics from abstract topological dynamics to concrete logic network diagrams.
Article
Full-text available
Runlength-limited (RLL) codes are used in magnetic recording. The error patterns that occur in peak detection magnetic recording systems when using a runlength-limited code consist of both symmetric errors and shift errors, which we refer to collectively as mixed-type errors. A method of providing error control for mixed-type errors in a runlength-limited code comprised of (d, k) constrained sequences is examined. The coding scheme is to choose parity blocks to insert in the constrained information sequence; the parity blocks are chosen to satisfy the constraints and to provide some error control. The cases of single error detection and single error correction are investigated, where the single error is allowed to be a shift error or a symmetric error. Bounds are discussed on the possible lengths of the parity blocks. It is shown that the single error-detection codes are the best possible in terms of the length of the parity blocks.
Article
We have developed a new error correction method (Picket: a combination of a long distance code (LDC) and a burst indicator subcode (BIS)), a new channel modulation scheme (17PP, or (1, 7) RLL parity preserve (PP)-prohibit repeated minimum transition runlength (RMTR) in full), and a new address format (zoned constant angular velocity (ZCAV) with headers and wobble, and practically constant linear density) for a digital video recording system (DVR) using a phase-change disc with 9.2 GB capacity, with the use of a red (λ = 650 nm) laser and an objective lens with a numerical aperture (NA) of 0.85 in combination with a thin cover layer. Despite its high density, this new format is highly reliable and efficient. When extended for use with blue-violet (λ ≈ 405 nm) diode lasers, the format is well suited to be the basis of a third-generation optical recording system with over 22 GB capacity on a single layer of a 12-cm-diameter disc.
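The RMTR constraint mentioned above limits how many minimum-length runs (2T runs in 17PP) may occur back to back; in 17PP at most six consecutive 2T runs are allowed. A toy checker over a sequence of NRZ runlengths (our illustration; the parameter values are the 17PP ones, assumed for the example):

```python
def satisfies_rmtr(runlengths, min_run=2, r=6):
    """RMTR check on a sequence of NRZ runlengths (in channel bits):
    at most r consecutive runs of the minimum length min_run.
    Defaults follow the 17PP code (2T runs, at most six in a row)."""
    streak = 0
    for rl in runlengths:
        if rl == min_run:
            streak += 1
            if streak > r:
                return False   # too many minimum-length runs in a row
        else:
            streak = 0
    return True
```

Long trains of 2T runs are the hardest patterns for the detector to resolve, which is why bounding their repetition enhances distance.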
Article
A novel constrained parity-check code and post-processor are proposed for advanced blue laser disk systems. Simulation results with the Blu-ray Disc show that an increase of 5 GB in capacity can be achieved over the standard system.
Article
A new code is presented which improves the minimum distance properties of sequence detectors operating at high linear densities. This code, called the maximum transition run (MTR) code, eliminates data patterns producing three or more consecutive transitions while imposing the usual k-constraint necessary for timing recovery. The code possesses a distance-gaining property similar to that of the (1, k) code, but can be implemented with considerably higher rates. Bit error rate simulations on fixed-delay tree search with decision feedback and on high-order partial-response maximum-likelihood detectors confirm large coding gains over the conventional (0, k) code.
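In NRZI notation, eliminating three or more consecutive transitions means allowing at most two consecutive 1s; together with the k-constraint on 0-runs, an MTR checker can be sketched as (our illustration):

```python
def satisfies_mtr(nrzi_bits, j=2, k=7):
    """MTR(j; k) check in NRZI notation: at most j consecutive 1s
    (i.e. at most j consecutive transitions) and at most k
    consecutive 0s (the timing-recovery constraint)."""
    ones = zeros = 0
    for b in nrzi_bits:
        if b:
            ones += 1
            zeros = 0
            if ones > j:
                return False   # three or more consecutive transitions
        else:
            zeros += 1
            ones = 0
            if zeros > k:
                return False   # too long without a transition
    return True
```

For example, `[1, 1, 0, 1, 0]` passes, while `[0, 1, 1, 1, 0]` contains three consecutive transitions and fails.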
Conference Paper
Efficient combination of a modulation code with a parity-check code is studied for magnetic recording systems. A new approach to the design of combined modulation/parity codes that largely retains the properties of the original modulation code is proposed. It is based on the matrix transformation of a set of desired parity-check equations at the partial-response channel input into a set of parity-check equations at the input of the precoder. The code design methodology is illustrated by constructing a rate-96/104 dual-parity code that satisfies maximum transition run constraints. Simulation results for a Lorentzian recording channel show that this code significantly outperforms a single-parity code for channels dominated by electronics noise. Moreover, the rate-96/104 dual-parity code, which has been used extensively in commercial disk drives, performs as well as a single-parity code in stationary/nonstationary data-dependent noise conditions. Finally, the low-average-transition-density constraint is proposed to enhance error-rate performance in channels dominated by transition noise.
Conference Paper
We derive performance bounds on bit error rates and error event probabilities for optical recording channels with d = 1 constraint. The bounds account for the use of various parity codes. They serve as benchmarks for the development of parity codes and post-processing schemes. Computer simulations have been carried out to demonstrate the accuracy of the proposed bounds and to evaluate the performance of various parity codes.
Conference Paper
We investigate the performance of single and multiple parity codes in magnetic recording systems. We evaluate the codes with respect to bit error rate as well as error correction code (ECC) failure rate. While multiple parity codes outperform the single parity code in bit error rate, their performance is worse with respect to the ECC failure rate
Article
A low-density parity-check code is a code specified by a parity-check matrix with the following properties: each column contains a small fixed number j ≥ 3 of 1s and each row contains a small fixed number k > j of 1s. The typical minimum distance of these codes increases linearly with block length for a fixed rate and fixed j. When used with maximum-likelihood decoding on a sufficiently quiet binary-input symmetric channel, the typical probability of decoding error decreases exponentially with block length for a fixed rate and fixed j. A simple but nonoptimum decoding scheme operating directly from the channel a posteriori probabilities is described. Both the equipment complexity and the data-handling capacity in bits per second of this decoder increase approximately linearly with block length. For j > 3 and a sufficiently low rate, the probability of error using this decoder on a binary symmetric channel is shown to decrease at least exponentially with a root of the block length. Some experimental results show that the actual probability of decoding error is much smaller than this theoretical bound.
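The defining regularity of these matrices, and the syndrome test a decoder starts from, are easy to state in code. Toy example only: the 4×8 matrix below has column weight 2 rather than Gallager's j ≥ 3, purely to keep it small.

```python
def is_regular_ldpc(H, j, k):
    """Check Gallager regularity: every column of H has weight j,
    every row has weight k."""
    rows_ok = all(sum(row) == k for row in H)
    cols_ok = all(sum(row[c] for row in H) == j for c in range(len(H[0])))
    return rows_ok and cols_ok

def syndrome(H, c):
    """Parity checks of candidate word c over GF(2);
    an all-zero syndrome means c is a codeword."""
    return [sum(h * x for h, x in zip(row, c)) % 2 for row in H]

# A (2, 4)-regular toy parity-check matrix.
H = [[1, 1, 1, 1, 0, 0, 0, 0],
     [0, 0, 0, 0, 1, 1, 1, 1],
     [1, 0, 1, 0, 1, 0, 1, 0],
     [0, 1, 0, 1, 0, 1, 0, 1]]
```

The all-ones word satisfies every check here (each row has even weight), while a single-bit error lights up exactly the j checks covering that column, which is the sparsity the iterative decoder exploits.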
Article
The authors provide a self-contained exposition of modulation code design methods based upon the state splitting algorithm. They review the necessary background on finite state transition diagrams, constrained systems, and Shannon (1948) capacity. The state splitting algorithm for constructing finite state encoders is presented and summarized in a step-by-step fashion. These encoders automatically have state-dependent decoders. It is shown that for the class of finite-type constrained systems, the encoders constructed can be made to have sliding-block decoders. The authors consider practical techniques for reducing the number of encoder states as well as the size of the sliding-block decoder window. They discuss the class of almost-finite-type systems and state the general results which yield noncatastrophic encoders. The techniques are applied to the design of several codes of interest in digital data recording
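The rate such encoders can approach is the Shannon capacity of the constrained system: log2 of the largest eigenvalue of the constraint graph's adjacency matrix. A sketch for the (d, k) RLL graph using plain power iteration (our illustration, stdlib only):

```python
import math

def rll_capacity(d, k, iters=200):
    """Shannon capacity of the (d, k) RLL constraint: log2 of the
    largest eigenvalue of the constraint graph's adjacency matrix,
    found here by power iteration. State i tracks the current 0-run."""
    n = k + 1
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        if i < k:
            A[i][i + 1] = 1    # emit a 0, extending the run
        if i >= d:
            A[i][0] = 1        # emit a 1, resetting the run
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w)
        v = [x / lam for x in w]
    return math.log2(lam)
```

This reproduces the familiar values, e.g. about 0.6793 for (1, 7) and 0.5174 for (2, 7), which is why rate-2/3 and rate-1/2 codes are the practical targets for those constraints.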
Article
We propose a system for magnetic recording channels using a novel structured low-density parity-check (LDPC) code, called a random interleaved array (RIA) code, combined with a reversed maximum transition run code/error-correction code. Simulation results show that the RIA code provides a significantly better error-rate floor than the conventionally structured LDPC code when applied to perpendicular magnetic recording channels. They also show that the system can achieve a 1.2-dB gain at a sector error rate of 10⁻⁵ compared with a PRML system combined with a high-rate runlength-limited code.
Article
We propose a novel approach to modulation and error control coding. The idea is to completely eliminate a constrained encoder and, instead, impose the constraint by the deliberate introduction of bit errors before transmission. The redundancy that would have been used for imposing the constraint is used in our scheme to strengthen the error control code (ECC), in such a way that the ECC becomes capable of correcting both deliberate errors as well as channel errors that occur during the detection. Our ECC-modulation scheme is based on iterative decoding of low-density parity check codes and a run-length constraint.
Article
The performance of magnetic recording systems that include conventional modulation codes combined with multiple parity bits is studied. Various performance measures, including bit error rate at the output of the inverse precoder, byte error probability at the input of the Reed-Solomon (RS) decoder, and sector error rate, are used to evaluate various coding/detection schemes. Suboptimum detection/decoding schemes consisting of a 16-state noise-predictive maximum-likelihood (NPML) detector followed by parity-based noise-predictive post-processing, and maximum-likelihood sequence detection/decoding on the combined channel/parity trellis, are considered. For conventional modulation codes, it is shown that although the dual-parity post-processor gains 0.5 dB over the single-parity post-processor in terms of bit- and byte-error-rate performance, the sector-error-rate performance of both schemes is almost the same. Furthermore, the sector-error-rate performance of optimum 64-state combined channel/parity detection for the dual-parity code is shown to be approximately 0.1 dB better than that of optimum 32-state combined channel/parity detection for the single-parity code. These performance gains can be even more substantial if appropriate coding techniques that eliminate certain error events and minimize error burst length, or multiparity codes in conjunction with combined parity/channel detection, are used.
Article
The general problem of estimating the a posteriori probabilities of the states and transitions of a Markov source observed through a discrete memoryless channel is considered. The decoding of linear block and convolutional codes to minimize symbol error probability is shown to be a special case of this problem. An optimal decoding algorithm is derived.
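The heart of the algorithm is the forward-backward recursion over the source's states. A minimal probability-domain sketch for a toy two-state Markov source observed through a memoryless channel (illustrative only, without the trellis bookkeeping of the full algorithm):

```python
def forward_backward(obs, trans, emit, init):
    """A posteriori state probabilities of a Markov source observed
    through a discrete memoryless channel.
    trans[r][s]: transition probability r -> s; emit[s][o]: probability
    of observing symbol o in state s; init[s]: initial distribution."""
    n, S = len(obs), len(init)
    alpha = [[0.0] * S for _ in range(n)]   # forward probabilities
    beta = [[1.0] * S for _ in range(n)]    # backward probabilities
    for s in range(S):
        alpha[0][s] = init[s] * emit[s][obs[0]]
    for t in range(1, n):
        for s in range(S):
            alpha[t][s] = emit[s][obs[t]] * sum(
                alpha[t - 1][r] * trans[r][s] for r in range(S))
    for t in range(n - 2, -1, -1):
        for s in range(S):
            beta[t][s] = sum(trans[s][r] * emit[r][obs[t + 1]] * beta[t + 1][r]
                             for r in range(S))
    post = []
    for t in range(n):
        w = [alpha[t][s] * beta[t][s] for s in range(S)]
        z = sum(w)
        post.append([x / z for x in w])
    return post
```

For long blocks the recursions would need per-step normalization (or the log domain) to avoid underflow; the sketch omits this for clarity.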
Article
The authors obtain general lower bounds on the number of states in any encoder for a given constrained system and rate. Lower bounds on the number of states are exhibited for a fixed-rate finite-state encoder that maps unconstrained n-ary sequences into a given set of constrained sequences defined by a finite labeled graph G. In particular, one simple lower bound is given by min_x max_v x_v, where x = (x_v) ranges over certain (nonnegative integer) approximate eigenvectors of the adjacency matrix of G. In some sense, the bounds are close to what can be realized by the state splitting algorithm, and in some cases they are shown to be tight. In particular, these bounds are used to show that the smallest (in number of states) known encoders for the
RMTR constrained parity-check codes for high-density blue laser disk systems
K. Cai, K.A.S. Immink, S. Zhang, Z. Qin, and X. Zou, "RMTR constrained parity-check codes for high-density blue laser disk systems," in Tech. Dig. Int. Symp. Optical Media and Optical Data Storage (ISOM/ODS), Hawaii, USA, Jul. 2008, paper MoA3.
Error floors of LDPC codes
T. Richardson, "Error floors of LDPC codes," in Proc. 41st Allerton Conf., Oct. 2003.
Application of distance enhancing codes
E. Soljanin and A.J. van Wijngaarden, "Application of distance enhancing codes," IEEE Trans. Magn., vol. 37, no. 2, pp. 762-767, Mar. 2001.