Article

Error propagation assessment of enumerative coding schemes

Abstract

Enumerative coding is an attractive algorithmic procedure for translating long source words into codewords and vice versa. The use of long codewords makes it possible to approach a code rate as close as desired to Shannon's noiseless capacity of the constrained channel. Enumerative encoding is, however, prone to massive error propagation, as a single bit error can ruin entire decoded words. This contribution evaluates the effects of error propagation in the enumerative coding of runlength-limited sequences.
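As a minimal sketch of the mechanism described in the abstract (assuming a toy constraint in which no two adjacent ones may appear; all function names are illustrative, not taken from the paper), the following Python snippet encodes an integer enumeratively and shows how a single flipped channel bit shifts the decoded value by the weight of the affected position.

```python
# Minimal sketch (not the paper's scheme): enumerative coding of binary words
# with no two adjacent ones.  The weight of a position is the number of
# admissible continuations, so decoding is a plain inner product and a single
# flipped channel bit shifts the decoded integer by that position's weight.

def weights(n):
    """w[i] = number of admissible words of length i (a Fibonacci recursion)."""
    w = [1, 2]
    while len(w) <= n:
        w.append(w[-1] + w[-2])
    return w

def encode(index, n):
    """Map 0 <= index < weights(n)[n] to the index-th admissible word of length n."""
    w = weights(n)
    bits, prev = [], 0
    for i in range(n - 1, -1, -1):      # i = length of the remaining suffix
        if prev == 0 and index >= w[i]:
            bits.append(1)
            index -= w[i]
        else:
            bits.append(0)
        prev = bits[-1]
    return bits

def decode(bits):
    """Inner-product decoder: sum the weights of the positions holding a one."""
    n = len(bits)
    w = weights(n)
    return sum(w[n - 1 - i] for i, b in enumerate(bits) if b)

n = 25
x = encode(123456, n)
print(decode(x))          # 123456
y = list(x)
y[3] ^= 1                 # a single channel bit error ...
print(decode(y))          # ... shifts the decoded value by the weight of position 3
```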

... The difference in rate between the two techniques, 64/68 versus 64/72, affects the magnitude of the noise variance. The rate effect is insignificant in this case and is therefore ignored in the simulations presented in Figure 2. We notice that Schalkwijk's prior art enumeration scheme shows severe error propagation, a phenomenon that has been reported in the literature [21]. ...
... The Kautz-type enumeration scheme is a base-conversion method in which the binary codeword is represented in a positional numeral system whose weight coefficients do not equal the usual powers of 2. Let the weight coefficients be denoted by w_i, i = 1, ..., n. The encoder translates the binary m-bit input word into an n-bit codeword x = (x_1, ..., x_n) in such a way that the inner product x_1 w_1 + ... + x_n w_n equals the value of the input word. The weight coefficients are equal to the number of (d,k)-constrained sequences of length i, and each coefficient requires a number of bits roughly proportional to its index for its representation [14]. The decoder forms the binary representation of the inner product r_1 w_1 + ... + r_n w_n, where r_i, i = 1, ..., n, denote the received channel bits. ...
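The excerpt above describes the weights as counts of (d,k)-constrained sequences and the decoder as a plain inner product. A minimal sketch of both ideas, under one possible counting convention for the (d,k) constraint (the word is treated as if preceded by exactly d zeros; the paper's exact convention may differ, and the function names are ours), could look as follows.

```python
# Sketch of the weight-coefficient idea under one convention for the (d,k)
# constraint: runs of zeros between ones have length between d and k, and the
# word is treated as if preceded by exactly d zeros.

def dk_weights(d, k, n_max):
    """w[i] = number of admissible words of length i under this convention."""
    state = [0] * (k + 1)   # state s = number of zeros written since the last one
    state[d] = 1            # start as if preceded by d zeros
    w = [1]                 # the empty word
    for _ in range(n_max):
        nxt = [0] * (k + 1)
        for s, c in enumerate(state):
            if c:
                if s >= d:          # a one may be written
                    nxt[0] += c
                if s < k:           # a zero may be written
                    nxt[s + 1] += c
        state = nxt
        w.append(sum(state))
    return w

def decode(received_bits, w):
    """Decoder from the excerpt: the inner product of the received bits and weights."""
    n = len(received_bits)
    return sum(w[n - 1 - i] for i, r in enumerate(received_bits) if r)

w = dk_weights(d=2, k=15, n_max=16)
print(w)   # the weights grow roughly as 2**(C*i), with C ~ 0.55 for (d,k) = (2,15)
```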
Article
In this paper, we will present coding techniques for the character-constrained channel, where information is conveyed using q-bit characters (nibbles), and where w prescribed characters are disallowed. Using codes for the character-constrained channel, we present simple and systematic constructions of high-rate binary maximum runlength constrained codes. The new constructions have the virtue that large lookup tables for encoding and decoding are not required. We will compare the error propagation performance of codes based on the new construction with that of prior art codes.
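As a rough back-of-the-envelope companion to this abstract (not the paper's construction; the parameters and names below are ours), the snippet computes the rate obtainable when w of the 2^q q-bit characters are disallowed and user data are mapped block-wise onto the remaining characters.

```python
from math import floor, log2

# Hypothetical illustration: user bits per channel bit when a block of m q-bit
# characters may use only the 2**q - w admissible characters.
def char_constrained_rate(q, w, m):
    admissible = 2 ** q - w
    payload_bits = floor(m * log2(admissible))   # integer number of user bits per block
    return payload_bits / (m * q)

# 4-bit characters (nibbles) with one disallowed character:
for m in (1, 4, 16, 64):
    print(m, round(char_constrained_rate(q=4, w=1, m=m), 4))
# the rate approaches log2(15)/4 ~ 0.977 as the block length m grows
```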
Book
Full-text available
Since the early 1980s we have witnessed the digital audio and video revolution: the Compact Disc (CD) has become a commodity audio system, and CD-ROM and DVD-ROM have become the de facto standard for the storage of large computer programs and files. Growing fast in popularity are the digital audio and video recording systems called DVD and Blu-ray Disc. The above mass storage products, which form the backbone of the modern electronic entertainment industry, would have been impossible without the use of advanced coding systems.

Pulse Code Modulation (PCM) is a process in which an analog audio or video signal is encoded into a digital bit stream: the analog signal is sampled, quantized, and finally encoded into a bit stream. The origins of digital audio can be traced as far back as 1937, when Alec H. Reeves, a British scientist, invented pulse code modulation. The advantages of digital audio and video recording have been known and appreciated for a long time. The principal advantage that digital implementation confers over analog systems is that in a well-engineered digital recording system the sole significant degradation takes place at the initial digitization, and the quality lasts until the point of ultimate failure. In an analog system, quality is diminished at each stage of signal processing and the number of recording generations is limited; the quality of analog recordings, like the proverbial 'old soldier', just fades away. The advent of ever-cheaper and faster digital circuitry has made feasible the creation of high-end digital video and audio recorders, an impracticable possibility using previous generations of conventional analog hardware.

The general subject of coding for digital recorders is very broad, with its roots deep set in history. In digital recording (and transmission) systems, channel encoding is employed to improve the efficiency and reliability of the channel. Channel coding is commonly accomplished in two successive steps: (a) an error-correction code followed by (b) a recording (or modulation) code. Error-correction control is realized by adding extra symbols to the conveyed message; these extra symbols make it possible for the receiver to correct errors that may occur in the received message. In the second coding step, the input data are translated into a sequence with special properties that comply with the given "physical nature" of the recorder.

Of course, it is very difficult to define precisely the area of recording codes, and it is even more difficult to be in any sense comprehensive. The special attributes that the recorded sequences should have to render them compatible with the physical characteristics of the available transmission channel are called channel constraints. For instance, in optical recording a '1' is recorded as a pit and a '0' is recorded as land. For physical reasons, the pits or lands should be neither too long nor too short. Thus, one records only those messages that satisfy a run-length-limited constraint. This requires the construction of a code which translates arbitrary source data into sequences that obey the given constraints. Many commercial recorder products, such as Compact Disc and DVD, use an RLL code.

The main part of this book is concerned with the theoretical and practical aspects of coding techniques intended to improve the reliability and efficiency of mass recording systems as a whole. The successful operation of any recording code is crucially dependent upon specific properties of the various subsystems of the recorder. There are no techniques, other than experimental ones, available to assess the suitability of a specific coding technique. It is therefore not possible to provide a cookbook approach for the selection of the 'best' recording code.

In this book, theory has been blended with practice to show how theoretical principles are applied to design encoders and decoders. The practitioner's view will predominate: we shall not be content with proving that a particular code exists and ignore the practical detail that the decoder complexity is only a billion times more complex than the largest existing computer. The ultimate goal of all work, application, is never once lost from sight. Much effort has gone into the presentation of advanced topics such as in-depth treatments of code design techniques, hardware consequences, and applications. The list of references (including many US patents) has been made as complete as possible, and suggestions for 'further reading' have been included for those who wish to pursue specific topics in more detail.

The decision to update Coding Techniques for Digital Recorders, published by Prentice-Hall (UK) in 1991, was made in Singapore during my stay in the winter of 1998. The principal reason for this decision was that during the last ten years or so we have witnessed a success story of coding for constrained channels. The topic of this book, once the province of industrial research, has become an active research field in academia as well. During the IEEE International Symposium on Information Theory (ISIT) and the IEEE International Conference on Communications (ICC), for example, there are now usually three sessions entirely devoted to aspects of constrained coding. As a result, very exciting new material, in the form of (conference) articles and theses, has become available, and an update became a necessity. The author is indebted to the Institute for Experimental Mathematics, University of Duisburg-Essen, Germany, the Data Storage Institute (DSI) and National University of Singapore (NUS), both in Singapore, and Princeton University, US, for the opportunity offered to write this book. Among the many people who helped me with this project, I would like to thank Dr. Ludo Tolhuizen, Philips Research Eindhoven, for reading and providing useful comments and additions to the manuscript.

Preface to the Second Edition. About five years after the publication of the first edition, it was felt that an update of this text would be inescapable, as so many relevant publications, including patents and survey papers, have been published. The author's principal aim in writing the second edition is to add the newly published coding methods and discuss them in the context of the prior art. As a result, about 150 new references, including many patents and patent applications, most of them less than five years old, have been added to the former list of references. Fortunately, the US Patent Office now follows the European Patent Office in publishing a patent application eighteen months after its first filing, and this policy clearly adds to the rapid access to this important part of the technical literature. I am grateful to the many readers who have helped me to correct (clerical) errors in the first edition and also to those who brought new and exciting material to my attention. I have tried to correct every error that I found or that was brought to my attention by attentive readers, and seriously tried to avoid introducing new errors in the second edition.

China is becoming a major player in the construction, design, and basic research of electronic storage systems. A Chinese translation of the first edition was published in early 2004. The author is indebted to Prof. Xu, Tsinghua University, Beijing, for taking the initiative for this Chinese version, and also to Mr. Zhijun Lei, Tsinghua University, for undertaking the arduous task of translating this book from English to Chinese. Clearly, this translation makes it possible for a billion more people to have access to it. Kees A. Schouhamer Immink, Rotterdam, November 2004
Conference Paper
We present techniques for reducing error propagation in modulation encoded data. Error propagation is reduced by using Fibonacci modulation codes that have limited span at certain positions. Errors occurring in bit positions of an encoded sequence that correspond to the limited span elements do not propagate beyond the span of those elements in the decoded sequence. Simulation results show that the proposed variable span modulation codes yield improved sector error rates when used instead of fixed span codes in magnetic recording systems.
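A rough illustration of why the choice of weight coefficients governs error propagation, and hence why limited-span weights are attractive (this is not the paper's construction; the values and names below are ours): flipping one code bit changes the decoded integer by that bit's weight, and the number of decoded user bits that change depends on how the resulting carry ripples through the binary representation.

```python
# Rough illustration (not the paper's construction): how many decoded user bits
# change when a single code bit is flipped, for a power-of-two weight versus a
# Fibonacci-type weight.  The flipped bit adds (or removes) its weight, and the
# resulting carry may ripple through many bits of the decoded value.

def fib_weights(n):
    w = [1, 2]
    while len(w) < n:
        w.append(w[-1] + w[-2])
    return w

def changed_bits(value, weight):
    """Number of bits of the binary representation that differ after adding 'weight'."""
    return bin(value ^ (value + weight)).count("1")

value = 0b1011011101101110101            # an arbitrary decoded source value
print(changed_bits(value, 2 ** 10))              # power-of-two weight: one bit changes here
print(changed_bits(value, fib_weights(20)[10]))  # Fibonacci weight 144: several bits change
```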
Article
Full-text available
We discuss experimental results of a versatile nonbinary modulation and channel code appropriate for two-dimensional page-oriented holographic memories. An enumerative permutation code is used to provide a modulation code that permits a simple maximum-likelihood detection scheme. Experimental results from the IBM Demon testbed are used to characterize the performance and feasibility of the proposed modulation and channel codes. A reverse coding technique is introduced to combat the effects of error propagation on the modulation-code performance. We find experimentally that level-3 pixels achieve the best practical result, offering an 11-35% improvement in capacity and a 12% increase in readout rate as compared with local binary thresholding techniques.
Article
Volume holographic memories (VHM) are page-oriented optical storage systems whose pages commonly contain on the order of one million pixels. Typically, each stored data page is composed of an equal number of binary pixels in either a low-contrast (“off”) state or a high-contrast (“on”) state. By increasing the number of “off” pixels and decreasing the number of “on” pixels per page, an associated gain in VHM system storage capacity is obtained. When grayscale pixels are used, a further gain is possible by similarly controlling the fraction of pixels at each gray level. This paper introduces a constant-weight, nonbinary, shortened enumerative permutation modulation block code to produce pages that exploit the proposed capacity advantage. In addition to the code description, we present an encoder and a low-complexity maximum-likelihood (ML) decoder for the shortened permutation code. A proof verifies our claim of ML decoding. Applying this class of code to VHMs predicts a 49% increase in storage capacity when recording modulation-coded 3-bit (eight gray level) pixels compared with a VHM using a binary signaling alphabet and equiprobable (unbiased) data.
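As a hypothetical numerical companion to this abstract (the parameters below are ours, not the paper's), the snippet computes the user bits per pixel carried by a constant-weight binary page, showing the per-page payload cost of biasing a page towards "off" pixels; the capacity gain discussed above arises at the system level, from being able to store more such pages.

```python
from math import comb, log2

# Hypothetical numbers: user bits per pixel carried by a constant-weight binary
# page of N pixels with exactly w 'on' pixels, versus 1 bit/pixel for unbiased data.
def bits_per_pixel(N, w):
    return log2(comb(N, w)) / N

N = 256
for frac in (0.5, 0.25, 0.125):
    print(frac, round(bits_per_pixel(N, int(frac * N)), 3))
# the per-page payload drops as the page is biased towards 'off' pixels; the
# capacity gain described in the abstract arises because more such pages can be
# stored, which outweighs the per-page loss
```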
Conference Paper
The problem of enumerative coding was first considered by Cover (1973). When coding words of length n, Cover's method has an encoding and decoding speed equal to O(n) as n→∞. We propose a code with a higher speed, O(log² n · log log n), as n→∞. This code is close to the author's previous method (see IEEE Trans. Inf. Theory, vol. 30, no. 1, p. 98, 1994).
Article
A special case with binary sequences was presented at the IEEE 1969 International Symposium on Information Theory in a paper titled “Run-Length-Limited Codes.”
Article
Combined equalization and coding approaches which significantly outperform previous techniques are presented for the binary Lorentzian channel with additive Gaussian noise. The authors develop a technique based on the concatenation of standard trellis codes with an equalization code and a block decision feedback equalizer (BDFE). Signal sets for the trellis code are generated by partitioning BDFE output vectors according to four- and eight-dimensional lattices. They also investigate the combination of a decision feedback equalizer (DFE) and a convolutional code (CC) and find that this system provides theoretical coding gains from 1 to 3 dB in the high linear recording density range of 2 ≤ pw50/T ≤ 3. Although the BDFE with the trellis code system does not perform as well as the DFE with CC system at high densities, it does produce substantial coding gains at low linear recording densities.
Article
The problem of trellis coding for multilevel baseband transmission over partial response channels with transfer polynomials of the form (1 ± D^N) is addressed. The novel method presented here accounts for the channel memory by using multidimensional signal sets and partitioning the signal set present at the noiseless channel output. It is shown that this coding technique can be viewed as a generalization of a well-known procedure for binary signaling: the concatenation of convolutional codes and inner block codes that are tuned to the channel polynomial. It results in high coding gains with moderate complexity if some bandwidth expansion is accepted.
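As a tiny illustration of the channel model named in this abstract (our own example values), the snippet below computes the noiseless output of a (1 - D^N) partial-response channel for a multilevel input sequence.

```python
# Tiny illustration: noiseless output of a (1 - D^N) partial-response channel
# for a multilevel input sequence, here with N = 2.
def partial_response(x, N=2, sign=-1):
    # y_t = x_t + sign * x_{t-N}, with x_t = 0 for t < 0
    return [xt + sign * (x[t - N] if t >= N else 0) for t, xt in enumerate(x)]

print(partial_response([1, 3, 1, 1, 3, 1], N=2))   # [1, 3, 0, -2, 2, 0]
```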
Article
H. Imai and S. Hirakawa proposed (1977) a multilevel coding method based on binary block codes that admits a staged decoding procedure. The author extends the coding method to coset codes and shows how to calculate minimum squared distance and path multiplicity in terms of the norms and multiplicities of the different cosets. The multilevel structure allows the redundancy in the coset selection procedure to be allocated efficiently among the different levels. It also allows the use of suboptimal multistage decoding procedures that have performance/complexity advantages over maximum-likelihood decoding.
Article
Let S be a given subset of binary n-sequences. We provide an explicit scheme for calculating the index of any sequence in S according to its position in the lexicographic ordering of S. A simple inverse algorithm is also given. Particularly nice formulas arise when S is the set of all n-sequences of weight k and also when S is the set of all sequences having a given empirical Markov property. Schalkwijk and Lynch have investigated the former case. The envisioned use of this indexing scheme is to transmit or store the index rather than the sequence, thus resulting in a data compression of (log |S|)/n.
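A minimal sketch of the special case S = all binary n-sequences of weight k (the Schalkwijk/Lynch case mentioned above); the function names are ours.

```python
from math import comb

# Sketch of lexicographic enumeration for S = all binary n-sequences of weight k,
# indexed in lexicographic order with 0 < 1.

def index_of(bits):
    """Lexicographic index of a constant-weight sequence within S."""
    n, remaining, idx = len(bits), sum(bits), 0
    for i, b in enumerate(bits):
        if b:
            # all sequences with the same prefix and a zero here come first
            idx += comb(n - i - 1, remaining)
            remaining -= 1
    return idx

def sequence_of(idx, n, k):
    """Inverse mapping: the idx-th sequence of length n and weight k."""
    bits = []
    for i in range(n):
        c = comb(n - i - 1, k)
        if k > 0 and idx >= c:
            bits.append(1)
            idx -= c
            k -= 1
        else:
            bits.append(0)
    return bits

x = [0, 1, 1, 0, 1, 0, 0, 1]                 # n = 8, weight k = 4
i = index_of(x)
print(i, sequence_of(i, n=8, k=4) == x)      # 28 True (round-trip check)
```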
Article
A new family of codes is described for representing serial binary data, subject to constraints on the maximum separation between successive changes in value (0 → 1, 1 → 0, or both), or between successive like digits (0's, 1's, or both). These codes have application to the recording or transmission of digital data without an accompanying clock. In such cases, the clock must be regenerated during reading (receiving, decoding), and its accuracy controlled directly from the data itself. The codes developed for this type of synchronization are shown to be optimal, and to require a very small amount of redundancy. Their encoders and decoders are not unreasonably complex, and they can be easily extended to include simple error detection or correction for almost the same additional cost as is required for arbitrary data.
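As a toy illustration of the constraint this abstract describes (the function and values are ours), the snippet checks that a word contains no run of identical symbols longer than some maximum, so that transitions occur often enough to regenerate the clock.

```python
# Toy check of the synchronization constraint: no run of identical symbols
# longer than some maximum, so transitions occur often enough for clocking.
def max_run(bits):
    longest = run = 1
    for a, b in zip(bits, bits[1:]):
        run = run + 1 if a == b else 1
        longest = max(longest, run)
    return longest

word = [0, 1, 1, 0, 0, 0, 1, 0, 1, 1]
print(max_run(word) <= 3)   # True: at most three successive like digits
```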
Article
A new coding technique is proposed that translates user information into a constrained sequence using very long codewords. Huge error propagation resulting from the use of long codewords is avoided by reversing the conventional hierarchy of the error control code and the constrained code. The new technique is exemplified by focusing on (d, k)-constrained codes. A storage-effective enumerative encoding scheme is proposed for translating user data into long dk sequences and vice versa. For dk runlength-limited codes, estimates are given of the relationship between coding efficiency versus encoder and decoder complexity. We show that for most common d, k values, a code rate of less than 0.5% below channel capacity can be obtained by using hardware mainly consisting of a ROM lookup table of size 1 kbyte. For selected values of d and k, the size of the lookup table is much smaller. The paper is concluded by an illustrative numerical example of a rate 256/466, (d=2, k=15) code, which provides a serviceable 10% increase in rate with respect to its traditional rate 1/2, (2, 7) counterpart
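As a rough numerical cross-check of the figures quoted in this abstract (using our own convention for counting (d,k)-constrained words, which may differ from the paper's), the snippet below estimates the capacities of the (2,7) and (2,15) constraints by power iteration on the runlength state diagram and compares them with the quoted rates 1/2 and 256/466.

```python
from math import log2

# Rough numerical cross-check: capacity of the (d,k) constraint estimated by
# power iteration on the runlength state diagram, compared with the quoted rates.

def capacity(d, k, iterations=2000):
    state = [0.0] * (k + 1)    # state s = zeros written since the last one
    state[d] = 1.0
    rate = 0.0
    for _ in range(iterations):
        nxt = [0.0] * (k + 1)
        for s, c in enumerate(state):
            if s >= d:                 # a one may be written
                nxt[0] += c
            if s < k:                  # a zero may be written
                nxt[s + 1] += c
        total = sum(nxt)
        rate = log2(total)             # converges to log2 of the largest eigenvalue
        state = [c / total for c in nxt]
    return rate

print(round(capacity(2, 7), 4), round(capacity(2, 15), 4))   # ~0.517 and ~0.55
print(round(256 / 466, 4))          # the example code rate, ~0.5494
print(round((256 / 466) / 0.5, 3))  # ~1.099: the quoted ~10% gain over the rate-1/2 code
```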
Article
Traditional schemes for encoding and decoding runlength-constrained sequences using the enumeration principle require two sets of weighting coefficients. A new enumeration is presented requiring only one set of coefficients.