
Abstract

This paper proposes a general and systematic code design method to efficiently combine constrained codes with parity-check (PC) codes for optical recording. The proposed constrained PC code includes two component codes: the normal constrained (NC) code and the parity-related constrained (PRC) code. They are designed based on the same finite state machine (FSM). The rates of the designed codes are only a few tenths of a percent below the theoretical maximum. The PC constraint is defined by the generator matrix (or generator polynomial) of a linear binary PC code, which can detect any type of dominant error event or error event combination of the system. Error propagation due to parity bits is avoided, since both component codes are protected by PCs. Two approaches are proposed to design the code in the non-return-to-zero-inverse (NRZI) format and the non-return-to-zero (NRZ) format, respectively. Designing the codes in NRZ format may reduce the number of parity bits required for error detection and simplify post-processing for error correction. Examples of several newly designed codes are illustrated. Simulation results with Blu-ray Disc (BD) systems show that the new d = 1 constrained 4-bit PC code significantly outperforms the rate-2/3 code without parity, at both nominal density and high density.
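As a rough illustration of the detection property described above (a PC constraint defined by a generator polynomial such that every dominant error event leaves a nonzero syndrome), the following sketch checks a list of assumed dominant NRZ error events against an assumed 4-bit generator polynomial; neither the polynomial nor the event list is taken from the paper.

```python
# Minimal sketch (not the paper's code): a PC constraint defined by an assumed
# generator polynomial g(x) = x^4 + x + 1 (4 parity bits), checked against an
# assumed list of dominant NRZ error events.  An event is detectable iff it
# leaves a nonzero syndrome wherever it starts within the codeword.

def poly_remainder(bits, gen):
    """Remainder of the bit sequence modulo the generator polynomial (GF(2))."""
    r = list(bits)
    for i in range(len(r) - len(gen) + 1):
        if r[i]:
            for j, g in enumerate(gen):
                r[i + j] ^= g
    return r[-(len(gen) - 1):]              # the len(gen)-1 parity/syndrome bits

GEN = [1, 0, 0, 1, 1]                       # assumed g(x); not from the paper
DOMINANT_EVENTS = [[1], [1, 1], [1, 1, 1], [1, 0, 1]]   # assumed error events

def detectable(event, block_len=100):
    """True iff the event yields a nonzero syndrome at every starting position."""
    for start in range(block_len - len(event) + 1):
        e = [0] * block_len
        e[start:start + len(event)] = event
        if not any(poly_remainder(e, GEN)):
            return False
    return True

for ev in DOMINANT_EVENTS:
    print(ev, "detectable:", detectable(ev))
```

In the paper's construction the parity information is embedded through the FSM-based component codes so that the run-length constraint is preserved; the sketch above only illustrates the detection criterion, not the encoding.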
... Compared with iteratively decodable codes, the error-detection-code-based approach, referred to as the post-Viterbi processor (or maximum-likelihood (ML) post-processor), has found wide acceptance, since the performance-complexity trade-off offered by these codes is very attractive and affordable. This approach has been widely studied for magnetic recording channels [4][6][10] and for DVD optical recording systems [5]. However, non-negligible mis-correction of the post-Viterbi processor has been observed due to channel impairments [5][6]. ...
... This approach has been widely studied for magnetic recording channels [4][6][10] and for DVD optical recording systems [5]. However, non-negligible mis-correction of the post-Viterbi processor has been observed due to channel impairments [5][6]. This paper proposes two new techniques for reducing the mis-correction of a post-Viterbi processor based on an error detection code. ...
... Finally, based on the error type j and error starting position i of the error event ê corresponding to the largest likelihood value, the error event is corrected by the post-Viterbi processor. However, non-negligible mis-correction of the post-Viterbi processor has been observed due to correlated noise and residual inter-symbol interference [5][6]. This mis-correction can be classified into mis-selection of the actual error type and mis-positioning of the error location of an occurred error event. ...
Article
A post-Viterbi processor has found wide acceptance in recording systems since it can correct dominant error events at the channel detector output using only a few parity bits, and thereby significantly reduce the correction-capacity loss of the error correction code. This paper presents two novel techniques for minimizing the mis-correction of a post-Viterbi processor based on an error detection code. One is a method for achieving a low probability of mis-selection of the actual error type. The other is a method for achieving a low probability of mis-positioning of the error location of an occurred error event. Simulation results show that applying these techniques to a conventional post-Viterbi processor considerably reduces the probability of mis-correction, and the performance approaches the corresponding bit-error-rate and symbol-error-rate bounds.
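For readers unfamiliar with the mechanism, the core of an ML post-processor is a bank of matched filters: the residual error signal at the detector output is correlated with the noiseless waveforms of a few dominant error events, and the (type, position) pair with the largest likelihood metric is corrected when the parity check flags an error. The sketch below assumes a simple partial-response target and a short illustrative event list; it is not the processor of the cited papers.

```python
# Illustrative matched-filter stage of an ML post-processor (assumed PR target
# and event list; not the processor of the cited papers).

import numpy as np

TARGET = np.array([1.0, 2.0, 2.0, 1.0])             # assumed partial-response target
EVENTS = [np.array([2.0]),                           # +/-2 NRZ error events (assumed)
          np.array([2.0, -2.0]),
          np.array([2.0, -2.0, 2.0])]

def best_event(residual):
    """Return (event_index, position, metric) with the largest likelihood metric."""
    best = (None, None, -np.inf)
    for k, ev in enumerate(EVENTS):
        wave = np.convolve(ev, TARGET)               # event waveform at the detector
        energy = np.dot(wave, wave)
        for pos in range(len(residual) - len(wave) + 1):
            seg = residual[pos:pos + len(wave)]
            metric = np.dot(seg, wave) - 0.5 * energy
            if metric > best[2]:
                best = (k, pos, metric)
    return best

# quick demo: inject a single-bit error event into noise and locate it
rng = np.random.default_rng(0)
residual = rng.normal(0.0, 0.3, 200)
residual[50:54] += np.convolve(EVENTS[0], TARGET)
print(best_event(residual))                          # expect event 0 near position 50
```

In the schemes discussed above, correction would additionally be gated by the parity syndrome and, in the proposed techniques, by checks that reduce mis-selection of the error type and mis-positioning of the error location.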
... The construction of Hamming codes for both the encoding and decoding process requires the calculation of the extra control bits that allow the detection of errors. Different algorithms have been proposed for this calculation, such as [15] and [17]. For example, the (16,11,4) code requires a 5 × 16 matrix [15], while the (7,4,3) code uses a 3 × 7 matrix. ...
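For context, the matrix dimensions quoted above are those of the standard parity-check matrices: 5 × 16 for the (16,11,4) extended Hamming code and 3 × 7 for the (7,4,3) Hamming code. A textbook single-error-correcting decoder for the (7,4,3) code, shown as background rather than as the specific algorithm of [15] or [17], is simply:

```python
# Background sketch: textbook single-error correction with the (7,4,3) Hamming
# code and its 3 x 7 parity-check matrix.

import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],     # column j is the binary representation
              [0, 1, 1, 0, 0, 1, 1],     # of j, least significant bit in row 0
              [0, 0, 0, 1, 1, 1, 1]])

def correct_single_error(word):
    s = H @ word % 2                      # 3-bit syndrome
    pos = int(s[0] + 2 * s[1] + 4 * s[2]) # syndrome value = index of flipped bit
    if pos:
        word = word.copy()
        word[pos - 1] ^= 1                # flip the erroneous bit back
    return word

codeword = np.array([1, 0, 1, 1, 0, 1, 0])   # a valid codeword (H @ c % 2 == 0)
received = codeword.copy()
received[4] ^= 1                             # single bit error at position 5
assert (correct_single_error(received) == codeword).all()
```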
Article
Full-text available
Computer systems used in military and other challenging applications are often exposed to increased levels of electromagnetic radiation. Embedded systems falling in this category often suffer from this exposure due to the operation of the device to which they belong. Consequently, data communications within such devices need to be protected against transmission errors. Current general-purpose encoding schemes that are used in other communication applications are prohibitively complex for this application. In this paper an innovative extension to the well-known checksum concept is proposed that is capable of controlling errors in intra-device data transfers. The new technique is shown to be simple enough for implementation and to increase the probability of detection of errors by several orders of magnitude. The scheme is hence shown to be suitable for embedded computing platforms for military and other demanding systems. More specifically, the modified checksum is examined with respect to its suitability for transient error detection in schemes for reliable data storage within computing systems, and it is explained why it is extremely suitable for this application. The modified checksum is also considered in the context of algorithm-based fault tolerance schemes, and it is again concluded that it can contribute to the overall scheme efficiency and effectiveness. The modified checksum is hence shown to be an algorithmic tool that can significantly contribute to the design of reliable and fault-tolerant computing systems, such as the ones used for military systems or other applications that operate in adverse environments.
... Spectrum shaping codes, also known as spectral null codes, are channel codes that have spectral nulls at specific frequencies. They have been applied extensively in magnetic tape as well as optical recording systems [1], [2], which are key constituents of consumer electronics products. They are also expected to bring significant performance gains for the recently proposed dedicated servo recording system [3], which is a promising technology for ultra-mobile hard-disk drives for tablets. ...
Conference Paper
We propose a systematic code design method for constructing efficient spectrum shaping constrained codes for high-density data storage systems. Simulation results demonstrate that the designed codes can achieve significant spectrum shaping effect with only around 1% code rate loss and reasonable computational complexity.
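The spectrum shaping idea can be illustrated with a toy polarity-selection scheme (a simplification, not the FSM-based code of the paper): for each data block, transmit either the block or its complement, whichever keeps the running digital sum (RDS) closer to zero; bounding the RDS is what creates the spectral null at DC.

```python
# Toy polarity-selection DC control (a simplification, not the paper's
# FSM-based code): per block, send the block or its complement, whichever
# keeps |RDS| smaller.  A real code must also convey the inversion decision
# to the decoder; that overhead is omitted here.

import numpy as np

def dc_control_encode(bits, block=8):
    out, rds = [], 0
    for i in range(0, len(bits), block):
        blk = 2 * np.asarray(bits[i:i + block]) - 1      # map {0,1} -> {-1,+1}
        if abs(rds + blk.sum()) > abs(rds - blk.sum()):
            blk = -blk                                   # invert the block
        out.append(blk)
        rds += blk.sum()
    return np.concatenate(out)

symbols = dc_control_encode(np.random.randint(0, 2, 4096))
print("max |RDS|:", int(np.max(np.abs(np.cumsum(symbols)))))   # stays bounded
```

A real code must also convey the inversion decision to the decoder, which is one source of the small rate loss such schemes incur.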
... The data formats include Blu-ray Disc (BD) and high-definition DVD (HD DVD). Various kinds of coding schemes have been developed [4]-[6] for the ODD so that error detection is greatly improved when reading data from or writing data to the optical disk. However, if the wrong disk type is discriminated, all of these delicate coding schemes are in vain, resulting in a fatal data reading or writing mistake. ...
Article
Full-text available
A novel disk discrimination approach with two levels of classification is proposed. The optical pickup head is controlled to emit the digital versatile disk laser followed by the compact disk laser during the process of being pulled in from the neutral position toward the optical disk. Four features are measured and analyzed based on the reflected signals from the optical disk. Two levels of classifications are designed for the disk discrimination using these four features. A supervised learning approach based on the evolutionary ellipsoid classification algorithm is utilized to learn the classifiers and optimize the classifier parameterizations. Six different disk types have been successfully discriminated using the proposed approach.
... The data formats include Blu-ray Disc (BD) and high-definition DVD (HD DVD). Various kinds of coding schemes have been developed [4]-[6] for the ODD so that error detection is greatly improved when reading data from or writing data to the optical disk. However, if the wrong disk type is discriminated, all of these delicate coding schemes are in vain, resulting in a fatal data reading or writing mistake. ...
... Finally, based on the error type and error starting position of the error event corresponding to the largest normalized output, the error event is corrected. However, mis-correction of an error-correlation-filter-based post-Viterbi processor has been observed due to correlated noise and residual inter-symbol interference [6][7]. Now suppose that the actual error position is estimated incorrectly due to channel impairments. ...
Article
This paper proposes a soft-reliability-information-based post-Viterbi processor for reducing the mis-correction of an error-correlation-filter-based post-Viterbi processor. The essential difference between the soft-reliability-information-based and the error-correlation-filter-based post-Viterbi processors is how they locate the most probable error starting position. The new scheme determines an error starting position based on a soft-reliability estimate, while the conventional scheme chooses an error starting position based on a likelihood value. Among all likely error starting positions for the prescribed error events, the new scheme attempts to correct the error type corresponding to a position only if there exists a position where the soft-reliability estimate is negative, whereas the conventional scheme performs error correction based on the error type and error starting position of the error event associated with the maximum likelihood value. A correction made by the conventional scheme may result in mis-correction because the scheme has no criterion for judging whether an estimated error starting position is correct. Since error correction is only performed when a position with a negative soft-reliability estimate exists, the probability of mis-correction of the new scheme is lower than that of the conventional scheme.
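The position-selection rule described above can be summarized in a few lines; per-bit soft-reliability estimates (LLR-like values, assumed available from a soft-output detector) and the function names are illustrative.

```python
# Sketch of the position-selection rule (per-bit soft-reliability estimates
# are assumed available; names are illustrative).

def pick_error_position(reliability, candidate_positions):
    """Return the most negative (least reliable) candidate position, or None."""
    negatives = [(reliability[p], p) for p in candidate_positions
                 if reliability[p] < 0.0]
    if not negatives:
        return None                  # no correction attempted
    return min(negatives)[1]
```

If no candidate position has a negative estimate, the scheme abstains, which is how it avoids the mis-corrections a purely likelihood-based selection can make.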
... The approach using error detection codes, referred to as the post-Viterbi processor (in other words, the maximum-likelihood (ML) post-processor), has found wide acceptance, since the performance-complexity trade-off it offers is very attractive and affordable. This approach has been widely studied for magnetic recording channels and for optical recording systems [1]-[10]. ...
Article
This paper proposes a new soft-reliability-information-based post-Viterbi processor with improved noise robustness, aimed at reducing the probabilities of mis-correction and of no correction in a conventional soft-reliability-based post-Viterbi processor. Among all likely error starting positions for the prescribed error events, both schemes attempt to correct the error type corresponding to the position with the minimum estimate, and only if there exist positions where the soft-reliability estimate is negative. The main difference between the two schemes is how they obtain the soft-reliability estimate. The soft-reliability estimate of the new scheme is obtained by eliminating the noise-sensitive component from the log-likelihood ratio of the a posteriori probabilities, which is the soft-reliability estimate of the conventional scheme. As a result, the new scheme is based on more reliable soft-reliability information, reducing the probabilities of mis-correction and no correction.
Article
Full-text available
Tremendous and inescapable application of the full adder adds impetus to its optimization for high-end performance. The use of the full adder propels design engineers to unearth various digital circuits whose implementation would otherwise not be a cakewalk. This paper identifies the finest 3-bit parity checker in terms of power dissipation (PWR) and energy-delay product (EDP) variability. An MCML (MOS Current Mode Logic) based implementation is practiced to improve the circuit. Further, in this treatise, the above-mentioned 'CNFET-based 3-bit MCML parity checker' (used as a full adder) and a transmission-gate-based multiplexer are used to implement a novel design of a '4-Bit 4-Tube CNFET-based ALU' at the 16-nm technology node. The CNFET-based ALU thus implemented is further compared with its CMOS counterpart. Simulation results establish the superior performance of the proposed '4-Bit 4-Tube CNFET-based ALU' in terms of propagation delay (tp) (9.04×), PWR (1.68×), PDP (15.09×) and EDP (136.42×). The exposition establishes that the idea of using a 'CNFET-based 3-bit MCML parity checker' to design a new ALU circuit, i.e., the '4-Bit 4-Tube CNFET-based ALU', provides a gigantic horizon for the design engineer.
Article
Full-text available
In this paper, we design a parity checker using EX-OR modules. Two EX-OR modules are presented for the parity checker design, and their outcomes are compared with respect to constraints such as power, area, delay and power-delay product (PDP). The previous design uses an eight-transistor (8T) EX-OR, whereas the present design uses a six-transistor (6T) EX-OR. Comparing the parity checker designs with the 8T EX-OR and the 6T EX-OR, the 6T EX-OR design gives better power, delay, area and PDP than the 8T EX-OR design. Simulations are performed using the 130 nm Mentor Graphics tool. The constraints of power, area, delay and PDP are successfully optimized with the presented technology. Alternatively, the EX-OR modules can be replaced with NAND modules to design the parity checker.
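As a functional reference for the EX-OR-based designs above (independent of the transistor-level implementation), the parity of an n-bit word is simply the XOR reduction of its bits:

```python
# Functional reference for the EX-OR-based designs above: the parity of an
# n-bit word is the XOR reduction of its bits.
from functools import reduce

def parity(bits):
    """Returns 1 if the number of 1s is odd, else 0."""
    return reduce(lambda a, b: a ^ b, bits, 0)

assert parity([1, 0, 1]) == 0 and parity([1, 1, 1]) == 1
```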
Patent
Full-text available
A system for block encoding words of a digital signal achieves a maximum of error compaction and ensures reliability of a self-clocking decoder, while minimizing any DC content in the encoded signal. Data words of m bits are translated into information blocks of n1 bits (n1 > m) that satisfy a (d,k)-constraint in which at least d "0" bits, but no more than k "0" bits, occur between consecutive "1" bits. The information blocks are concatenated by inserting separation blocks of n2 bits between them, selected so that the (d,k)-constraint is satisfied over the boundary between any two information words. For each information word, the separation block that yields the lowest net digital sum value is selected. The encoded signal is then modulated as an NRZ-M signal in which a "1" becomes a transition and a "0" becomes an absence of a transition. A unique synchronizing block is inserted periodically. A decoder circuit, using the synchronizing blocks to control its timing, disregards the separation blocks, but detects the information blocks and translates them back into reconstituted data words of m bits. The foregoing technique can be used to advantage in recording digitized music on an optical disc.
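The separation-block selection rule in the claim can be sketched as follows; the candidate blocks and numbers are illustrative (EFM itself uses d = 2, k = 10 and 3-bit merging blocks), and the (d,k) screening of the candidates is assumed to have been done already.

```python
# Sketch of the separation-block selection rule: among (d,k)-compatible
# candidates, pick the block whose insertion yields the smallest absolute
# digital sum value after NRZ-M precoding.  Candidates and numbers below are
# illustrative.

def choose_separation(candidates, next_block, rds, level):
    """rds: digital sum so far; level: current NRZ-M level (+1/-1) at the boundary."""
    best = None
    for sep in candidates:
        lv, dsv = level, rds
        for b in sep + next_block:       # continue NRZ-M precoding across the joint
            if b:
                lv = -lv                 # a "1" is a transition
            dsv += lv                    # a "0" keeps the level
        if best is None or abs(dsv) < abs(best[1]):
            best = (sep, dsv)
    return best[0]

print(choose_separation([[0, 0, 0], [1, 0, 0], [0, 0, 1]],
                        [0, 1, 0, 0, 1], rds=3, level=1))    # picks [1, 0, 0] here
```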
Book
Full-text available
Preface to the Second Edition About five years after the publication of the first edition, it was felt that an update of this text would be inescapable as so many relevant publications, including patents and survey papers, have been published. The author's principal aim in writing the second edition is to add the newly published coding methods, and discuss them in the context of the prior art. As a result, about 150 new references, including many patents and patent applications, most of them less than five years old, have been added to the former list of references. Fortunately, the US Patent Office now follows the European Patent Office in publishing a patent application after eighteen months of its first application, and this policy clearly adds to the rapid access to this important part of the technical literature. I am grateful to many readers who have helped me to correct (clerical) errors in the first edition and also to those who brought new and exciting material to my attention. I have tried to correct every error that I found or was brought to my attention by attentive readers, and seriously tried to avoid introducing new errors in the Second Edition. China is becoming a major player in the art of constructing, designing, and basic research on electronic storage systems. A Chinese translation of the first edition was published in early 2004. The author is indebted to Prof. Xu, Tsinghua University, Beijing, for taking the initiative for this Chinese version, and also to Mr. Zhijun Lei, Tsinghua University, for undertaking the arduous task of translating this book from English to Chinese. Clearly, this translation makes it possible that a billion more people will now have access to it. Kees A. Schouhamer Immink Rotterdam, November 2004
Article
We report on the technical progress in increasing the recording density of optical storage systems by means of improved read-channel signal processing and write-channel optimisation. The recording density increase is realized by employing PRML (Viterbi) bit detection in combination with improved timing recovery and adaptive equalisation algorithms, and by using a signal quality characterisation scheme which enables proper control of the write process in the considered range of storage densities. The Blu-ray Disc (BD) optical disc system, employing a blue-violet laser with a wavelength of 405 nm, an objective lens with a numerical aperture of 0.85 and a disc cover layer thickness of 0.1 mm, is used as an experimental platform in our present study. Multi-track experimental results for both single-layer read-only (BD-ROM) and single-layer rewritable (BD-RE) media are presented to show the feasibility of the increased-density BD.
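As a structural sketch of the PRML detection step mentioned above, the following is a minimal Viterbi detector for an assumed two-tap partial-response target h = [1, 1]; the actual BD read channel uses a longer target together with adaptive equalisation and timing recovery.

```python
# Minimal Viterbi (PRML) detector for an assumed two-tap target h = [1, 1];
# only a structural sketch of the detection step, not the BD read channel.

import numpy as np

def viterbi_pr2(samples, h=(1.0, 1.0)):
    """ML detection for y[k] = h0*x[k] + h1*x[k-1] + noise, x[k] in {0, 1}."""
    metric = np.array([0.0, np.inf])                 # state = previous bit, start at 0
    paths = [[], []]
    for y in samples:
        new_metric = np.full(2, np.inf)
        new_paths = [None, None]
        for prev in (0, 1):
            for bit in (0, 1):
                branch = (y - (h[0] * bit + h[1] * prev)) ** 2
                cand = metric[prev] + branch
                if cand < new_metric[bit]:
                    new_metric[bit] = cand
                    new_paths[bit] = paths[prev] + [bit]
        metric, paths = new_metric, new_paths
    return paths[int(np.argmin(metric))]

# noiseless check on a short pattern
bits = [1, 0, 1, 1, 0]
y = [bits[k] + (bits[k - 1] if k else 0) for k in range(len(bits))]
assert viterbi_pr2(y) == bits
```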
Article
We have developed a new error correction method (Picket: a combination of a long-distance code (LDC) and a burst indicator subcode (BIS)), a new channel modulation scheme (17PP, or (1,7) RLL parity-preserve (PP)-prohibit repeated minimum transition runlength (RMTR) in full), and a new address format (zoned constant angular velocity (ZCAV) with headers and wobble, and practically constant linear density) for a digital video recording system (DVR) using a phase-change disc with 9.2 GB capacity, with the use of a red (λ = 650 nm) laser and an objective lens with a numerical aperture (NA) of 0.85 in combination with a thin cover layer. Despite its high density, this new format is highly reliable and efficient. When extended for use with blue-violet (λ ≈ 405 nm) diode lasers, the format is well suited to be the basis of a third-generation optical recording system with over 22 GB capacity on a single layer of a 12-cm-diameter disc.
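For illustration, a checker for the run-length constraints named above might look as follows; the RMTR cap is left as a parameter because the exact 17PP limit is not restated here, and codeword-boundary effects are ignored.

```python
# Illustrative run-length checker: after NRZI precoding, a (d,k) = (1,7)
# sequence has runs of 2..8 identical channel bits; the RMTR rule additionally
# caps consecutive minimum-length (2T) runs.  The cap is a parameter, not
# necessarily the exact 17PP value.

def rll_rmtr_ok(channel_bits, d=1, k=7, max_min_runs=6):
    runs, count = [], 1
    for a, b in zip(channel_bits, channel_bits[1:]):
        if a == b:
            count += 1
        else:
            runs.append(count)
            count = 1
    runs.append(count)
    if any(r < d + 1 or r > k + 1 for r in runs):
        return False                      # violates the (d,k) run lengths
    consecutive_min = 0
    for r in runs:
        consecutive_min = consecutive_min + 1 if r == d + 1 else 0
        if consecutive_min > max_min_runs:
            return False                  # violates the RMTR limit
    return True

assert rll_rmtr_ok([0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1])
```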
Article
We propose an advanced detection approach based on capacity-approaching constrained parity-check codes and multiple-error-event correction post-processing for high-density blue laser disc systems. Simulation results with the Blu-ray Disc show that an increase of 5 GB in capacity can be achieved over the standard system.
Article
Burst error detection codes and cyclic redundancy check (CRC) codes are generalized to event-error detection codes, which are useful in various noisy channels. A systematic linear block code is constructed that detects any event error from an arbitrary list of event errors. The result is generalized to detection and correction of multiple event errors. Bounds are found on the minimum number of redundant bits needed to construct such codes. It is shown that, under certain conditions, the linear code construction is optimal. Various applications are discussed, where there is a Markov source or a Markov channel. It is argued that the codes described herein can be employed either as error detection codes, or as distance-enhancing codes when complete decoders are applied. Specific examples covered in this correspondence include hybrid automatic repeat request (ARQ) systems, intersymbol interference (ISI) channels, and Gilbert (1958) channels
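The detection and correction criteria described in this abstract reduce to simple syndrome conditions: a linear code with parity-check matrix H detects a list of event errors if every listed event, at every starting position, has a nonzero syndrome, and a complete decoder can correct them if all of those syndromes are distinct. The sketch below reuses the (7,4,3) parity-check matrix from the earlier example purely as an illustration.

```python
# Sketch of the detection/correction criteria: with parity-check matrix H, a
# list of event errors is detected iff every event at every starting position
# has a nonzero syndrome, and is correctable by a complete decoder iff those
# syndromes are all distinct.  H and the event list are just examples.

import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])          # (7,4,3) Hamming parity-check matrix

def event_syndromes(H, events, n):
    syndromes = {}
    for label, ev in enumerate(events):
        for pos in range(n - len(ev) + 1):
            e = np.zeros(n, dtype=int)
            e[pos:pos + len(ev)] = ev
            syndromes[(label, pos)] = tuple(H @ e % 2)
    return syndromes

syn = event_syndromes(H, [[1], [1, 1]], n=7)
detects = all(any(s) for s in syn.values())
corrects = detects and len(set(syn.values())) == len(syn)
print("detects:", detects, "corrects:", corrects)   # detects both lists, cannot correct both
```

For this small H, single errors and adjacent double errors are all detected, but some of their syndromes coincide, so they cannot all be corrected simultaneously; the bounds in the correspondence quantify how many redundant bits are needed to avoid such collisions.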