Article

Advanced Signal Processing Technique for Storage Systems


Abstract

A post-Viterbi processor has found wide acceptance in recording systems since it can correct dominant error events at the channel detector output using only a few parity bits, thereby significantly reducing the correction-capacity loss of the error correction code. This paper presents two novel techniques for minimizing the mis-correction of a post-Viterbi processor based on an error detection code. The first achieves a low probability of mis-selecting the actual error type; the second achieves a low probability of mis-positioning the location of an error event that has occurred. Simulation results show that applying these techniques to a conventional post-Viterbi processor considerably reduces the probability of mis-correction, and the resulting performance approaches the corresponding bit error rate and symbol error rate bounds.
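As a rough, hypothetical illustration of the pipeline the abstract summarizes, the Python sketch below shows the usual post-Viterbi flow: a few parity bits of an error detection code flag a suspect codeword, and only then is a dominant error event selected and applied as a correction. The parity-check matrix H and the event estimator are placeholders, not the authors' design.

    import numpy as np

    def parity_syndrome(codeword, H):
        """Syndrome of the detected codeword under parity-check matrix H (GF(2))."""
        return H.dot(codeword) % 2

    def post_viterbi_correct(codeword, H, estimate_event):
        """Correct a dominant error event only when the parity check fails.

        codeword       : 0/1 numpy array from the channel (Viterbi) detector
        estimate_event : placeholder callable returning (flip_mask, position)
        """
        s = parity_syndrome(codeword, H)
        if not s.any():
            return codeword                      # parity satisfied: accept as-is
        flip_mask, pos = estimate_event(codeword, s)
        corrected = codeword.copy()
        corrected[pos:pos + len(flip_mask)] ^= flip_mask   # undo the error event
        return corrected

A mis-correction in the abstract's sense occurs when estimate_event returns the wrong error type (mis-selection) or the wrong position (mis-positioning), which is exactly what the two proposed techniques target.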


... The performance of a partial response maximum likelihood (PRML) system can be improved by employing an error correlation filter-based post-Viterbi processor built on an error detection code [4], [12], which can correct a dominant error event at the output of the channel detector (i.e., the ML detector). In such processors, an error detection decoder computes a syndrome to check for the presence of errors in the estimated codeword found at the output of the channel detector. ...
... Here, the matched filter for a given error event is the time-reversed version of the convolution between the error event and the channel target response. Each matched filter computes likelihood values over the error starting positions for its dominant error event, where the starting positions considered are either all positions within a codeword [5], [12] or only the more probable positions [4]. The outputs (likelihood values) of the matched filters are normalized by subtracting a set of offset values associated with the corresponding error events. ...
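The matched-filter computation in this excerpt can be written out directly: for each dominant error event e, the filter is the time-reversed version of e convolved with the target response h, and its output over all valid starting positions is normalized by an event-dependent offset. A minimal sketch, assuming the common choice of half the filtered-event energy as the offset; it is not the cited papers' exact formulation.

    import numpy as np

    def matched_filter_bank(residual, events, target):
        """Likelihood values per starting position for each dominant error event.

        residual : error signal at the detector output (received minus reconstructed)
        events   : list of dominant error events, e.g. [np.array([1, -1, 1]), ...]
        target   : channel target response h (e.g. PR target taps)
        """
        likelihoods = {}
        for e in events:
            sig = np.convolve(e, target)           # error event seen through the channel
            mf = sig[::-1]                         # matched filter: time-reversed version
            out = np.convolve(residual, mf, mode="full")
            offset = 0.5 * np.sum(sig ** 2)        # assumed normalization offset
            first = len(mf) - 1                    # index of the zero-lag correlation
            n_pos = len(residual) - len(sig) + 1   # valid starting positions
            likelihoods[tuple(e)] = out[first:first + n_pos] - offset
        return likelihoods

A conventional processor in the sense of [4], [12] would then correct the (event, position) pair with the maximum normalized output, subject to the syndrome check.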
... That is, equation (15) means that the new scheme performs error correction only when positions with negative soft-reliability values exist among all likely error starting positions for the prescribed error events; the correction then uses the error starting position associated with the minimum value and its corresponding error type. Figure 1 shows a generic soft-reliability information-based error correction scheme. If the bolded block in Fig. 1 is replaced by the bank of error correlation filters, the post-Viterbi detector becomes almost the same as an error correlation filter-based post-Viterbi processor [4]. In essence, the procedure for finding likely error starting positions is the same as in the error correlation filter-based error correction scheme [4], [12]. ...
Article
This paper proposes a soft-reliability information-based post-Viterbi processor for reducing the mis-correction of an error correlation filter-based post-Viterbi processor. The essential difference between the soft-reliability information-based and the error correlation filter-based post-Viterbi processors is how they locate the most probable error starting position. The new scheme determines an error starting position based on a soft-reliability estimate, while the conventional scheme chooses an error starting position based on a likelihood value. Among all likely error starting positions for the prescribed error events, the new scheme attempts to correct the corresponding error type only if a position with a negative soft-reliability estimate exists, whereas the conventional scheme always performs error correction using the error type and starting position of the error event associated with the maximum likelihood value. A correction made by the conventional scheme may therefore result in mis-correction, because the scheme has no criterion for judging whether an estimated error starting position is correct. Since error correction is performed only when a position with a negative soft-reliability estimate exists, the probability of mis-correction of the new scheme is lower than that of the conventional scheme.
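The decision rule of this abstract reduces to a few lines once the soft-reliability estimates are available: correct nothing unless some candidate position has a negative estimate, and otherwise correct the candidate with the minimum (most negative) one. A minimal sketch with hypothetical inputs; how the estimates themselves are computed is the subject of the paper.

    def soft_reliability_correct(codeword, candidates):
        """candidates: (flip_mask, position, soft_reliability) tuples, one per
        likely error starting position for the prescribed error events."""
        negatives = [c for c in candidates if c[2] < 0.0]
        if not negatives:
            return codeword                        # no negative estimate: no correction
        flip_mask, pos, _ = min(negatives, key=lambda c: c[2])
        corrected = list(codeword)
        for i, flip in enumerate(flip_mask):
            if flip:                               # nonzero entries mark bit flips
                corrected[pos + i] ^= 1
        return corrected

This is precisely where the conventional scheme differs: it would always correct the maximum-likelihood candidate, with no sign test to veto a doubtful correction.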
... There has been growing interest in error detection codes with error correction properties [1]-[10]. Unlike conventional read channels, where the error correction code (ECC) is expected to correct all the errors at the output of the constrained decoder, dominant error events are corrected by applying a low-redundancy error detection code. ...
... The approach using error detection codes, referred to as a post-Viterbi processor (in other words, a maximum likelihood (ML) postprocessor), has found wide acceptance since it offers a very attractive and affordable performance-complexity trade-off. It has been widely studied for magnetic recording channels and for optical recording systems [1]-[10]. ...
... In fact, a conventional post-Viterbi scheme based on an error correlation filter [1] is easily derived from the soft information estimate in (7). The error signal ...
Article
This paper proposes a new soft-reliability information-based post-Viterbi processor with improved noise robustness, which reduces the probability of mis-correction and of missed correction relative to a conventional soft-reliability-based post-Viterbi processor. Both schemes, among all likely error starting positions for the prescribed error events, attempt to correct the error type corresponding to the position with the minimum soft-reliability estimate, and only if positions with negative estimates exist. The main difference between the two schemes is how they obtain the soft-reliability estimate. The new scheme obtains its estimate by eliminating the noise-sensitive component from the log-likelihood ratio of the a posteriori probabilities, which is itself the soft-reliability estimate of the conventional scheme. As a result, the new scheme relies on more reliable soft-reliability information, reducing the probability of both mis-correction and missed correction.
Conference Paper
Full-text available
We investigate the performance of low-density parity-check (LDPC) codes, single-parity turbo product codes (TPC/SPC), and multi-parity turbo product codes (TPC/MPC) over various partial response (PR) channels encountered in magnetic and magneto-optical (MO) recording systems, such as PR4/EPR4 and PR1/PR2 channels. The codes have similar structures and can be decoded using simple message-passing algorithms. We show that the combination of a TPC/SPC code and a precoded PR channel results in a good distance spectrum due to interleaving gain. Density evolution is then used to compute the thresholds for TPC/SPC and LDPC codes over PR channels. Through analysis and simulation, we show that the three types of codes yield comparable bit error rate performance with similar complexity, but they exhibit quite different error statistics, which in turn may result in sharp differences in block failure rate after the Reed-Solomon error correction code (RS-ECC).
Article
Full-text available
Turbo codes and low-density parity-check (LDPC) codes with iterative decoding have received significant research attention because of their remarkable near-capacity performance on additive white Gaussian noise (AWGN) channels. Turbo code and LDPC code variants have previously been investigated as potential candidates for high-density magnetic recording channels suffering from low signal-to-noise ratios (SNR). We address the application of turbo codes and LDPC codes to magneto-optical (MO) recording channels. Our results cover a variety of practical MO storage channel aspects, including storage density, partial response targets, the type of precoder used, and mark edge jitter. Instead of focusing only on bit error rates (BER), we also study the block error statistics. Our results for MO storage channels indicate that turbo codes of rate 16/17 can achieve coding gains of 3-5 dB over partial response maximum likelihood (PRML) methods at a target BER of 10^-4. Simulations also show that the performance of LDPC codes for MO channels is comparable to that of turbo codes, while requiring less computational complexity. Both LDPC codes and turbo codes with iterative decoding are seen to be robust to mark edge jitter.
Article
Full-text available
This paper proposes a general and systematic code design method to efficiently combine constrained codes with parity-check (PC) codes for optical recording. The proposed constrained PC code includes two component codes: the normal constrained (NC) code and the parity-related constrained (PRC) code. They are designed based on the same finite state machine (FSM). The rates of the designed codes are only a few tenths below the theoretical maximum. The PC constraint is defined by the generator matrix (or generator polynomial) of a linear binary PC code, which can detect any type of dominant error event or error event combination in the system. Error propagation due to parity bits is avoided, since both component codes are protected by PCs. Two approaches are proposed to design the code in the non-return-to-zero-inverse (NRZI) format and the non-return-to-zero (NRZ) format, respectively. Designing the codes in NRZ format may reduce the number of parity bits required for error detection and simplify post-processing for error correction. Several newly designed codes are illustrated as examples. Simulation results with Blu-ray disc (BD) systems show that the new d = 1 constrained 4-bit PC code significantly outperforms the rate-2/3 code without parity, at both nominal density and high density.
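For concreteness, the d = 1 constraint mentioned at the end of this abstract (at least one 0 between consecutive 1s in the NRZI domain) can be checked as below; this toy test covers only the constraint side of the design, not the paper's FSM-based construction.

    def satisfies_d_constraint(nrzi_bits, d=1):
        """True if the NRZI sequence keeps at least d zeros between ones."""
        zeros_since_one = d                        # allow a leading 1
        for b in nrzi_bits:
            if b == 1:
                if zeros_since_one < d:
                    return False
                zeros_since_one = 0
            else:
                zeros_since_one += 1
        return True

    assert satisfies_d_constraint([1, 0, 1, 0, 0, 1])    # valid for d = 1
    assert not satisfies_d_constraint([1, 1, 0, 1])      # adjacent ones violate d = 1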
Conference Paper
We discuss an error detection technique geared to a prescribed set of error events. The traditional method of error detection and correction attempts to detect/correct as many erroneous bits as possible within a codeword, irrespective of the pattern of the error events. The proposed approach, on the other hand, is less concerned about the total number of erroneous bits it can detect, but focuses on specific error events of known types. We take perpendicular recording systems as an application example. Distance analysis and simulation can easily identify the dominant error events for the given noise environment and operating density. We develop a class of simple error detection parity check codes that can detect these specific error events. The proposed coding method, when used in conjunction with post-Viterbi error correction processing, provides a substantial performance gain compared to the uncoded perpendicular recording system.
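The design criterion in this abstract, a parity code whose syndrome is nonzero for every prescribed dominant error event at every starting position, can be verified mechanically. Below is a toy check; the parity-check matrix H (two interleaved parity bits) and the event list are illustrative only, not the codes the paper develops.

    import numpy as np

    def detects_all_events(H, events, n):
        """True if every prescribed error event, at every starting position
        within an n-bit codeword, produces a nonzero syndrome under H."""
        for e in events:
            for pos in range(n - len(e) + 1):
                pattern = np.zeros(n, dtype=int)
                pattern[pos:pos + len(e)] = e
                if not ((H @ pattern) % 2).any():
                    return False                   # an undetectable placement exists
        return True

    n = 12
    # Toy code: one parity bit over even positions, one over odd positions.
    H = np.array([[1 - (i % 2) for i in range(n)],
                  [i % 2 for i in range(n)]])
    # Dominant events written as bit-flip masks, e.g. +-(1, -1, 1) flips 3 bits:
    events = [[1], [1, 1], [1, 1, 1]]
    print(detects_all_events(H, events, n))        # True for this toy example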
Conference Paper
A statistical simulation method is introduced to estimate the sector error rate (SER) of proposed triply concatenated coding systems for magnetic hard disk drives. The outer code is a standard interleaved Reed-Solomon code. The 'middle' codes are run-length-limited codes used to demonstrate the effect of error propagation. The inner code(s) are block codes with 1 or 3 parity-check bits. First, a baud-rate simulation is used to estimate sufficient statistics from which ECC performance is extrapolated. Second, a Monte Carlo error event simulator generates error instances, which are simply checked for correctability. In all code combinations, the apparent SNR gains observed at the error event (or bit error) rate level are considerably lessened or completely lost when the SER is considered.
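The second stage this abstract describes, a Monte Carlo error-event simulator whose outputs are simply checked for correctability, can be caricatured as follows: draw error events per sector from assumed statistics and declare a sector failure when more Reed-Solomon symbols are corrupted than the code can correct. Every rate and distribution below is a placeholder, not an estimated statistic.

    import numpy as np

    def sector_error_rate(n_sectors=100_000, events_per_sector=0.5,
                          event_lengths=(1, 2, 3, 4),
                          length_probs=(0.5, 0.3, 0.15, 0.05),
                          symbols_per_sector=512, bits_per_symbol=10,
                          t_correctable=20, seed=1):
        """Monte Carlo SER estimate for a single-interleave RS code; ignores
        error propagation and uses placeholder event statistics."""
        rng = np.random.default_rng(seed)
        n_bits = symbols_per_sector * bits_per_symbol
        failures = 0
        for k in rng.poisson(events_per_sector, size=n_sectors):
            bad_symbols = set()
            for _ in range(k):
                start = int(rng.integers(0, n_bits))
                length = int(rng.choice(event_lengths, p=length_probs))
                for b in range(start, min(start + length, n_bits)):
                    bad_symbols.add(b // bits_per_symbol)   # RS symbols touched
            if len(bad_symbols) > t_correctable:
                failures += 1
        return failures / n_sectors

A comparison at the error-event-rate level versus the SER level, of the kind the abstract reports, can then be run by sweeping events_per_sector and the event length distribution.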
Conference Paper
We investigate the performance of single- and multiple-parity codes in magnetic recording systems. We evaluate the codes with respect to bit error rate as well as error correction code (ECC) failure rate. While multiple-parity codes outperform the single-parity code in bit error rate, their performance is worse with respect to the ECC failure rate.
Article
We present a particular generator polynomial for a cyclic redundancy check (CRC) code that can detect all dominant error events in perpendicular recording over a broad range of densities. This polynomial is also effective in detecting error events that occur at codeword boundaries. We validate the bit-error-rate and sector-error-rate performance that results from using the corresponding CRC code in conjunction with the well-known post-Viterbi error correction method.
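The detection property this abstract validates can be sketched as polynomial division over GF(2): an error event e(x) starting at position k is detected iff x^k * e(x) is not divisible by the generator g(x), and when g(x) has a nonzero constant term this divisibility does not depend on the shift k. The generator below is an arbitrary stand-in, not the polynomial the paper proposes.

    def gf2_mod(dividend, divisor):
        """Remainder of GF(2) polynomial division; bit lists, MSB first."""
        rem = list(dividend)
        for i in range(len(rem) - len(divisor) + 1):
            if rem[i]:
                for j, d in enumerate(divisor):
                    rem[i + j] ^= d
        return rem[-(len(divisor) - 1):]

    g = [1, 0, 0, 1, 1]                 # stand-in generator x^4 + x + 1, not the paper's
    # Dominant error events as bit-flip masks, e.g. +-(1, -1, 1) -> 1, 1, 1:
    for e in [[1], [1, 1], [1, 1, 1], [1, 0, 1]]:
        detected = any(gf2_mod(e, g))   # nonzero remainder => event detected
        print(e, "detected" if detected else "missed")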
Article
The performance of single-parity codes used in conjunction with the Reed-Solomon error-correcting code (ECC) is investigated. Specifically, the tradeoff between simply increasing ECC power and using a parity code is explored.
Article
Recent work on turbo codes applied to partial response (PR) optical recording channels has focused on unconstrained channels. In this paper, we consider the application of turbo codes to a (1,7) constrained, PR-equalized optical recording channel with digital versatile disc (DVD) parameters. The addition of a (1,7) run-length-limited (RLL) constraint requires the use of a soft RLL decoder to communicate with the turbo code. Although soft RLL decoders were previously developed for use with iterative decoding, their application to a practical optical channel has not been addressed until now. Here, results on both correlated-noise and media-noise optical recording channel models are given for two PR targets, 1 + D and 1 + D + D^2 + D^3. We achieved coding gains of 4 to 6.3 dB over a baseline RLL-coded system. We also evaluated system performance at smaller mark sizes, and found that density gains of 17% and 22% are achievable for the two targets, respectively.
Article
The performance of magnetic recording systems that combine conventional modulation codes with multiple parity bits is studied. Various performance measures, including bit error rate at the output of the inverse precoder, byte error probability at the input of the Reed-Solomon (RS) decoder, and sector error rate, are used to evaluate the performance of various coding/detection schemes. Suboptimum detection/decoding schemes consisting of a 16-state noise-predictive maximum-likelihood (NPML) detector followed by parity-based noise-predictive post-processing, as well as maximum-likelihood sequence detection/decoding on the combined channel/parity trellis, are considered. For conventional modulation codes, it is shown that although the dual-parity post-processor gains 0.5 dB over the single-parity post-processor in terms of bit- and byte-error-rate performance, the sector-error-rate performance of both schemes is almost the same. Furthermore, the sector-error-rate performance of optimum 64-state combined channel/parity detection for the dual-parity code is shown to be approximately 0.1 dB better than that of optimum 32-state combined channel/parity detection for the single-parity code. These performance gains can be even more substantial if appropriate coding techniques that eliminate certain error events and minimize error burst length, or multiparity codes in conjunction with combined parity/channel detection, are used.
Article
This paper presents models and techniques for fairly comparing detection systems for magnetic recording and for choosing the optimal tradeoff between error correction code (ECC) overhead and detection performance. Models are presented for soft bit error rate (SBER) and hard bit error rate (HBER), along with optimization for both standard and modified concatenated coding schemes. The effect of error propagation in the RLL code is taken into account, providing a more realistic method for comparing the overall system performance of different combinations of detector, RLL code, and ECC.
Article
In this paper, a modified equalization target is proposed for the high-density magnetic recording channel. This target is a closer match to the channel than the EEPR4 response and hence has better detection performance. Based on the dominant error events for this target, a parity-based coding scheme is also proposed to achieve a coding gain with the modified target. The parity code detects the occurrence of the dominant error events while maintaining a high code rate. The detection system consists of a Viterbi detector matched to the channel response and a post-processor that handles the code constraints. This system is shown to perform well compared with other proposed detection systems through analysis and simulation on a Lorentzian-based channel model.