Article

A New Post-Viterbi Processor Based on Soft-Reliability Information


Abstract

This paper proposes a soft-reliability information-based post-Viterbi processor that reduces the miss-correction of an error correlation filter-based post-Viterbi processor. The essential difference between the soft-reliability information-based and the error correlation filter-based post-Viterbi processors is how they locate the most probable error starting position. The new scheme determines an error starting position based on a soft-reliability estimate, while the conventional scheme chooses an error starting position based on a likelihood value. Among all likely error starting positions for the prescribed error events, the new scheme attempts to correct the error-type corresponding to a position only if there exists a position where the soft-reliability estimate is negative, while the conventional scheme performs error correction based on the error-type and error starting position of the error event associated with the maximum likelihood value. A correction made by the conventional scheme may result in miss-correction because the scheme has no criterion for judging whether an estimated error starting position is correct. Since error correction is performed only when a position with a negative soft-reliability estimate exists, the probability of miss-correction of the new scheme is lower than that of the conventional scheme.
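A rough Python sketch of the two decision rules contrasted above; the candidate set and the `likelihood` / `soft_reliability` functions are hypothetical placeholders standing in for the paper's actual metrics, not its implementation:

```python
# Hypothetical sketch of the two selection rules contrasted in the abstract.
# `candidates` is a list of (position, error_type) pairs produced by the
# error detection code; `likelihood` and `soft_reliability` stand in for the
# paper's actual metrics and are assumed to be supplied by the detector.

def conventional_rule(candidates, likelihood):
    """Error-correlation-filter scheme: always correct at the candidate with
    the maximum likelihood value, with no check that the chosen position is
    actually trustworthy (hence possible miss-correction)."""
    return max(candidates, key=lambda c: likelihood(*c))

def soft_reliability_rule(candidates, soft_reliability):
    """New scheme: attempt correction only if some candidate's
    soft-reliability estimate is negative; pick the minimum estimate."""
    scored = [(soft_reliability(*c), c) for c in candidates]
    negative = [sc for sc in scored if sc[0] < 0]
    if not negative:
        return None                               # no correction attempted
    return min(negative, key=lambda sc: sc[0])[1]  # candidate to correct
```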


... The approach using error detection codes, referred to as the post-Viterbi processor (in other words, the maximum-likelihood (ML) postprocessor), has found wide acceptance, since the performance-complexity trade-off it offers is very attractive and affordable. This approach has been widely studied for magnetic recording channels and for optical recording systems [1][2][3][4][5][6][7][8][9][10]. ...
... In the conventional soft-reliability information-based post-Viterbi processor used in conjunction with an error detection code [2], when the syndrome is non-zero, the error detection code generates the set of all likely error starting positions for the prescribed error events. The scheme then computes the soft-reliability values over this set of likely error starting positions, and it outputs the error starting position and its error-type associated with the minimum value, but only if there exist likely error starting positions that generate a negative soft-reliability estimate. ...
... A soft-reliability estimate for the conventional scheme is given as the log-likelihood ratio of the a posteriori probabilities [2], which includes a noise-sensitive component. This noise-sensitive component is the main source of detrimental effects such as miss-correction and no correction. ...
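As a hedged illustration of how such an estimate decomposes (a generic AWGN form stated from standard detection theory, not necessarily the exact expression of [2]): for an error event $e$ hypothesized at position $k$, with received sequence $r = s + n$, detected signal $\hat{s}$, and alternative signal $\hat{s}_{e,k}$, and assuming the detected sequence is actually correct,

$$
\Lambda_e(k) = \frac{1}{2\sigma^2}\left(\lVert r-\hat{s}_{e,k}\rVert^2-\lVert r-\hat{s}\rVert^2\right)
= \frac{d_e^2}{2\sigma^2} - \frac{1}{\sigma^2}\left\langle n,\; \hat{s}_{e,k}-\hat{s}\right\rangle ,
\qquad d_e = \lVert \hat{s}_{e,k}-\hat{s}\rVert .
$$

The first term depends only on the error-event distance $d_e$, while the zero-mean inner-product term fluctuates with the noise realization $n$; that fluctuating term is the kind of noise-sensitive component the excerpt above refers to.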
Article
This paper proposes a new soft-reliability information-based post-Viterbi processor with improved noise robustness, aimed at reducing the probability of miss-correction and no correction of a conventional soft-reliability-based post-Viterbi processor. Among all likely error starting positions for the prescribed error events, the two schemes are alike in that they attempt to correct the error-type corresponding to the position with the minimum estimate, and only if there exist positions where the soft-reliability estimate is negative. The main difference between the two schemes is how they acquire the soft-reliability estimate. The soft-reliability estimate of the new scheme is obtained by eliminating the noise-sensitive component from the log-likelihood ratio of the a posteriori probabilities, which is the soft-reliability estimate of the conventional scheme. As a result, the new scheme is based on more reliable soft-reliability information, thereby reducing the probability of miss-correction and no correction.
Article
Recently, the IEEE 802.11ah standard has been proposed to extend the range of wireless local area networks operating in the sub-1-GHz frequency band. This standard, along with other protocols, can provide communication services to Internet of Things applications. However, in the future, this band is also expected to become crowded like the 2.45-GHz ISM band and to cause interference to other devices operating in the same band. For a communication channel affected by additive white Gaussian noise, the least-squares (LS) based estimator and the Euclidean-distance-based Viterbi decoder give optimal performance. However, the receiver's performance with an LS estimator followed by a Viterbi decoder degrades for communication channels affected by high interference. In this paper, a new orthogonal frequency division multiplexing based receiver structure operating in a high-interference environment is proposed. The proposed receiver is based on non-parametric maximum-likelihood channel estimation followed by a Viterbi decoder. The Viterbi decoder's branch metric is updated based on the distribution of the residual error. The proposed receiver structure is tested on an IEEE 802.11ah based receiver under two different types of additive interference: 1) an IEEE 802.15.4 device, and 2) impulsive noise. Both simulations and real-world experimental results on a standard-compliant platform show that the proposed algorithm performs better in terms of bit error rate than other receivers under all the considered interference models. Additionally, we also derive an analytical expression for the probability of symbol error.
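The idea of swapping the Euclidean branch metric for one matched to the actual residual-error distribution can be sketched generically. The Python below is a minimal Viterbi over a toy two-state trellis with a pluggable per-branch metric; the trellis, metrics, and all names are illustrative assumptions, not the paper's receiver:

```python
import math

# Minimal Viterbi sketch with a pluggable branch metric. The trellis is a toy
# 2-state machine (state = previous bit, branch output = previous bit XOR
# input bit); a real receiver would use the standard's code/channel trellis.

def viterbi(received, metric):
    """received: noisy samples; metric(sample, expected_bit) -> branch cost."""
    states = (0, 1)
    cost = {0: 0.0, 1: math.inf}   # assume the encoder starts in state 0
    path = {0: [], 1: []}
    for r in received:
        new_cost, new_path = {}, {}
        for s in states:           # next state s equals the current input bit
            prev_cost, prev = min((cost[p] + metric(r, p ^ s), p) for p in states)
            new_cost[s] = prev_cost
            new_path[s] = path[prev] + [s]
        cost, path = new_cost, new_path
    return path[min(states, key=cost.get)]

# Euclidean branch metric: optimal when the residual error is Gaussian.
euclidean = lambda r, b: (r - b) ** 2

# Absolute-error metric (negative log of a Laplacian residual density, up to
# constants): a heavier-tailed assumption, more robust to impulsive noise.
# This particular choice is an illustrative assumption, not the paper's.
laplacian = lambda r, b: abs(r - b)

decoded = viterbi([0.1, 0.9, 3.5, 0.2], laplacian)
```

The point of the sketch is the `metric` parameter: fitting it to the measured residual-error distribution is what distinguishes the interference-robust decoder from the plain Euclidean one.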
Conference Paper
Full-text available
We investigate the performance of low-density parity-check (LDPC) codes, single-parity turbo product codes (TPC/SPC), and multi-parity turbo product codes (TPC/MPC) over various partial-response (PR) channels encountered in magnetic and magneto-optical (MO) recording systems, such as PR4/EPR4 and PR1/PR2 channels. The codes are similar in structure and can be decoded using simple message-passing algorithms. We show that the combination of a TPC/SPC code and a precoded PR channel results in a good distance spectrum due to interleaving gain. Density evolution is then used to compute the thresholds for TPC/SPC and LDPC codes over PR channels. Through analysis and simulations, we show that the three types of codes yield comparable bit error rate performance with similar complexity, but they exhibit quite different error statistics, which in turn may result in sharp differences in the block failure rate after the Reed-Solomon error correction code (RS-ECC).
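For context on the PR targets named above (standard polynomial forms stated from general knowledge, not from this paper): PR1 = 1 + D, PR2 = (1 + D)^2, PR4 = (1 - D)(1 + D) = 1 - D^2, and EPR4 = (1 - D)(1 + D)^2. A small sketch of passing data through such a target, with an assumed SNR convention:

```python
import numpy as np

# Common partial-response target polynomials in the delay operator D,
# written as tap vectors (coefficients of 1, D, D^2, ...).
TARGETS = {
    "PR1":  [1, 1],           # 1 + D
    "PR2":  [1, 2, 1],        # (1 + D)^2
    "PR4":  [1, 0, -1],       # (1 - D)(1 + D)
    "EPR4": [1, 1, -1, -1],   # (1 - D)(1 + D)^2
}

def pr_channel(bits, target, snr_db=12.0, rng=np.random.default_rng(0)):
    """Map bits to +/-1, filter with the PR target, add white Gaussian noise.
    The SNR definition (signal power over noise power) is an assumption."""
    symbols = 2.0 * np.asarray(bits, dtype=float) - 1.0
    clean = np.convolve(symbols, TARGETS[target])
    noise_std = np.sqrt(np.mean(clean**2) / 10 ** (snr_db / 10))
    return clean + rng.normal(0.0, noise_std, clean.shape)

samples = pr_channel([1, 0, 1, 1, 0, 0, 1], "PR4")
```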
Article
Full-text available
Turbo codes and low-density parity-check (LDPC) codes with iterative decoding have received significant research attention because of their remarkable near-capacity performance over additive white Gaussian noise (AWGN) channels. Turbo code and LDPC code variants have previously been investigated as potential candidates for high-density magnetic recording channels suffering from low signal-to-noise ratios (SNR). We address the application of turbo codes and LDPC codes to magneto-optical (MO) recording channels. Our results focus on a variety of practical MO storage channel aspects, including storage density, partial-response targets, the type of precoder used, and mark edge jitter. Instead of focusing only on bit error rates (BER), we also study the block error statistics. Our results for MO storage channels indicate that turbo codes of rate 16/17 can achieve coding gains of 3-5 dB over partial-response maximum-likelihood (PRML) methods for a 10^-4 target BER. Simulations also show that the performance of LDPC codes for MO channels is comparable to that of turbo codes, while requiring less computational complexity. Both LDPC codes and turbo codes with iterative decoding are seen to be robust to mark edge jitter.
Article
Full-text available
This paper proposes a general and systematic code design method to efficiently combine constrained codes with parity-check (PC) codes for optical recording. The proposed constrained PC code includes two component codes: the normal constrained (NC) code and the parity-related constrained (PRC) code. They are designed based on the same finite state machine (FSM). The rates of the designed codes are only a few tenths below the theoretical maximum. The PC constraint is defined by the generator matrix (or generator polynomial) of a linear binary PC code, which can detect any type of dominant error event or error event combination of the system. Error propagation due to parity bits is avoided, since both component codes are protected by PCs. Two approaches are proposed to design the code in the non-return-to-zero-inverse (NRZI) format and the non-return-to-zero (NRZ) format, respectively. Designing the codes in NRZ format may reduce the number of parity bits required for error detection and simplify post-processing for error correction. Examples of several newly designed codes are illustrated. Simulation results with Blu-ray Disc (BD) systems show that the new d = 1 constrained 4-bit PC code significantly outperforms the rate-2/3 code without parity, at both nominal density and high density.
Article
A post-Viterbi processor has found wide acceptance in recording systems since it can correct dominant error events at the channel detector output using only a few parity bits, and can thereby significantly reduce the correction-capacity loss of the error correction code. This paper presents two novel techniques for minimizing the mis-correction of a post-Viterbi processor based on an error detection code. One is a method for achieving a low probability of mis-selecting the actual error-type. The other is a method for achieving a low probability of mis-positioning the error-location of an error event that has occurred. Simulation results show that applying these techniques to a conventional post-Viterbi processor considerably reduces the probability of mis-correction, and the performance approaches the corresponding bit error rate and symbol error rate bounds.
Conference Paper
We discuss an error detection technique geared to a prescribed set of error events. The traditional method of error detection and correction attempts to detect/correct as many erroneous bits as possible within a codeword, irrespective of the pattern of the error events. The proposed approach, on the other hand, is less concerned about the total number of erroneous bits it can detect, but focuses on specific error events of known types. We take perpendicular recording systems as an application example. Distance analysis and simulation can easily identify the dominant error events for the given noise environment and operating density. We develop a class of simple error detection parity check codes that can detect these specific error events. The proposed coding method, when used in conjunction with post-Viterbi error correction processing, provides a substantial performance gain compared to the uncoded perpendicular recording system.
Conference Paper
We investigate the performance of single- and multiple-parity codes in magnetic recording systems. We evaluate the codes with respect to bit error rate as well as error correction code (ECC) failure rate. While multiple-parity codes outperform the single-parity code in bit error rate, their performance is worse with respect to the ECC failure rate.
Article
We present a particular generator polynomial for a cyclic redundancy check (CRC) code that can detect all dominant error events in perpendicular recording over a broad range of densities. This polynomial is also effective in detecting error events that occur at codeword boundaries. The bit-error-rate and sector-error-rate performances that result from using the corresponding CRC code in conjunction with the well-known post-Viterbi error correction method are validated.
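The detectability condition behind such a design can be sketched: an error polynomial e(x) goes undetected by a CRC exactly when the generator polynomial g(x) divides it. The generator and error patterns below are illustrative placeholders, not the polynomial from this paper:

```python
# Sketch of the detectability condition for CRC-based error-event detection.
# A polynomial over GF(2) is a bit list, highest-degree coefficient first.

def gf2_remainder(dividend, divisor):
    """Remainder of binary polynomial long division over GF(2)."""
    dividend = list(dividend)
    for i in range(len(dividend) - len(divisor) + 1):
        if dividend[i]:
            for j, bit in enumerate(divisor):
                dividend[i + j] ^= bit
    return dividend[-(len(divisor) - 1):] if len(divisor) > 1 else []

def detects(generator, error_event):
    """An error event e(x) is detected iff g(x) does not divide e(x).
    For g(x) with a nonzero constant term, shifting the event (x^k * e(x))
    does not change divisibility, so the position can be ignored here."""
    return any(gf2_remainder(error_event, generator))

# Hypothetical example: g(x) = x^4 + x + 1 against a few short error events.
g = [1, 0, 0, 1, 1]
for e in ([1], [1, 1], [1, 1, 1], [1, 0, 0, 1, 1]):
    print(e, detects(g, e))   # the last event equals g(x), so it slips through
```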
Article
The performance of single-parity codes used in conjunction with the Reed-Solomon error-correcting code (ECC) is investigated. Specifically, the tradeoff between simply increasing the ECC power and using a parity code is explored.
Article
Recent work on turbo codes applied to partial-response (PR) optical recording channels has focused on unconstrained channels. In this paper, we consider the application of turbo codes to a (1,7) constrained, PR-equalized optical recording channel with digital versatile disc (DVD) parameters. The addition of a (1,7) run-length-limited (RLL) constraint requires the use of a soft RLL decoder to communicate with the turbo code. Although soft RLL decoders were previously developed for use with iterative decoding, their application to a practical optical channel has not been addressed until now. Here, results on both correlated-noise and media-noise optical recording channel models are given for two PR targets, 1 + D and 1 + D + D^2 + D^3. We achieved coding gains of 4 to 6.3 dB over a baseline RLL-coded system. We also evaluated system performance at smaller mark sizes, and found that density gains of 17% and 22% are achievable for the two targets, respectively.
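For reference, a (d, k) run-length-limited constraint requires at least d and at most k zeros between consecutive ones in the encoded bit stream; for (1,7) RLL, d = 1 and k = 7. A minimal checker (illustrative, with boundary runs before the first one handled loosely; not from the paper):

```python
def satisfies_rll(bits, d=1, k=7):
    """Check the (d, k) run-length constraint: every run of zeros between
    consecutive ones must have a length in [d, k]."""
    run = None  # zero-run length since the last one; None before the first one
    for b in bits:
        if b == 1:
            if run is not None and not (d <= run <= k):
                return False
            run = 0
        elif run is not None:
            run += 1
            if run > k:          # too many zeros before the next one arrives
                return False
    return True

assert satisfies_rll([0, 1, 0, 0, 1, 0, 1, 0])
assert not satisfies_rll([1, 1, 0, 0, 1])   # adjacent ones violate d = 1
```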
Article
The performance of magnetic recording systems that combine conventional modulation codes with multiple parity bits is studied. Various performance measures, including the bit error rate at the output of the inverse precoder, the byte error probability at the input of the Reed-Solomon (RS) decoder, and the sector error rate, are used to evaluate the performance of various coding/detection schemes. Suboptimum detection/decoding schemes consisting of a 16-state noise-predictive maximum-likelihood (NPML) detector followed by parity-based noise-predictive post-processing, as well as maximum-likelihood sequence detection/decoding on the combined channel/parity trellis, are considered. For conventional modulation codes, it is shown that although the dual-parity post-processor gains 0.5 dB over the single-parity post-processor in terms of bit- and byte-error-rate performance, the sector-error-rate performance of both schemes is almost the same. Furthermore, the sector-error-rate performance of optimum 64-state combined channel/parity detection for the dual-parity code is shown to be approximately 0.1 dB better than that of optimum 32-state combined channel/parity detection for the single-parity code. These performance gains can be even more substantial if appropriate coding techniques that eliminate certain error events and minimize the error burst length, or multiparity codes in conjunction with combined parity/channel detection, are used.
Article
This paper presents models and techniques for fairly comparing detection systems for magnetic recording and for choosing the optimal tradeoff between error correction code (ECC) overhead and detection performance. Models are presented for the soft bit error rate (SBER) and the hard bit error rate (HBER), and optimization is carried out for both standard and modified concatenated coding schemes. The effect of error propagation in the RLL code is taken into account, providing a more realistic method for comparing the overall system performance of different combinations of detector, RLL code, and ECC.
Article
In this paper, a modified equalization target is proposed for the high-density magnetic recording channel. This target is a closer match to the channel than the EEPR4 response and hence has better detection performance. Based on the dominant error events for this target, a parity-based coding scheme is also proposed to achieve a coding gain with the modified target. The parity code detects the occurrence of the dominant error events while maintaining a high code rate. The detection system consists of a Viterbi detector matched to the channel response and a post-processor to handle the code constraints. This system is shown to perform well compared to other proposed detection systems, through analysis and simulation on a Lorentzian-based channel model.
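The post-processor idea recurring throughout these abstracts can be sketched as an error-event matched filter: correlate the estimated noise sequence with each dominant error event's channel signature and flag the strongest match. Everything below (the event list, the alignment conventions, the single event polarity) is a generic illustration under assumed conventions, not this paper's exact design:

```python
import numpy as np

# Generic sketch of an error-correlation-filter post-processor.
# y: equalized samples, x_hat: detected +/-1 symbols, target: PR target taps.
def best_error_event(y, x_hat, target, events=([2], [2, -2], [2, -2, 2])):
    """Correlate the residual noise with each dominant error event's channel
    signature; return (score, position, event) for the strongest match, or
    (0.0, None, None) if no event improves the likelihood."""
    noise = y - np.convolve(x_hat, target)[: len(y)]  # residual noise estimate
    best = (0.0, None, None)
    for e in events:                 # +/-2 steps: a flipped +/-1 symbol
        sig = np.convolve(e, target)  # event as seen through the channel
        energy = float(sig @ sig)
        for k in range(len(y) - len(sig) + 1):
            # log-likelihood gain of applying event e at position k (AWGN):
            # ||noise||^2 - ||noise - sig||^2 = 2<noise, sig> - ||sig||^2
            score = 2.0 * float(noise[k:k + len(sig)] @ sig) - energy
            if score > best[0]:
                best = (score, k, e)
    return best
```

Only positive scores are returned, i.e. only corrections that would actually increase the likelihood; a parity check would then confirm whether the flagged event is consistent with the syndrome.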
S. Lin and D. J. Costello, Error Control Coding: Fundamentals and Applications.

His work spans the Compact Disc, CD-ROM, CD-Video, Digital Audio Tape recorder, Digital Compact Cassette (DCC) system, Digital Versatile Disc (DVD), Video Disc Recorder, and Blu-ray Disc. He received widespread recognition for his many contributions to the technologies of video, audio, and data recording. He received a Knighthood in 2000, a personal 'Emmy' award in 2004, the 1996 IEEE Masaru Ibuka Consumer Electronics Award, the 1998 IEEE Edison Medal, the 1999 AES Gold Medal, the 2004 SMPTE Progress Medal, and, with Jun Lee, the Chester Sall Award for the first-place best paper in the IEEE Transactions on Consumer Electronics in 2009. He was named a Fellow of the IEEE, AES, and SMPTE, was inducted into the Consumer Electronics Hall of Fame, and was elected to the Royal Netherlands Academy of Sciences and the US National Academy of Engineering. He served the profession as President of the Audio Engineering Society, New York, in 2003.