May 1987 · 53 Reads · 155 Citations · IEEE Communications Magazine
November 1986 · 13 Reads · 9 Citations
In a recent experimental project, a long, high rate Reed-Solomon error correction codec was developed and integrated into a satellite TDMA system. Although the coding overhead was only about 6%, a dramatic error rate improvement was achieved for channel error rates less than 5×10^-4. This paper describes the design and implementation of the codec, its integration into a TDMA format, and the results of the experimental testing.
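As a rough illustration of the overhead arithmetic only (the experimental codec's exact parameters are not stated above), a Reed-Solomon code over GF(256) with 16 parity symbols per 255-symbol codeword is one plausible configuration in the quoted 6% range; the small Python sketch below just works out the numbers.

    # Illustrative only: assumes a (255, 239) Reed-Solomon code over GF(256),
    # one plausible "long, high-rate" choice; the paper's actual parameters
    # are not given in this summary.
    n, k = 255, 239              # codeword length and message length, in symbols
    parity = n - k               # 16 parity symbols per codeword
    overhead = parity / n        # fraction of the channel spent on parity
    t = parity // 2              # symbol errors correctable per codeword
    print(f"overhead = {overhead:.1%}, corrects up to {t} symbol errors")
    # -> overhead = 6.3%, corrects up to 8 symbol errors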
January 1985 · 7 Reads · 1 Citation
In many error-correction coding systems, the required decoding time is a random variable which depends on the noise severity. When there are few errors, the decoder has a relatively easy time, but when there are many errors, the decoder must work much harder. If the “many errors” situation is relatively rare, the average decoding time may be much less than the maximum decoding time. If this is so, it will be possible to increase the decoder’s effective speed dramatically, through the use of a buffered decoder architecture.
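The buffering argument can be made concrete with a toy single-server queue; everything in the sketch below (arrival rate, error statistics, cost model) is invented for illustration and is not taken from the paper.

    import random

    # Toy model: one block arrives per time unit; decoding time grows with the
    # number of errors in the block.  Heavy-error blocks are rare, so the average
    # decode time (~0.62 here) stays well under the arrival period even though
    # the worst case (3.7) exceeds it, and a modest FIFO buffer absorbs the backlog.
    random.seed(1)
    decoder_free_at, worst_delay = 0.0, 0.0
    for i in range(100_000):
        arrival = float(i)                                    # one block per unit time
        errors = random.choices([0, 1, 2, 8], weights=[80, 15, 4, 1])[0]
        decode = 0.5 + 0.4 * errors                           # assumed cost model
        decoder_free_at = max(decoder_free_at, arrival) + decode
        worst_delay = max(worst_delay, decoder_free_at - arrival)
    print("worst buffering delay (in block periods):", round(worst_delay, 2))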
June 1983 · 8 Reads · 21 Citations · IEEE Transactions on Information Theory
A detailed description is given of a fast soft decision decoding procedure for high-rate block codes. The high speed is made possible (in part) by using the symmetries of the code to simplify the syndrome decoding by table look-up and by making the best use of the soft decision information. The (128,106,8) BCH code is used as an example.
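The look-up idea itself can be shown on a much smaller code; the sketch below uses the (7,4) Hamming code as a stand-in and is hard-decision only, so it omits the paper's use of code symmetries and soft-decision reliabilities.

    # Stand-in example: hard-decision syndrome decoding by table look-up for the
    # (7,4) Hamming code; the paper applies the same idea to the much larger
    # (128,106,8) BCH code, together with symmetries and soft-decision information.
    H_ROWS = [0b1010101, 0b0110011, 0b0001111]      # parity checks; MSB = position 1

    def syndrome(word):
        return tuple(bin(word & row).count("1") & 1 for row in H_ROWS)

    # Precompute syndrome -> lowest-weight error pattern (single-bit errors
    # suffice for this distance-3 code).
    table = {syndrome(0): 0}
    for pos in range(7):
        table[syndrome(1 << pos)] = 1 << pos

    codeword = 0b0010110                             # a valid (7,4) Hamming codeword
    received = codeword ^ 0b0001000                  # channel flips one bit
    corrected = received ^ table[syndrome(received)]
    assert corrected == codeword
    print(f"{corrected:07b}")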
January 1983 · 9 Reads · 10 Citations
In many error-correction coding systems, the required decoding time is a random variable which depends on the noise severity. When there are few errors, the decoder has a relatively easy time, but when there are many errors, the decoder must work much harder. If the “many errors” situation is relatively rare, the average decoding time may be much less than the maximum decoding time. If this is so, it will be possible to increase the decoder’s effective speed dramatically, through the use of a buffered decoder architecture.
November 1982 · 87 Reads · 303 Citations · IEEE Transactions on Information Theory
June 1980 · 54 Reads · 250 Citations · Proceedings of the IEEE
This paper is a survey of error-correcting codes, with emphasis on the costs of encoders and decoders, and the relationship of these costs to various important system parameters such as speed and delay. Following an introductory overview, the remainder of this paper is divided into three sections corresponding to the three major types of channel noise: white Gaussian noise, interference, and digital errors of the sort which occur in secondary memories such as disks. Appendix A presents some of the more important facts about modern implementations of decoders for long high-rate Reed-Solomon codes, which play an important role throughout the paper. Appendix B investigates some important aspects of the tradeoffs between error correction and error detection.
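One piece of the Appendix B tradeoff can be stated in a line: a code of minimum distance d can be decoded so as to correct up to t errors while still detecting up to s >= t errors whenever t + s <= d - 1. The short enumeration below uses d = 8 purely as an example (it happens to be the distance of the BCH code appearing elsewhere in this list).

    # Illustrative arithmetic for the correction/detection tradeoff at a fixed
    # minimum distance; d = 8 is an example value, not a parameter from the paper.
    d = 8
    for t in range(d // 2):
        s = d - 1 - t                    # largest detectable count for this t
        print(f"correct up to {t} errors while detecting up to {s}")
    # -> (t, s) = (0, 7), (1, 6), (2, 5), (3, 4)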
October 1978 · 10 Reads · 21 Citations · IEEE Transactions on Information Theory
Since well-known decoding algorithms [1],[2] are able to correct both character errors and character erasures with q-ary Reed-Solomon codes, some modulation and demodulation schemes designed to be used with such codes provide a channel that has q inputs and q + 1 outputs, q of which correspond to the q inputs and one of which corresponds to the "erasure" symbol. This correspondence points out the relative advantages offered by a slightly more refined demodulation scheme, which creates a digital channel that has q inputs and 2q outputs, q of which correspond to "strong" receptions of the inputs and q of which correspond to "weak" receptions of the inputs. For decoding purposes, all q weak outputs may be treated as erasures, but the fact that the channel now provides additional information facilitates improved decoding performance by reading the weak characters under the erasures.
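A minimal sketch of such a 2q-output demodulator follows; the alphabet, thresholds, and reliability rule are all invented for illustration, the point being only that each output carries a symbol decision plus a strong/weak flag.

    # Hypothetical 2q-output demodulator for a q-ary channel (here q = 4).
    # Thresholds are invented; the decoder may treat "weak" outputs as erasures
    # yet still read the symbol remembered underneath if it needs to.
    Q_LEVELS = [0.0, 1.0, 2.0, 3.0]          # nominal levels of a 4-ary alphabet
    STRONG_RADIUS = 0.15                     # within this of a level => "strong"

    def demodulate(sample):
        sym = min(range(len(Q_LEVELS)), key=lambda i: abs(sample - Q_LEVELS[i]))
        strong = abs(sample - Q_LEVELS[sym]) <= STRONG_RADIUS
        return sym, strong                   # one of 2q outputs

    for sample in (0.05, 0.48, 2.90, 1.62):
        sym, strong = demodulate(sample)
        tag = "strong" if strong else "weak (erasure, symbol kept underneath)"
        print(f"{sample:4.2f} -> symbol {sym}, {tag}")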
June 1978 · 188 Reads · 1,185 Citations · IEEE Transactions on Information Theory
The fact that the general decoding problem for linear codes and the general problem of finding the weights of a linear code are both NP-complete is shown. This strongly suggests, but does not rigorously imply, that no algorithm for either of these problems which runs in polynomial time exists.
May 1978 · 107 Reads · 1,011 Citations · IEEE Transactions on Information Theory
The fact that the general decoding problem for linear codes and the general problem of finding the weights of a linear code are both NP-complete is shown. This strongly suggests, but does not rigorously imply, that no algorithm for either of these problems which runs in polynomial time exists.
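For reference, the decoding problem shown NP-complete can be paraphrased as the following decision problem (a paraphrase, not a quotation from the paper):

    COSET WEIGHTS (maximum-likelihood decoding, decision form)
      Instance: a binary m x n matrix H, a binary vector s of length m,
                and a non-negative integer w.
      Question: is there a binary length-n vector e of Hamming weight at
                most w with He = s (arithmetic over GF(2))?

The companion weight problem asks, given H and w, whether there is a nonzero x with Hx = 0 and weight exactly w; the paper's reductions are from the three-dimensional matching problem.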
... The steps are of high efficiency because we can find the roots of a polynomial in polynomial time [29]. One can show that using these steps, we can solve the HDLP by computing at most m DLPs, which can be done in sub-exponential time [29] with a classical computer, or in polynomial time with a quantum computer [3]. ...
July 1970
... Such solution formulas have been the mathematical basis of fast algorithms to compute the roots of polynomials f(X) ∈ F_q[X] of degree up to 4, where F_q is the finite field with q elements. Algorithms using algebraic solution formulas to find the roots of polynomials of degree up to 4 are given, for example, in [2], [3], [6], [9], [11]. If the degree of the polynomial exceeds 4, no solution formula for the roots of f(X) in terms of radicals exists in general, and other methods are used. ...
January 1966
... Moreover, there exist some self-dual codes with a square-root-like lower bound constructed in [8, 16-18, 29, 31, 33] and the references therein. Some families of rate-1/2 ternary negacyclic codes and constacyclic codes (e.g., optimal [15,7,5] and [15,5,7] codes and best-known [63,39,9] and [63,36,11] codes) were developed in [28, 29], which exhibit a square-root-like lower bound on their minimum distances. It remains an interesting problem whether there exists an infinite family of asymptotically good cyclic codes [26]. ...
October 1967 · Bell System Technical Journal
... Any such code is a subgroup of V_n and so partitions V_n into cosets. The classification of these cosets is an unsolved problem for most codes, the cosets of the first-order Reed-Muller code having received attention recently in Berlekamp (1968b and 1970), Sloane and Dick (1971), Holmes (1971), and Berlekamp and Welch (1972). In this note we examine some properties of the structure code of a vector, and answer a question proposed by Sloane and Dick (1971). ...
July 1970 · Bell System Technical Journal
... Since char(F) ≤ q and [F : F_p] = [F : F_q] · [F_q : F_p] = (q − 1) log q, we have [F : F_p] ≤ q log q. Using Berlekamp's deterministic factorization algorithm, we can find all roots of R_2(Y_1) in time poly(d, q) [Ber70], [Ker09]. Each such root is retrieved as an element in F, which corresponds to a polynomial f(X) ∈ F_q[X] of degree less than q − 1. ...
October 1967 · Bell Labs Technical Journal
... It is well-known that the minimum nonzero Hamming weight of RM(r, n) equals 2^(n−r) (see [22, Chapter 13], and see [8, Chapter 4] for a more direct proof), and that the nonzero minimum weight codewords in this code are the indicators of the (n − r)-dimensional affine subspaces of F_2^n. All the low Hamming weights are known in all Reed-Muller codes, and there are very few: Berlekamp and Sloane [4] (see the Addendum in this paper) and Kasami and Tokura [16] have shown that, for r ≥ 2, the only Hamming weights in RM(r, n) occurring in the range [2^(n−r), 2^(n−r+1)) are of the form 2^(n−r+1) − 2^(n−r+1−i), where i ≤ max(min(n − r, r), (n−r+2)/2). The latter has completely characterized the codewords: the corresponding functions are affinely equivalent either to x_1···x_(r−2)(x_(r−1)x_r + x_(r+1)x_(r+2) + ··· + x_(r+2l−3)x_(r+2l−2)), 2 ≤ 2l ≤ n − r + 2, or to x_1···x_(r−l)(x_(r−l+1)···x_r + x_(r+1)···x_(r+l)), 3 ≤ l ≤ min(r, n − r). ...
May 1969 · Information and Control
... For S = (s_1, ..., s_n): let D_n denote the dihedral group of a regular n-gon. Then we say two n-tuples S and R are related if either R = σ(S) or R = σ(ℜ(S)) for some σ ∈ D_n. If S and R are related, then we write S ~ R. It is easily seen that ~ is an equivalence relation. ...
Reference: Length of the 7-Number Game
January 1975 · Mathematics of Computation
... When designing decoders, taking into account only the worst case decoding complexity is suboptimal. In [2], which discusses a traditional Reed-Solomon decoder, a similar observation (experimental in that case) of error-weight dependent decoding time is made. That observation then motivates a buffered implementation of the decoder that dramatically increases its effective speed. ...
January 1983
... Such solution formulas have been the mathematical basis of fast algorithms to compute the roots of polynomials f(X) ∈ F_q[X] of degree up to 4, where F_q is the finite field with q elements. Algorithms using algebraic solution formulas to find the roots of polynomials of degree up to 4 are given, for example, in [2], [3], [6], [9], [11]. If the degree of the polynomial exceeds 4, no solution formula for the roots of f(X) in terms of radicals exists in general, and other methods are used. ...
June 1967 · Information and Control
... In 1973 Sloane [34] posed a question which remains unresolved: is there a binary self-dual doubly-even [72, 36, 16] code? The automorphism group of the extended Golay code is the 5-transitive Mathieu group M_24 of order 2^10·3^3·5·7·11·23 (see [3]), whereas the automorphism group of q_48 is only 2-transitive and is isomorphic to the projective special linear group PSL(2, 47) of order 2^4·3·23·47 [25]. The first authors to study the automorphism group of the putative [72, 36, 16] code were Conway and Pless [13]; in particular they focused on the possible automorphisms of odd prime order. ...
February 1971 · Information and Control