
Decoders minimizing the Euclidean distance between the received word and the candidate codewords are known to be optimal for channels suffering from Gaussian noise. However, when the stored or transmitted signals are also corrupted by an unknown offset, other decoders may perform better. In particular, applying the Euclidean distance on normalized words makes the decoding result independent of the offset. The use of this distance measure calls for alternative code design criteria in order to get good performance in the presence of both noise and offset. In this context, various adapted versions of classical binary block codes are proposed, such as (i) cosets of linear codes, (ii) (unions of) constant weight codes, and (iii) unordered codes. It is shown that considerable performance improvements can be achieved, particularly when the offset is large compared to the noise.
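The offset-immune measure described above can be sketched as the Euclidean distance between zero-mean (normalized) versions of the received word and each candidate codeword: subtracting each word's mean cancels any constant offset. A minimal illustration, with function names chosen for this sketch (not the authors' notation):

```python
import numpy as np

def offset_invariant_distance(r, x):
    """Euclidean distance between zero-mean versions of r and x.
    Subtracting each word's own mean removes any constant offset on r."""
    r = np.asarray(r, dtype=float)
    x = np.asarray(x, dtype=float)
    return np.sum(((r - r.mean()) - (x - x.mean())) ** 2)

def decode(r, codebook):
    """Pick the codeword minimizing the offset-invariant distance."""
    return min(codebook, key=lambda c: offset_invariant_distance(r, c))

# The distance is unchanged when an unknown offset (here 0.7) corrupts r:
r = np.array([0.9, 0.1, 1.1, 0.0])
x = np.array([1, 0, 1, 0])
assert np.isclose(offset_invariant_distance(r, x),
                  offset_invariant_distance(r + 0.7, x))
```

Note that this invariance is exactly why ordinary linear codes need adaptation: two codewords differing only by a constant (such as the all-zero and all-one words) become indistinguishable, motivating the cosets, constant-weight codes, and unordered codes mentioned above.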


... A novel decoding algorithm for the concatenated scheme is proposed, aiming to better exploit its error-correction potential. The concatenation is between a Reed-Solomon (RS) code and a certain coset of a block code proposed in [66]. The modified Pearson distance detection is used to decode the inner code. ...

... For noisy channels with unknown offset, we design a concatenated code. A Reed-Solomon (RS) code [69] and a certain coset of a binary block code proposed in [66] are used as outer and inner codes, respectively. The two codes are chosen such that the inner code has a short length. ...

... Binary block codes proposed in [66] work well with the modified Pearson-distance-based decoding criterion (2.39), which guarantees immunity to channel offset mismatch. Note that for any binary linear block code S containing the all-one vector, the minimum δ_P distance is zero, since δ_P(0, 1) = 0. ...

We consider the transmission and storage of data that use coded symbols over a channel, where a Pearson-distance-based detector is used for achieving resilience against unknown channel gain and offset, and corruption with additive noise. We discuss properties of binary Pearson codes, such as the Pearson noise distance that plays a key role in the error performance of Pearson-distance-based detection. We also compare the Pearson noise distance to the well-known Hamming distance, since the latter plays a similar role in the error performance of Euclidean-distance-based detection.

We consider the transmission and storage of encoded strings of symbols over a noisy channel, where dynamic threshold detection is proposed for achieving resilience against unknown scaling and offset of the received signal. We derive simple rules for dynamically estimating the unknown scale (gain) and offset. The estimates of the actual gain and offset so obtained are used to adjust the threshold levels or to re-scale the received signal within its regular range. Then, the re-scaled signal, brought into its standard range, can be forwarded to the final detection/decoding system, where optimum use can be made of the distance properties of the code by applying, for example, the Chase algorithm. A worked example of a spin-torque transfer magnetic random access memory (STT-MRAM) with an application to an extended (72, 64) Hamming code is described, where the retrieved signal is perturbed by additive Gaussian noise and unknown gain or offset.
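The dynamic-threshold idea above — estimate the unknown gain and offset, then rescale the signal into its standard range before decoding — can be illustrated with a simple cluster-mean estimator. This is an illustrative rule only, not the estimators derived in the paper; all names are chosen for this sketch:

```python
import numpy as np

def estimate_gain_offset(r):
    """Cluster-mean estimate of gain a and offset b for a binary signal
    modelled as r = a*x + b + noise, x in {0, 1}.
    (Illustrative rule; the paper derives its own simple estimators.)"""
    r = np.asarray(r, dtype=float)
    thr = r.mean()                      # provisional threshold
    low, high = r[r <= thr], r[r > thr]
    b = low.mean()                      # estimated level for x = 0
    a = high.mean() - b                 # estimated gain (level spacing)
    return a, b

def rescale(r, a, b):
    """Map the received signal back into its standard [0, 1] range, so it
    can be forwarded to a conventional detector/decoder."""
    return (np.asarray(r, dtype=float) - b) / a

# Example: true gain 1.8, offset 0.4 (noise omitted for clarity)
x = np.array([1, 0, 1, 1, 0, 0, 1, 0])
r = 1.8 * x + 0.4
a, b = estimate_gain_offset(r)
x_hat = rescale(r, a, b)
```

After rescaling, the signal is back in its nominal range and a code's full distance properties (e.g. via the Chase algorithm, as in the paper's (72, 64) Hamming-code example) can be exploited.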

The recently proposed Pearson codes offer immunity against channel gain and offset mismatch. These codes have very low redundancy, but efficient coding procedures were lacking. In this paper, systematic Pearson coding schemes are presented. The redundancy of these schemes is analyzed for memoryless uniform sources. It is concluded that simple coding can be established at only a modest rate loss.

The Pearson distance has been advocated for improving the error performance
of noisy channels with unknown gain and offset. The Pearson distance can only
fruitfully be used for sets of $q$-ary codewords, called Pearson codes, that
satisfy specific properties. We will analyze constructions and properties of
optimal Pearson codes. We will compare the redundancy of optimal Pearson codes
with the redundancy of prior art $T$-constrained codes, which consist of
$q$-ary sequences in which $T$ pre-determined reference symbols appear at least
once. In particular, it will be shown that for $q\le 3$ the $2$-constrained
codes are optimal Pearson codes, while for $q\ge 4$ these codes are not
optimal.

The performance of certain transmission and storage channels, such as optical data storage and nonvolatile memory (flash), is seriously hampered by the phenomena of unknown offset (drift) or gain. We will show that minimum Pearson distance (MPD) detection, unlike conventional minimum Euclidean distance detection, is immune to offset and/or gain mismatch. MPD detection is used in conjunction with $T$-constrained codes that consist of $q$-ary codewords, where in each codeword $T$ reference symbols appear at least once. We will analyze the redundancy of the new $q$-ary coding technique and compute the error performance of MPD detection in the presence of additive noise. Implementation issues of MPD detection will be discussed, and results of simulations will be given.
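The Pearson distance underlying MPD detection is $\delta_P(r, x) = 1 - \rho(r, x)$, with $\rho$ the Pearson correlation coefficient; it is invariant to any offset and any positive gain on the received word. A minimal sketch (function names are illustrative):

```python
import numpy as np

def pearson_distance(r, x):
    """delta_P(r, x) = 1 - rho(r, x), where rho is the Pearson correlation
    coefficient. Invariant to any offset b and positive gain a on r."""
    r = np.asarray(r, dtype=float)
    x = np.asarray(x, dtype=float)
    rc, xc = r - r.mean(), x - x.mean()
    return 1.0 - np.dot(rc, xc) / (np.linalg.norm(rc) * np.linalg.norm(xc))

def mpd_detect(r, codebook):
    """Minimum Pearson distance detection over a candidate codebook.
    Codewords must be non-constant (a Pearson-code requirement), or the
    correlation is undefined."""
    return min(codebook, key=lambda c: pearson_distance(r, c))

r = np.array([0.2, 1.1, 0.9, 0.1])
# Scaling and shifting r does not change the Pearson distance:
assert np.isclose(pearson_distance(r, [0, 1, 1, 0]),
                  pearson_distance(2.5 * r + 0.7, [0, 1, 1, 0]))
```

This invariance is also the reason Pearson codes must exclude certain words (constant words, and pairs related by gain/offset), which is what the $T$-constrained construction enforces.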

The most common method to construct a t-error correcting/all unidirectional error detecting (EC/AUED) code is to choose a t-error correcting (EC) code and then to append a tail in such a way that the new code can detect more than t errors when they are unidirectional. The tail is a function of the weight of the codeword.
We present two new techniques for constructing t-EC/AUED codes. The first technique modifies the t-EC code in such a way that the weight distribution of the original code is reduced. So, a smaller tail is needed. Frequently, this technique gives less overall redundancy than the best
available t-EC/AUED codes.

The authors present families of binary systematic codes that can correct t random errors and detect more than t unidirectional errors. The first step of the construction is encoding the k information symbols into a codeword of an [n', k, 2t+1] error-correcting code. The second step involves adding more bits to this linear error-correcting code in order to obtain the detection capability of all unidirectional errors. Asymmetric error-correcting codes turn out to be a powerful tool in the proposed construction. The resulting codes significantly improve previous results. Asymptotic estimates and decoding algorithms are presented.

Flash, already one of the dominant forms of data storage for mobile consumer devices, such as smartphones and media players, is experiencing explosive growth in cloud and enterprise applications. Flash devices offer very high access speeds, low power consumption, and physical resiliency. Our goal in this article is to provide a high-level overview of error correction for Flash. We begin by discussing Flash functionality and design and introduce the nature of Flash deficiencies. Afterwards, we describe the basics of error-correcting codes (ECCs), discuss BCH and LDPC codes in particular, and wrap up the article with further directions for Flash coding.

The reliability of mass storage systems, such as optical data recording and non-volatile memory (Flash), is seriously hampered by uncertainty of the actual value of the offset (drift) or gain (amplitude) of the retrieved signal. The recently introduced minimum Pearson distance detection is immune to unknown offset or gain, but this virtue comes at the cost of a lessened noise margin at nominal channel conditions. We will present a novel hybrid detection method, where we combine the outputs of the minimum Euclidean distance and Pearson distance detectors so that we may trade detection robustness versus noise margin. We will compute the error performance of hybrid detection in the presence of unknown channel mismatch and additive noise.

Some new codes are described which are separable and are perfect error-detection codes in a completely asymmetric channel. Results are given comparing one simple form of the code, in which the check bits correspond to the sum of ones in the information bits, with the four-out-of-eight code. The new code is found to compare favorably in error-detection capability in several cases. In addition, some more complex codes of this type are indicated.
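The simple form described — a separable code whose check symbol encodes a bit count — is usually presented with the check holding the binary count of zeros (equivalently, the complement of the count of ones), which gives perfect detection on a completely asymmetric channel: 1 → 0 errors can only increase the zero count of the information part while only decreasing the stored check value, so the two can never re-agree. A minimal sketch under that form (names are illustrative):

```python
def berger_encode(info_bits):
    """Append a check symbol: the binary count of zeros in the
    information bits (the complement-of-ones-count form)."""
    k = max(1, len(info_bits).bit_length())  # bits needed for the count
    zeros = info_bits.count(0)
    check = [(zeros >> i) & 1 for i in range(k - 1, -1, -1)]
    return info_bits + check

def berger_check(word, n_info):
    """Detect any completely asymmetric (1 -> 0) error pattern: such
    errors only raise the recomputed zero count and only lower the
    stored check value, so any error forces a mismatch."""
    info, check = word[:n_info], word[n_info:]
    zeros = info.count(0)
    stored = 0
    for b in check:
        stored = (stored << 1) | b
    return zeros == stored  # True means no detectable error
```

For example, `berger_encode([1, 0, 1, 1, 0])` appends the 3-bit count of zeros (two), and flipping any 1 to 0 anywhere in the resulting word makes `berger_check` fail.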

Let A(n,d,w) denote the maximum possible number of codewords in an
(n,d,w) constant-weight binary code. We improve upon the best known
upper bounds on A(n,d,w) in numerous instances for n⩽24 and
d⩽12, which is the parameter range of existing tables. Most
improvements occur for d=8, 10, where we reduce the upper bounds in more
than half of the unresolved cases. We also extend the existing tables up
to n⩽28 and d⩽14. To obtain these results, we develop new
techniques and introduce new classes of codes. We derive a number of
general bounds on A(n,d,w) by means of mapping constant-weight codes
into Euclidean space. This approach produces, among other results, a
bound on A(n,d,w) that is tighter than the Johnson bound. A similar
improvement over the best known bounds for doubly-constant-weight codes,
studied by Johnson and Levenshtein, is obtained in the same way.
Furthermore, we introduce the concept of doubly-bounded-weight codes,
which may be thought of as a generalization of the
doubly-constant-weight codes. Subsequently, a class of Euclidean-space
codes, called zonal codes, is introduced, and a bound on the size of
such codes is established. This is used to derive bounds for
doubly-bounded-weight codes, which are in turn used to derive bounds on
A(n,d,w). We also develop a universal method to establish constraints
that augment the Delsarte inequalities for constant-weight codes, used
in the linear programming bound. In addition, we present a detailed
survey of known upper bounds for constant-weight codes, and sharpen
these bounds in several cases. All these bounds, along with all known
dependencies among them, are then combined in a coherent framework that
is amenable to analysis by computer. This improves the bounds on
A(n,d,w) even further for a large number of instances of n, d, and w.
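The Johnson bound that the abstract's new bound tightens can be computed by a simple recursion, $A(n,d,w) \le \lfloor \frac{n}{w} A(n-1,d,w-1) \rfloor$ with $A(n,d,w) = 1$ when $2w < d$. A minimal sketch (this is only the classical bound, not the paper's sharper techniques):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def johnson_upper(n, d, w):
    """Classical Johnson recursion giving an upper bound on A(n, d, w),
    the maximum size of a binary constant-weight code (d is even for
    constant-weight codes). Not the paper's tighter bounds."""
    if w == 0 or 2 * w < d:
        return 1            # two distinct weight-w words differ in <= 2w places
    if w > n:
        return 0
    # floor(n/w * A(n-1, d, w-1)) computed in exact integer arithmetic
    return (n * johnson_upper(n - 1, d, w - 1)) // w
```

For instance, `johnson_upper(8, 4, 4)` evaluates to 14, which here happens to match the known exact value of A(8, 4, 4); in general the recursion only upper-bounds the true maximum.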

In this paper we present some basic theory on unidirectional error correcting/detecting codes. We define symmetric, asymmetric, and unidirectional error classes and proceed to derive the necessary and sufficient conditions for a binary code to be unidirectional error correcting/detecting.
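The three error classes can be made concrete by classifying the flip pattern between a transmitted and a received word: mixed 1 → 0 and 0 → 1 flips are symmetric errors; flips all in one direction are unidirectional; and, by the usual convention, asymmetric means only 1 → 0 flips (so an asymmetric pattern is also unidirectional). A small illustrative classifier, with names chosen for this sketch:

```python
def error_class(sent, received):
    """Classify the error pattern between two equal-length binary words:
    'symmetric' (both flip directions), 'unidirectional' (0 -> 1 only),
    'asymmetric' (1 -> 0 only; also unidirectional), or 'none'."""
    one_to_zero = sum(1 for s, r in zip(sent, received) if s == 1 and r == 0)
    zero_to_one = sum(1 for s, r in zip(sent, received) if s == 0 and r == 1)
    if one_to_zero == 0 and zero_to_one == 0:
        return "none"
    if one_to_zero > 0 and zero_to_one > 0:
        return "symmetric"
    return "asymmetric" if zero_to_one == 0 else "unidirectional"
```

For example, `error_class([1, 1, 0, 0], [0, 1, 0, 0])` is `"asymmetric"`, while `error_class([1, 0], [0, 1])` is `"symmetric"`.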