We will present coding techniques for transmission and storage channels with unknown gain and/or offset. It will be shown that a codebook S of length-n q-ary codewords, in which all codewords have equal balance and energy, shows an intrinsic resistance against unknown gain and/or offset. Generating functions for evaluating the size of S will be presented. We will present an approximate expression for the code redundancy for asymptotically large values of n.
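For small parameters, the codebook S can be enumerated directly, which is a handy cross-check against the generating-function counts. A minimal brute-force sketch (function and variable names are our own, not from the paper):

```python
from itertools import product

def equal_balance_energy_codebook(q, n, a1, a2):
    """Brute-force the codebook S of length-n q-ary words whose
    symbol sum (balance) equals a1 and symbol energy equals a2."""
    return [w for w in product(range(q), repeat=n)
            if sum(w) == a1 and sum(x * x for x in w) == a2]

# Small example: q = 3, n = 4, balance 6 and energy 12.
# Only the arrangements of (2, 2, 2, 0) qualify, so |S| = 4.
S = equal_balance_energy_codebook(3, 4, 6, 12)
```

Because every codeword shares the same balance and energy, applying a common gain and offset to a codeword shifts all pairwise comparisons identically, which is the intrinsic-resistance property claimed above.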

... (ii) Up to now, various coding techniques have been applied to alleviate detection in the case of channel mismatch, such as rank modulation [32], balanced codes [33][34][35][36][37], and composition check codes [38]. ...

... The notion of dynamic thresholds based on balanced codes is introduced in [33] for the reading of binary sequences. It is further shown to be highly effective against errors caused by voltage drift in Flash memories [34][35][36]. A balanced code consists of the sequences where the number of ones equals the number of zeros. ...

... Gain and offset mismatch have a significant bearing on the error performance of MED, as related terms depend on $a$ and $b$. In the prior art, constrained codes, specifically dc/dc$^2$-balanced codes, are considered to counter the effects of gain and offset mismatch [36]. By definition, all codewords $x$ in a dc/dc$^2$-balanced code satisfy that the symbol sum $\sum_{i=1}^{n} x_i = a_1$ and the symbol energy $\sum_{i=1}^{n} x_i^2 = a_2$ are prescribed, where $a_1$ and $a_2$ are two positive integers selected by the code designer. ...

... Clearly, the redundancy of the method is two symbols per codeword. In a second prior art method, codes satisfying equal balance and energy constraints [8], which are immune to gain and offset mismatch, have been advocated. The redundancy of these codes, denoted by $r_0$, is given by [8] $r_0 \approx \log_q n + \log_q\left((q^2-1)\sqrt{q^2-4}\right) + \log_q \frac{\pi}{12\sqrt{15}}$. ...
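The approximate redundancy $r_0$ is easy to evaluate numerically. The grouping of terms in the quoted formula is partly garbled in the extracted text; the sketch below assumes the reading $r_0 \approx \log_q n + \log_q((q^2-1)\sqrt{q^2-4}) + \log_q(\pi/(12\sqrt{15}))$, which is valid for $q > 2$:

```python
import math

def approx_redundancy(q, n):
    # Approximate redundancy r0 (in q-ary symbols) of length-n codebooks
    # whose codewords all share the same balance and energy. The grouping
    # of terms is our reconstruction of the quoted formula; valid for q > 2.
    return (math.log(n, q)
            + math.log((q**2 - 1) * math.sqrt(q**2 - 4), q)
            + math.log(math.pi / (12 * math.sqrt(15)), q))

r0 = approx_redundancy(4, 256)   # roughly 4.9 quaternary symbols
```

For comparison, the first prior art method quoted above costs exactly two symbols per codeword regardless of $n$, so the balance-and-energy codes only pay off when the $\log_q n$ growth stays small relative to the codeword length.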

... where $\varphi(j)$ is Euler's totient function that counts the totatives of $j$, i.e., the positive integers less than or equal to $j$ that are relatively prime to $j$. We have computed the cardinalities of $N_1(q,n)$, $N_2(q,n)$, and $P_{q,n}$ by invoking (7), (8), and the expressions in Table I. Table II lists the results of our computations for selected values of $q$ and $n$. ...

The Pearson distance has been advocated for improving the error performance
of noisy channels with unknown gain and offset. The Pearson distance can only
fruitfully be used for sets of $q$-ary codewords, called Pearson codes, that
satisfy specific properties. We will analyze constructions and properties of
optimal Pearson codes. We will compare the redundancy of optimal Pearson codes
with the redundancy of prior art $T$-constrained codes, which consist of
$q$-ary sequences in which $T$ pre-determined reference symbols appear at least
once. In particular, it will be shown that for $q\le 3$ the $2$-constrained
codes are optimal Pearson codes, while for $q\ge 4$ these codes are not
optimal.
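The property that makes Pearson codes work can be checked numerically: the Pearson distance $\delta(r, x) = 1 - \rho(r, x)$, with $\rho$ the Pearson correlation coefficient, is unchanged when the received word undergoes a positive gain $a$ and an offset $b$. A minimal sketch (variable names are our own):

```python
import math

def pearson_distance(r, x):
    """Pearson distance delta(r, x) = 1 - rho(r, x), where rho is the
    Pearson correlation between received word r and codeword x."""
    n = len(r)
    rbar = sum(r) / n
    xbar = sum(x) / n
    num = sum((ri - rbar) * (xi - xbar) for ri, xi in zip(r, x))
    den = math.sqrt(sum((ri - rbar) ** 2 for ri in r)
                    * sum((xi - xbar) ** 2 for xi in x))
    return 1 - num / den

x = [0, 1, 2, 1]                 # a ternary codeword (not constant)
r = [0.1, 1.2, 1.9, 1.0]         # noisy received word
d0 = pearson_distance(r, x)

# A positive gain a and offset b on the received word leave delta unchanged.
a, b = 2.5, -0.7
d1 = pearson_distance([a * ri + b for ri in r], x)
```

Note that the denominator vanishes for constant codewords, which is one reason Pearson codes must satisfy the specific properties mentioned above: all-same-symbol words cannot be used.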

... For evaluating (7), the decoder requires $|S| = 2^n - 1$ computations of $\delta(r, x)$ plus comparisons, which makes the new method unattractive for very large $n$. It is shown in [6] that the (time) complexity of the prior art method based on (3) can be reduced to $n$ computations and comparisons using Slepian's method [14]. ...

We consider noisy communications and storage systems that are hampered by a varying offset of unknown magnitude, such as low-frequency signals of unknown amplitude added to the sent signal. We study and analyze a new detection method whose error performance is independent of both the unknown base offset and the offset's slew rate. The new method requires, for a codeword length n ≥ 12, less than 1.5 dB more noise margin than Euclidean distance detection. The relationship between constrained codes based on mass-centered codewords and the new detection method is discussed.

... Another example where the data undergoes possible corruption is in a compact disc, where a scratch or fingerprint leads to unknown gain and drift, depending on the refractive index and dimensions of the disc. In [4], it is shown that a codebook of codewords having equal balance and energy shows intrinsic resistance to unknown gain and/or offset. Generating functions are used to count such codewords. ...

The performance of certain transmission and storage channels, such as optical data storage and nonvolatile memory (flash), is seriously hampered by the phenomena of unknown offset (drift) or gain. We will show that minimum Pearson distance (MPD) detection, unlike conventional minimum Euclidean distance detection, is immune to offset and/or gain mismatch. MPD detection is used in conjunction with $T$-constrained codes that consist of $q$-ary codewords, where in each codeword $T$ reference symbols appear at least once. We will analyze the redundancy of the new $q$-ary coding technique and compute the error performance of MPD detection in the presence of additive noise. Implementation issues of MPD detection will be discussed, and results of simulations will be given.
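The size of a $T$-constrained code, and hence its redundancy, follows from a standard inclusion-exclusion count over the $T$ fixed reference symbols. A small sketch (function name is our own):

```python
from math import comb, log

def t_constrained_count(q, n, T):
    """Number of length-n q-ary words in which each of T fixed
    reference symbols appears at least once (inclusion-exclusion):
    sum over j of (-1)^j * C(T, j) * (q - j)^n."""
    return sum((-1) ** j * comb(T, j) * (q - j) ** n for j in range(T + 1))

def t_constrained_redundancy(q, n, T):
    """Redundancy in q-ary symbols: n minus log_q of the code size."""
    return n - log(t_constrained_count(q, n, T), q)
```

For example, with q = 2, n = 3, T = 2 the count is 2^3 − 2·1^3 = 6: all length-3 binary words containing both a zero and a one.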

Maximum-likelihood sequence estimation of binary coded and uncoded information, stored on an optical disc, corrupted with additive Gaussian noise is considered. We assume the presence of inter-symbol interference and channel/receiver mismatch. The performance of the maximum-likelihood detection of runlength-limited sequences is compared against both
uncoded information and information encoded by Hamming-distance-increasing
convolutional codes.

This paper presents a practical writing/reading scheme for nonvolatile
memories, called balanced modulation, for minimizing the asymmetric component
of errors. The main idea is to encode data using a balanced error-correcting
code. When reading information from a block, the scheme adjusts the reading
threshold so that the resulting word is also balanced or approximately
balanced. Balanced modulation has suboptimal performance for any cell-level
distribution and can be easily implemented in current systems of nonvolatile
memories. Furthermore, we study the construction of balanced error-correcting
codes, in particular balanced LDPC codes, which have very efficient encoding
and decoding algorithms and are more efficient than prior constructions of
balanced error-correcting codes.
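The threshold-adjustment idea can be sketched in a few lines: when reading n cells (n even), choosing the threshold so that exactly half the cells read as 1 yields a balanced word, and the read-back is then unaffected by any uniform drift of the cell levels. Function and variable names below are illustrative, not from the paper:

```python
def balanced_threshold_read(levels):
    """Dynamic-threshold reading sketch: given n analog cell levels
    (n even), place the threshold so that exactly n/2 cells read as 1,
    i.e. the output word is balanced regardless of uniform drift."""
    n = len(levels)
    order = sorted(range(n), key=lambda i: levels[i])
    bits = [0] * n
    for i in order[n // 2:]:    # top half of cells read as 1
        bits[i] = 1
    return bits

levels = [0.2, 0.9, 0.4, 0.8]
drifted = [v + 0.35 for v in levels]   # uniform drift of all cells
```

Since the drift shifts every level equally, the rank order of the cells, and therefore the decoded balanced word, is unchanged.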

We explore a novel data representation scheme for multi-level flash memory cells, in which a set of n cells stores information in the permutation induced by the different charge levels of the individual cells. The only allowed charge-placement mechanism is a "push-to-the-top" operation which takes a single cell of the set and makes it the top-charged cell. The resulting scheme eliminates the need for discrete cell levels, as well as overshoot errors, when programming cells.
We present unrestricted Gray codes spanning all possible n-cell states and using only "push-to-the-top" operations, and also construct balanced Gray codes. We also investigate optimal rewriting schemes for translating arbitrary input alphabet into n-cell states which minimize the number of programming operations.
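A push-to-the-top operation on the permutation state is straightforward to model; the representation below (a list of cells ordered from top charge to bottom) is our own, and the paper's Gray codes are built from sequences of such operations:

```python
def push_to_top(perm, cell):
    """'Push-to-the-top': raise one cell's charge above all others so it
    becomes the top-charged cell. perm lists cells from highest to lowest
    charge; the pushed cell moves to the front, the rest keep their order."""
    return [cell] + [c for c in perm if c != cell]

state = [2, 0, 1]               # cell 2 currently holds the highest charge
state = push_to_top(state, 1)   # cell 1 becomes top-charged: [1, 2, 0]
```

Because only relative charge order carries information, no discrete target levels are needed and overshooting a programmed level cannot corrupt the stored permutation.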

In an unordered code, no codeword is contained in any other codeword. Unordered codes are all-unidirectional-error-detecting (AUED) codes. In the binary case, it is well known that among all systematic codes with k information bits, Berger codes are optimal unordered codes with r = ⌈log₂(k+1)⌉ ≈ log₂ k check bits. This paper gives some new theory on variable-length unordered codes and introduces a new class of systematic (instantaneous) unordered codes with variable-length check symbols. The average redundancy of the new codes presented here is r ≈ (1/2)log₂ k + c, where c ∈ (1.0470, 1.1332) ⊂ ℝ and k ∈ ℕ is the number of information bits. When k is large, it is shown that such redundancy is at most 0.6069 bits off the redundancy of an optimal systematic unordered code design with fixed-length information symbols and variable-length check symbols, and at most 2.8075 bits off the redundancy of an optimal variable-length unordered code design. The generalization is also given for the nonbinary case, and it is shown that similar results hold true.
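For the binary systematic case mentioned above, a Berger encoder simply appends the zero-count of the information word, written in binary with ⌈log₂(k+1)⌉ check bits. A sketch (function name is our own):

```python
def berger_encode(info_bits):
    """Berger code: append the count of zeros among the k information
    bits, written in binary using ceil(log2(k+1)) check bits. Since a
    unidirectional error can only raise the zero-count while lowering
    the check value (or vice versa), all such errors are detected."""
    k = len(info_bits)
    r = k.bit_length()            # equals ceil(log2(k + 1))
    zeros = info_bits.count(0)
    check = [(zeros >> i) & 1 for i in reversed(range(r))]
    return info_bits + check

codeword = berger_encode([1, 0, 0, 1])   # two zeros -> check 010
```

The resulting codewords are pairwise unordered: no codeword's support is contained in another's, which is exactly the AUED property.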

We present a Knuth-like method for balancing q-ary codewords, which is characterized by the absence of a prefix that carries the information of the balancing index. Look-up tables for coding and decoding the prefix are avoided. We also show that this method can be extended to include error correction of single channel errors.
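For reference, the classic binary scheme that this q-ary method generalizes works by inverting a prefix of the word: as the prefix length grows one position at a time, the weight changes by exactly one per step, so a balancing index always exists for even-length words. A minimal sketch (names are our own):

```python
def knuth_balance(word):
    """Classic binary Knuth balancing: invert the first i bits, with i
    chosen so the result has equal numbers of zeros and ones. Returns
    the balanced word and the index i (which must also be conveyed to
    the decoder, e.g. in a prefix)."""
    n = len(word)
    for i in range(n + 1):
        cand = [1 - b for b in word[:i]] + word[i:]
        if sum(cand) == n // 2:
            return cand, i
    raise ValueError("no balancing index; n must be even")

balanced, idx = knuth_balance([1, 1, 1, 1])   # invert first 2 bits
```

The contribution described above is precisely about avoiding the separate prefix that carries the index i, which in the classic scheme costs roughly log₂ n extra bits.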

An m-ary balanced code with r check digits and k information digits is a code over the alphabet Z_m = {0, 1, …, m − 1} of length n = k + r and cardinality m^k such that each codeword is balanced; that is, the real sum of its components (or weight) is equal to ⌊(m − 1)n/2⌋. This paper contains new efficient methods to design m-ary balanced codes which improve the constructions found in the literature, for every alphabet size m ⩾ 2. To design such codes, the information words which are close to balanced are encoded using single maps obtained by a new generalization of Knuth's complementation method to the m-ary alphabet, which we introduce in this paper. The remaining information words are compressed via suitable m-ary uniquely decodable variable-length codes and then balanced using the saved space. For any m ⩾ 2, infinite families of m-ary balanced codes are given with r check digits and k ⩽ (1/(1 − 2α))(m^r − 1)/(m − 1) − c₁(m, α)r − c₂(m, α) information digits, where α ∈ [0, 1/2) can be chosen arbitrarily close to 1/2. The codes can be implemented with O(mk log_m k) m-ary digit operations and O(m + k) memory elements to store m-ary digits.

A constant composition code over a k-ary alphabet has the property that the numbers of occurrences of the k symbols within a codeword are the same for each codeword. These codes specialize to constant weight codes in the binary case, and to permutation codes in the case that each symbol occurs exactly once. Constant composition codes arise in powerline communication and balanced scheduling, and are used in the construction of permutation codes. In this paper, direct and recursive methods are developed for the construction of constant composition codes.

A method is presented for designing binary channel codes in such a way that both the power spectral density function and its low-order derivatives vanish at zero frequency. The performance of the new codes is compared with that of channel codes designed with a constraint on the unbalance of the number of transmitted positive and negative pulses. Some remarks are made on the error-correcting capabilities of these codes.

A symbol permutation invariant balanced (SPI-balanced) code over the alphabet Z_m = {0, 1, …, m − 1} is a block code over Z_m such that each alphabet symbol occurs as many times as any other symbol in every codeword. For this reason, every permutation among the symbols of the alphabet changes an SPI-balanced code into an SPI-balanced code. This means that SPI-balanced words are "the most balanced" among all possible m-ary balanced word types, and this property makes them very attractive from the application perspective. In particular, they can be used to achieve m-ary DC-free communication, to detect/correct asymmetric/unidirectional errors on the m-ary asymmetric/unidirectional channel, to achieve delay-insensitive communication, to maintain data integrity in digital optical disks, and so on. This paper gives some efficient methods to convert (encode) m-ary information sequences into m-ary SPI-balanced codes whose redundancy is roughly double the minimum possible redundancy r_min. It is proven that r_min ≃ ((m − 1)/2) log_m n − (1/2)[1 − 1/log_{2π} m]m − 1/log_{2π} m for any code which converts k information digits into an SPI-balanced code of length n = k + r. For example, the first method given in the paper encodes k information digits into an SPI-balanced code of length n = k + r, with r = (m − 1) log_m k + O(m log_m log_m k). A second method is recursive, uses the first as a base code, and encodes k digits into an SPI-balanced code of length n = k + r, with r ≃ (m − 1) log_m n − log_m[(m − 1)!].

A class of codes and decoders is described for transmitting digital information by means of bandlimited signals in the presence of additive white Gaussian noise. The system, called permutation modulation, has many desirable features. Each codeword requires the same energy for transmission. The receiver, which is maximum likelihood, is algebraic in nature, relatively easy to instrument, and does not require local generation of the possible sent messages. The probability of incorrect decoding is the same for each sent message. Certain of the permutation modulation codes are more efficient (in a sense described precisely) than any other known digital modulation scheme. PCM, PPM, orthogonal, and biorthogonal codes are included as special cases of permutation modulation.
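The equal-energy property is immediate from the construction: every codeword is a permutation of a single initial vector, so all codewords share the same multiset of components. A minimal sketch of the basic variant (names are our own):

```python
from itertools import permutations

def permutation_modulation_codebook(mu):
    """Basic permutation modulation: the codebook is the set of all
    distinct permutations of an initial vector mu, so every codeword
    has the same transmission energy (sum of squared components)."""
    return sorted(set(permutations(mu)))

# Initial vector with two -1s and two +1s: C(4, 2) = 6 codewords.
book = permutation_modulation_codebook((-1, -1, 1, 1))
```

Since every codeword has the same energy, maximum-likelihood detection reduces to a sorting problem on the received samples, which is why the receiver is "algebraic in nature" and needs no stored copies of the candidate messages.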

This booklet develops in nearly 200 pages the basics of combinatorial enumeration through an approach that revolves around generating functions. The major objects of interest here are words, trees, graphs, and permutations, which surface recurrently in all areas of discrete mathematics. The text presents the core of the theory with chapters on unlabelled enumeration and ordinary generating functions, labelled enumeration and exponential generating functions, and finally multivariate enumeration and generating functions. It is largely oriented towards applications of combinatorial enumeration to random discrete structures and discrete mathematics models, as they appear in various branches of science, like statistical physics, computational biology, probability theory, and, last but not least, computer science and the analysis of algorithms.

Analytic Combinatorics

- P. Flajolet
- R. Sedgewick

P. Flajolet and R. Sedgewick, 'Analytic Combinatorics', Cambridge University Press, 2009. ISBN 978-0-521-89806-5.