We consider the transmission and storage of data that use coded binary symbols over a channel, where a Pearson-distance-based detector is used for achieving resilience against additive noise, unknown channel gain, and varying offset. We study Minimum Pearson Distance (MPD) detection in conjunction with a set, S, of codewords satisfying a center-of-mass constraint. We investigate the properties of the codewords in S, compute the size of S, and derive its redundancy for asymptotically large values of the codeword length n. The redundancy of S is approximately 3/2 log2 n + α, where α = log2 √(π/24) ≈ −1.467 for n odd and α ≈ −0.467 for n even. We describe a simple encoding algorithm whose redundancy equals 2 log2 n + o(log n). We also compute the word error rate of the MPD detector when the channel is corrupted with additive Gaussian noise.
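
To make the gain/offset immunity concrete, here is a minimal Python sketch of MPD detection; the tiny codebook and channel values are invented for illustration and are not the mass-centered set S of the abstract:

```python
import numpy as np

def pearson_distance(r, x):
    # Pearson distance = 1 minus the Pearson correlation coefficient.
    r, x = np.asarray(r, float), np.asarray(x, float)
    rc, xc = r - r.mean(), x - x.mean()
    return 1.0 - (rc @ xc) / (np.linalg.norm(rc) * np.linalg.norm(xc))

def mpd_detect(r, codebook):
    # Minimum Pearson distance detection: pick the closest codeword.
    return min(codebook, key=lambda x: pearson_distance(r, x))

# Unknown gain a and uniform offset b do not change the decision,
# since the Pearson correlation is invariant under r -> a*r + b (a > 0).
codebook = [(0, 0, 1, 1), (0, 1, 0, 1), (1, 0, 1, 0), (1, 1, 0, 0)]
received = 2.5 * np.array([0, 1, 0, 1]) + 0.7   # gain 2.5, offset 0.7
assert mpd_detect(received, codebook) == (0, 1, 0, 1)
```

Note that the codebook must avoid constant codewords (zero variance) and pairs related by a positive affine transform; restrictions of this kind are what a Pearson-compatible codeword set has to enforce.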

... Then, the re-scaled signal, brought into its standard range, can be forwarded to the final detection/decoding system, where the distance properties of the code can be optimally utilized by applying, for example, the Chase algorithm [58]. A detection scheme for channels with gain and such varying offset is investigated in [59,60], where, for the binary case, minimum Pearson distance based detection is used in conjunction with mass-centered codewords. ...

... Here, we consider the situation in which the offset varies linearly within a codeword, where the slope of the offset, represented by the parameter c, is unknown. A detection scheme for channels with gain and such varying offset is investigated in [59], where, for the binary case, MPD detection is used in conjunction with mass-centered codewords, in such a way that the system is insensitive to both gain and varying offset, i.e., it is (a, b, c)-immune. However, this scheme is very expensive in terms of redundancy. ...

... Another example of constrained code techniques is advocated in [59] as a less redundant option that also guarantees (a, b, c)-immunity. The Pearson distance offers immunity to gain and non-varying offset mismatch [49], and an MPD decoder chooses among all candidate codewords x ∈ S the codeword x_o whose Pearson distance to the received vector r is smallest. ...

... As discussed in [9], memory cells of nonvolatile data storage products that are closer to warmer spots lose their data charge more rapidly than memory cells closer to colder spots, so that offset loss is not constant within a codeword [4]. Evidently, the (varying) offset cannot be considered to be equal for all symbols in a codeword, and alternative detection methods have been sought. ...

... The quest for advanced detection techniques that are immune to unknown, first-order, offset variation is not new. Skachek and Immink [9] introduced mass centered codewords whose detection is independent of both unknown base offset and offset's slew rate. They concluded that the redundancy of their scheme is prohibitively large for many applications. ...

... They introduced Pearson-distance-based detection in conjunction with a difference operator and a pair-constrained code. Their adopted code has significantly less redundancy than the previously proposed mass-centered codes [9]. However, it requires a 3 dB higher noise margin, which makes it less suitable for noise-dominant channels. ...

We consider noisy communications and storage systems that are hampered by varying offset of unknown magnitude such as low-frequency signals of unknown amplitude added to the sent signal. We study and analyze a new detection method whose error performance is independent of both unknown base offset and offset’s slew rate. The new method requires, for a codeword length n ≥ 12, less than 1.5 dB more noise margin than Euclidean distance detection. The relationship with constrained codes based on mass-centered codewords and the new detection method is discussed.
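
One way to picture immunity to both base offset and slew rate (a sketch of the principle only, not necessarily the detector analyzed here) is to project the received word onto the complement of the span of a constant and a linear ramp, which annihilates any offset of the form b + c·i:

```python
import numpy as np

def detrend(r):
    # Least-squares removal of an unknown linear trend b + c*i:
    # a projection onto the orthogonal complement of span{1, i}.
    r = np.asarray(r, float)
    i = np.arange(len(r))
    A = np.column_stack([np.ones(len(r)), i])
    coef, *_ = np.linalg.lstsq(A, r, rcond=None)
    return r - A @ coef

x = np.array([1., -1., 1., 1., -1., -1.])
r = x + 0.9 + 0.3 * np.arange(6)      # base offset 0.9, slew rate 0.3
assert np.allclose(detrend(r), detrend(x))
```

Because detrending is a linear projection, two codewords differing only by a constant-plus-ramp become indistinguishable after it; ruling out such pairs is the role a codeword constraint such as mass-centering has to play.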

... Here, we consider the situation in which the offset varies linearly within a codeword, where the slope of the offset, represented by the parameter c, is unknown. A detection scheme for channels with scaling and such varying offset is investigated in [4], where, for the binary case, MPD detection is used in conjunction with mass-centered codewords, in such a way that the system is insensitive to both scaling and varying offset, i.e., it is (a, b, c)-immune. However, this scheme is very expensive in terms of redundancy. ...

... In case of varying offset, mass-centered codes in combination with the MPD detector are advocated in [4] for the binary case, where the codebook S* ⊆ [2]^n is chosen such that each codeword x ∈ S* satisfies ...

... The error performance of the MPD detector employing mass-centered codes is insensitive to scaling and varying offset mismatch, i.e., it is (a, b, c)-immune. However, the redundancy is O(log n) [4]. In this paper, we will propose a less redundant scheme that also guarantees (a, b, c)-immunity. ...

We consider noisy data transmission channels with unknown scaling and varying offset mismatch. Minimum Pearson distance detection is used in cooperation with a difference operator, which offers immunity to such mismatch. Pair-constrained codes are proposed for unambiguous decoding, where in each codeword certain adjacent symbol pairs must appear at least once. We investigate the cardinality and redundancy of these codes.
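
The effect of the difference operator can be sketched in a few lines of Python: differencing turns a linear offset b + c·i into a uniform offset c, which Pearson-distance detection already tolerates (the values below are invented for illustration):

```python
import numpy as np

def diff_op(r):
    # First-order difference operator: y_i = r_{i+1} - r_i.
    r = np.asarray(r, float)
    return r[1:] - r[:-1]

x = np.array([0., 1., 1., 0., 1., 0.])
r = 2.0 * x + 0.5 + 0.25 * np.arange(6)   # gain 2, offset 0.5, slope 0.25
# After differencing, only a gain and a *uniform* offset remain:
assert np.allclose(diff_op(r), 2.0 * diff_op(x) + 0.25)
```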

... Most of the literature related to CS codes is restricted to the design and analysis of fixed-length codes [1], [9]-[11]. However, it has been shown that simple, variable-length CS codes can have higher maximum possible code rates and lower implementation complexity than fixed-length codes [2]-[4], [12]-[16]. ...

... → 0, and hence λ_3 = λ_2 and C_3 = C_2 by comparing this equation and the second equation in (9). Considering the asymptotic behavior, we know that the rate of increase of C diminishes and becomes negligible as l_max → ∞. ...

The use of constrained sequence (CS) codes is important for the robust operation of transmission and data storage systems. While most analysis and development of CS codes has focused on fixed-length codes, recent research has demonstrated advantages of variable-length CS codes. In our design of capacity-approaching variable-length CS codes, the construction of minimal sets is critical. In this paper we propose an approach to construct minimal sets for a variety of constraints based on the finite state machine (FSM) description of constrained sequences. We develop three criteria to select the optimal state of the FSM that enables the design of a single-state encoder which results in the highest maximum possible code rate, and we apply these criteria to several constraints to illustrate the advantages that can be achieved. We then introduce FSM partitions and propose a recursive construction algorithm to establish the minimal set of the specified state. Finally, we present the construction of single-state capacity-approaching variable-length CS codes to show the improved efficiency and reduced implementation complexity that can be achieved compared to CS codes currently in use.
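
The FSM description is also what fixes the "maximum possible code rate" mentioned above: by Shannon's classical result, the capacity of a constraint is log2 of the largest eigenvalue of the FSM's adjacency matrix. A sketch for the (1,3) runlength-limited constraint (the FSM below, with states counting zeros since the last one, is a standard textbook construction, not taken from this paper):

```python
import numpy as np

# (1,3)-RLL FSM: state i = number of 0s since the last 1;
# a 1 must be preceded by at least 1 and at most 3 zeros.
A = np.array([[0, 1, 0, 0],   # state 0: must emit 0
              [1, 0, 1, 0],   # state 1: emit 1 -> state 0, or 0 -> state 2
              [1, 0, 0, 1],   # state 2: emit 1 -> state 0, or 0 -> state 3
              [1, 0, 0, 0]])  # state 3: must emit 1
capacity = np.log2(max(abs(np.linalg.eigvals(A))))
print(f"C(1,3) = {capacity:.4f}")   # close to the well-known value 0.5515
```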

... RDS is the ongoing summation of encoded bit weights in the sequence, where a logic one has weight +1 and a logic zero has weight −1. Other constraints include the Pearson constraint that is immune to unknown channel gain and offset [13]- [15], and constraints that mitigate inter-cell interference in flash memories [23], [36]- [39]. It is well known that a constraint can be described with a finite state machine (FSM) that contains states, edges and labels. ...

We study the ability of recently developed variable-length constrained sequence codes to determine codeword boundaries in the received sequence upon initial receipt of the sequence and if errors in the received sequence cause synchronization to be lost. We first investigate construction of these codes based on the finite state machine description of a given constraint, and develop new construction criteria to achieve high synchronization probabilities. Given these criteria, we propose a guided partial extension algorithm to construct variable-length constrained sequence codes with high synchronization probabilities. With this algorithm we construct new codes and determine the number of codewords and coded bits that are needed to recover synchronization once synchronization is lost. We consider a large variety of constraints including the runlength limited (RLL) constraint, the DC-free constraint, the Pearson constraint and constraints for inter-cell interference mitigation in flash memories. Simulation results show that the codes we construct exhibit excellent synchronization properties, often resynchronizing within a few bits.

... The authors assume that the offset is constant (uniform) for all symbols in the codeword. In [10], it is assumed that the offset varies linearly over the codeword symbols, where the slope of the offset is unknown. The error performance of Pearson-distance-based detectors is intrinsically resistant to both offset and gain mismatch. ...

We consider the transmission and storage of encoded strings of symbols over a noisy channel, where dynamic threshold detection is proposed for achieving resilience against unknown scaling and offset of the received signal. We derive simple rules for dynamically estimating the unknown scale (gain) and offset. The estimates of the actual gain and offset so obtained are used to adjust the threshold levels or to re-scale the received signal within its regular range. Then, the re-scaled signal, brought into its standard range, can be forwarded to the final detection/decoding system, where optimum use can be made of the distance properties of the code by applying, for example, the Chase algorithm. A worked example of a spin-torque transfer magnetic random access memory (STT-MRAM) with an application to an extended (72, 64) Hamming code is described, where the retrieved signal is perturbed by additive Gaussian noise and unknown gain or offset.
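
A toy version of such dynamic rescaling can be sketched as follows; the estimation rule (means of the lower and upper halves of the sorted samples) is an invented stand-in for the paper's rules, and it assumes a noiseless, balanced binary codeword:

```python
import numpy as np

def rescale(r):
    # Estimate the two signal levels a*0 + b and a*1 + b from the
    # lower and upper halves of the sorted samples, then invert them.
    s = np.sort(np.asarray(r, float))
    lo = s[: len(s) // 2].mean()      # estimate of b
    hi = s[len(s) // 2 :].mean()      # estimate of a + b
    return (r - lo) / (hi - lo)       # back into the standard range [0, 1]

x = np.array([0., 1., 1., 0., 1., 0., 0., 1.])
r = 3.0 * x + 1.2                     # unknown gain 3.0 and offset 1.2
assert np.allclose(rescale(r), x)
```

Once the signal is back in its standard range, any conventional soft or hard decoder (e.g. the Chase algorithm mentioned in the abstract) can exploit the code's distance properties.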

In 1989 we organized the first Benelux‐Japan workshop on Information and Communication Theory in Eindhoven, the Netherlands. This year, 2019, we celebrate 30 years of friendship between Asian and European scientists at AEW11 in Rotterdam, the Netherlands. Many of the 1989 participants are also present at the 2019 event. This year we have many participants from different parts of Asia and Europe. It shows the importance of this "small" event.
It is a good tradition to pay tribute to a special lecturer in our community. This year we selected Hiroyoshi Morita. Hiro is a well-known information theorist with many original contributions. We also very much appreciate his contributions to the information theory community in general.
We expect all contributors to this workshop to pay special attention to the concepts underlying their work. In this way, the workshop is also of interest to our young newcomers.
The organizers prepared an ideal environment for the exchange of ideas and intensive discussions in the world‐harbor city of Rotterdam, also known for its modern architecture.
A warm welcome from the organizers
K.A. Schouhamer Immink and A.J. Han Vinck

We consider the construction of capacity-approaching variable-length constrained sequence codes based on multi-state encoders that permit state-independent decoding. Based on the finite state machine description of the constraint, we first select the principal states and establish the minimal sets. By performing partial extensions and normalized geometric Huffman coding, efficient codebooks that enable state-independent decoding are obtained. We then extend this multi-state approach to a construction technique based on n-step FSMs. We demonstrate the usefulness of this approach by constructing capacity-approaching variable-length constrained sequence codes with improved efficiency and/or reduced implementation complexity to satisfy a variety of constraints, including the runlength-limited (RLL) constraint, the DC-free constraint, and the DC-free RLL constraint, with an emphasis on their application in visible light communications.

The 10th Asia-Europe Workshop on "Concepts in Information Theory and Communications" (AEW10) was held in Boppard, Germany, on June 21-23, 2017. It builds on a longstanding cooperation between Asian and European scientists. The first workshop was held in Eindhoven, the Netherlands, in 1989. The idea of the workshop is threefold: 1) to improve communication between scientists in different parts of the world; 2) to exchange knowledge and ideas; and 3) to pay tribute to a well-respected and special scientist.

The Pearson distance has been advocated for improving the error performance of noisy channels with unknown gain and offset. The Pearson distance can only fruitfully be used for sets of $q$-ary codewords, called Pearson codes, that satisfy specific properties. We will analyze constructions and properties of optimal Pearson codes. We will compare the redundancy of optimal Pearson codes with the redundancy of prior art $T$-constrained codes, which consist of $q$-ary sequences in which $T$ pre-determined reference symbols appear at least once. In particular, it will be shown that for $q\le 3$ the $2$-constrained codes are optimal Pearson codes, while for $q\ge 4$ these codes are not optimal.

A method is presented for designing binary channel codes in such a way that both the power spectral density function and its second-derivative vanish at zero frequency. Recursion relations are derived to determine the number of codewords that can be used in this coding scheme. A simple algorithm for encoding and decoding codewords is developed. The performance of the new codes is compared with that of classical channel codes designed with a constraint on the unbalance of the number of transmitted positive and negative pulses.
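
For a bipolar word, the two conditions of the abstract reduce to moment constraints: the power spectral density and its second derivative both vanish at zero frequency exactly when the word's zeroth and first moments are zero. A quick check (the example word is ours):

```python
def moments(word):
    # Zeroth moment (sum of symbols) and first moment (index-weighted sum).
    m0 = sum(word)
    m1 = sum(i * x for i, x in enumerate(word))
    return m0, m1

# A length-8 bipolar word with a second-order spectral null at DC:
w = (+1, -1, -1, +1, -1, +1, +1, -1)
assert moments(w) == (0, 0)
```

When m0 = 0, the first moment does not depend on the index origin, so starting the index at 0 is an arbitrary but harmless convention.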

The performance of certain transmission and storage channels, such as optical data storage and nonvolatile memory (flash), is seriously hampered by the phenomena of unknown offset (drift) or gain. We will show that minimum Pearson distance (MPD) detection, unlike conventional minimum Euclidean distance detection, is immune to offset and/or gain mismatch. MPD detection is used in conjunction with T-constrained codes that consist of q-ary codewords, where in each codeword T reference symbols appear at least once. We will analyze the redundancy of the new q-ary coding technique and compute the error performance of MPD detection in the presence of additive noise. Implementation issues of MPD detection will be discussed, and results of simulations will be given.
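
The T-constraint itself is easy to state in code: every one of T pre-assigned reference symbols must occur in the codeword. A small enumeration (alphabet size, length, and reference symbols chosen arbitrarily here) matches the inclusion-exclusion count q^n − 2(q−1)^n + (q−2)^n for T = 2:

```python
from itertools import product

def is_t_constrained(word, refs):
    # Each of the T reference symbols must appear at least once.
    return all(s in word for s in refs)

# Length-4 quaternary words containing both reference symbols 0 and 1:
codebook = [w for w in product(range(4), repeat=4)
            if is_t_constrained(w, (0, 1))]
assert len(codebook) == 4**4 - 2 * 3**4 + 2**4   # 110 by inclusion-exclusion
```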

In 1986, Don Knuth published a very simple algorithm for constructing sets of bipolar codewords with equal numbers of 1s and 0s, called balanced codes. Since it requires no look-up tables, Knuth's algorithm is well suited for use with large codewords. The redundancy of Knuth's balanced codes is a factor of two larger than that of a code comprising the full set of balanced codewords. In this paper we present results of our attempts to improve the performance of Knuth's balanced codes.
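
A bare-bones sketch of Knuth's balancing idea in Python (the linear search below is the simplest possible form; in Knuth's actual scheme the index k is conveyed in a short balanced prefix, which is where the factor-of-two redundancy gap comes from):

```python
def knuth_balance(word):
    # Invert the first k bits; Knuth showed that some k always yields
    # a balanced word when the length is even.
    n = len(word)
    for k in range(n + 1):
        candidate = [1 - b for b in word[:k]] + list(word[k:])
        if sum(candidate) == n // 2:
            return k, candidate
    raise ValueError("word length must be even")

k, balanced = knuth_balance([1, 1, 1, 1, 0, 1])
assert sum(balanced) == 3            # equally many 1s and 0s
```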

Codes were designed for optical disk recording systems and future options were explored. The designed code was a combination of dc-free and runlength-limited (DCRLL) codes. The design increased the minimum feature size for replication and provided sufficient rejection of low-frequency components, enabling simple, noise-free tracking. Error-burst-correcting Reed-Solomon codes were suggested for resolving read errors. The features of DCRLL and runlength-limited (RLL) sequences were presented, and practical codes were devised to satisfy the given channel constraints. The RLL codes suppressed the low-frequency components of the generated sequences. The construction and performance of alternative Eight-to-Fourteen Modulation (EFM)-like codes were studied.

Two constructions of a low-complexity near-optimal detection method for dc²-balanced codes are presented. The methods presented are improvements on Slepian's algorithm for optimal detection of permutation codes.

The reliability of mass storage systems, such as optical data recording and non-volatile memory (Flash), is seriously hampered by uncertainty of the actual value of the offset (drift) or gain (amplitude) of the retrieved signal. The recently introduced minimum Pearson distance detection is immune to unknown offset or gain, but this virtue comes at the cost of a lessened noise margin at nominal channel conditions. We will present a novel hybrid detection method, where we combine the outputs of the minimum Euclidean distance and Pearson distance detectors so that we may trade detection robustness versus noise margin. We will compute the error performance of hybrid detection in the presence of unknown channel mismatch and additive noise.

Phase change memory (PCM) is a new solid-state memory technology that promises disruptive changes in the way servers and enterprise storage systems are built. Multilevel-cell (MLC) storage is highly desirable for increasing capacity and thus lowering cost-per-bit in memory technologies. In PCM, MLC storage is hampered by noise and resistance drift. In this paper, the issue of reliability in MLC PCM is addressed. A statistical model is developed that captures the main impairments in MLC PCM cell-arrays. A signal processing and coding framework is then introduced that provides robustness to drift and noise, improving reliability and prolonging data retention. Several examples of codes are provided and practical detection schemes are described.

Coding schemes in which each codeword contains equally many zeros and ones are constructed in such a way that they can be efficiently encoded and decoded.

Second-order spectral-null (2-OSN) codes are, in general, constructed by concatenating codewords of a 2-OSN code. Most 2-OSN codes adopt the Tallini-Bose random walk method for encoding and decoding, which exchanges two adjacent bits of a binary vector at each step. In this brief contribution, we give a new implementation of the Tallini-Bose random walk method for 2-OSN codes which is shown by experiment to be faster. Index Terms: second-order spectral-null code, balanced code, dc-free code, 1-EC/AUED code.

Let T(n,k) denote the set of all words of length n over the alphabet {+1, −1} having a kth-order spectral null at zero frequency. A subset of T(n,k) is a spectral-null code of length n and order k. Upper and lower bounds on the cardinality of T(n,k) are derived. In particular, we prove that (k − 1) log2(n/k) ≤ n − log2 |T(n,k)| ≤ O(2^k log2 n) for infinitely many values of n. On the other hand, we show that T(n,k) is empty unless n is divisible by 2^m, where m = ⌊log2 k⌋ + 1. Furthermore, bounds on the minimum Hamming distance d of T(n,k) are provided, showing that 2k ≤ d ≤ k(k − 1) + 2 for infinitely many n. We also investigate the minimum number of sign changes in a word x ∈ T(n,k) and provide an equivalent definition of T(n,k) in terms of the positions of these sign changes. An efficient algorithm for encoding arbitrary information sequences into a second-order spectral-null code of redundancy 3 log2 n + O(log log n) is presented. Furthermore, we prove that the first nonzero moment of any word in T(n,k) is divisible by k! and then show how to construct a word with a spectral null of order k whose first nonzero moment is any even multiple of k!. This leads to an encoding scheme for spectral-null codes of length n and any fixed order k, with rate approaching unity as n → ∞.

We introduce a new method for enumerating codewords that can be applied to DC²-constrained channels. Based on this method, two efficient algorithms for evaluating the number of codewords with specified characteristics are developed. Computer calculation results show that these algorithms are significantly more computationally efficient than other techniques developed to date.

Let S(N,q) be the set of all words of length N over the bipolar alphabet {−1, +1} having a qth-order spectral null at zero frequency. Any subset of S(N,q) is a spectral-null code of length N and order q. This correspondence gives an equivalent formulation of S(N,q) in terms of codes over the binary alphabet {0, 1}, shows that S(N,2) is equivalent to a well-known class of single-error-correcting and all-unidirectional-error-detecting (SEC-AUED) codes, derives an explicit expression for the redundancy of S(N,2), and presents new efficient recursive design methods for second-order spectral-null codes which are less redundant than the codes found in the literature.

Some efficient second-order spectral-null codes recursively encode an index using a random walk function and terminate with short base second-order spectral-null codes. All these codes use the Tallini-Bose random walk function, which exchanges two consecutive bits. In this paper, we propose a new random walk function based on cyclic bit-shift, with which the redundancy can be improved. Moreover, the bit-shift can be implemented efficiently in either software or hardware.

This booklet develops in nearly 200 pages the basics of combinatorial enumeration through an approach that revolves around generating functions. The major objects of interest here are words, trees, graphs, and permutations, which surface recurrently in all areas of discrete mathematics. The text presents the core of the theory with chapters on unlabelled enumeration and ordinary generating functions, labelled enumeration and exponential generating functions, and finally multivariate enumeration and generating functions. It is largely oriented towards applications of combinatorial enumeration to random discrete structures and discrete mathematics models, as they appear in various branches of science, like statistical physics, computational biology, probability theory, and, last but not least, computer science and the analysis of algorithms.