In this article, we study properties and algorithms for constructing sets of 'constant weight' codewords with bipolar symbols, where the sum of the symbols is a constant q, q ≥ 0. We show various code constructions that extend Knuth's balancing vector scheme, q = 0, to the case where q > 0. We compute the redundancy of the new coding methods. Index Terms—Balanced code, channel capacity, constrained code, magnetic recording, optical recording. I. INTRODUCTION Let q be an integer. A set C, which is a subset of $\{\mathbf{w} = (w_1, w_2, \ldots, w_n) \in \{-1, +1\}^n : \sum_{i=1}^{n} w_i = q\}$
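As a rough illustration of the balancing idea these constructions build on, here is a minimal sketch (in Python, with illustrative names) of Knuth-style balancing for bipolar words: flipping the sign of the first k symbols changes the running sum in steps of ±2, so for a word of even length some index k always yields sum zero.

```python
def balance(w):
    """Knuth-style balancing sketch for a bipolar word w (entries +1/-1).

    Flipping the sign of the first k symbols changes the sum in steps
    of +/-2 as k grows, so for even length some k gives sum 0. A real
    encoder would also transmit k in a (balanced) prefix.
    """
    for k in range(len(w) + 1):
        v = [-x for x in w[:k]] + list(w[k:])
        if sum(v) == 0:
            return k, v
    raise ValueError("word of odd length cannot be balanced")

k, v = balance([1, 1, 1, 1, -1, 1])  # sum is 4; flipping the first 2 gives sum 0
```

The same sweep can target a sum q > 0 by stopping when `sum(v) == q`, though, as the citing works note, not every input word can reach a given q > 0 this way.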

... In fact, this problem was posed by MacWilliams and Sloane as Research Problem 17.3 [15]. To the best of our knowledge, there are three encoding approaches: the enumerative method of Schalkwijk [16], the geometric approach of Tian et al. [17], and the Knuth-like method of Skachek and Immink [18]. For the case where q is constant, the first two methods encode in quadratic time $O(n^2)$, while the third method runs in linear time. ...

... That is, there exist binary words x with T(x, q) = ∅ and we refer to such words as bad words. Hence, a different encoding rule must be applied to these bad words, and simple linear-time methods were proposed and studied by Skachek and Immink [18]. While their q-balancing schemes are in fact variable-length schemes, Skachek and Immink did not provide an analysis of the average redundancy of their schemes and instead argued that $\log_2 n + O(1)$ redundant bits are sufficient in the worst case (when q is constant). ...

... (I) In this paper, we amalgamate the variable-length scheme in [10] with the q-balancing schemes in [18] to obtain new variable-length q-balancing schemes. We formally describe Schemes A and B in Sections III and V, respectively. ...

We study and propose schemes that map messages onto constant-weight codewords using variable-length prefixes. We provide polynomial-time computable formulas that estimate the average number of redundant bits incurred by our schemes. In addition to the exact formulas, we also perform an asymptotic analysis and demonstrate that our scheme uses $\frac12 \log n+O(1)$ redundant bits to encode messages into length-$n$ words with weight $(n/2)+{\sf q}$ for constant ${\sf q}$.

... However, the construction is limited to specific constant dimension codes. In [2], a construction of CW codes based on Knuth's balancing approach [3] is presented. The flexibility on the tail bits is used to generate CW codes including balanced codes. ...

... In the previous section, the construction of q-ary CW sequences was achieved with a weight range shown in (2). However, because of the limited interval, we will present an approach to extend this range. ...

... Example 4: Consider the same ternary information sequence x = 212 of length 3 as in Example 3. We would like to generate a (7, 3, 12, 3) CW sequence of weight W = 12 and n = 7 as described in Table II. We observe from (2) that the information sequence 212 can only generate (n, k, W, q) CW sequences with W ∈ [2, 10]. In order to extend this range of weight, we append a ternary redundant vector u of length e to c. ...

We present an encoding and decoding scheme for constant weight sequences, that is, given an information sequence, the construction results in a sequence of specific weight within a certain range. The scheme uses a prefix design that is based on Gray codes. Furthermore, by adding redundant symbols we extend the range of weight values for output sequences, which is useful for some applications.
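The property that makes Gray codes attractive as prefixes is that consecutive codewords differ in a single position, so stepping through the prefix values changes the sequence gradually. A minimal sketch of the binary reflected Gray code (illustrative only; the paper's construction is over a q-ary alphabet):

```python
def gray(i):
    """i-th codeword of the binary reflected Gray code."""
    return i ^ (i >> 1)

codes = [gray(i) for i in range(8)]  # [0, 1, 3, 2, 6, 7, 5, 4]
# consecutive codewords differ in exactly one bit position
assert all(bin(a ^ b).count("1") == 1 for a, b in zip(codes, codes[1:]))
```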

... • Step 1: Calculating probabilities for programming the memory cells to the LRS; • Step 2: Encoding the user message M row-by-row into A, such that every row and column satisfies the corresponding weight constraints. For the first row, we can use any coding method for constructing 1-D constant-weight codes [9]-[11]. In this work, we apply the enumerative coding technique [10] so as to achieve a high coding efficiency. ...

... The index sets I1 = (1, 6, 9, 10) (filled with the yellow color) and I0 = (2, 3, 4, 5, 7, 8, 11) (no fill) are then assigned the codewords C1 and C0, respectively. Two codewords are then extracted from the codebook P2. ...

This paper proposes novel methods for designing two-dimensional (2-D) weight-constrained codes for reducing the parasitic currents in the crossbar resistive memory array. In particular, we present efficient encoding/decoding algorithms for capacity-approaching 2-D weight-constrained codes of size m×n, where each row has a weight pn with p < 1/2, and each column has a weight qm with q ≤ 1/2. We show that the designed codes provide higher code rates compared to the prior-art codes for p ≤ 1/2 and q ≤ 1/2.

... The main reason is the provable difficulty of 2-D constraints compared to 1-D constraints. For example, for certain weight-constrained codes such as balanced codes or constant-weight codes, there are several efficient prior-art coding methods for designing 1-D codes with optimal or almost optimal redundancy [14]-[17]. Here, almost optimal refers to the cases where the encoder's redundancy is at most a constant number of bits away from the optimal redundancy. ...

In this work, given n, p > 0, efficient encoding/decoding algorithms are presented for mapping arbitrary data to and from n×n binary arrays in which the weight of every row and every column is at most pn. Such a constraint, referred to as the p-bounded-weight constraint, is crucial for reducing the parasitic currents in crossbar resistive memory arrays, and has also been proposed for certain applications of holographic data storage. While low-complexity designs have been proposed in the literature only for the case p = 1/2, this work provides efficient coding methods that work for arbitrary values of p. The coding rate of our proposed encoder approaches the channel capacity for all p.

... Encoding and decoding of constant-composition codes is a field of active research; see, for example, [4], [5], [6]. For the binary case, Weber and Immink [7] and Skachek et al. [8] presented methods that translate arbitrary data into a codeword having a prescribed number of ones and zeros. Enumerative methods for generating codewords have been presented in [9], [10], [11]. ...

... Given that balanced sequences are a specific case of constant-weight sequences, Knuth's algorithm was extended to constant-weight sequences in [19]. Other algorithms for constructing constant-weight sequences can be found in [28, 34, 52]. ...

Our research deals with the encoding and decoding of balanced sequences using Gray codes. Given that any non-binary sequence can always be balanced through certain algorithms, we show that the encoding and decoding of a balanced sequence can be performed through a simple and efficient method where the prefix is a Gray code. Our balancing scheme makes use of a generalization of Knuth's balancing algorithm, performed on the overall sequence length, which includes the information sequence as well as the designed prefix. Our proposed method was first applied to certain information source lengths and then generalized to any length. We conclude with a detailed complexity and redundancy analysis for our balancing algorithm.

... In [9], Carlet determined one weight linear codes over Z 4 and in [23], Wood studied linear one weight codes over Z m . Constant weight codes are very useful in a variety of applications such as data storage, fault-tolerant circuit design and computing, pattern generation for circuit testing, identification coding, and optical overlay networks [20]. Moreover, the reader can find other applications of constant weight codes: determining the zero error decision feedback capacity of discrete memoryless channels in [21], multiple access communications and spherical codes for modulation in [14, 15], DNA codes in [18, 19], and powerline communications and frequency hopping in [11]. ...

Inspired by the Z2Z4-additive codes, linear codes over Z2^r x (Z2+uZ2)^s have been introduced by Aydogdu et al. more recently. Although these families of codes are similar to each other, linear codes over Z2^r x (Z2+uZ2)^s have some advantages compared to Z2Z4-additive codes. A code is called constant weight (one weight) if all the codewords have the same weight. It is well known that constant weight or one weight codes have many important applications. In this paper, we study the structure of one weight Z2Z2[u]-linear and cyclic codes. We classify these types of one weight codes and also give some illustrative examples.

... The mapping of bits into activation patterns and vice versa can be seen as a constant weight coding that maps between unconstrained binary $b_1$-tuples and constant weight binary n-tuples of weight k. A lot of work has been done on constant weight coding [31]-[33], with special focus on balanced codes (k = n/2) [34]-[36]. In the original OFDM-IM scheme [24], a mapping that is based on the combinatorial number system of degree k [37, p. 27-30] was employed. ...

We present a novel data programming scheme for flash memory. In each word-line, exactly k out of n memory cells are programmed while the rest are kept in the erased state. Information is then conveyed by the index set of the k programmed cells, of which there are $\binom{n}{k}$ possible choices (also called activation patterns). In the case of multi-level flash, additional information is conveyed by the threshold voltage levels of the k programmed cells (similar to traditional programming). We derive the storage efficiency of the new scheme as a function of the fraction of programmed cells and determine the fraction that maximizes it. Then, we analyse the effect of this scheme on cell-to-cell interference and derive the conditions that ensure its reduction compared to traditional programming. Following this, we analyse the performance of our new scheme using two detection methods: fixed reference detection and dynamic reference detection, and conclude that using dynamic reference detection will result in page error performance improvements that can reach orders of magnitude compared to that attainable by the fixed reference approach. We then discuss how logical pages can be constructed in the index programming similarly to traditional programming. Finally, we discuss the results and trade-offs between storage efficiency and error resilience proposed by the scheme along with some future directions.
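The combinatorial-number-system mapping mentioned in the citing snippet above can be sketched as follows: an integer index in [0, C(n, k)) is decoded to a unique set of k cell positions. This is a generic textbook construction with illustrative names, not the paper's exact encoder.

```python
from math import comb

def unrank(r, n, k):
    """Map an integer r in [0, C(n, k)) to a k-subset of {0, ..., n-1}
    via the combinatorial number system of degree k: repeatedly take
    the largest c with C(c, i) <= r, then subtract C(c, i)."""
    pattern = []
    for i in range(k, 0, -1):
        c = i - 1
        while comb(c + 1, i) <= r:
            c += 1
        pattern.append(c)       # program cell c
        r -= comb(c, i)
    return pattern              # positions in decreasing order

unrank(0, 8, 3)    # -> [2, 1, 0]
unrank(55, 8, 3)   # -> [7, 6, 5], the last of the C(8,3) = 56 patterns
```

Since the mapping is a bijection, ranking the programmed positions back to an integer recovers the stored data.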

The subblock energy-constrained codes (SECCs) and sliding window-constrained codes (SWCCs) have recently attracted attention due to various applications in communication systems such as simultaneous energy and information transfer. In a SECC, each codeword is divided into smaller non-overlapping windows, called subblocks, and every subblock is constrained to carry sufficient energy. In a SWCC, however, the energy constraint is enforced over every window. In this work, we focus on the binary channel, where sufficient energy is achieved theoretically by using relatively high weight codes, and study the bounded SECCs and bounded SWCCs, where the weight in every window is bounded between a minimum and maximum number. Particularly, we focus on the cases of parameters where there is no rate loss, i.e. the channel capacity is one, and propose two methods to construct capacity-approaching codes with low redundancy and linear-time complexity, based on Knuth's balancing technique and the sequence replacement technique. These methods can be further extended to construct SECCs and SWCCs. For certain code parameters, our methods incur only one redundant bit. We also impose the minimum distance constraint for error correction capability of the designed codes, which helps to reduce the error propagation during decoding as well.

We propose coding techniques that simultaneously limit the length of homopolymer runs, ensure the GC-content constraint, and are capable of correcting a single edit error in strands of nucleotides in DNA-based data storage systems. In particular, for given ℓ, ϵ > 0, we propose simple and efficient encoders/decoders that transform binary sequences into DNA base sequences (codewords), namely sequences of the symbols A, T, C and G, that satisfy all of the following properties: • Runlength constraint: the maximum homopolymer run in each codeword is at most ℓ, • GC-content constraint: the GC-content of each codeword is within [0.5 - ϵ, 0.5 + ϵ], • Error-correction: each codeword is capable of correcting a single deletion, or single insertion, or single substitution error. While various combinations of these properties have been considered in the literature, this work provides generalizations of code constructions that satisfy all the properties with arbitrary parameters of ℓ and ϵ. Furthermore, for practical values of ℓ and ϵ, we show that our encoders achieve higher rates than existing results in the literature and approach capacity. Our methods have low encoding/decoding complexity and limited error propagation.
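The first two constraints above are easy to state programmatically. A simple checker (a sketch with illustrative names; the paper's contribution is the encoders, not this test) is:

```python
def satisfies(strand, ell, eps):
    """Check the two DNA constraints: maximum homopolymer run at most
    ell, and GC-content within [0.5 - eps, 0.5 + eps]."""
    run = longest = 1
    for a, b in zip(strand, strand[1:]):
        run = run + 1 if a == b else 1   # extend or reset the current run
        longest = max(longest, run)
    gc = sum(c in "GC" for c in strand) / len(strand)
    return longest <= ell and abs(gc - 0.5) <= eps

satisfies("ACGTACGT", 3, 0.1)   # True: no run exceeds 1, GC-content is 0.5
satisfies("AAAAGCGC", 3, 0.1)   # False: homopolymer run of length 4
```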

The subblock energy-constrained codes (SECCs) and sliding window-constrained codes (SWCCs) have recently attracted attention due to various applications in communication systems such as simultaneous energy and information transfer. In a SECC, each codeword is divided into smaller non-overlapping windows, called subblocks, and every subblock is constrained to carry sufficient energy. In a SWCC, the energy constraint is enforced over every window. In this work, we focus on the binary channel, where sufficient energy is achieved theoretically by using relatively high weight codes, and study SECCs and SWCCs under more general constraints, namely bounded SECCs and bounded SWCCs. We propose two methods to construct such codes with low redundancy and linear-time complexity, based on Knuth's balancing technique and the sequence replacement technique. For certain code parameters, our methods incur only one redundant bit. We also impose the minimum distance constraint for error correction capability of the designed codes, which helps to reduce the error propagation during decoding as well.

In this paper, we first propose coding techniques for DNA-based data storage which account for the maximum homopolymer runlength and the GC-content. In particular, for arbitrary $\ell, \epsilon > 0$, we propose simple and efficient $(\epsilon, \ell)$-constrained encoders that transform binary sequences into DNA base sequences (codewords) that satisfy the following properties: • Runlength constraint: the maximum homopolymer run in each codeword is at most $\ell$, • GC-content constraint: the GC-content of each codeword is within $[0.5 - \epsilon, 0.5 + \epsilon]$. For practical values of $\ell$ and $\epsilon$, our codes achieve higher rates than the existing results in the literature. We further design efficient $(\epsilon, \ell)$-constrained codes with error-correction capability. Specifically, the designed codes satisfy the runlength constraint, the GC-content constraint, and can correct a single edit (i.e. a single deletion, insertion, or substitution) and its variants. To the best of our knowledge, no such codes were constructed prior to this work.

The subblock energy-constrained codes (SECCs) have recently attracted attention due to various applications in communication systems such as simultaneous energy and information transfer. In a SECC, each codeword is divided into smaller subblocks, and every subblock is constrained to carry sufficient energy. In this work, we study SECCs under more general constraints, namely bounded SECCs and sliding-window constrained codes (SWCCs), and propose two methods to construct such codes with low redundancy and linear-time complexity, based on Knuth’s balancing technique and sequence replacement technique. For certain codes parameters, our methods incur only one redundant bit.

We propose coding techniques that limit the length of homopolymer runs, ensure the GC-content constraint, and are capable of correcting a single edit error in strands of nucleotides in DNA-based data storage systems. In particular, for given $\ell, {\epsilon} > 0$, we propose simple and efficient encoders/decoders that transform binary sequences into DNA base sequences (codewords), namely sequences of the symbols A, T, C and G, that satisfy the following properties: (i) Runlength constraint: the maximum homopolymer run in each codeword is at most $\ell$, (ii) GC-content constraint: the GC-content of each codeword is within $[0.5-{\epsilon}, 0.5+{\epsilon}]$, (iii) Error-correction: each codeword is capable of correcting a single deletion, or single insertion, or single substitution error. For practical values of $\ell$ and ${\epsilon}$, we show that our encoders achieve much higher rates than existing results in the literature and approach the capacity. Our methods have low encoding/decoding complexity and limited error propagation.

Off-chip buses account for a significant portion of the total system power consumed in embedded systems. Bus encoding schemes have been proposed to minimize power dissipation, but none has been demonstrated to be optimal with respect to any measure. In this paper, we give the first provably optimal and explicit (polynomial-time constructible) families of memoryless codes for minimizing bit transitions in off-chip buses. Our results imply that having access to a clock does not make a memoryless encoding scheme that minimizes bit transitions more powerful.
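For contrast with the memoryless codes above, the classic bus-invert technique reduces bit transitions by using state, namely the previous bus word. A minimal sketch with illustrative names:

```python
def bus_invert(word, prev, width):
    """Bus-invert coding sketch: if more than half the bus lines would
    toggle relative to the previous state, transmit the bitwise
    complement plus a one-bit invert flag instead."""
    if bin(word ^ prev).count("1") > width // 2:
        return word ^ ((1 << width) - 1), 1   # inverted data, flag set
    return word, 0

bus_invert(0b1111, 0b0000, 4)   # -> (0b0000, 1): four toggles avoided
bus_invert(0b1100, 0b1000, 4)   # -> (0b1100, 0): only one toggle, send as-is
```

Because it consults the previous state, bus-invert is not a memoryless scheme in the sense of the paper above; it is shown here only to illustrate the transition-minimization objective.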

Preface to the Second Edition
About five years after the publication of the first edition, it was felt that an update of this text would be inescapable as so many relevant publications, including patents and survey papers, have been published. The author's principal aim in writing the second edition is to add the newly published coding methods, and discuss them in the context of the prior art. As a result about 150 new references, including many patents and patent applications, most of them younger than five years old, have been added to the former list of references. Fortunately, the US Patent Office now follows the European Patent Office in publishing a patent application after eighteen months of its first application, and this policy clearly adds to the rapid access to this important part of the technical literature. I am grateful to many readers who have helped me to correct (clerical) errors in the first edition and also to those who brought new and exciting material to my attention. I have tried to correct every error that I found or was brought to my attention by attentive readers, and seriously tried to
avoid introducing new errors in the Second Edition.
China is becoming a major player in the construction, design, and basic research of electronic storage systems. A Chinese translation of the first edition was published in early 2004. The author is indebted to prof. Xu, Tsinghua University, Beijing, for taking the initiative for this Chinese version, and also to Mr. Zhijun Lei, Tsinghua University, for undertaking the arduous task of translating this book from English to Chinese. Clearly, this translation makes it possible that a billion more people will now have access to it.
Kees A. Schouhamer Immink
Rotterdam, November 2004

This paper presents a practical writing/reading scheme for nonvolatile memories, called balanced modulation, for minimizing the asymmetric component of errors. The main idea is to encode data using a balanced error-correcting code. When reading information from a block, the scheme adjusts the reading threshold such that the resulting word is also balanced or approximately balanced. Balanced modulation has suboptimal performance for any cell-level distribution and can be easily implemented in current systems of nonvolatile memories. Furthermore, we study the construction of balanced error-correcting codes, in particular balanced LDPC codes, which have very efficient encoding and decoding algorithms and are more efficient than prior constructions of balanced error-correcting codes.
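The threshold-adjustment step can be sketched as follows (illustrative names, assuming a perfectly balanced codeword): because exactly half the cells store a 1, thresholding at the median of the read levels recovers a balanced word regardless of how the cell levels have drifted.

```python
def balanced_read(levels):
    """Balanced-modulation reading sketch: threshold at the median of
    the cell levels, so the upper half of the cells read as 1 and the
    result stays balanced under any common drift of the levels."""
    n = len(levels)
    order = sorted(range(n), key=lambda i: levels[i])
    bits = [0] * n
    for i in order[n // 2:]:    # upper half of the cells
        bits[i] = 1
    return bits

balanced_read([0.9, 0.1, 0.8, 0.2])   # -> [1, 0, 1, 0]
balanced_read([0.5, 0.1, 0.4, 0.2])   # same word after downward drift
```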

We present a novel technique for encoding and decoding constant weight binary vectors that uses a geometric interpretation of the codebook. Our technique is based on embedding the codebook in a Euclidean space of dimension equal to the weight of the code. The encoder and decoder mappings are then interpreted as a bijection between a certain hyper-rectangle and a polytope in this Euclidean space. An inductive dissection algorithm is developed for constructing such a bijection. We prove that the algorithm is correct and then analyze its complexity. The complexity depends on the weight of the vector, rather than on the block length as in other algorithms. This approach is advantageous when the weight is smaller than the square root of the block length.

Predetermined fixed thresholds are commonly used in nonvolatile memories for reading binary sequences, but they usually result in significant asymmetric errors after a long duration, due to voltage or resistance drift. This motivates us to construct error-correcting schemes with dynamic reading thresholds, so that the asymmetric component of errors is minimized. In this paper, we discuss how to select dynamic reading thresholds without knowing cell level distributions, and present several error-correcting schemes. Analysis based on Gaussian noise models reveals that bit error probabilities can be significantly reduced by using dynamic thresholds instead of fixed thresholds, hence leading to a higher information rate.

The prior art construction of sets of balanced codewords by Knuth is attractive for its simplicity and absence of look-up tables, but the redundancy of the balanced codes generated by Knuth's algorithm falls a factor of two short with respect to the minimum required. We present a new construction, which is simple, does not use look-up tables, and is less redundant than Knuth's construction. In the new construction, the user word is modified in the same way as in Knuth's construction, that is by inverting a segment of user symbols. The prefix that indicates which segment has been inverted, however, is encoded in a different, more efficient, way.

We consider the local rank-modulation scheme in which a sliding window going over a sequence of real-valued variables induces a sequence of permutations. Local rank-modulation is a generalization of the rank-modulation scheme, which has recently been suggested as a way of storing information in flash memory.
We study constant-weight Gray codes for the local rank-modulation scheme in order to simulate conventional multi-level flash cells while retaining the benefits of rank modulation. We provide necessary conditions for the existence of cyclic and cyclic optimal Gray codes. We then specifically study codes of weight 2 and upper bound their efficiency, thus proving that there are no such asymptotically optimal cyclic codes. In contrast, we study codes of weight 3 and efficiently construct codes which are asymptotically optimal. We conclude with a construction of codes with asymptotically optimal rate and weight asymptotically half the length, thus having an asymptotically optimal charge difference between adjacent cells.

Consider a communication channel that consists of several sub-channels transmitting simultaneously and asynchronously. The receiver acknowledges reception of the message before the transmitter sends the following message; namely, pipelined utilization of the channel is not possible. The main contribution of the paper is a scheme that enables transmission without an acknowledgement of the message, therefore enabling pipelined communication and providing a higher bandwidth. Moreover, the scheme allows for a certain number of transitions from a second message to arrive before reception of the current message has been completed, a condition that the authors call skew. They have derived necessary and sufficient conditions for codes that can tolerate a certain amount of skew among adjacent messages (therefore allowing for continuous operation), and detect a larger amount of skew when the original skew is exceeded. Potential applications are in on-chip, on-board and board-to-board communications, enabling much higher communication bandwidth.

Efficient encoding algorithms are presented for two types of constraints on two-dimensional binary arrays. The first constraint considered is that of t-conservative arrays, where each row and each column has at least t transitions of the form '0'→'1' or '1'→'0'. The second constraint is that of two-dimensional DC-free arrays, where in each row and each column the number of '0's equals the number of '1's.

In holographic storage, two-dimensional arrays of binary data are optically recorded in a medium via an interference process. To ensure optimum operation of a holographic recording system, it is desirable that the patterns of 1s (light) and 0s (no light) in the recorded array satisfy the following modulation constraint: in each row and column of the array there are at least t transitions of the type 1→0 or 0→1, for a prescribed integer t. A two-dimensional array with this property is said to be a conservative array of strength t. In general, an n-dimensional conservative array of strength t is a binary array having at least t transitions in each column, extending in any of the n dimensions of the array. We present an algorithm for encoding unconstrained binary data into an n-dimensional conservative array of strength t. The algorithm employs differential coding and error-correcting codes. Using n binary codes, one per dimension, with minimum Hamming distance d ≥ 2t-3, we apply a certain transformation to an arbitrary information array which ensures that the number of transitions in each dimension is determined by the minimum distance of the corresponding code.

The problem of ranking can be described as follows. We have a set of combinatorial objects $S$, such as, say, the k-subsets of n things, and we can imagine that they have been arranged in some list, say lexicographically, and we want to have a fast method for obtaining the rank of a given object in the list. This problem is widely known in Combinatorial Analysis, Computer Science and Information Theory. Ranking is closely connected with the hashing problem, especially with perfect hashing, and with the generation of random combinatorial objects. In Information Theory the ranking problem is closely connected with so-called enumerative encoding, which may be described as follows: there is a set of words $S$, and an enumerative code has to encode every $s \in S$ one-to-one by a binary word $code(s)$. The length of $code(s)$ must be the same for all $s \in S$. Clearly, $|code(s)| \geq \log |S|$. (Here and below $\log x = \log_{2} x$.) The suggested method allows exponential growth of the speed of encoding and decoding for all the combinatorial enumeration problems considered, including the enumeration of permutations, compositions and others.
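As a small concrete instance of ranking, here is the lexicographic rank of a permutation via its Lehmer code (a standard textbook construction shown for illustration, not the paper's fast method):

```python
from math import factorial

def perm_rank(p):
    """Lexicographic rank of a permutation p of 0..n-1: for each
    position, count the smaller elements to its right and weight
    that count by the factorial of the remaining length."""
    n = len(p)
    return sum(
        sum(1 for j in range(i + 1, n) if p[j] < p[i]) * factorial(n - 1 - i)
        for i in range(n)
    )

perm_rank([0, 1, 2])   # -> 0, the lexicographically first permutation
perm_rank([2, 1, 0])   # -> 5, the last of the 3! = 6 permutations
```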

In digital transmission systems, the transmission channel often does not pass d-c. This causes the well-known problem of baseline wander. One way to overcome this difficulty is to restrict the d-c content in the signal stream using suitably devised codes. It is shown that, for a d-c constrained code, the limiting efficiency is related to the number of allowable running digital sum states in a very simple way.

Modern electronics testing has a legacy of more than 40 years. The introduction of new technologies, especially nanometer technologies with 90nm or smaller geometry, has allowed the semiconductor industry to keep pace with the increased performance-capacity demands from consumers. As a result, semiconductor test costs have been growing steadily and typically amount to 40% of today's overall product cost. This book is a comprehensive guide to new VLSI Testing and Design-for-Testability techniques that will allow students, researchers, DFT practitioners, and VLSI designers to quickly master System-on-Chip Test architectures for test debug and diagnosis of digital, memory, and analog/mixed-signal designs. Key features:
* Emphasizes VLSI Test principles and Design for Testability architectures, with numerous illustrations/examples.
* Most up-to-date coverage available, including Fault Tolerance, Low-Power Testing, Defect and Error Tolerance, Network-on-Chip (NOC) Testing, Software-Based Self-Testing, FPGA Testing, MEMS Testing, and System-In-Package (SIP) Testing, which are not yet available in any other testing book.
* Covers the entire spectrum of VLSI testing and DFT architectures, from digital and analog to memory circuits, and fault diagnosis and self-repair from digital to memory circuits.
* Discusses future nanotechnology test trends and challenges facing the nanometer design era, and promising nanotechnology test techniques, including Quantum-Dots, Cellular Automata, Carbon-Nanotubes, and Hybrid Semiconductor/Nanowire/Molecular Computing.
* Practical problems at the end of each chapter for students.

We derive the limiting efficiencies of dc-constrained codes. Given bounds on the running digital sum (RDS), the best possible coding efficiency for a K-ary transmission alphabet is $\eta = \log_2 \lambda_{\max} / \log_2 K$, where $\lambda_{\max}$ is the largest eigenvalue of a matrix which represents the transitions of the allowable states of RDS. Numerical results are presented for the three special cases of binary, ternary and quaternary alphabets.
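This eigenvalue computation is easy to sketch (illustrative Python; power iteration stands in for an exact eigensolver). For a binary alphabet with the RDS confined to four states, the adjacency matrix is that of a path on four vertices, whose largest eigenvalue is the golden ratio, giving efficiency $\log_2((1+\sqrt{5})/2) \approx 0.694$.

```python
import math

def rds_capacity(states, symbols=(-1, 1)):
    """Limiting efficiency of a DC-constrained binary code: build the
    0/1 transition matrix of the allowed RDS states, estimate its
    largest eigenvalue by power iteration (shifted by +I to handle
    the bipartite +/-lambda spectrum), and return log2(lambda_max);
    divide by log2(K) for a K-ary alphabet (here K = 2)."""
    idx = {s: i for i, s in enumerate(states)}
    n = len(states)
    A = [[0] * n for _ in range(n)]
    for s in states:
        for d in symbols:
            if s + d in idx:
                A[idx[s]][idx[s + d]] = 1
    v = [1.0] * n
    lam = 1.0
    for _ in range(300):
        # one power-iteration step on A + I
        w = [v[i] + sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w)
        v = [x / lam for x in w]
    return math.log2(lam - 1.0)   # undo the +I shift

eta = rds_capacity([0, 1, 2, 3])   # ~ 0.694 bits per channel symbol
```

With only two allowed RDS states the sequence must alternate, and the same routine returns efficiency 0, matching the intuition that no information can be carried.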

Coding schemes in which each codeword contains equally many zeros and ones are constructed in such a way that they can be efficiently encoded and decoded.

The problem of delay-insensitive data communication is described. The notion of a delay-insensitive code is defined, giving precise conditions under which delay-insensitive data communication is feasible. Examples of these codes are presented and analyzed. It appears that delay-insensitive codes are equivalent to antichains in partially ordered sets and to all unidirectional error-detecting codes.

Consider a communication channel that consists of several subchannels transmitting simultaneously and asynchronously. As an example of this scheme, consider a board with several chips: the subchannels represent wires connecting the chips, where differences in the lengths of the wires might result in asynchronous reception. In current technology, the receiver acknowledges reception of a message before the transmitter sends the next one; that is, pipelined utilization of the channel is not possible. Our main contribution is a scheme that enables transmission without acknowledgment of the message, thereby enabling pipelined communication and providing higher bandwidth. However, our scheme allows a certain number of transitions from a second message to arrive before reception of the current message has been completed, a condition that we call skew. We have derived necessary and sufficient conditions for codes that can tolerate a certain amount of skew among adjacent messages (therefore allowing continuous operation) and detect a larger amount of skew when that tolerance is exceeded. These results generalize previously known results. We have constructed codes that satisfy the necessary and sufficient conditions, studied their optimality, and devised efficient decoding algorithms. To the best of our knowledge, this is the first known scheme that permits efficient asynchronous communication without acknowledgment. Potential applications are in on-chip, on-board, and board-to-board communications, enabling much higher communication bandwidth.

Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1990. Includes bibliographical references (leaves 58-59). by Jeffrey F. Tabor. M.S.

Signalling off-chip requires significant current. As a result, a chip's power-supply current changes drastically during certain output-bus transitions. These current fluctuations cause a voltage drop between the chip and circuit board due to the parasitic inductance of the power-supply package leads. Digital designers often go to great lengths to reduce this "transmitted" noise. Cray, for instance, carefully balances output signals using a technique called differential signalling to guarantee a chip has constant output current. Transmitted-noise reduction costs Cray a factor of two in output pins and wires. Coding achieves similar results at smaller costs.

A new class of run-length-limited codes is introduced. These codes are called two-dimensional or multitrack modulation codes. Two-dimensional modulation codes provide substantial data-storage density increases for multitrack recording systems by operating on multiple tracks in parallel. Procedures for computing the capacity of these new codes are given, along with fast algorithms for implementing these procedures. Examples of two-dimensional codes are given to provide a comparison between the encoding rates obtainable with multitrack and traditional single-track codes.

We derive a simple algorithm for the ranking of binary sequences of length n and weight w. This algorithm is then used for source encoding a memoryless binary source that generates 0's with probability q and 1's with probability p = 1 − q.
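A ranking algorithm of this kind follows the standard combinatorial-number-system rule: each 1 at position j contributes a binomial coefficient counting the lexicographically smaller words that place a 0 there. A small self-contained sketch (my own notation, not the cited paper's):

```python
import math

def rank(bits):
    # Lexicographic index (0 < 1, leftmost bit most significant) of a
    # fixed-weight word among all words of its length and weight.
    n, ones = len(bits), sum(bits)
    r = 0
    for j, b in enumerate(bits):
        if b == 1:
            # Words agreeing on bits[:j] but with a 0 here are all smaller:
            # they place the remaining `ones` 1's in the n-j-1 later slots.
            r += math.comb(n - j - 1, ones)
            ones -= 1
    return r

def unrank(r, n, w):
    # Inverse map: rebuild the weight-w word bit by bit from its index.
    bits, ones = [], w
    for j in range(n):
        c = math.comb(n - j - 1, ones)
        if ones > 0 and r >= c:
            bits.append(1)
            r -= c
            ones -= 1
        else:
            bits.append(0)
    return bits
```

For example, among the C(4,2) = 6 words of length 4 and weight 2, `rank([1, 1, 0, 0])` is 5, the largest index. Both directions run in O(n) arithmetic operations, which is the basis of enumerative (Schalkwijk-style) encoding into constant-weight words.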

Runlength-limited sequences and arrays have found applications in magnetic and optical recording. While constrained sequences are well studied, little is known about constrained arrays. In this correspondence we consider the question of how to cascade two arrays with the same runlength constraints horizontally and vertically, in such a way that the runlength constraints are not violated. We consider binary arrays in which the shortest run of a symbol in a row (column) is d₁ (d₂) and the longest run of a symbol in a row (column) is k₁ (k₂). We present three methods to cascade such arrays. If k₁ > 4d₁ − 2 our method is optimal, and if k₁ ⩾ d₁ + 1 we give a method which has a certain optimal structure. Finally, we show how cascading can be applied to obtain runlength-limited error-correcting array codes.

A balanced code with r check bits and k information bits is a binary code of length k + r and cardinality 2^k such that each codeword is balanced; that is, it has ⌊(k+r)/2⌋ 1's and ⌈(k+r)/2⌉ 0's. This paper contains new methods to construct efficient balanced codes. To design a balanced code, an information word with a low number of 1's or 0's is compressed and then balanced using the saved space. On the other hand, an information word having almost the same number of 1's and 0's is encoded using the single maps defined by Knuth's (1986) complementation method. Three different constructions are presented. Balanced codes with r check bits and k information bits with k ⩽ 2^(r+1) − 2, k ⩽ 3·2^r − 8, and k ⩽ 5·2^r − 10r + c(r), c(r) ∈ {−15, −10, −5, 0, +5}, are given, improving the constructions found in the literature. In some cases, the first two constructions have a parallel coding scheme.

In a balanced code each codeword contains equally many 1's and 0's. Parallel decoding balanced codes with 2^r (or 2^r − 1) information bits are presented, where r is the number of check bits. The 2^r − r − 1 construction given by D.E. Knuth (ibid., vol. 32, no. 1, pp. 51-53, 1986) is improved. The new codes are shown to be optimal when Knuth's complementation method is used.

For n > 0, d ⩾ 0, n ≡ d (mod 2), let K(n, d) denote the minimal cardinality of a family V of ±1 vectors of dimension n, such that for any ±1 vector w of dimension n there is a v ∈ V with |v·w| ⩽ d, where v·w is the usual scalar product of v and w. A generalization of a simple construction due to D.E. Knuth (1986) shows that K(n, d) ⩽ ⌈n/(d+1)⌉. A linear-algebra proof is given here that this construction is optimal, so that K(n, d) = ⌈n/(d+1)⌉ for all n ≡ d (mod 2). This construction and its extensions have applications to communication theory, especially to the construction of signal sets for optical data links.
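The upper-bound construction can be checked by brute force for small parameters: negating the first i(d+1) coordinates of the all-ones vector, for i = 0, …, ⌈n/(d+1)⌉ − 1, yields a covering family of the claimed size. A sketch under these assumptions (my own test harness, not from the cited paper):

```python
import math
from itertools import product

def covering_family(n, d):
    # Family member i: negate the first i*(d+1) coordinates of the
    # all-ones vector, for i = 0, ..., ceil(n/(d+1)) - 1.
    m = math.ceil(n / (d + 1))
    return [[-1] * (i * (d + 1)) + [1] * (n - i * (d + 1)) for i in range(m)]

def covers(n, d):
    # Brute force: every +-1 vector w of dimension n is within scalar
    # product d of some family member (n and d must have the same parity).
    V = covering_family(n, d)
    return all(
        any(abs(sum(vj * wj for vj, wj in zip(v, w))) <= d for v in V)
        for w in product([-1, 1], repeat=n)
    )
```

For instance, n = 6, d = 2 needs only ⌈6/3⌉ = 2 vectors, and `covers(6, 2)` confirms the family works; the cited result shows no smaller family can.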

A constant weight, w, code with k information bits and r check
bits is a binary code of length n=k+r and cardinality 2<sup>k</sup> such
that the number of 1s in each code word is equal to w. When w=[n/2], the
code is called balanced. This paper describes the design of balanced and
constant weight codes with parallel encoding and parallel decoding.
Infinite families of efficient constant weight codes are given with the
parameters k, r, and the “number of balancing functions used in
the code design,” ρ. The larger ρ grows, the smaller r
will be; and the codes can be encoded and decoded with VLSI circuits
whose sizes and depths are proportional to pk and log<sub>2</sub> p,
respectively. For example, a design is given for a constant weight w=33
code with k=64 information bits, r=10 check bits, and p=8 balancing
functions. This code can be implemented by a VLSI circuit using less
than 4,054 transistors with a depth of less than 30 transistors


He received his degrees in computer science, including the M.Sc. and Ph.D., from the Technion—Israel Institute of Technology, in 1994, 1998, and 2007, respectively.
In the summer of 2004, he visited the Mathematics of Communications Department at Bell Laboratories under the DIMACS Special Focus Program
in Computational Information Theory and Coding.
During 2007-2012, he held research positions with
the Claude Shannon Institute, University College
Dublin, with the School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore, with the
Coordinated Science Laboratory, University of Illinois at Urbana-Champaign,
Urbana, and with the Department of Electrical and Computer Engineering,
McGill University, Montréal. He is now a Senior Lecturer with the Institute
of Computer Science, University of Tartu.


Dr. Skachek is a recipient of the Permanent Excellent Faculty Instructor
award, given by Technion.

L.-T. Wang, C. E. Stroud, and N. A. Touba, System-on-Chip Test Architectures: Nanometer Design for Testability. Elsevier Science Limited, 2008.