Transactions Papers
Performance Assessment of DC-Free
Multimode Codes
Kees A. Schouhamer Immink, Fellow, IEEE, and Levente Pátrovics
Abstract—We report on a class of high-rate dc-free codes, called
multimode codes, where each source word can be represented by
a codeword taken from a selection set of codeword alternatives.
Conventional multimode codes will be analyzed using a simple
mathematical model. The criterion used to select the “best”
codeword from the selection set available has a significant bearing
on the performance. Various selection criteria are introduced
and their effect on the performance of multimode codes will be
examined.

BINARY sequences with spectral nulls at zero frequency
have found widespread application in optical and mag-
netic recording systems. The dc-balanced or dc-free codes, as
they are often called, have a long history and their application
is certainly not confined to recording practice. Since the early
days of digital communication over cable, dc-balanced codes
have been employed to counter the effects of low-frequency
cut-off due to coupling components, isolating transformers,
etc. In optical recording, dc-balanced codes are employed to
circumvent or reduce interaction between the data written on
the disc and the servo systems that follow the track [1]. In
the literature, code implementations have been concentrated
on byte-oriented dc-free codes of rate 8/10 or 8/9 (see, for
example, [2]–[4]). For certain applications it is desirable that
the code rate is much higher than 8/9. The construction of
such high-rate codes is far from obvious, as table look-up
for encoding and decoding is an engineering impracticality.
Two methods for high-rate code design have been described
in the literature [5], [6]. Both methods utilize the idea that the
correspondence between source words and the codewords is
as simple as possible. A serious drawback of both methods
is that the performance, in terms of suppression of low-
frequency components, is far from what could be obtained
according to the tenets of information theory [1], [7], but until
now attempts to improve the performance have failed. Recently,
however, the publications by Fair et al. on “guided scrambling”
Paper approved by E. Ayanoglu, the editor for Communication Theory
and Coding Application of the IEEE Communications Society. Manuscript
received May 7, 1996; revised August 2, 1996. This paper was presented
in part at the IEEE 3rd Symposium on Communications and Vehicular
Technology, Eindhoven, The Netherlands, October 25–26, 1995.
The authors are with the Philips Research Laboratories, 5656 AA Eind-
hoven, The Netherlands.
Publisher Item Identifier: S 0090-6778(97)01970-3.
stimulated us to investigate the performance of their method
and its variants. In our context, guided scrambling is a
member of a larger class of related coding schemes called
multimode codes. In multimode codes, each source word can
be represented by a member of a selection set consisting of
codewords. The encoder opts for transmitting that codeword
that minimizes, according to a criterion to be defined, the low-
frequency spectral contents of the encoded sequence. There are
two key elements which need to be chosen judiciously: 1) the
mapping between the source words and their corresponding
selection sets and 2) the criterion used to select the “best”
word. The spectral performance of the code greatly depends
on both issues. We start with some preliminaries, followed by
a section providing the state of the art. Thereafter, we will
outline the new multimode schemes and analyze their spectral
performance.

The running digital sum (RDS) of a sequence plays a
significant role in the analysis and synthesis of codes whose
spectrum vanishes at the low-frequency end. Let
be a bipolar se-
quence. Note that in the sequel, we will denote the value of
by its logical equivalents “0” or “1.” The (running) digital
sum is defined as
It is an elementary exercise to show that if is bounded, the
spectral density vanishes at zero frequency [1]. The number
of RDS values that the sequence assumes is often called the
digital sum variation and denoted by . The value of
should be as small as possible as it has a direct bearing on
the amount of power at the low-frequency end. Given the
parameter it is possible to compute the maximum value
of the rate, , of any code, irrespective of its complexity,
that translates arbitrary source input into sequences obeying
the given constraint. Results of computation, taken from [1],
are listed in Table I. It can be seen that the sum constraint is not
very expensive in terms of rate loss when is relatively large.
For instance, a sequence that takes at maximum sum
values has a capacity , which implies a rate loss
of less than 6%. The quantity called sum variance plays
an important role in the evaluation of the spectral properties
of a code. Before explaining the relevance of the parameter
, a few words are in order regarding the low-frequency
properties of dc-free codes.
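To make these definitions concrete, the following sketch (ours, not the paper's) computes the RDS, the digital sum variation, and the sum variance for a bipolar sequence represented as a Python list of ±1 values:

```python
# Illustrative helpers for the quantities defined above: the running
# digital sum (RDS), the digital sum variation (DSV), and the sum
# variance (the time-average of the squared RDS values).

def running_digital_sum(bits):
    """Return the list of RDS values z_i = x_1 + ... + x_i."""
    z, out = 0, []
    for x in bits:
        z += x
        out.append(z)
    return out

def digital_sum_variation(bits):
    """Number of distinct RDS values the sequence assumes."""
    z = running_digital_sum(bits)
    return max(z) - min(z) + 1

def sum_variance(bits):
    """Average of the squared RDS values (zero mean assumed)."""
    z = running_digital_sum(bits)
    return sum(v * v for v in z) / len(z)

# A balanced word such as +1 -1 +1 -1 keeps the RDS within {0, 1}.
word = [+1, -1, +1, -1]
print(running_digital_sum(word))    # [1, 0, 1, 0]
print(digital_sum_variation(word))  # 2
```

A bounded RDS, as computed here, is exactly the condition under which the spectral density vanishes at zero frequency.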
If is finite, the spectral density is zero at zero frequency,
but it is more relevant to note that there is a region of
frequencies, close to the zero frequency, where the spectral
density is low. The width of this region, termed the notch
width, is of great engineering relevance. The width of the
spectral notch can be quantified by a parameter called the
cutoff frequency. According to the work by Justesen [8],
there is a simple approximate relationship between and the
sum variance, , of the encoded sequence, namely
Sequences of maximum entropy assuming at most RDS
values obey the following fundamental relationship between
the sum variance and the redundancy [1]
The above relationship can be employed to derive a simple
yardstick for measuring the performance of implemented
codes. The encoder efficiency is defined as
The encoder efficiency , as defined in (3), compares the
“redundancy-sum variance products” of the implemented code
and the maxentropic sequence with the same digital sum
variation as the implemented code. The efficiency will be
used in the sequel to measure the performance of dc-free codes.
Essentially, there are three basic methods for generating dc-
free sequences which are relevant to the ensuing discussion.
These methods are briefly reviewed below.
A. Monomode Codes
In monomode codes, there is a one-to-one relationship
between source words and codewords. By necessity, the code-
words have equal numbers of +1’s and −1’s. There are two
methods available for translating source words into codewords.
The first method uses an algebraic technique, called enumera-
tion [1], and in the second method, devised by Knuth [6], -bit
source words are translated into -bit codewords. The
translation is achieved by selecting a bit position within the
-bit word which defines two segments, each having one half
of the total disparity of the -bit word, where the disparity of
a codeword is defined as the difference between the numbers
of +1’s and −1’s in that codeword. A zero-disparity codeword,
i.e., a codeword with an equal number of +1’s and −1’s, is now
generated by the inversion of all the bits within one segment.
The position information which defines the two segments is
encoded in the bits.
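Knuth's balancing step can be sketched as follows. This is a simplified illustration of ours: the prefix encoding of the position k is omitted, and k is simply returned alongside the balanced word.

```python
# Sketch of Knuth's balancing idea: invert a prefix of the word so that
# the result has zero disparity. Each extra inverted bit changes the
# disparity by +-2, and inverting all bits negates it, so a suitable
# prefix length k always exists for words of even length.

def disparity(word):
    """#1's minus #0's under the bipolar mapping 0 -> -1, 1 -> +1."""
    return sum(1 if b else -1 for b in word)

def knuth_balance(word):
    """Return (k, balanced_word) where the first k bits were inverted."""
    for k in range(len(word) + 1):
        candidate = [1 - b for b in word[:k]] + word[k:]
        if disparity(candidate) == 0:
            return k, candidate
    raise ValueError("word length must be even")

k, balanced = knuth_balance([1, 1, 1, 1, 0, 1])
print(k, balanced, disparity(balanced))
```

In the actual code, k is conveyed to the decoder in the redundant bits, which is what makes the scheme decodable.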
B. Bimode Codes
Bimode codes ensure balanced transmission by providing
for each source word two alternative channel representations.
From the alternatives available, that codeword is transmitted
that minimizes the absolute value of the RDS after transmis-
sion of the new word. This selection criterion will be termed
the MRDS selection criterion. An archetypical example of a bi-
mode code is the polarity switch code [5]. The encoder and
decoder circuits of the polarity switch code are very simple
as no look-up tables are required. Under polarity switch rules,
source symbols are supplemented by one symbol called
the polarity bit. The encoder has the option to transmit the -
bit words without modification or to invert all symbols. The
choice of a specific translation is made in such a way that
the running digital sum after transmission of the new word is
as close to zero as possible. The polarity bit is used at the
decoder site to identify whether the transmitted codeword has
been inverted or not, and can easily be reconstituted. Properties
of the polarity bit code have been described in [1]. The
performance of the polarity switch code can be summarized
as follows. The rate of the polarity bit code is
The sum variance of the code [1] is
so that the efficiency is
From the above, we conclude that polarity switch codes are a
far cry from the optimal situation.
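The polarity switch rules can be sketched as follows (our illustration, with the polarity bit prefixed to each bipolar word and included in the RDS bookkeeping):

```python
# Sketch of the polarity switch (polarity bit) code: each word is sent
# either unchanged or fully inverted, whichever keeps the running
# digital sum closer to zero; the first symbol is the polarity bit.

def encode_polarity(words):
    """words: list of bipolar tuples/lists of +-1 values."""
    rds, out = 0, []
    for w in words:
        a = [+1] + list(w)       # polarity bit +1: word unchanged
        b = [-x for x in a]      # polarity bit -1: everything inverted
        best = a if abs(rds + sum(a)) <= abs(rds + sum(b)) else b
        out.append(best)
        rds += sum(best)
    return out

def decode_polarity(codewords):
    """Undo the inversion indicated by the polarity bit."""
    return [c[1:] if c[0] == +1 else [-x for x in c[1:]]
            for c in codewords]

msgs = [[+1, +1, +1], [+1, +1, -1]]
coded = encode_polarity(msgs)
print(decode_polarity(coded) == msgs)  # True
```

No look-up tables appear anywhere, which is the attraction of the scheme.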
It is not difficult to generalize the above principle of
bimode codes to multimode codes, which, as the name already
suggests, cater for more than two channel representations.
C. Multimode Codes
In multimode codes, each source word can be represented
by a member of a selection set, denoted by , consisting of
codewords. The MRDS selection criterion can be used to select
the “best” codeword. More sophisticated selection criteria will
be described in Section V. It should be appreciated that the
usage of multimode codes is not confined to the generation
of dc-free sequences. Provided that is large enough and
the selection sets contain sufficiently different codewords,
multimode codes can also be used to satisfy almost any
channel constraint with a suitably chosen selection method.
A basic element of multimode codes is the one-to- in-
vertible mapping between the source and its selection set
. Examples of such mappings are the guided scrambling
algorithm presented by Fair et al. [9], the dc-free coset
codes of Deng and Herro [10], and the scrambling using a
Reed–Solomon code by Kunisa et al. [11]. In our context, a
mapping is considered to be “good” if the sets contain suf-
ficiently distinct codewords. The guided scrambling algorithm
is briefly described below.
1) Guided Scrambling: The guided scrambling algorithm
uses selection sets of size , where is the number
of redundant bits. Guided scrambling is summarized below.
1) In the first step, called augmenting, the source word is
preceded by all the possible binary sequences of length
to produce the set . Hence
2) The selection set is obtained by
scrambling all vectors in . Let the scrambler poly-
nomial be denoted by
where denotes the register length of the scrambler. The
scrambler translates each vector
into using the recursion
3) The “best” codeword in is selected for transmission.
4) At the receiver’s site, the inverse operation is performed.
The source word is found by deleting the first bits.
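The steps above can be sketched as follows. The scrambler polynomial is our assumption: we use the simplest self-synchronizing scrambler, d(x) = x + 1 (i.e., y_i = x_i XOR y_{i-1}), which need not be the polynomial used in the paper's experiments, and the MRDS criterion for the selection step.

```python
# Hedged sketch of guided scrambling with an assumed scrambler
# polynomial d(x) = x + 1. Source words are bit lists over {0, 1}.
from itertools import product

def scramble(bits):
    """y_i = x_i XOR y_{i-1}, register initialized to zero."""
    y, out = 0, []
    for x in bits:
        y ^= x
        out.append(y)
    return out

def descramble(bits):
    """Inverse operation: x_i = y_i XOR y_{i-1}."""
    prev, out = 0, []
    for y in bits:
        out.append(y ^ prev)
        prev = y
    return out

def disparity(bits):
    """Bipolar disparity under the mapping 0 -> -1, 1 -> +1."""
    return sum(2 * b - 1 for b in bits)

def gs_encode(source, r, rds):
    """Augment with every r-bit prefix, scramble, pick the MRDS-best."""
    best = min((scramble(list(p) + source)
                for p in product((0, 1), repeat=r)),
               key=lambda c: abs(rds + disparity(c)))
    return best, rds + disparity(best)

def gs_decode(codeword, r):
    """Descramble and delete the first r (augmenting) bits."""
    return descramble(codeword)[r:]

src = [1, 1, 1, 0, 1, 1]
cw, _ = gs_encode(src, 2, rds=0)
print(gs_decode(cw, 2) == src)  # True
```

Because descrambling is independent of which augmenting prefix was chosen, the decoder needs no knowledge of the selection process.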
In the guided scrambling algorithm described above, trans-
lation of source words into random-like channel represen-
tations is done in a fairly simple way. This basic algorithm
is, however, prone to worst case situations since there is a
probability that consecutive source words have representation
sets whose members all have the same polarity of the disparity.
In this vexatious situation, the RDS cannot be controlled,
and long-term low-frequency components can build up. This
flaw can be solved by a construction where each selection set
consists of pairs of words of opposite disparity. As a result,
there is always a codeword in the selection set that can control
the RDS. A simple method embodying this idea combines the
features of guided scrambling and the polarity bit code. The
improved algorithm using redundant bits is executed in
six steps. In Steps 1), 2), and 5) the original guided scrambling
principle is executed while Steps 3) and 4) embody the polarity
bit code.
1) The source word is preceded by all the possible binary
sequences of length to produce the
elements of the set . Hence:
2) The selection set is obtained by
scrambling all vectors in .
3) By preceding the vectors in with both a “one” and a
“zero,” we get the set , with elements.
4) The selection set is obtained by scrambling (pre-
coding) the vectors in using the scrambler with
polynomial . This embodies the polarity bit principle.
5) The “best” codeword in is selected.
6) At the receiver end, the codeword is first descrambled
using the polynomial, then after removing the first
bit, it is descrambled. The original source word is
eventually reconstituted by removing the first bits.
All simulations and analyses discussed below assume the
above structure where the selection set consists of pairs of
words of opposite disparity.
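The six-step construction can be sketched by layering a polarity-bit scrambler over the guided-scrambling candidates. As before, the scrambler polynomial x + 1 for both layers is our illustrative assumption, not necessarily the paper's choice.

```python
# Sketch of the combined scheme: every guided-scrambling candidate gets
# a complemented twin via a scrambled polarity prefix bit, so each
# selection set contains pairs of words of opposite disparity.
from itertools import product

def scramble(bits):          # y_i = x_i XOR y_{i-1}
    y, out = 0, []
    for x in bits:
        y ^= x
        out.append(y)
    return out

def descramble(bits):        # x_i = y_i XOR y_{i-1}
    prev, out = 0, []
    for y in bits:
        out.append(y ^ prev)
        prev = y
    return out

def selection_set(source, r):
    """Steps 1)-4): augment, scramble, add polarity bit, scramble again."""
    cands = []
    for p in product((0, 1), repeat=r):
        inner = scramble(list(p) + source)
        for pol in (0, 1):   # polarity bit 1 complements the whole word
            cands.append(scramble([pol] + inner))
    return cands

def decode(codeword, r):
    """Step 6): undo both scrambler layers, strip both prefixes."""
    inner = descramble(codeword)[1:]   # remove the polarity bit
    return descramble(inner)[r:]       # remove the augmenting bits

src = [1, 0, 1, 1, 0, 0]
cands = selection_set(src, 2)
print(all(decode(c, 2) == src for c in cands))  # True
```

With the x + 1 scrambler, a leading polarity bit of one complements every subsequent output bit, so the twins in each pair indeed have opposite disparity and the RDS can always be steered back toward zero.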
A precise mathematical analysis of the performance of
multimode codes is, considering the complexity of the code,
out of the question. We can either rely on computer simulation
to facilitate an understanding of the operation of the coding
system or try to define a simple mathematical model, which
embraces the essential characteristics of the code and is also
analytically tractable. We followed both approaches, and we
commence by describing the mathematical model.
A. The Random Drawing Model
The key characteristic of a multimode code is that each
source word can be represented by a codeword taken from a set
containing “random” alternatives. As the precise structure of
the encoder is extremely difficult to analyze, we assume, in our
mathematical model, that for each source block the channel
set is obtained by randomly drawing -bit words plus
their complementary -bit words. The precise structure
of the scrambler is ignored in our model. The “best” word in
the set, according to the MRDS criterion, is transmitted. The
MRDS criterion ensures that the state space of the encoder,
that is, the number of possible word-end running digital sum
(WRDS) values the encoded sequence may take, is finite.
However, if the codewords are relatively long, the number
of states and the resulting transition matrix are still too large
for a simple mathematical analysis. We therefore truncated
the state space by omitting those states that do not contribute
significantly to the sum variance.
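The random drawing model also lends itself directly to Monte-Carlo simulation. The sketch below (ours) draws 2^(p-1) random bipolar words plus their complements per block, transmits the MRDS-best word, and estimates the sum variance; the parameter values are illustrative.

```python
# Monte-Carlo sketch of the random drawing model with the MRDS
# criterion: selection sets consist of random words and their
# complements, so opposite-disparity pairs are always available.
import random

def simulate(n=16, p=3, blocks=5000, seed=1):
    """Estimate the sum variance for codeword length n, p redundant bits."""
    rng = random.Random(seed)
    rds, sq_sum, count = 0, 0, 0
    for _ in range(blocks):
        cands = []
        for _ in range(2 ** (p - 1)):
            w = [rng.choice((-1, +1)) for _ in range(n)]
            cands += [w, [-x for x in w]]       # opposite-disparity pair
        best = min(cands, key=lambda c: abs(rds + sum(c)))
        for x in best:                          # accumulate RDS statistics
            rds += x
            sq_sum += rds * rds
            count += 1
    return sq_sum / count

print(round(simulate(), 2))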
B. Transition Probabilities of the Finite-State Machine
The implemented encoder schemes can be simply treated in
terms of Markov models. The set of values that WRDS take
prior to transmission of a codeword defines a set of states of a
finite-state machine. We will use the shorthand notation
to denote both the WRDS at the start of the th codeword and
to refer to the encoder state itself. We commence our analysis
with a computation of the state transition probabilities.
Assume the th codeword starts with RDS . Then
the multimode code can be cast into a Markov chain model
whose state transition probabilities matrix, , is given by
We make the following remarks concerning the state transition probabilities.
1) For the sake of simplicity, only codes using codewords
of even length are considered.
2) It is assumed that at the start of the transmission WRDS
is set to . As a result, since the codeword length is
even, .
3) For reasons of symmetry, only the probabilities for
need to be calculated.
4) To reduce the computational load, we truncated the state
space. Only those states are considered that can be
reached from the , or the , state with
probability greater than , where is chosen suitably
small, say 10⁻⁶. Other values of have been tried
without, however, causing significant differences in the
results obtained. The remaining states will be termed
principal states.
We now introduce several notations. If WRDS is positive,
then, according to the simple MRDS criterion, the next code-
word will be of zero or negative disparity. Therefore, assuming
that the encoder occupies state , the set of possible next
states is . Let denote the
probability of a codeword pair having disparity and .
The probability of the next-state candidate in a draw being a
given state is given by (6).
The next-state candidate in the th draw is denoted by
. According to the MRDS criterion, if the next state
is , then for all . The probability that during
a draw the next-state candidate is “worse” than , denoted
by , is given by
Now, the expression for the transition matrix is given by
The transition probabilities for each pair of WRDS states
can be numerically determined by invoking (7). In order to
make the analysis more tractable, those states are removed
that can be reached from the , or the , state

Fig. 1. Efficiency of the random drawing algorithm using the MRDS selection criterion.

with probability less than . The remaining set of states, the
principal states, denoted by
, and the truncated transition probability matrix with
elements can easily be found. Thereafter, vector
of the stationary probabilities, with elements , is
found by solving . The calculation of the variance
of the digital sum at the start of the codewords is now
straightforward. The computation of the sum variance within
the codewords is more complex and is therefore given in the
Appendix.
C. Computational Results
Using (13), we calculated the efficiency of the random
drawing algorithm for selected values of the codeword length
and redundancy. Fig. 1 shows the results. The connected points
have the same redundancy , and the th point on a
curve corresponds to a code having redundant bits, codeword
length , and selection sets of size . For comparison
purposes, we also plotted the efficiency of the polarity bit
code [see (4)]. By comparing the efficiency values at the
th point on each curve, we can see that these values are
approximately the same. The efficiency of the random coding
algorithm is practically independent of the codeword length
and is essentially determined by the number of redundant bits
used. It can be seen that codes with two or three redundant bits
are clearly more efficient than the polarity bit code. With an
increasing number of redundant bits, however, the efficiency
decreases. The decrease in performance, as will be explained
in the next section, is due to the shortcomings of the MRDS
criterion.
The results, plotted in Fig. 1, reveal that using more than
two redundant bits does not lead to improved performance. The
reason that performance decreases with an increasing number
of redundant bits can easily be understood. A quick calculation
will make it clear that a large selection set contains with great
probability at least one zero-disparity word. On the basis of

Fig. 2. Simulation results for the random drawing algorithm with fixed redundancy 1/128 with different selection criteria: (a) MRDS, (b) MMRDS, and (c) MSW.

the simple MRDS criterion one of the zero-disparity words
is randomly chosen and transmitted. As the sum variance of
zero-disparity codewords equals [1], irrespective
of the rate of the code, we conclude that the efficiency will
asymptotically approach zero. More sophisticated selection
criteria, which take account of the running digital sum within
the codeword, and not only at the end of the word, may
result in increased performance. In order to describe these
more sophisticated selection criteria, we introduce the squared
weight of a codeword, defined as the sum of the squared
RDS values at each bit position of the codeword. The two
selection criteria examined are as follows:
1) modified MRDS (MMRDS) criterion: from the code-
words with minimal WRDS , the one with minimum
is selected;
2) minimum squared weight (MSW) criterion: the code-
word of minimal is selected from the selection set,
irrespective of the WRDS of the codeword.
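The three criteria can be stated compactly in code. The sketch below (ours) operates on bipolar candidate words and a given starting RDS:

```python
# Sketch of the three selection criteria. The squared weight W of a
# codeword is the sum of the squared RDS values at each bit position.

def wrds(word, rds):
    """Word-end running digital sum given the starting RDS."""
    return rds + sum(word)

def squared_weight(word, rds):
    """Sum of squared RDS values over all bit positions of the word."""
    z, w = rds, 0
    for x in word:
        z += x
        w += z * z
    return w

def select(cands, rds, criterion):
    if criterion == "MRDS":      # minimize |WRDS| only
        return min(cands, key=lambda w: abs(wrds(w, rds)))
    if criterion == "MMRDS":     # minimal |WRDS|, then minimal W
        return min(cands, key=lambda w: (abs(wrds(w, rds)),
                                         squared_weight(w, rds)))
    if criterion == "MSW":       # minimal W, WRDS ignored
        return min(cands, key=lambda w: squared_weight(w, rds))
    raise ValueError(criterion)
```

For example, the candidates +1 −1 +1 −1 and +1 +1 −1 −1 both end with WRDS zero, but their squared weights are 2 and 6, so MMRDS and MSW prefer the former while plain MRDS cannot tell them apart.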
Fig. 2 shows the simulation results obtained for redundancy
1/128. Simulations of codes with other values of the redun-
dancy produced similar results. From the curves, we infer the
following.
The MRDS method wastes the opportunity offered by the
broader selection sets. By properly selecting the codeword
from the ones with minimal WRDS , the efficiency of the
MMRDS scheme tends to unity.
As indicated by the curve of the MSW criterion, the
best codewords do not necessarily minimize the WRDS .
Selecting the codeword with minimal squared weight
clearly results in more efficient codes.
Based on the above observations, we searched for a criterion
that is simple to implement while its efficiency approaches that
of the MSW criterion. The outcome is described in the next
section.
A. The Minimum Threshold Overrun Criterion
Our objective, in this section, is to construct a selection
criterion which takes into account the RDS values within

Fig. 3. Simulation results for the random drawing algorithm having fixed redundancy 1/128 with (a) the MSW criterion and (b) the MTO criterion. The dotted line shows the results obtained for the implemented encoding scheme using a scrambler with polynomial .

the codeword while having a structure that is also easy to
implement. The proposed selection scheme, termed minimum
threshold overrun (MTO) criterion, utilizes the parameter
“RDS threshold,” denoted by . The MTO penalty
is simply the number of times the absolute value of the
running digital sum within a word is larger than . As the
squaring operation needed for the MSW criterion is avoided,
the implementation of the MTO criterion is not more complex
than the MRDS method. The codeword with minimum penalty
is transmitted. If two or more codewords have the same
penalty, one of them is chosen randomly and transmitted.
This procedure does not seriously deteriorate the performance
as it is fairly improbable that two or more codewords in the
selection set have the same penalty value. Fig. 3, curve (b),
shows simulation results obtained with the MTO criterion.
Optimal values of the threshold were found by trial and
error. We can see that the MTO criterion is only slightly less
efficient than the MSW criterion. All results shown so far have
been obtained by a simulation program of the random drawing
algorithm. As a final check we also conducted simulations
with a full-fledged implementation using a scrambler with
polynomial . Experiments with other scrambler
polynomials did not reveal significant differences. The dotted
curve, Fig. 3, gives results on the basis of the MTO criterion.
The curve shows a nice agreement with results obtained with
the random drawing algorithm. As the proof of the pudding
we have computed the power spectral density (PSD) of two
typical examples. The results are displayed in Fig. 4.
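The MTO criterion amounts to counting threshold overruns; it can be sketched as follows (our illustration, with an assumed deterministic tie-breaking RNG):

```python
# Sketch of the minimum threshold overrun (MTO) criterion: the penalty
# of a candidate is the number of positions where |RDS| exceeds the
# threshold T; no squaring operation is needed.
import random

def mto_penalty(word, rds, T):
    """Count positions within the word where |RDS| exceeds T."""
    z, penalty = rds, 0
    for x in word:
        z += x
        if abs(z) > T:
            penalty += 1
    return penalty

def select_mto(cands, rds, T, rng=None):
    """Transmit a minimum-penalty codeword; ties broken at random."""
    rng = rng or random.Random(0)
    best = min(mto_penalty(w, rds, T) for w in cands)
    return rng.choice([w for w in cands
                       if mto_penalty(w, rds, T) == best])
```

The threshold T plays the role of the trial-and-error-optimized RDS threshold described above; only comparisons and a counter are needed, which is why MTO is no more complex to implement than MRDS.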
Multimode codes have been mathematically analyzed by
introducing a simple random drawing model. We have pre-
sented alternative selection criteria and examined their effect
on the spectral efficiency. Multimode codes are excellent
candidate dc-free codes when both low-spectral content at
the low-frequency end and high rate are at a premium. For
given rate and proper selection criteria, the spectral content of
multimode codes is very close to the minimal content promised

Fig. 4. Spectra of encoded sequences generated by the polarity switch code (upper curve) and the multimode code (lower curve). The redundancy is in both cases 1/128. The multimode code has six redundant bits (codeword length is ) and uses the MTO selection criterion.

by information theory.
In this Appendix, we will compute the sum variance of se-
quences encoded with the random drawing model. A codeword
with binary elements is translated into the -tuple
where .
Suppose the th codeword in the sequence,
, starts with initial RDS . The RDS at
the th symbol position of , denoted by , equals
The running sum variance at the th position, given , equals
where the operator averages over all codewords
that start with an initial RDS . As the source population
of codewords is the full set of vectors of nonpositive (nonneg-
ative) disparity, the expectations and
, are independent of the symbol positions
and . For the sake of convenience, we use the shorthand
notation and
. Substitution yields the running sum variance at
the th symbol position
The sum variance of a codeword starting with initial RDS ,
designated by , is found by averaging the running digital
sum variance over all symbol positions of the codeword or
The probability that a codeword starts with an RDS
equals the stationary probability , so that by taking
the probability into account that a codeword starts with RDS
and averaging over all initial states in , the following
expression is found for the sum variance :
The variance of the initial sum values, , equals
The quantity can be estimated by noting the periodicity, i.e.,
. Evaluating (8) yields
and after averaging, where the probability of starting with
an initial RDS is taken into account, we obtain
so that with we find
Substitution in (9) yields
1) Computation of the Correlation: We next calculate the
correlation of the symbols at the th and
the th symbol position within the same codeword. It is
obvious that . If , some more work
is needed. In that case,
Assume a codeword to be of disparity . Then the prob-
ability that a symbol at position in the codeword equals
The probability that another symbol at position
within the same codeword equals 1 is
and (11) yields the correlation for codewords of disparity
Using the above, we find that
The sum variance can be determined using (10) and (12)
The authors wish to thank I. Fair for his valuable remarks
that helped improve the contents of this paper.
[1] K. A. S. Immink, Coding Techniques for Digital Recorders. Engle-
wood Cliffs, NJ: Prentice-Hall International, 1991.
[2] S. Fukuda, Y. Kojima, Y. Shimpuku, and K. Odaka, “8/10 modulation
codes for digital magnetic recording,” IEEE Trans. Magn., vol. MAG-22,
pp. 1194–1196, Sept. 1986.
[3] A. X. Widmer and P. A. Franaszek, “A dc-balanced, partitioned-block,
8b/10b transmission code,” IBM J. Res. Develop., vol. 27, no. 5, pp.
440–451, Sept. 1983.
[4] H. Yoshida, T. Shimada, and Y. Hashimoto, “8-9 block code: A dc-free
channel code for digital magnetic recording,” SMPTE J., vol. 92, pp.
918–922, Sept. 1983.
[5] F. K. Bowers, U.S. Patent 2 957 947, 1960.
[6] D. E. Knuth, “Efficient balanced codes,” IEEE Trans. Inform. Theory,
vol. IT-32, pp. 51–53, Jan. 1986. See also P. S. Henry, “Zero disparity
coding system,” U.S. Patent 4 309 694, Jan. 1982.
[7] H. Hollmann and K. A. S. Immink, “Performance of efficient balanced
codes,” IEEE Trans. Inform. Theory, vol. 37, pp. 913–918, May 1991.
[8] J. Justesen, “Information rates and power spectra of digital codes,” IEEE
Trans. Inform. Theory, vol. IT-28, pp. 457–472, May 1982.
[9] I. J. Fair, W. D. Grover, W. A. Krzymien, and R. I. MacDonald, “Guided
scrambling: A new line coding technique for high bit rate fiber optic
transmission systems,” IEEE Trans. Commun., vol. 39, pp. 289–297,
Feb. 1991.
[10] R. H. Deng and M. A. Herro, “DC-free coset codes,” IEEE Trans.
Inform. Theory, vol. 34, pp. 786–792, July 1988.
[11] A. Kunisa, S. Takahashi, and N. Itoh, “Digital modulation method for
recordable digital video disc,” in Proc. 1996 IEEE Int. Conf. Consumer
Electron., June 1996, pp. 418–419.
Kees A. Schouhamer Immink (M’81–SM’86–
F’90) received the M.S. and Ph.D. degrees from the
Eindhoven University of Technology, Eindhoven,
The Netherlands.
He joined the Philips Research Laboratories,
Eindhoven, in 1968, where he currently holds the
position of Research Fellow. He has contributed
to the design and development of a wide variety
of digital consumer-type audio and video-recorders
such as the compact disc, compact disc video, R-
DAT, DCC, and DVD. He holds 32 U.S. patents
and has written numerous papers in the field of coding techniques for optical
and magnetic recorders.
Dr. Immink is the Chairman of the IEEE Benelux Chapter on Consumer
Electronics, a Governor of the Audio Engineering Society (AES), and a
Governor of the IEEE Information Theory Society. He was named a Fellow
of the AES, SMPTE, and IEE. Furthermore, he was awarded
the AES Silver Medal in 1992, the IEE Sir Thomson Medal in 1993,
the SMPTE Poniatoff Gold Medal in 1994, and the IEEE Ibuka Consumer
Electronics Award in 1996. He is a member of the Royal Netherlands
Academy of Arts and Sciences.
Levente Pátrovics was born in Budapest, Hungary.
He received the M.Sc. degree in electrical engi-
neering and computer science in 1994 from the
Technical University of Budapest, Hungary. From
May to December 1995, he was working towards
the Ph.D. degree at Philips Research Laboratories,
Eindhoven, The Netherlands.
His research interests are in information theory, in
particular the construction and analysis of constrained
codes. Currently, he is with Lufthansa Systems,
Hungaria Kft.
... Line codes for these applications are simpler than T x -constrained and RLL codes, since streams of codewords are only required to be balanced and to support self-clocking. Examples include the 8b/10b code [16], the 64b/66b code [17], and the 128b/132b code. We note that constrained codes preserving parity are studied in [18], and that constrained codes for deoxyribonucleic acid (DNA) storage are studied in [19]. ...
... A critical additional requirement in line codes, which appears in applications like optical recording, Flash memories, in addition to USB and PCIe standards, is balancing [10], [14], [22]. Examples of balanced line codes are the 8b/10b [16] and the 64b/66b [17] codes. Balanced line codes have zero average power at frequency zero, i.e., no DC power component, when the signal levels are −A and +A. ...
Full-text available
Line codes make it possible to mitigate interference, to prevent short pulses, and to generate streams of bipolar signals with no direct-current (DC) power content through balancing. Thus, they find applications in magnetic recording (MR) devices, in Flash devices, in optical recording devices, in addition to some computer standards. This paper introduces a new family of fixed-length, binary constrained codes, namely, lexicographically-ordered constrained codes (LOCO codes) for bipolar non-return-to-zero signaling. LOCO codes are capacity achieving, the lexicographic indexing enables simple, practical encoding and decoding, and this simplicity is demonstrated through analysis of circuit complexity. LOCO codes are easy to balance, and their inherent symmetry minimizes the rate loss with respect to unbalanced codes having the same constraints. Furthermore, LOCO codes that forbid certain patterns can be used to alleviate inter-symbol interference in MR systems and inter-cell interference in Flash systems. Experimental results demonstrate a gain of up to 10% in rate achieved by LOCO codes compared with practical run-length limited codes designed for the same purpose. Simulation results suggest that it is possible to achieve channel density gains of about 20% in MR systems by using a LOCO code to encode only the parity bits of a low-density parity-check code before writing.
... Line codes for these applications are simpler than T x -constrained and RLL codes, since streams of codewords are only required to be balanced and to support self-clocking. Examples include the 8b/10b code [18], the 64b/66b code [19], and the 128b/132b code [20]. We note that constrained codes preserving parity are studied in [21], and that constrained codes for deoxyribonucleic acid (DNA) storage are studied in [22]. ...
... A critical additional requirement in line codes, which appears in applications like optical recording, Flash memories, in addition to USB and PCIe standards, is balancing [12], [16], [25]. Examples of balanced line codes are the 8b/10b [18] and the 64b/66b [19] codes (the latter is not strictly DC-free). Balanced line codes have zero average power at frequency zero, i.e., no DC power component, when the signal levels are −A and +A. ...
Full-text available
Line codes make it possible to mitigate interference, to prevent short pulses, and to generate streams of bipolar signals with no direct-current (DC) power content through balancing. They find application in magnetic recording (MR) devices, in Flash devices, in optical recording devices, and in some computer standards. This paper introduces a new family of fixed-length, binary constrained codes, named lexicographically-ordered constrained codes (LOCO codes), for bipolar non-return-to-zero signaling. LOCO codes are capacity-achieving, the lexicographic indexing enables simple, practical encoding and decoding, and this simplicity is demonstrated through analysis of circuit complexity. LOCO codes are easy to balance, and their inherent symmetry minimizes the rate loss with respect to unbalanced codes having the same constraints. Furthermore, LOCO codes that forbid certain patterns can be used to alleviate inter-symbol interference in MR systems and inter-cell interference in Flash systems. Numerical results demonstrate a gain of up to 10% in rate achieved by LOCO codes with respect to other practical constrained codes, including run-length-limited codes, designed for the same purpose. Simulation results suggest that it is possible to achieve a channel density gain of about 20% in MR systems by using a LOCO code to encode only the parity bits, limiting the rate loss, of a low-density parity-check code before writing.
... After obtaining the compressed output F_comp, we perform randomization to bring the GC balance ratio within 0.5 ± α. In coding theory, there is a scheme called guided scrambling [17], which is similar to the randomization step. It is applied with the verification step to satisfy the GC balance ratio. ...
In this paper, we propose a novel iterative encoding algorithm for DNA storage that satisfies both the GC balance and run-length constraints using a greedy algorithm. DNA strands with run-length greater than three and a GC balance ratio far from 50% are known to be prone to errors. The proposed encoding algorithm stores data at high information density with high flexibility: run-length at most m and GC balance within 0.5 ± α for arbitrary m and α. More importantly, we propose a novel mapping method, based on a greedy algorithm, that reduces the average bit error compared to a randomly generated mapping. The proposed algorithm is implemented through iterative encoding, consisting of three main steps: randomization, M-ary mapping, and verification. It has an information density of 1.8616 bits/nt in the case of m = 3, which approaches the theoretical upper bound of 1.98 bits/nt, while satisfying both constraints. Also, the average bit error caused by a single nt error is 2.3455 bits, a reduction of 20.5% compared to the randomized mapping.
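The two constraints this abstract describes, a bounded run-length and a GC ratio near 50%, can be verified on a strand with a few lines of code. A sketch (helper names and the default thresholds are illustrative, not taken from the cited paper):

```python
def gc_ratio(strand):
    """Fraction of G and C nucleotides in the strand."""
    return sum(nt in "GC" for nt in strand) / len(strand)

def max_run_length(strand):
    """Length of the longest run of identical consecutive nucleotides."""
    longest = run = 1
    for prev, cur in zip(strand, strand[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest

def satisfies_constraints(strand, m=3, alpha=0.05):
    """Run-length at most m and GC ratio within 0.5 +/- alpha."""
    return max_run_length(strand) <= m and abs(gc_ratio(strand) - 0.5) <= alpha

print(satisfies_constraints("ACGTGCATGCAT"))  # True: no runs, GC ratio 0.5
print(satisfies_constraints("AAAAGC"))        # False: run of four A's
```

In a guided-scrambling-style encoder, a check like `satisfies_constraints` would serve as the verification step applied to each randomized candidate.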
... A signal is called DC-free when its mean amplitude is zero. Non-DC-free signals can cause transmission errors for electrotechnical reasons [91]. ...
The Internet of Things (IoT) brings comfort into the lives of users. It is convenient to control the lights at home with an app without leaving the couch or to open the front door with a remote control. This comfort, however, comes with security risks, as the wireless communication between components often relies on proprietary protocols. Such protocols are designed under size and energy constraints, where security is often only a secondary factor. Moreover, even when a default protocol such as IEEE 802.11 WLAN with enabled encryption is used, mobile devices such as smartphones can be located, threatening the location privacy of users. This thesis is divided into two main parts. In the first part, we demonstrate how to passively locate a smartphone indoors using IEEE 802.11 WLAN and contribute a geolocation system with a mean accuracy of 0.58 m. Subsequently, we analyze how a company can incentivize users with different levels of privacy-awareness to connect to a provided WLAN and give up their location privacy in exchange for certain benefits such as shopping discounts. We model this situation as a Bayesian Stackelberg game to find the company's best strategy. In the second part, we showcase the challenges that arise for security researchers when investigating proprietary wireless protocols. Software Defined Radios (SDRs) offer a generic way to analyze such protocols operating on frequencies like 433.92 MHz or 868.3 MHz, where no default hardware such as a WLAN stick is available. SDRs, however, deliver raw signals that have to be demodulated and decoded before researchers can reverse-engineer the protocol format. Our main contribution to this process is open-source software called Universal Radio Hacker (URH), which is, to the best of our knowledge, the first complete suite for wireless protocol investigation with SDRs. URH splits the protocol investigation process into the phases Interpretation, Analysis, Generation and Simulation.
The goal of the Interpretation phase is identifying the transmitted bits and bytes by demodulating the signal. Apart from letting users manually adjust demodulation parameters, we contribute a set of algorithms to automatically find these parameters and integrate them into URH. In the Analysis phase, the protocol format is reverse-engineered from the demodulated bits. This is a time-consuming manual process that slows down a security analysis. To address this problem, we design and implement a modular system that automatically finds protocol fields such as addresses and checksums. In combination with the automatic modulation parameter detection, this speeds up the security analysis of unknown wireless protocols. URH enables researchers to perform attacks on stateless and stateful protocols in the Generation and Simulation phase, respectively. In Generation, users can apply fuzzing to arbitrary data ranges, while the Simulation component of URH models protocol state machines and dynamically reacts to incoming messages from investigated devices. In both phases, the software automatically applies modulation and encoding to the bits that should be sent. We demonstrate three attacks on IoT devices that were found and executed with URH. The most complex attack involves opening an AES-protected wireless door lock in real time.
... Among the various methodologies proposed for the design of constrained codes, guided scrambling (GS) [16], [17] has been found to be an effective statistical coding technique suitable for the design of dc-free codes as well as weakly constrained codes [18]. In this work, we propose to combine the GS scheme with the k constrained code design methods presented in Section II to design high-rate spectrum shaping k constrained codes. ...
This paper proposes systematic code design methods for constructing efficient spectrum shaping codes with the maximum runlength limited constraint k, which are widely used in data storage systems for digital consumer electronics products. Through shaping the spectrum of the input user data sequence, the codes can effectively circumvent the interaction between the data signal and servo signal in high-density data storage systems. In particular, we first propose novel methods to design high-rate k constrained codes in the non-return-to-zero (NRZ) format, which can not only facilitate timing recovery of the storage system, but also avoid error propagation during decoding and reduce the system complexity. We further propose to combine the Guided Scrambling (GS) technique with the k constrained code design methods to construct highly efficient spectrum shaping k constrained codes. Simulation results demonstrate that the designed codes can achieve significant spectrum shaping effect with only around 1% code rate loss and reasonable computational complexity.
Abstract— WiGig stands for Wireless Gigabit Alliance, which was founded to encourage the adoption of IEEE 802.11ad, an update to the IEEE 802.11 wireless network standard intended to enable a Multiple Gigabit Wireless System (MGWS) in the 60 GHz band. This paper illustrates IEEE 802.11ad Directional Multi-Gigabit performance in multi-impairment environments with MATLAB Simulink, where the effects of AWGN, frequency offset, phase noise, DC offset, IQ imbalance, and memoryless cubic nonlinearity on the IEEE 802.11ad spectrum and constellation are examined. The DBPSK modulation scheme is designed and implemented in MATLAB Simulink and tested under AWGN and Rayleigh fading channels.
Ground-penetrating radar (GPR) is a widely popular sensing method with broad applications in non-destructive subsurface imaging. This paper presents a multistatic GPR for vehicle-mounted roadway and utility monitoring applications that employs several methods to improve performance compared to the state-of-the-art. The proposed system illuminates the subsurface with pseudo-random codes (m-sequences) that have near-ideal autocorrelation properties. As a result, the received signal can be matched-filtered to provide pulse compression, which improves both range resolution and depth of scan compared to impulse-based GPRs. It also uses a highly digital transmit and receive architecture based on direct FPGA-based transmit pulse generation and direct radio frequency (RF) sampling of the received echoes. Further, the analog front-end uses an 8×8 multistatic antenna array design with broadband antipodal Vivaldi elements to provide spatial diversity, which leads to improved object localization and reduced drift between scans. Experimental results from indoor and outdoor test-beds confirm the functionality of the proposed GPR system.
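The near-ideal autocorrelation property mentioned above is easy to demonstrate: the bipolar version of an m-sequence of period N has periodic autocorrelation N at zero shift and −1 everywhere else. A small sketch, using a degree-3 LFSR (tap positions and the state convention here are illustrative, not from the cited system):

```python
def m_sequence(taps, state, length):
    """Generate bits from a Fibonacci LFSR: output the last state bit,
    then shift in the XOR of the tapped state bits."""
    seq = []
    for _ in range(length):
        seq.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]
    return seq

def periodic_autocorrelation(bits, shift):
    """Periodic autocorrelation of the bipolar (+1/-1) version of bits."""
    n = len(bits)
    s = [1 if b else -1 for b in bits]
    return sum(s[i] * s[(i + shift) % n] for i in range(n))

# Degree-3 maximal-length LFSR (x^3 + x + 1): period 7.
seq = m_sequence(taps=[0, 2], state=[1, 0, 0], length=7)
print(periodic_autocorrelation(seq, 0))                        # 7
print([periodic_autocorrelation(seq, k) for k in range(1, 7)]) # all -1
```

This peak-to-sidelobe contrast is what makes matched filtering of the received echo act as pulse compression.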
Visual receptive fields are characterised by their centre-surround organisation and are typically modelled by Difference-of-Gaussians (DoGs). The DoG captures the effect of surround modulation, where the central receptive field can be modulated by simultaneous stimulation of a surrounding area. Although it is well-established that this centre-surround organisation is crucial for extracting spatial information from visual scenes, the underlying law binding the organisation has remained hidden. Indeed, previous studies have reported a wide range of size and gain ratios of the DoG used to model the receptive fields. Here, we present an equation that describes a principle for receptive field organisation, and we demonstrate that functional Magnetic Resonance Imaging (fMRI) population Receptive Field (pRF) maps of human V1 adhere to this principle. We formulate and understand the equation through consideration of the concept of Direct-Current-free (DC-free) filtering from electrical engineering, and we show how this particular type of filtering effectively makes the DoG process frequencies of interest without misallocation of bandwidth to redundant frequencies. Taken together, our results reveal how this organisational principle enables the visual system to adapt its sampling strategy to optimally cover the stimulus-space relevant to the organism, restricted only by Heisenberg's uncertainty principle that imposes a lower bound on the simultaneous precision in spatial position and frequency. Since surround modulation has been observed in all sensory modalities, we expect these results will become a cornerstone in our understanding of how biological systems in general achieve their high information processing capacity.
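The DC-free property of a DoG filter has a simple discrete illustration: if the centre and surround Gaussians are each normalized to unit sum, their difference sums to zero, so the filter has zero response at zero frequency. A sketch under that assumption (the sigmas and radius are arbitrary example values):

```python
from math import exp

def gaussian_kernel(sigma, radius):
    """1-D Gaussian kernel normalized to unit sum."""
    k = [exp(-x * x / (2 * sigma * sigma)) for x in range(-radius, radius + 1)]
    total = sum(k)
    return [v / total for v in k]

def dog_kernel(sigma_center, sigma_surround, radius):
    """Difference-of-Gaussians with unit-sum components: the kernel
    sums to zero, i.e., its DC (zero-frequency) response vanishes."""
    c = gaussian_kernel(sigma_center, radius)
    s = gaussian_kernel(sigma_surround, radius)
    return [a - b for a, b in zip(c, s)]

k = dog_kernel(1.0, 2.0, 8)
print(abs(sum(k)) < 1e-9)  # True: no DC is passed
```

The centre tap is positive and the flanks are negative, the familiar centre-surround profile.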
We present a systematic variable-to-fixed (VF) length scheme encoding binary information sequences into binary balanced sequences. The redundancy of the proposed scheme is larger than the redundancy of the best fixed-to-fixed (FF) length schemes in case of long codes, but it is smaller in case of short codes. The biggest advantage comes from the simplicity of the scheme: encoding only requires one to keep track of the sequence weight, while decoding requires only one extremely simple step, irrespective of the sequence length.
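The weight-tracking idea in this abstract can be illustrated with a deliberately crude variable-length scheme: track the running weight while emitting the source bits, then append copies of the minority symbol until the output is balanced. This toy sketch is not the authors' scheme and is far less efficient; it only shows why tracking the weight suffices for balancing:

```python
def balance_with_tail(bits):
    """Toy variable-length balancing: append the minority symbol until
    the sequence contains equally many zeros and ones."""
    weight = sum(bits)                     # running weight of the source bits
    disparity = 2 * weight - len(bits)     # ones minus zeros
    tail = [0] * disparity if disparity > 0 else [1] * (-disparity)
    return bits + tail

out = balance_with_tail([1, 1, 1, 0, 1, 0])
print(out)                      # source bits followed by two balancing zeros
print(2 * sum(out) == len(out)) # True: output is balanced
```

A practical scheme encodes the tail length far more compactly, but the bookkeeping, a single running weight, is the same.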
Since the early 1980s we have witnessed the digital audio and video revolution: the Compact Disc (CD) has become a commodity audio system. CD-ROM and DVD-ROM have become the de facto standard for the storage of large computer programs and files. Growing fast in popularity are the digital audio and video recording systems called DVD and BluRay Disc. The above mass storage products, which form the backbone of the modern electronic entertainment industry, would have been impossible without the use of advanced coding systems. Pulse Code Modulation (PCM) is a process in which an analogue, audio or video, signal is encoded into a digital bit stream. The analogue signal is sampled, quantized and finally encoded into a bit stream. The origins of digital audio can be traced as far back as 1937, when Alec H. Reeves, a British scientist, invented pulse code modulation. The advantages of digital audio and video recording have been known and appreciated for a long time. The principal advantage that digital implementation confers over analog systems is that in a well-engineered digital recording system the sole significant degradation takes place at the initial digitization, and the quality lasts until the point of ultimate failure. In an analog system, quality is diminished at each stage of signal processing and the number of recording generations is limited. The quality of analog recordings, like the proverbial 'old soldier', just fades away. The advent of ever-cheaper and faster digital circuitry has made feasible the creation of high-end digital video and audio recorders, an impracticable possibility using previous generations of conventional analog hardware. The general subject of coding for digital recorders is very broad, with its roots deep set in history. In digital recording (and transmission) systems, channel encoding is employed to improve the efficiency and reliability of the channel.
Channel coding is commonly accomplished in two successive steps: (a) error-correction code followed by (b) recording (or modulation) code. Error-correction control is realized by adding extra symbols to the conveyed message. These extra symbols make it possible for the receiver to correct errors that may occur in the received message. In the second coding step, the input data are translated into a sequence with special properties that comply with the given "physical nature" of the recorder. Of course, it is very difficult to define precisely the area of recording codes and it is even more difficult to be in any sense comprehensive. The special attributes that the recorded sequences should have to render them compatible with the physical characteristics of the available transmission channel are called channel constraints. For instance, in optical recording a '1' is recorded as a pit and a '0' is recorded as a land. For physical reasons, the pits or lands should be neither too long nor too short. Thus, one records only those messages that satisfy a run-length-limited constraint. This requires the construction of a code which translates arbitrary source data into sequences that obey the given constraints. Many commercial recorder products, such as Compact Disc and DVD, use an RLL code. The main part of this book is concerned with the theoretical and practical aspects of coding techniques intended to improve the reliability and efficiency of mass recording systems as a whole. The successful operation of any recording code is crucially dependent upon specific properties of the various subsystems of the recorder. There are no techniques, other than experimental ones, available to assess the suitability of a specific coding technique. It is therefore not possible to provide a cookbook approach for the selection of the 'best' recording code. In this book, theory has been blended with practice to show how theoretical principles are applied to design encoders and decoders.
The practitioner's view will predominate: we shall not be content with proving that a particular code exists and ignore the practical detail that the decoder complexity is only a billion times more complex than the largest existing computer. The ultimate goal of all work, application, is never once lost from sight. Much effort has gone into the presentation of advanced topics such as in-depth treatments of code design techniques, hardware consequences, and applications. The list of references (including many US Patents) has been made as complete as possible and suggestions for 'further reading' have been included for those who wish to pursue specific topics in more detail. The decision to update Coding Techniques for Digital Recorders, published by Prentice-Hall (UK) in 1991, was made in Singapore during my stay in the winter of 1998. The principal reason for this decision was that during the last ten years or so, we have witnessed a success story of coding for constrained channels. The topic of this book, once the province of industrial research, has become an active research field in academia as well. During the IEEE International Symposia on Information Theory (ISIT) and the IEEE International Conference on Communications (ICC), for example, there are now usually three sessions entirely devoted to aspects of constrained coding. As a result, very exciting new material, in the form of (conference) articles and theses, has become available, and an update became a necessity. The author is indebted to the Institute for Experimental Mathematics, University of Duisburg-Essen, Germany, the Data Storage Institute (DSI) and National University of Singapore (NUS), both in Singapore, and Princeton University, US, for the opportunity offered to write this book. Among the many people who helped me with this project, I would like to thank Dr. Ludo Tolhuizen, Philips Research Eindhoven, for reading and providing useful comments and additions to the manuscript.
Preface to the Second Edition. About five years after the publication of the first edition, it was felt that an update of this text would be inescapable as so many relevant publications, including patents and survey papers, have been published. The author's principal aim in writing the second edition is to add the newly published coding methods, and discuss them in the context of the prior art. As a result, about 150 new references, including many patents and patent applications, most of them less than five years old, have been added to the former list of references. Fortunately, the US Patent Office now follows the European Patent Office in publishing a patent application after eighteen months of its first application, and this policy clearly adds to the rapid access to this important part of the technical literature. I am grateful to many readers who have helped me to correct (clerical) errors in the first edition and also to those who brought new and exciting material to my attention. I have tried to correct every error that I found or that was brought to my attention by attentive readers, and seriously tried to avoid introducing new errors in the Second Edition. China is becoming a major player in the art of constructing, designing, and basic research of electronic storage systems. A Chinese translation of the first edition was published in early 2004. The author is indebted to Prof. Xu, Tsinghua University, Beijing, for taking the initiative for this Chinese version, and also to Mr. Zhijun Lei, Tsinghua University, for undertaking the arduous task of translating this book from English to Chinese. Clearly, this translation makes it possible that a billion more people will now have access to it. Kees A. Schouhamer Immink, Rotterdam, November 2004
A class of block coset codes with disparity and run-length constraints is studied. They are particularly well suited for high-speed optical fiber links and similar channels, where DC-free pulse formats, channel error control, and low-complexity encoder-decoder implementations are required. The codes are derived by partitioning linear block codes. The encoder and decoder structures are the same as those of linear block codes with only slight modifications. A special class of DC-free coset block codes is derived from BCH codes with specified bounds on minimum distance, disparity, and run length. The codes have low disparity levels (a small running digital sum) and good error-correcting capabilities.
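The running digital sum (RDS) mentioned here is the standard figure of merit for DC-free sequences: each 1 contributes +1 and each 0 contributes −1, and the spread of the RDS trajectory is the digital sum variation (DSV). A small sketch of both quantities:

```python
def running_digital_sum(bits):
    """RDS trajectory of a binary word under the bipolar mapping
    1 -> +1, 0 -> -1."""
    rds, trajectory = 0, []
    for b in bits:
        rds += 1 if b else -1
        trajectory.append(rds)
    return trajectory

def digital_sum_variation(bits):
    """DSV: spread between the extreme RDS values, counting the start at 0."""
    traj = [0] + running_digital_sum(bits)
    return max(traj) - min(traj)

word = [1, 0, 1, 1, 0, 0, 1, 0]
print(running_digital_sum(word))   # [1, 0, 1, 2, 1, 0, 1, 0]
print(digital_sum_variation(word)) # 2
```

A bounded DSV over arbitrarily long streams is what guarantees a spectral null at DC; a small DSV corresponds to a wide suppressed low-frequency band.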
This article describes the 8–9 block code, a dc-free channel code for use in digital magnetic recording. The coding strategy is discussed, as well as results of investigation of the properties of the code by computer simulation and hardware experiment. This code can be applied to a digital VTR without major modification to the recording and playback circuits which were previously used for the 8–10 block code. Although further study is required for selecting the best channel code for the digital VTR, the 8–9 block code may be considered a reasonable compromise between the 8–10 block code and the 8-bit code without overhead.
It is difficult to record DC and low-frequency signals with the magnetic recording method. Furthermore, in high-density magnetic recording, the signal-to-noise ratio and the high-frequency output level are low, and low-frequency crosstalk noise from the adjacent tracks is relatively high. A modulation code for high-density digital magnetic recording must have a large Tw and a DC-free characteristic. During the development of the R-DAT system, we devised two run-length-limited 8/10 conversion rate modulation codes for use with the R-DAT. This paper discusses various possible modulation codes and confirms the superiority of one particular 8/10 modulation code.
This paper describes a byte-oriented binary transmission code and its implementation. This code is particularly well suited for high-speed local area networks and similar data links, where the information format consists of packets, variable in length, from about a dozen up to several hundred 8-bit bytes. The proposed transmission code translates each source byte into a constrained 10-bit binary sequence which has excellent performance parameters near the theoretical limits for 8B/10B codes. The maximum run length is 5 and the maximum digital sum variation is 6. A single error in the encoded bits can, at most, generate an error burst of length 5 in the decoded domain. A very simple implementation of the code has been accomplished by partitioning the coder into 5B/6B and 3B/4B subordinate coders.
Coding schemes in which each codeword contains equally many zeros and ones are constructed in such a way that they can be efficiently encoded and decoded.
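The codeword sets this entry refers to, n-bit words with equally many zeros and ones, can be enumerated directly for small n; their count is the central binomial coefficient C(n, n/2), which is why, for example, an 8B/10B-style balanced code has C(10, 5) = 252 balanced 10-bit words to draw from. A brute-force sketch:

```python
from itertools import product
from math import comb

def balanced_words(n):
    """All n-bit words (n even) containing exactly n/2 ones."""
    return [w for w in product([0, 1], repeat=n) if sum(w) == n // 2]

# The count matches the binomial coefficient C(n, n/2).
for n in (4, 6, 8):
    print(n, len(balanced_words(n)), comb(n, n // 2))
```

Efficient schemes avoid this exponential enumeration by indexing the balanced words combinatorially, but the set being indexed is exactly the one listed here.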
This paper describes a modulation method suitable for high-density recording on an optical disc. This method, called the 8-15 modulation method, is based on an 8-15 channel coding with longer minimum interval between transitions and an efficient DC component suppressing method. The maximum and minimum intervals between transitions of the 8-15 channel code achieve the theoretical limits for conversion of an 8-bit data word to a 15-bit codeword. The DC component suppressing method, which adds several redundant bits to the 8-15 channel code, can suppress the low-frequency components of the channel bit stream. These features allow this method to extend the minimum interval between transitions and window margin by 5%, compared with EFMPlus. Additionally, this paper proposes a data format suitable for using the 8-15 modulation method and RS product code as the modulation method and error correcting code, respectively.
The technique introduced has relatively simple encoding and decoding procedures which can be implemented at the high bit rates used in optical fiber communication systems. Because it is similar to the established technique of self-synchronizing scrambling but is also capable of guiding the scrambling process to produce a balanced encoded bit stream, the technique is called guided scrambling (GS). The concept of GS coding is explained, and design parameters which ensure good line code characteristics are discussed. The performance of a number of guided scrambling configurations is reported in terms of maximum consecutive like-encoded bits, encoded stream disparity, decoder error extension, and power spectral density of the encoded signal. Comparison of guided scrambling with conventional line code techniques indicates a performance which approaches that of alphabetic lookup table codes with an implementation complexity similar to that of current nonalphabetic coding techniques.
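Guided scrambling is the selection-set mechanism underlying the multimode codes of the article above: each control prefix produces a different scrambled candidate, and the encoder keeps the "best" one under some criterion. A simplified sketch (the tap offsets, prefix length, and minimum-|disparity| criterion are illustrative choices, not the configuration of the cited paper):

```python
def scramble(bits, taps=(1, 3)):
    """Self-synchronizing scrambler: each output bit is the input bit
    XORed with earlier *output* bits at the tap offsets."""
    out = []
    for i, b in enumerate(bits):
        v = b
        for t in taps:
            if i - t >= 0:
                v ^= out[i - t]
        out.append(v)
    return out

def disparity(bits):
    """Word disparity: number of ones minus number of zeros."""
    return 2 * sum(bits) - len(bits)

def guided_scramble(source, control_bits=2, taps=(1, 3)):
    """Try every control prefix, scramble each augmented word, and keep
    the candidate whose disparity is smallest in magnitude."""
    candidates = []
    for c in range(2 ** control_bits):
        prefix = [(c >> k) & 1 for k in reversed(range(control_bits))]
        candidates.append(scramble(prefix + source, taps))
    return min(candidates, key=lambda w: abs(disparity(w)))

best = guided_scramble([1, 1, 1, 1, 1, 1, 0, 1, 1, 1])
print(best, disparity(best))
```

The decoder needs no knowledge of which prefix was chosen: descrambling inverts the XOR using the received bits themselves, after which the control prefix is simply discarded. Alternative selection criteria, such as minimizing the running digital sum at the word boundary, drop into the `min(...)` step unchanged.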