
326 IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. 51, NO. 3, MARCH 2003

Efficient dc-Free RLL Codes for Optical Recording

Kees A. Schouhamer Immink, Fellow, IEEE, Jin-Yong Kim, Sang-Woon Suh, and Seong Keun Ahn

Abstract—We will report on new dc-free runlength-limited codes (DCRLL) intended for the next generation of DVD. The efficiency of the newly developed DCRLL schemes is extremely close to the theoretical maximum, and as a result, significant density gains can be obtained with respect to prior-art coding schemes. With a newly developed DCRLL code, we can achieve a 9% higher overall rate than that of DVD's EFMPlus.

Index Terms—Channel capacity, constrained code, dc-free sequence, optical recording, runlength-limited (RLL) sequence.

I. INTRODUCTION

THE design of codes for optical recording is essentially the design of combined dc-free and runlength-limited (DCRLL) codes [1]. Eight-to-Fourteen Modulation (EFM), invented by Immink and Ogawa in the early eighties [2], and EFMPlus [3] were adopted as the recording codes for the compact disc (CD) and the DVD, respectively.

Binary sequences generated by a (d,k) RLL encoder have at least d and at most k, k >= d, "zeros" between successive "ones". The series of encoded bits is converted, via a modulo-2 integration operation, called precoding, to a corresponding modulated signal formed by bit cells having a high or low signal value, a "one" being represented in the modulated signal by a change from a high to a low signal value or vice versa. A "zero" is represented by the lack of change of the modulated signal.

Specifically, codes with minimum runlength parameter d = 1 have been widely employed in optical recording, while codes with d = 2 have been proposed for future systems [4]. For that reason, we will focus our design efforts on efficient RLL codes with minimum runlength parameter d = 1 and d = 2. Thereafter, we will discuss the development of a new DCRLL coding arrangement that employs a highly efficient RLL inner code, which is extended by a second coding mechanism, such as, for example, Guided Scrambling [5], used for spectral shaping (and other) purposes. We start with the development of the new RLL codes.

II. VERY EFFICIENT RLL CODING SCHEMES

Let the integers m and n denote the information word length and codeword length, respectively. The maximum rate, m/n, of an RLL code, given values of d and k, is called the

Paper approved by V. K. Bhargava, the Editor for Coding and Communication

Theory of the IEEE Communications Society. Manuscript received May 2001;

revised January 2, 2002. This paper was presented in part at the International

Symposium on Information Theory (ISIT), Washington, DC, June 2001.

K. A. S. Immink is with Turing Machines Inc., 3016 DK Rotterdam, The

Netherlands (e-mail: immink@turing-machines.com).

J.-Y. Kim, S.-W. Suh, and S. K. Ahn are with the DCT Team, Multi-Media

Labs, LG Electronics Inc., Seocho-Gu, Seoul 137-724, Korea.

Digital Object Identifier 10.1109/TCOMM.2003.809752

TABLE I

CAPACITY C(1,k) AND C(2,k) AS A FUNCTION OF k

Shannon capacity, and it is denoted by C(d,k). Table I tabulates C(1,k) and C(2,k) for relevant values of k. The efficiency of an RLL code is usually measured by a quantity called the code efficiency, η, defined by

η = (m/n) / C(d,k). (1)

For ease of presentation, we will first focus on the design of RLL codes with d = 1. Later, we will extend the ideas to the design of codes with d = 2.
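The capacity and efficiency figures used throughout can be recomputed from first principles: C(d,k) is the base-2 logarithm of the largest eigenvalue of the state-transition matrix of the (d,k) constraint graph. The following stdlib-only sketch is our own illustration (not the authors' program), using power iteration:

```python
# Capacity C(d,k) of an RLL constraint, computed as log2 of the largest
# eigenvalue of its state-transition matrix (the state counts the zeros
# emitted since the last one). Power iteration, standard library only.
from math import log2

def rll_capacity(d, k, iters=2000):
    v = [1.0] * (k + 1)
    lam = 1.0
    for _ in range(iters):
        w = [0.0] * (k + 1)
        for i in range(k + 1):
            if i < k:
                w[i + 1] += v[i]   # emit a '0': the zero-run grows by one
            if i >= d:
                w[0] += v[i]       # emit a '1': allowed after >= d zeros
        lam = max(w)
        v = [x / lam for x in w]
    return log2(lam)

def efficiency(m, n, d, k):
    # code efficiency eta of a rate m/n code, cf. (1)
    return (m / n) / rll_capacity(d, k)

print(round(rll_capacity(1, 7), 4))      # C(1,7) ~ 0.6793
print(round(rll_capacity(1, 200), 4))    # C(1,200), close to C(1,inf) ~ 0.6942
print(round(efficiency(2, 3, 1, 7), 3))  # rate-2/3 (1,7) code: ~ 0.981
```

A large k (here 200) serves as a numerical stand-in for k = infinity, since C(1,k) converges rapidly to C(1,inf).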

Up till now, small d = 1 codes with a rate exceeding two-thirds have not been published. There are only two approaches for constructing a d = 1 RLL code whose rate is larger than two-thirds. Firstly, we may relax the maximum runlength k to a value larger than 7. Note that a (1,7) code was first put to practical use in the early seventies, and that since the advent of hard-disk drives (HDDs), significant improvements in signal processing for timing-recovery circuits have made it possible to employ codes with a much larger maximum runlength k. Secondly, on top of that, we may endeavor to design a more efficient code. The efficiency of the rate 2/3, (1,7) code is η = (2/3)/C(1,7) ≈ 0.981, which reveals that we can gain at most 1.9% in rate by an alternative, more efficient code redesign. If we fully relax the k constraint, i.e., set k = infinity, we can gain at most 3.97% in code rate. In other words, a viable improvement in the code rate of a d = 1 encoder ranges from 1.9% to 3.97%.

In the sequel of this paper, we will show how to create a (1,14) code whose rate is 3.85% better than that of the traditional rate 2/3, (1,7) code. We start, in the next subsection, with a simple problem, namely finding integers m and n that improve upon the rate, 2/3, of the industry-standard code.

A. Suitable Integers m and n for d = 1

We will start with a simple exercise, namely a search for pairs of integers m and n that are suitable candidates for a coding rate exceeding 2/3. Obviously, the "best" code is a code with a rate, R = m/n, that exactly equals the capacity C(d,k) for the desired values of d and k. One is tempted to ask if it is possible to choose the integers m and n such that m/n = C(d,k). The answer, a resounding no, was given by Ashley and Siegel [6], who

0090-6778/03$17.00 © 2003 IEEE


TABLE II

INTEGERS m AND n SUCH THAT 2/3 < m/n < C(1,∞). THE QUANTITY η EXPRESSES THE CODE EFFICIENCY

showed that, besides a very few trivial exceptions, the capacity is an irrational number. Thus, as the rate of a code, m/n, where m and n are integers, is rational, the capacity can only be approached.

In order to obtain some feeling whether there are many "practical" pairs of such integers m and n, we wrote a one-line computer program searching for integers m and n that satisfy the inequalities 2/3 < m/n < C(1,∞), where for reasons of implementation we imposed an upper bound on n. All pairs of integers found are shown in Table II. Surprisingly, there are just six m and n pairs whose quotient m/n is larger than 2/3.¹ Perusal of the table reveals that the code rate 9/13 ≈ 0.6923 is highly attractive, as it is just 0.28% below the Shannon capacity C(1,∞) ≈ 0.6942. The next better code, of rate 34/49, is far less attractive, as it is much more complex and adds a minute 0.2% to the density gain with respect to a rate 9/13 code. We therefore concentrated our attention on a rate 9/13 code. The fact that the rate 9/13 is less than capacity does not mean that a code with that rate can be practically constructed. In the next subsection, we will show how a rate 9/13, (1,14) code can be created using a new design technique.
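The pair search is easy to reproduce. The sketch below is our own; the bound n <= 64 is our assumption (the exact bound used above was not retained in this reproduction), so the program lists a superset of the pairs of Table II:

```python
# Search for pairs (m, n) with 2/3 < m/n < C(1,inf), skipping multiples of
# smaller pairs via the gcd filter. The bound n <= 64 is our assumption.
from math import gcd, log2

C1_INF = log2((1 + 5 ** 0.5) / 2)   # C(1,inf) = log2(golden ratio) ~ 0.6942

pairs = [(m, n) for n in range(2, 65) for m in range(1, n)
         if gcd(m, n) == 1 and 2 / 3 < m / n < C1_INF]

for m, n in sorted(pairs, key=lambda p: -p[0] / p[1]):
    print(f"{m}/{n} = {m / n:.4f}  efficiency {(m / n) / C1_INF:.4f}")
```

Among the pairs found are 9/13, 11/16, and 34/49; 25/36 is rejected because it exceeds C(1,inf), which illustrates how tight the headroom is.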

B. Encoder Description

In this section, we will describe a finite-state encoder that generates sequences satisfying the d = 1 constraint (the k constraint is ignored for a while for ease of presentation). We start with a few definitions. A codeword is a binary string of length n that satisfies the d constraint. The set of codewords, S, is divided into four subsets S00, S01, S10, and S11, where the first subscript denotes the first bit of a codeword and the second subscript its last bit. The four subsets are characterized as follows: codewords in S00 start and end with a "0", codewords in S01 start with a "0" and end with a "1", etc. The encoder has N states, which are divided into two state subsets of a first and a second type. The encoder has N1 states of the first type and N2 = N - N1 states of the second type. The two types of coding states are characterized by the fact that all codewords in the states of the first type must start with a "0", while codewords in the states of the second type are free to start with a "1" or a "0".

The encoder state-transition rules are now easily described. Codewords that end with a "0", i.e., codewords in the subsets S00 and S10, may enter any of the N encoder states. Codewords that end with a "1" may only enter the states of the first type (and not the states of the second type). Note that, by definition, the codewords in states of the first type start with a "0", and codewords in states of the second type may start with a

¹We omitted trivial pairs, such as 18 and 26, etc., that are multiples of given smaller pairs. This, by the way, does not mean the omitted pairs are irrelevant for a specific code design.

Fig. 1. Codewords that end with a “0” may be followed by codewords in the

states of the first type and the states of the second type, while words that

end with a ‘1’ may only be followed by codewords in the states of the first

type.

"1", which prohibits a codeword ending with a "1" from entering states of the second type. The encoder concept is schematically represented in Fig. 1. It is essential that the sets of codewords that belong to the various states (of any type) do not have codewords in common (i.e., the sets of codewords associated with the coding states are disjoint). This attribute implies that any codeword can unambiguously be identified with the state from which it emerged. Then, as we will show, it is possible to assign the same codeword to more than one information word (the miraculous multiplication of codewords). The sliding-block decoder can, by observing both the current and the next codeword (for identifying the next state), uniquely decide which of the information words was actually transmitted. Codewords in the subsets S00 and S10 can, as codewords in these two subsets end with a "0", be followed by codewords in any of the N = N1 + N2 states, and can thus be assigned N1 + N2 times to different information words. Similarly, codewords in S01 and S11 can only be followed by the states of the first kind, and can therefore be assigned N1 times to different information words. Given the above encoder model, we can write down two necessary conditions for such a rate m/n code.

Let |S| denote the size of a subset S. Then, following the above arguments, there are at maximum (N1 + N2)|S00| + N1|S01| codewords leaving the states of the first type. For a rate m/n code, there should be at least N1 · 2^m codewords leaving the states of the first type. Thus, we can write down the first condition

(N1 + N2)|S00| + N1|S01| ≥ N1 · 2^m. (2)

Similarly, the second condition follows from the fact that there should be a sufficient number of codewords leaving all N states. We find

(N1 + N2)(|S00| + |S10|) + N1(|S01| + |S11|) ≥ (N1 + N2) · 2^m. (3)

Note that the inequalities (2) and (3) are equal to the approx-

imate eigenvector equation, which plays an essential role in a

variety of code constructions, such as the state-splitting method

[7]. There is, however, quite a difference as the two inequalities

above imply a very specific encoder structure (including size),

while, in general, the approximate eigenvector merely gives a


TABLE III

VALUES OF N1 AND N2 THAT SATISFY CONDITIONS (2) AND (3)

TABLE IV

DISTRIBUTION OF THE VARIOUS SUBSETS AND STATES

loose upper bound on the encoder size of the code found by the state-splitting method.

With a small computer we can, given m and n, easily find integers N1 and N2 that satisfy the above two conditions. An example of a rate 9/13 code will show the effectiveness of the new construction.

C. Rate 9/13, d = 1 Codes

Assume the construction of a rate 9/13 encoder. Then m = 9, n = 13, and 2^m = 512. Table III shows values of N1, N2, and N that satisfy Conditions (2) and (3). After finding suitable values of N1 and N2, the next step in the code construction is the distribution of the various codewords among the various states. In order to find such a distribution, a trial-and-error approach has been used. Table IV shows, for N1 = 8 and N2 = 5 as an example (note that the distribution given is not unique; there are many other ways of allocating the codewords to the states), how the codewords in the various subsets can be allocated to the various states. From Table IV, we discern that the subset S00 of size 233 has 72 words in each of States 1 and 2, 87 words in State 3, and 2 words in State 5. Thus, in total: 72 + 72 + 87 + 2 = 233. Similarly, it can be verified that the four row sums equal the number of codewords in each of the four subsets. Codewords that end with a "0", i.e., codewords in S00 and S10, can be assigned N1 + N2 = 13 times to different information words, while codewords that end with a "1", i.e., codewords in S01 and S11, can be assigned N1 = 8 times to different information words. Thus, the total number of information words that can be assigned to the codewords in State 1 follows by weighing each codeword with the number of states it may enter. Similarly, it can be verified that from any of the N = 13 encoder states there are at least 516 information words that can be assigned to codewords, which shows that the code can accommodate 9-bit information words. An enumeration table such as Table IV suffices to construct a code by assigning codewords to the coding states and source words.
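The feasibility search can be checked mechanically. The sketch below is our own reconstruction (not the authors' program): it enumerates the length-13 words obeying d = 1 (no adjacent ones), recovers the four subset sizes, including the 233-word subset mentioned above, and scans the 13-state splits for those meeting the two necessary conditions:

```python
# Feasibility sketch for the rate 9/13, d=1 code: enumerate all admissible
# 13-bit words, split them by (first bit, last bit), and search for state
# counts (N1, N2) with N1 + N2 = 13 meeting conditions (2) and (3).
from itertools import product

n, m, N = 13, 9, 13
words = [''.join(w) for w in product('01', repeat=n) if '11' not in ''.join(w)]

S = {'00': 0, '01': 0, '10': 0, '11': 0}
for w in words:
    S[w[0] + w[-1]] += 1        # e.g. S['00'] counts words starting/ending '0'

sols = []
for N1 in range(1, N):
    N2 = N - N1
    cond2 = N * S['00'] + N1 * S['01'] >= N1 * 2 ** m
    cond3 = N * (S['00'] + S['10']) + N1 * (S['01'] + S['11']) >= N * 2 ** m
    if cond2 and cond3:
        sols.append((N1, N2))

print(len(words), S, sols)      # 610 admissible words; S['00'] = 233
```

Under this reconstruction of the conditions, the 13-state split is unique: N1 = 8 states of the first type and N2 = 5 of the second, corroborating the example distribution of Table IV.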

It can be verified with the procedure outlined above that a 13-state encoder of (code) size 520 can be created. The maximum size of any 13-bit d = 1 code is bounded by 2^(13·C(1,∞)) ≈ 521, and we therefore conclude that the above code is quite efficient,

TABLE V

INTEGERS m AND n SUCH THAT 8/15 < m/n < C(2,∞). THE QUANTITY η EXPRESSES THE CODE EFFICIENCY

particularly considering that the encoder has a relatively small number, 13, of states. For n = 12-bit codewords, we find that a 13-state encoder achieves the maximum code size, 321, possible. These codes are supposedly the most efficient in existence in terms of relative performance. Such extremely efficient codes could, up till now, only be constructed with "large" codewords, but as shown here, also selected "small" codes can have a rate which is very close to the channel capacity.

As the above code can accommodate more than the required 512 words, surplus "worst-case" codewords can be deleted for minimizing the k constraint. After a judicious process of deleting codewords that end or start with "long" runs of "0"s, we constructed a 5-state (1,18) code and a 13-state (1,14) code. Note, in Table I, that the smallest possible k for a rate 9/13 code equals 12.

A few words are in order about the decoder. A decoder must observe both the current and the upcoming codeword to uniquely decode the encoded sequence of codewords into a sequence of information words. Single channel bit errors can thus lead to errors in at most two decoded m-bit symbols. The decoder comprises two look-up tables: the next-state look-up table and the data look-up table. The next-state look-up table has the next codeword as its input, and the state to which this word belongs as its output. The data look-up table has the output of the next-state look-up table and the current codeword as its input, and the output of the data look-up table is the decoded information word.
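The two-table decoder can be pictured with a toy example. The 3-bit codewords, the state assignment, and the data table below are invented for this sketch; they are not the codebook of the rate 9/13 code:

```python
# Toy illustration of the two-table sliding-block decoder. All tables here
# are invented for the sketch (not the paper's actual code).
next_state = {'010': 1, '001': 1, '000': 2}       # next codeword -> its state

data = {(1, '010'): 0, (2, '010'): 1,             # (state of next codeword,
        (1, '001'): 1,                            #  current codeword)
        (1, '000'): 0, (2, '000'): 1}             #  -> decoded data

def decode(codewords):
    out = []
    for cur, nxt in zip(codewords, codewords[1:]):
        # the next codeword identifies the next state, which disambiguates
        # a codeword that was assigned to more than one information word
        out.append(data[(next_state[nxt], cur)])
    return out

print(decode(['010', '000', '001', '010']))       # -> [1, 0, 1]
```

Note how the codeword '010' decodes to 0 or 1 depending on the state of the codeword that follows it, which is exactly the "multiplication of codewords" exploited by the encoder.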

III. EFFICIENT d = 2 CODES

Up till now, we have concentrated on the design of efficient d = 1 codes, and as both code parameters, d = 1 and d = 2, are of great practical interest for optical recording, we will now repeat the exercise for the case d = 2.

A. Suitable Integers m and n for d = 2

RLL codes with minimum runlength parameter d = 2 have been widely published. Table I tabulates C(2,k) as a function of k, and from this table the reader can easily discern the head room available for the design of a d = 2 code of rate R = m/n. The rate 8/15 is, see Table I, 3.3% below the channel capacity C(2,∞) ≈ 0.5515. Table V shows values of m and n, where 8/15 < m/n < C(2,∞) and n is bounded for reasons of implementation. The m and n pairs are ordered according to their quotient m/n. Clearly, the quotients 11/20, 6/11, and 7/13 are suitable candidate rates for the creation of


small d = 2 codes. Efficiency-wise, the code of rate 17/31 is also attractive, but the code is far too complex for current implementation. Kim [8] has been granted a U.S. patent on an embodiment of a rate 7/13, (2,25) code, which operates with a single merging bit (3PM principle [9]). In the next subsection, we will describe in detail how very efficient d = 2 codes with the above-mentioned rates can be constructed.
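The d = 2 pair search can be sketched in the same way as for d = 1. The bounds below (lower bound 8/15 and n <= 31) are our assumptions; under them the search happens to yield six pairs, including all quotients named above:

```python
# Search for pairs (m, n) with 8/15 < m/n < C(2,inf). The bounds are our
# assumptions. C(2,inf) = log2 x, where x is the real root of x^3 = x^2 + 1.
from math import gcd, log2

lo, hi = 1.0, 2.0
for _ in range(60):                  # bisection for the root of x^3 = x^2 + 1
    mid = (lo + hi) / 2
    if mid ** 3 - mid ** 2 - 1 < 0:
        lo = mid
    else:
        hi = mid
C2_INF = log2(hi)                    # ~ 0.5515

pairs = sorted((m, n) for n in range(2, 32) for m in range(1, n)
               if gcd(m, n) == 1 and 8 / 15 < m / n < C2_INF)
print(pairs)                         # incl. 11/20, 6/11, 7/13, and 17/31
```

Note that 16/29 just exceeds C(2,inf) and is therefore rejected, in the same way 25/36 was rejected in the d = 1 search.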

B. Encoder Description

In this section we will describe a finite-state encoder that gen-

erates sequences that satisfy the constraint (note that the

constraint will be ignored for a while). We start with a few def-

initions. The encoder is assumed to have states, which are di-

vided into three state subsets of states of a first, second, and third

type. The state subsets are of size , , and ,

respectively. A codeword is a binary string of length that sat-

isfies the constraint. The set of codewords is divided

into nine subsets denoted by ,,,,

etc, where the two first symbols of the subset subscript denote

the first two symbols of the codeword, and the last two sym-

bols of the subset subscript denote the last two symbols of the

codeword. Thus, codewords in start and end with ‘00’;

codewords in start with ‘00’ and end with a “01”, etc.

The codewords in the various subsets are distributed over the

various states of the three types such that

• codewords in states of the first type start with “00”;

• codewords in states of the second type start with “01” or

“00”;

• codewords in states of the third type start with “10,” “01,”

or “00”.

The state-transition rules are now easily described. Codewords that end with the string "00", i.e., codewords in the subsets S00,00, S01,00, and S10,00, may enter any of the N encoder states. Codewords that end with "10" may not be followed by codewords in a state of the third type. Similarly, codewords that end with a "1" may only be followed by codewords belonging to states of the first type. The state sets of codewords from which a selection is to be made do not have codewords in common. As a result, it is possible to assign the same codeword to different information words. For example, codewords that end with "00", i.e., codewords in the subsets S00,00, S01,00, and S10,00, may enter any state, so that these codewords can be assigned N = N1 + N2 + N3 times to different information words. Codewords that end with "10", i.e., words in the subsets S00,10, S01,10, and S10,10, may enter states of the first and second type, so that these codewords can be assigned N1 + N2 times to different information words. Similarly, codewords that end with a "1", i.e., words in the remaining subsets S00,01, S01,01, and S10,01, can be assigned N1 times. Given the above encoder model, it is straightforward to write down three conditions for the existence of such a rate m/n code. Define |S| as the number of codewords in a subset S, and let N = N1 + N2 + N3.

TABLE VI

EXAMPLE OF THE DISTRIBUTION OF THE VARIOUS SUBSETS AND STATES OF A RATE 6/11, d = 2 CODE

TABLE VII

SURVEY OF NEWLY DEVELOPED CODES

Then the conditions are

N|S00,00| + (N1 + N2)|S00,10| + N1|S00,01| ≥ N1 · 2^m (4)

N(|S00,00| + |S01,00|) + (N1 + N2)(|S00,10| + |S01,10|) + N1(|S00,01| + |S01,01|) ≥ (N1 + N2) · 2^m (5)

N(|S00,00| + |S01,00| + |S10,00|) + (N1 + N2)(|S00,10| + |S01,10| + |S10,10|) + N1(|S00,01| + |S01,01| + |S10,01|) ≥ N · 2^m. (6)

In a similar vein as with the d = 1 codes discussed previously, we have experimented with the selection of suitable values of n, m, N1, N2, and N3. Many good codes have been found. As a typical example, which is amenable to a hand check, we will show results of a 9-state, d = 2 code of rate 6/11. Given the choice of the code rate, we use a small computer program to find suitable values of N1, N2, and N3 that satisfy Conditions (4)-(6). A possible distribution of the various codeword sets, for one particular choice of N1, N2, and N3, is shown in Table VI. Such a distribution table suffices to construct the code. After judiciously barring worst-case codewords from the coding table, we were able to construct a rate 6/11, (2,15) code. Note, see Table I, that k = 15 is the smallest value possible for the given rate 6/11. Using the above construction methods, we built a 9-state rate 11/20, (2,23) code, whose efficiency is 0.25% less than unity. In addition, we constructed a rate 7/13, (2,11) code, whose efficiency is 1.1% less than unity.
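As with the d = 1 case, the hand check can be automated. The sketch below is our own reconstruction (not the authors' program): it enumerates the admissible length-11 words, forms the nine prefix/suffix subsets, and searches for 9-state splits (N1, N2, N3) satisfying necessary conditions in the spirit of (4)-(6):

```python
# Feasibility sketch for the rate 6/11, d=2 example: words of length 11 with
# at least two zeros between ones ('11' and '101' forbidden).
from itertools import product

n, m, N = 11, 6, 9
words = [''.join(w) for w in product('01', repeat=n)
         if '11' not in ''.join(w) and '101' not in ''.join(w)]

S = {}
for w in words:
    S[(w[:2], w[-2:])] = S.get((w[:2], w[-2:]), 0) + 1

def fanout(p, N1, N2):
    # words ending '00' may enter all N states, '10' only N1+N2, '01' only N1
    return (N * S.get((p, '00'), 0) + (N1 + N2) * S.get((p, '10'), 0)
            + N1 * S.get((p, '01'), 0))

sols = []
for N1, N2, N3 in product(range(1, N), repeat=3):
    if N1 + N2 + N3 != N:
        continue
    ok = (fanout('00', N1, N2) >= N1 * 2 ** m and
          fanout('00', N1, N2) + fanout('01', N1, N2) >= (N1 + N2) * 2 ** m and
          sum(fanout(p, N1, N2) for p in ('00', '01', '10')) >= N * 2 ** m)
    if ok:
        sols.append((N1, N2, N3))

print(len(words), sols)              # 88 admissible words
```

Under this reconstruction, only two 9-state splits are admissible, which is consistent with the very small headroom of the rate 6/11 against C(2,∞).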

Table VII summarizes the new RLL codes, d = 1 and d = 2, we have found. As we can see, the efficiency of the majority of the new codes is just a few tenths of a percent below capacity.

The efficiency of the new construction technique can be further exemplified by a second example, where the code size, M, is not equal to a power of two. The "spare" codewords can be used as alternative channel representations for suppressing the lf components. The codeword length equals n = 16, with d = 2. Table VIII shows the efficiency, η = log2(M)/(n·C(2,∞)), as a function of the number of encoder states, N. It shows that the construction technique yields fine results, as the codes obtained reach efficiencies that are only a tenth of a percent below capacity. Note that


TABLE VIII

CODE SIZE, M, FOR d = 2, n = 16, AND SELECTED VALUES OF THE NUMBER OF ENCODER STATES

the maximum size of a d = 2 code with codewords of length n = 16 equals 453.

At this juncture, we have completed the description of the new RLL codes, and we are in a position to describe how we can turn the newly developed RLL codes into DCRLL codes.

IV. GUIDED SCRAMBLING

In Guided Scrambling (GS), each information word can be represented by a member of a selection set consisting of 2^r, r ≥ 1, codewords. The encoder generates the selection set, and the "best" (according to a predefined penalty function) codeword in the selection set is selected for transmission. The penalty function weighs each element of the selection set according to its spectral and other properties, such as maximum runlength and so on. The maximum runlength constraint, k, imposed by the GS penalty function can be made smaller than that of the inner RLL code. Naturally, the GS method cannot fully guarantee the k constraint, but the probability of occurrence of such vexatious subsequences can be made extremely small. Other (runlength) constraints, such as MTR, can be added to the penalty function if required.

In the preferred GS format, user bits are multiplexed with redundant bits, which are a part of the input of the channel encoder. The r redundant bits are used to generate a selection set of size 2^r. In the proposed coding format, the channel encoder input comprises r redundant bits plus user bits that form a super block. The super block is scrambled using a self-synchronizing (feedback register) scrambler (see [1, Ch. 13] for more details). Then, under the rules of the rate m/n RLL code, the scrambled super block of Nm bits is translated into Nn channel bits. The above scrambling/encoding step is repeated 2^r times for all possible combinations of the r redundant bits. The encoder transmits the sequence that best matches the channel constraints, such as lf content, k constraint, etc., discussed above.

The integers r and N are chosen such that

b = Nm - r (7)

where b denotes the number of user bits in a super block, and N is an integer that denotes the number of m-bit information words in a super block. In a practical environment of a byte-oriented system, Nm - r is a multiple of eight, i.e., Nm - r = 8w for some integer w. Thus, the overall rate of the code is

R = (Nm - r)/(Nn). (8)

In the next subsection, we will select values of r and N, and show results of computer simulations.
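The selection step can be illustrated with a toy sketch. The assumptions here are ours, not the paper's exact format: a 1 + x self-synchronizing scrambler, a penalty equal to the magnitude of the final running digital sum (RDS), and no inner RLL code:

```python
# Toy sketch of the Guided Scrambling selection step (assumptions: 1 + x
# scrambler, |final RDS| penalty, inner RLL code omitted).
import random

def scramble(bits):
    out, prev = [], 0
    for b in bits:                   # self-synchronizing scrambler, 1 + x
        prev ^= b
        out.append(prev)
    return out

def penalty(bits):
    level, rds = 1, 0
    for b in bits:                   # precoding: a 'one' toggles the level
        if b:
            level = -level
        rds += level                 # running digital sum of the waveform
    return abs(rds)

def gs_encode(user_bits, r):
    # try all 2**r redundant prefixes, transmit the lowest-penalty candidate
    candidates = (scramble([(v >> i) & 1 for i in range(r)] + user_bits)
                  for v in range(2 ** r))
    return min(candidates, key=penalty)

random.seed(1)
user = [random.randint(0, 1) for _ in range(64)]
plain = scramble([0] * 4 + user)     # the all-zero-prefix candidate
print(penalty(plain), penalty(gs_encode(user, 4)))
```

Since the all-zero prefix is itself one of the 2^r candidates, the selected sequence can never have a larger penalty than the unguided one; the gain grows with r at the cost of rate, exactly the trade-off studied below.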

Fig. 2. Simulation results of the PSD function of a DCRLL code built on the rate 9/13, (1,14) RLL code. The spectrum was computed on the basis of 10 million channel bits. The straight line is a "best fit" estimate of the low-frequency part of the spectrum, from which the lf suppression (in dB) is estimated.

A. Results and Comparison with Prior Art Methods

We have written a computer program to simulate the performance of GS in conjunction with the newly developed RLL codes. The power spectral density (PSD) and other relevant characteristics can easily be computed.

As a typical example, we will show results obtained with the rate 9/13, (1,14) RLL code. Fig. 2 shows the spectrum versus (channel) frequency for selected values of r and N. The overall code is byte oriented, as Nm - r is a multiple of eight. A fixed self-synchronizing scrambler polynomial was used in our simulations. In the runlength penalty function, we set the maximum "zero" runlength to a value smaller than that of the inner code, which means that the code essentially behaves as a (1, k) code with the smaller k. The spectrum versus frequency has a parabolic shape in the low-frequency range, which shows as a straight line as a result of the logarithmic frequency axis used. Using a computer simulation of the encoding process, we compute the PSD of an encoded sequence. Then, using a best-fit (LMS) estimation technique involving the low-frequency components of the spectrum, we derive an estimate of the low-frequency performance, as illustrated in Fig. 2. In a similar vein, we estimated the lf suppression as a function of the overall rate, R. Results are shown in Fig. 3 for two values of r. The maximum runlength in the GS penalty function was fixed for both curves. Similar curves can be plotted for other values of k, and, obviously, the more relaxed the k constraint, the more lf suppression. In order to compare our results with the maximum theoretical performance of DCRLL codes, we invoked the algorithms derived by Braun and Janssen [10], which compute the maxentropic performance of DCRLL codes. The maxentropic performance sets a theoretical limit to the performance of any implemented DCRLL code. Fig. 3 shows that the implemented codes operate very close to the best theoretical performance: depending on r, the implemented codes are 1-3 dB below the theoretical ceiling. As a further


Fig. 3. The two upper curves show the lf suppression as a function of the overall code rate R; the two curves correspond to the two values of r used in the simulations. The maximum imposed runlength is the same for both cases. As a comparison, we plotted the theoretical ceiling of maxentropic dc-free (1, k) sequences [1]. The curve denoted by (1,7)PP gives results of a prior-art code [4].

Fig. 4. The two upper curves show the lf suppression as a function of the overall rate R; the two curves correspond to two values of r. The maximum imposed runlength is the same for both cases. As a comparison, we plotted the theoretical ceiling of maxentropic dc-free (2, k) sequences.

comparison, we plotted the performance of a prior-art rate 2/3, (1,7) code [4], which is extended with dc-control bits at the data-sequence level.

We proceed with a second example. Fig. 4 shows the lf spectral performance of the rate 6/11, (2,15) code in conjunction with Guided Scrambling. Results are given for two values of r. As reported in the above case, the combination of an efficient RLL code and GS works quite satisfactorily, as only 2-3 dB can be gained with respect to the theoretical ceiling.

V. CONCLUSIONS

We have studied the construction of extremely efficient RLL codes. We have shown that there is a very limited number of pairs of integers m and n whose quotient m/n forms a suitable coding rate for d = 1 and d = 2 RLL codes that are more efficient than prior-art codes. Suitable values for the rate of a d = 1 code are 9/13 and 11/16, while for d = 2 codes we have 11/20, 7/13, and 6/11.

We have disclosed a novel technique for designing very efficient RLL codes, whose rate is only a few tenths of a percent below capacity. For example, we have constructed a 13-state rate 9/13, (1,14) RLL code, whose rate is only 0.2% below the channel capacity C(1,14). In addition, we have constructed a new rate 6/11, (2,15) code, a rate 11/20, (2,23) code, and a rate 7/13, (2,11) code.

Results of computer simulations have shown that the arrangement of the newly developed RLL codes in conjunction with Guided Scrambling (GS) is extremely efficient in terms of overall rate and spectral performance, as we have shown that only a few dB in spectral performance can be gained with respect to the theoretical ceiling. With a newly developed code, we achieved a 9% higher overall rate than that of DVD's EFMPlus.

REFERENCES

[1] K. A. S. Immink, Codes for Mass Data Storage Systems. Geldrop, The

Netherlands: Shannon Foundation, 1999.

[2] K. A. S. Immink and H. Ogawa, “Method for Encoding Binary Data,”

U.S. Patent 4501000, Feb. 19, 1985.

[3] K. A. S. Immink, “EFMPlus: the coding format of the multimedia com-

pact disc,” IEEE Trans. Consumer Electron., vol. 41, pp. 491–497, Aug.

1995.

[4] T. Narahara, S. Kobayashi, M. Hattori, Y. Shimpuku, G. van den Enden,

J. A. Kahlman, M. van Dijk, and R. Woudenberg, “Optical disc system

for digital video recording,” in Proc. Joint Int. Symp. on Optical Memory

and Optical Data Storage, Hawaii, July 11–15, 1999.

[5] I. J. Fair, W. D. Gover, W. A. Krzymien, and R. I. MacDonald, “Guided

scrambling: a new line coding technique for high bit rate fiber optic

transmission systems,” IEEE Trans. Commun., vol. 39, pp. 289–297,

Feb. 1991.

[6] J. J. Ashley and P. H. Siegel, "A note on the Shannon capacity of run-length-limited codes," IEEE Trans. Inform. Theory, vol. IT-33, pp. 601–605, July 1987.

[7] R. L. Adler, D. Coppersmith, and M. Hassner, “Algorithms for sliding

block codes. An application of symbolic dynamics to information

theory,” IEEE Trans. Inform. Theory, vol. IT-29, pp. 5–22, Jan. 1983.

[8] M. J. Kim, “7/13 Channel coding and decoding method using RLL(2,25)

code,” U.S. Patent 6188336, Feb. 13, 2001.

[9] G. V. Jacoby, "Method and apparatus for encoding and recovering binary digital data," U.S. Patent 4323931, Apr. 6, 1982.

[10] V. Braun and A. J. E. M. Janssen, “On the low-frequency suppression

performance of DC-free runlength-limited modulation codes,” IEEE

Trans. Consumer Electron., vol. 42, no. 4, pp. 939–945, Nov. 1996.