
Decoding Criteria and Zero WER Analysis
for Channels with Bounded Noise and Offset

Renfei Bu
Applied Mathematics Dept., Optimization Group
Delft University of Technology
Delft, Netherlands
R.Bu@tudelft.nl

Jos H. Weber
Applied Mathematics Dept., Optimization Group
Delft University of Technology
Delft, Netherlands
J.H.Weber@tudelft.nl

Abstract—Data storage systems may not only be disturbed by noise. In some cases, the error performance can also be seriously degraded by offset mismatch. Here, channels are considered for which both the noise and the offset are bounded. For such channels, Euclidean distance-based decoding, Pearson distance-based decoding, and maximum likelihood decoding are considered. In particular, for each of these decoders, bounds are determined on the magnitudes of the noise and offset intervals which lead to a word error rate equal to zero. Case studies with simulation results are presented confirming the findings.

Index Terms—Flash memory, optical recording, maximum likelihood decoding, bounded noise, offset mismatch, zero WER

I. INTRODUCTION

With the explosive growth of reliance on information, both for home and personal use and for business and professional needs, more data are being generated, processed, and stored. It is necessary to guarantee very high access speeds, low power consumption, and, most importantly, reliable data storage systems. In data storage systems, it is usually found that noise (which leads to unpredictable stochastic errors) is an important issue, but also other physical factors may hamper the reliability of the stored data. For example, in Flash memories, the number of electrons in a cell decreases with time and some cells become defective over time [1]. The amount of electron leakage depends on various physical parameters, such as the device's temperature, the magnitude of the charge, and the time elapsed between writing and reading the data. In digital optical recording, fingerprints and scratches on the surface of discs result in offset variations of the retrieved signal [2].

To address these physically induced offset issues, two approaches are usually investigated and applied in storage systems. One approach uses pilot sequences to estimate the unknown channel offset [3]; this method is often considered too expensive with respect to redundancy. The other approach relies on error correcting techniques. Up to now, various coding techniques have been applied to alleviate the detection problems caused by channel mismatch, specifically rank modulation [4], balanced codes [5], and composition check codes [6]. These methods are often considered too expensive in terms of redundancy and complexity.

Since the retrieved data values are offset in such channels, a Euclidean distance measure will be biased or grossly inaccurate. Immink and Weber [7] showed that decoders using the Pearson distance have immunity to offset and/or gain mismatch. Use of the Pearson distance requires that the set of codewords satisfies certain special properties; such sets are called Pearson codes. In [8], optimal Pearson codes were presented, in the sense of having the largest number of codewords and thus minimum redundancy among all q-ary Pearson codes of fixed length n. Further, in [9] a decoder was proposed based on minimizing a weighted sum of Euclidean and Pearson distances. In [10], Blackburn investigated a maximum likelihood (ML) criterion for channels with Gaussian noise and unknown gain and offset mismatch. In a subsequent study, ML decision criteria were derived for Gaussian noise channels when assuming various distributions for the offset in the absence of gain mismatch [11].

The above-summarized research results are based on the Gaussian noise model, which is a satisfactory model of reality in many cases. An increasing number of studies focus on another important class of non-Gaussian stochastic processes: bounded noise. This is motivated by the fact that the Gaussian stochastic process is an inadequate mathematical model of the physical world because it is unbounded [12], [13]. Moreover, in many relevant cases, especially in Flash memory, the impact of parameters (such as charge leakage) on the retrieved data value should not be arbitrarily large. Consequently, not taking into account the bounded nature of stochastic variations may lead to impracticable model-based inferences. In this paper, we explore decoding criteria for channels with bounded noise and bounded offset mismatch. Specifically, we consider Euclidean distance-based decoding, Pearson distance-based decoding, and maximum likelihood decoding. Most importantly, we investigate, for each of these decoders, under which constraints zero Word Error Rate (WER) performance can be achieved. We stress that zero WER performance is achieved without assumptions of specific distributions for the bounded noise and offset.

The remainder of this paper is organized as follows. We first review the channel with noise and offset and the classical Euclidean and Pearson distance-based decoding criteria in Section II. In Section III, we present an ML decoding method for the case that the noise and offset in the channel are bounded. Simulation results for specific cases are given in Section IV. In the channel with bounded noise and offset, zero WER is achievable for all detectors discussed in this paper; conditions to achieve zero WER for these decoders are derived in Section V. We conclude the paper in Section VI.

978-1-5386-8088-9/19/$31.00 ©2019 IEEE

II. PRELIMINARIES AND CHANNEL MODEL

We consider transmitting a codeword x = (x_1, x_2, ..., x_n) from a codebook S ⊂ R^n, where n, the length of x, is a positive integer. In many applications, the received vector may not only be hampered by noise v = (v_1, v_2, ..., v_n), but also by gain a and/or offset b. Hence,

r = a(x + v) + b·1,

where 1 = (1, 1, ..., 1) is the real all-one vector of length n. The gain and offset values a and b may change from word to word, but are constant for all transmitted symbols within a codeword, while the noise values vary from symbol to symbol. In the channel model under consideration in this paper, we assume that there is no gain mismatch, i.e., a = 1, but there is an offset b ∈ R, i.e.,

r = x + v + b·1.  (1)

The values v_i in the noise vector v = (v_1, v_2, ..., v_n) are independently and identically distributed with probability density function φ, leading to the probability density function χ(v) = ∏_{i=1}^{n} φ(v_i) for v.
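As a concrete illustration, channel (1) is straightforward to simulate. The sketch below is a minimal example assuming uniform noise and offset distributions (as used later in Section IV); the codeword and interval bounds are arbitrary illustrative choices:

```python
import random

def transmit(x, alpha, beta):
    """Simulate channel (1): r = x + v + b*1, with i.i.d. noise values
    v_i drawn from (-alpha, alpha) and a single offset b drawn from
    (-beta, beta) that is constant for all symbols of the word."""
    b = random.uniform(-beta, beta)  # one offset per codeword
    return [xi + random.uniform(-alpha, alpha) + b for xi in x]

r = transmit([0.0, 1.0, 1.0], alpha=0.25, beta=0.25)
```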

A well-known decoding criterion upon receipt of the vector r is to choose a codeword x̂ ∈ S which minimizes the (squared) Euclidean distance between the received vector r and the codeword x̂, i.e.,

δ_E(r, x̂) = Σ_{i=1}^{n} (r_i − x̂_i)².  (2)

It is known to be optimal with regard to handling Gaussian noise.
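In code, minimizing (2) over a small codebook is a one-liner. The sketch below is illustrative only; the codebook shown is the example code used later in Section IV, and the received vector is arbitrary:

```python
def euclidean_decode(r, codebook):
    """Return the codeword minimizing the squared Euclidean distance (2)."""
    return min(codebook, key=lambda x: sum((ri - xi) ** 2 for ri, xi in zip(r, x)))

S = [(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)]
print(euclidean_decode([0.9, 0.1, 0.8], S))  # (1, 0, 1) is the nearest codeword
```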

The Pearson distance measure [7] is used in situations which require resistance to offset and/or gain mismatch. For any vector u ∈ R^n, let

ū = (1/n) Σ_{i=1}^{n} u_i

denote the average symbol value and let

σ_u = ( Σ_{i=1}^{n} (u_i − ū)² )^{1/2}

denote the unnormalized symbol standard deviation. The Pearson distance between the received vector r and a codeword x̂ ∈ S is defined as

δ_P(r, x̂) = 1 − ρ_{r,x̂},  (3)

where ρ_{r,x̂} is the well-known Pearson correlation coefficient,

ρ_{r,x̂} = ( Σ_{i=1}^{n} (r_i − r̄)(x̂_i − x̂̄) ) / (σ_r σ_x̂).

A Pearson decoder chooses a codeword minimizing this distance. As shown in [7], a simpler Pearson distance-based criterion leading to the same result in the minimization process reads

δ′_P(r, x̂) = Σ_{i=1}^{n} (r_i − x̂_i + x̂̄)²,  (4)

if there is no gain mismatch, as assumed in this paper.
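The simplified criterion (4) translates directly into a decoder. In the sketch below (illustrative inputs only), note that adding a constant offset b to every symbol of r shifts all candidate scores by the same amount, so the decision is unaffected:

```python
def pearson_decode(r, codebook):
    """Return the codeword minimizing the modified Pearson criterion (4),
    valid when there is no gain mismatch. Adding a constant b to every
    r_i changes every candidate's score by the same amount, so the
    decision is immune to offset."""
    def score(x):
        xbar = sum(x) / len(x)  # average symbol value of the candidate
        return sum((ri - xi + xbar) ** 2 for ri, xi in zip(r, x))
    return min(codebook, key=score)

S = [(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)]
# An offset of b = 0.5 has been added to a noisy version of (1, 0, 1):
print(pearson_decode([1.4, 0.6, 1.3], S))  # still decodes to (1, 0, 1)
```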

III. MAXIMUM LIKELIHOOD DECODING FOR NOISE AND OFFSET WITH BOUNDED RANGES

We assume that the noise values are restricted to a certain range. More specifically, φ only takes non-zero values on an interval (−α, α), where α > 0. Hence, −α < v_i < α for all i. For a codeword x̂ = (x̂_1, ..., x̂_n) ∈ S, we define its noise environment as

U_x̂ = {u = (u_1, ..., u_n) ∈ R^n : x̂_i − α < u_i < x̂_i + α for all i}.  (5)

For the offset b we assume that it has a probability density function ζ which only takes non-zero values on an interval (γ, η). Hence, γ < b < η. Since the receiver can subtract ((η + γ)/2)·1 from r if the offset range is not symmetric around zero, we may assume without loss of generality that the offset is within the range (−β, β), where β = (η − γ)/2, which we will do throughout the rest of this paper. We define

L_r = {r − t·1 : t ∈ (−β, β)}  (6)

for a vector r ∈ R^n.

In order to achieve ML decoding, we need to choose the codeword of maximum likelihood given the received vector. Assuming all codewords are equally likely, this is equivalent to maximizing the probability density value of the received vector r given the candidate codeword x̂. Denoting the probability density function of v + b·1 by ψ, we find with (1) that we should thus maximize

ψ(r − x̂) = ∫_{−∞}^{∞} χ(r − x̂ − b·1) ζ(b) db  (7)

over all candidate codewords x̂, where χ and ζ are the probability density functions of the noise and offset, respectively. These can be any distributions, as long as they are restricted to the indicated intervals. In Section IV, we will show simulation results assuming specific distributions.

Note from (6) and (5) that a point r − t·1 of L_r is in U_x̂ if and only if t satisfies

r_i − x̂_i − α < t < r_i − x̂_i + α for all i = 1, ..., n,  and  −β < t < β.  (8)

From this observation, we find that (7) equals

ψ(r − x̂) = ∫_{t_1(r,x̂)}^{t_0(r,x̂)} χ(r − x̂ − b·1) ζ(b) db  if t_0(r,x̂) > t_1(r,x̂), and 0 otherwise,  (9)

where

t_0(r,x̂) = min({r_i − x̂_i + α | i = 1, ..., n} ∪ {β}),
t_1(r,x̂) = max({r_i − x̂_i − α | i = 1, ..., n} ∪ {−β}).  (10)
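The endpoints in (10) are cheap to compute. The following sketch (with arbitrary example values) returns t_0 and t_1 and thereby, via (9), tells whether a candidate x̂ can have non-zero likelihood at all:

```python
def offset_window(r, xhat, alpha, beta):
    """Return (t0, t1) of (10). By (9), psi(r - xhat) is non-zero iff
    t0 > t1, i.e., iff some shift t in (-beta, beta) moves r into the
    noise environment U_xhat of (5)."""
    t0 = min([ri - xi + alpha for ri, xi in zip(r, xhat)] + [beta])
    t1 = max([ri - xi - alpha for ri, xi in zip(r, xhat)] + [-beta])
    return t0, t1

t0, t1 = offset_window([0.1, 0.2, -0.1], (0, 0, 0), alpha=0.25, beta=0.25)
print(t0 > t1)  # True: (0, 0, 0) is a feasible candidate here
```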

IV. CASE STUDIES

In this section, we consider several noise and offset distributions. Simulated WER results are shown for the codebook

S∗ = {(0,0,0), (1,1,0), (1,0,1), (0,1,1)}

of length 3 and size 4, in combination with different decoders. This simple codebook is used to demonstrate some important WER characteristics. Codebook construction as such is beyond the scope of this paper; the interested reader is referred to [14].

A. WER for Uniform Noise and Offset

The uniform distribution is the most commonly used distribution for bounded random variables. Let U(τ_1, τ_2) denote the uniform distribution on the interval (τ_1, τ_2); it has probability density function

U(x) = 1/(τ_2 − τ_1) if τ_1 < x < τ_2, and 0 otherwise.  (11)

Hence, for the noise we assume v_i ∼ U(−α, α) and for the offset b ∼ U(−β, β).

Note that ML decoding in this case is tantamount to maximizing

max{0, t_0(r, x̂) − t_1(r, x̂)},  (12)

i.e., choosing the codeword x̂ for which the part of the line segment L_r that is within U_x̂ is largest.

Simulated WER results for the example code S∗ and various values of α and β are shown in Figs. 1-3 for the Euclidean, Pearson, and ML decoders, respectively.
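For the uniform case, criterion (12) gives a particularly compact ML decoder: choose the codeword whose noise environment covers the longest part of the line segment L_r. A sketch, with the endpoints of (10) computed inline and illustrative inputs only:

```python
def ml_decode_uniform(r, codebook, alpha, beta):
    """ML decoding for uniform noise and uniform offset: maximize the
    overlap length max{0, t0 - t1} of (12)."""
    def overlap(x):
        t0 = min([ri - xi + alpha for ri, xi in zip(r, x)] + [beta])
        t1 = max([ri - xi - alpha for ri, xi in zip(r, x)] + [-beta])
        return max(0.0, t0 - t1)
    return max(codebook, key=overlap)

S = [(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)]
print(ml_decode_uniform([1.1, 0.05, 1.02], S, alpha=0.25, beta=0.25))  # (1, 0, 1)
```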

In Fig. 1, we observe that the performance of the Euclidean decoder gets worse with increasing values of α and/or β. In Fig. 2, the curves for different values of β overlap because of the Pearson decoder's intrinsic immunity to offset mismatch. Note that the performance of the Euclidean decoder is close to ML performance for β = 0.15 and that the performance of the Pearson decoder is close to ML performance for β = 0.30 in Fig. 3.

Most interestingly, for Euclidean decoding, the WER approaches zero if α ≤ 1/2 − β, while for Pearson decoding, this happens when α < 1/4. The WER approaches zero for ML decoding if α ≤ 1/4 or α ≤ 1/2 − β, i.e., if α ≤ max{1/4, 1/2 − β}. Indeed, we observe in Fig. 3 a zero WER for α ≤ 0.35 if β = 0.15, for α ≤ 0.30 if β = 0.20, and for α ≤ 0.25 if β = 0.25 or β = 0.30. We will show in Section V that, for all decoders under consideration, a WER of zero is achieved if the magnitudes of the noise and offset intervals satisfy certain conditions.

B. WER for Uniform Noise and Various Offset Distributions

In this subsection, we consider again uniform noise, but various options for the offset distribution. In particular, v_i ∼ U(−0.3, 0.3), while the offset is (i) uniform, i.e., b ∼ U(−β, β), as in the previous subsection, (ii) triangular, i.e., b ∼ T(−β, 0, β), as specified next, or (iii) Gaussian with mean zero and variance σ², i.e., b ∼ N(0, σ²). The last option is included since it is the most important representative of unbounded distributions. The triangular distribution T(−β, 0, β) has probability density function

T(x) = (1/β)(1 − |x|/β) if −β < x < β, and 0 otherwise.  (13)

[Fig. 1. Simulated WER of Euclidean distance-based decoding for codebook S∗ on channels with uniform noise v_i ∼ U(−α, α) and uniform offset b ∼ U(−β, β), for β = 0.15, 0.2, 0.25, 0.3.]

[Fig. 2. Simulated WER of Pearson distance-based decoding for codebook S∗ on channels with uniform noise v_i ∼ U(−α, α) and uniform offset b ∼ U(−β, β), for β = 0.15, 0.2, 0.25, 0.3.]

[Fig. 3. Simulated WER of ML decoding for codebook S∗ on channels with uniform noise v_i ∼ U(−α, α) and uniform offset b ∼ U(−β, β), for β = 0.15, 0.2, 0.25, 0.3.]

[Fig. 4. Simulated WER for codebook S∗ on channels with uniform noise v_i ∼ U(−0.3, 0.3) and uniform offset b ∼ U(−β, β): Euclidean, Pearson, and ML decoding.]

[Fig. 5. Simulated WER for codebook S∗ on channels with uniform noise v_i ∼ U(−0.3, 0.3) and triangular offset b ∼ T(−β, 0, β): Euclidean, Pearson, and ML decoding.]

In Figs. 4-6 we present WER results for the example code S∗ for the three offset options under consideration. For comparison purposes, the WER is presented as a function of the standard deviation of the offset in each case.

In general, note that the WER of Pearson decoding has the same constant value in all cases, since it does not depend on the offset. It is close to ML performance in case of large standard deviations. The performance of Euclidean decoding is close to ML performance for small standard deviations. For medium standard deviations, ML decoding clearly outperforms both Euclidean and Pearson decoding in all three cases.

[Fig. 6. Simulated WER for codebook S∗ on channels with uniform noise v_i ∼ U(−0.3, 0.3) and Gaussian offset b ∼ N(0, σ²): Euclidean, Pearson, and ML decoding, plotted against the standard deviation σ of the offset.]

We also observe in Fig. 4 that the WERs of the Euclidean and ML decoders approach zero if the standard deviation β/√3 of the uniform offset distribution is at most 0.12, and in Fig. 5 that the WER approaches zero if the standard deviation β/√6 of the triangular offset distribution is at most 0.08. On the other hand, we see in Fig. 6 that for Gaussian offset zero WER can only be achieved for extremely small noise, as expected, due to the unbounded nature of the Gaussian distribution. In the next section we will analyse the zero WER constraints for the different detectors.

V. ZERO ERROR ANALYSIS

In this section, we will show that, for all decoders under consideration, a WER of zero is achieved if the magnitudes of the noise and offset intervals satisfy certain conditions.

A. Euclidean Distance-Based Decoding

The Euclidean decoder can achieve zero WER for channels with bounded noise and offset when α + β is sufficiently small, as shown in the following result.

Theorem 1. If the noise and offset are restricted to the intervals (−α, α) and (−β, β), respectively, with

α + β ≤ min_{s,c∈S, s≠c} [ Σ_{i=1}^{n} (s_i − c_i)² ] / [ 2 Σ_{i=1}^{n} |s_i − c_i| ],  (14)

then the Euclidean decoder achieves a WER equal to zero.

Proof. Assume that x ∈ S is sent and r = x + v + b·1 is received. Then, for all codewords x̂ ≠ x, it holds that

δ_E(r, x̂) − δ_E(r, x)
= Σ_{i=1}^{n} (r_i − x̂_i)² − Σ_{i=1}^{n} (r_i − x_i)²
= Σ_{i=1}^{n} ((r_i − x_i) − (x̂_i − x_i))² − Σ_{i=1}^{n} (r_i − x_i)²
= Σ_{i=1}^{n} (x̂_i − x_i)² − 2 Σ_{i=1}^{n} (x̂_i − x_i)(r_i − x_i)
= Σ_{i=1}^{n} (x̂_i − x_i)² − 2 Σ_{i=1}^{n} (x̂_i − x_i)(v_i + b)
≥ 2(α + β) Σ_{i=1}^{n} |x̂_i − x_i| − 2 Σ_{i=1}^{n} |x̂_i − x_i| |v_i + b|
= 2 Σ_{i=1}^{n} |x̂_i − x_i| (α + β − |v_i + b|)
> 0,

where the fourth equality follows from r_i = x_i + v_i + b, the first inequality follows from (14), and the last inequality from the fact that |v_i + b| ≤ |v_i| + |b| < α + β for all i. Hence, if decoding is based on minimizing (2), the transmitted codeword is always chosen as the decoding result, leading to a WER equal to zero.
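The right-hand side of (14) is easy to evaluate for a concrete codebook; the sketch below does so for the example code S∗ of Section IV, for which the bound equals 1/2:

```python
def euclidean_zero_wer_bound(codebook):
    """Evaluate the right-hand side of (14): the largest alpha + beta for
    which Theorem 1 guarantees zero WER for the Euclidean decoder."""
    bounds = []
    for s in codebook:
        for c in codebook:
            if s != c:
                num = sum((si - ci) ** 2 for si, ci in zip(s, c))
                den = 2 * sum(abs(si - ci) for si, ci in zip(s, c))
                bounds.append(num / den)
    return min(bounds)

S = [(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)]
print(euclidean_zero_wer_bound(S))  # 0.5
```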

B. Pearson Distance-Based Decoding

Since Pearson distance-based decoding is immune to offset mismatch, zero WER performance only requires a limited value of α, as shown in the next theorem.

Theorem 2. If the noise and offset are restricted to the intervals (−α, α) and (−β, β), respectively, with

α < min_{s,c∈S, s≠c} [ Σ_{i=1}^{n} (s_i − s̄ − c_i + c̄)² ] / [ (4(n−1)/n) Σ_{i=1}^{n} |s_i − s̄ − c_i + c̄| ],  (15)

then the Pearson decoder achieves a WER equal to zero.

Proof. Assume that x ∈ S is sent and r = x + v + b·1 is received. Then, for all codewords x̂ ≠ x, it holds that

δ′_P(r, x̂) − δ′_P(r, x)
= Σ_{i=1}^{n} (r_i − x̂_i + x̂̄)² − Σ_{i=1}^{n} (r_i − x_i + x̄)²
= Σ_{i=1}^{n} (r_i − x̂_i + x̂̄ − r̄)² − Σ_{i=1}^{n} (r_i − x_i + x̄ − r̄)²
= Σ_{i=1}^{n} (x_i − x̄ − x̂_i + x̂̄)² + 2 Σ_{i=1}^{n} (x_i − x̄ − x̂_i + x̂̄)(r_i − x_i + x̄ − r̄)
= Σ_{i=1}^{n} (x_i − x̄ − x̂_i + x̂̄)² + 2 Σ_{i=1}^{n} (x_i − x̄ − x̂_i + x̂̄)(v_i − v̄)
> (4(n−1)α/n) Σ_{i=1}^{n} |x_i − x̄ − x̂_i + x̂̄| − 2 Σ_{i=1}^{n} |x_i − x̄ − x̂_i + x̂̄| |v_i − v̄|
= Σ_{i=1}^{n} |x_i − x̄ − x̂_i + x̂̄| ( 4(n−1)α/n − 2|v_i − v̄| )
≥ 0,  (16)

where the fourth equality follows by substituting r_i = x_i + v_i + b and r̄ = x̄ + v̄ + b, the first inequality from (15), and the last inequality from the fact that |v_i − v̄| < (2(n−1)/n)α for all i. Hence, if decoding is based on minimizing (4), the transmitted codeword is always chosen as the decoding result, leading to a WER equal to zero.
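The right-hand side of (15) can be evaluated in the same way; for the example code S∗ it equals 3/16, the value quoted at the end of this section:

```python
def pearson_zero_wer_bound(codebook):
    """Evaluate the right-hand side of (15): the noise bound below which
    Theorem 2 guarantees zero WER for the Pearson decoder."""
    n = len(codebook[0])
    bounds = []
    for s in codebook:
        for c in codebook:
            if s == c:
                continue
            sbar, cbar = sum(s) / n, sum(c) / n
            d = [si - sbar - ci + cbar for si, ci in zip(s, c)]
            num = sum(di ** 2 for di in d)
            den = 4 * (n - 1) / n * sum(abs(di) for di in d)
            bounds.append(num / den)
    return min(bounds)

S = [(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)]
print(pearson_zero_wer_bound(S))  # approximately 0.1875 (= 3/16)
```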

C. Maximum Likelihood Decoding

Finally, we show that zero WER for ML decoding is achieved if α or α + β is sufficiently small.

Theorem 3. If the noise and offset are restricted to the intervals (−α, α) and (−β, β), respectively, with

α ≤ min_{s,c∈S, s≠c} [ max_{1≤i,j≤n} {(s_i − c_i) − (s_j − c_j)} ] / 4  (17)

or

α + β ≤ min_{s,c∈S, s≠c} [ max_{i=1,...,n} |s_i − c_i| ] / 2,  (18)

then the ML decoder achieves a WER equal to zero.

Proof. Assume that x ∈ S is sent and r = x + v + b·1 is received. We will show that if (17) or (18) holds, then ψ(r − x̂) = 0 for all codewords x̂ ≠ x. First of all, note that

t_0(r, x̂) − t_1(r, x̂)
= min({r_i − x̂_i + α | i = 1, ..., n} ∪ {β}) − max({r_i − x̂_i − α | i = 1, ..., n} ∪ {−β})
= min({r_i − x̂_i + α | i = 1, ..., n} ∪ {β}) + min({−(r_i − x̂_i) + α | i = 1, ..., n} ∪ {β})
= min({2β} ∪ {min_{i=1,...,n} {−|r_i − x̂_i|} + α + β} ∪ {min_{1≤i,j≤n} {(r_i − x̂_i) − (r_j − x̂_j)} + 2α}).  (19)

Next, we show that if (17) or (18) holds, this expression is negative whenever x̂ ≠ x.

If (17) holds, then

min_{1≤i,j≤n} {(r_i − x̂_i) − (r_j − x̂_j)} + 2α
= min_{1≤i,j≤n} {(r_i − x̂_i) − (r_j − x̂_j)} − 2α + 4α
< min_{1≤i,j≤n} {(r_i − x̂_i) − (r_j − x̂_j) − (v_i − v_j)} + 4α
= min_{1≤i,j≤n} {[(r_i − x̂_i) − (r_j − x̂_j)] − [(r_i − x_i − b) − (r_j − x_j − b)]} + 4α
= min_{1≤i,j≤n} {(x_i − x̂_i) − (x_j − x̂_j)} + 4α
= −max_{1≤i,j≤n} {(x̂_i − x_i) − (x̂_j − x_j)} + 4α
≤ 0,  (20)

where the first inequality follows from the fact that v_i − v_j ≤ |v_i| + |v_j| < 2α and the second inequality from (17).

If (18) holds, then

min_{i=1,...,n} {−|r_i − x̂_i|} + α + β
= min_{i=1,...,n} {−|r_i − x̂_i|} − α − β + 2(α + β)
< min_{i=1,...,n} {−|r_i − x̂_i| − |v_i + b|} + 2(α + β)
= min_{i=1,...,n} {−|r_i − x̂_i| − |r_i − x_i|} + 2(α + β)
≤ min_{i=1,...,n} {−|x_i − x̂_i|} + 2(α + β)
= −max_{i=1,...,n} {|x_i − x̂_i|} + 2(α + β)
≤ 0,  (21)

where the first inequality follows from the fact that |v_i + b| ≤ |v_i| + |b| < α + β and the last inequality from (18).

Combining (19), (20), and (21) with (7) and (9), we find that indeed ψ(r − x̂) = 0 for all codewords x̂ ≠ x, while the probability density value of the received vector r given the transmitted codeword x is larger than 0, i.e., ψ(r − x) > 0. This implies that if decoding is based on maximizing (7), the transmitted codeword is always chosen as the decoding result, leading to a WER equal to zero.

For the codebook S∗, the bound on α + β for a Euclidean decoder in (14) is 1/2, the bound on α for a Pearson decoder in (15) is 3/16, and the bounds on α and α + β for an ML decoder in (17) and (18) are 1/4 and 1/2, respectively.
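These bounds are also easy to confirm empirically. The following sketch (trial count and seed are arbitrary choices) runs the Euclidean decoder on S∗ with α = 0.3 and β = 0.2, so that α + β equals the Theorem 1 bound of 1/2, and counts decoding errors:

```python
import random

# Empirical check of Theorem 1 for the example code S*: with
# alpha = 0.3 and beta = 0.2 we have alpha + beta = 1/2, matching (14),
# so the Euclidean decoder should make no errors at all.
random.seed(1)
S = [(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)]
alpha, beta = 0.3, 0.2
errors = 0
for _ in range(20000):
    x = random.choice(S)
    b = random.uniform(-beta, beta)
    r = [xi + random.uniform(-alpha, alpha) + b for xi in x]
    xhat = min(S, key=lambda c: sum((ri - ci) ** 2 for ri, ci in zip(r, c)))
    errors += xhat != x
print(errors)  # 0
```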

Considering Figs. 1-3, the results from Theorems 1-3 are confirmed. The zero WER of Pearson decoding is indeed achieved if α < 3/16. However, the shown results suggest that this may not be the best upper bound for the code under consideration. In addition, for α = 0.3 and the example code S∗, Theorems 1 and 3 give that, for both Euclidean and ML decoding, the WER is equal to zero if the offset is restricted to the interval (−β, β) with β ≤ 0.5 − 0.3 = 0.2. This confirms the results from Figs. 4-5: for uniform offset, zero WER is achieved if the standard deviation is at most 0.2/√3 ≈ 0.12; for triangular offset, zero WER is achieved if the standard deviation is at most 0.2/√6 ≈ 0.08.

VI. DISCUSSION AND CONCLUSION

We have investigated Euclidean, Pearson, and ML decoders for channels which suffer from bounded noise and offset mismatch. In particular, it has been shown that the WER for such decoders is equal to zero if the noise and offset ranges satisfy certain conditions. The findings have been confirmed by simulation results.

Further investigations into how codebooks satisfying Theorems 1-3 can be generated for given α and β will be of interest. Supposing α and β are fixed, it would also be very interesting to fully explore the capacity of this channel and to determine which rates can be achieved by the three decoding schemes under the zero WER conditions. Another interesting option for future research is to include the possibility of gain mismatch as well, by considering various distributions for this phenomenon.

REFERENCES

[1] D. Ajwani, I. Malinger, U. Meyer, and S. Toledo, "Characterizing the performance of flash memory storage devices and its impact on algorithm design," in Proc. Int. Workshop on Experimental and Efficient Algorithms. Berlin, Heidelberg: Springer, May 2008, pp. 208–219.
[2] G. Bouwhuis, A. H. J. Braat, J. Pasman, G. van Rosmalen, and K. A. S. Immink, Principles of Optical Disc Systems. Boston, MA, USA: Adam Hilger, 1985.
[3] K. A. S. Immink, "Coding schemes for multi-level flash memories that are intrinsically resistant against unknown gain and/or offset using reference symbols," Electron. Lett., vol. 50, no. 1, pp. 20–22, Jan. 2014.
[4] A. Jiang, R. Mateescu, M. Schwartz, and J. Bruck, "Rank modulation for flash memories," IEEE Trans. Inf. Theory, vol. 55, no. 6, pp. 2659–2673, Jun. 2009.
[5] H. Zhou and J. Bruck, "Balanced modulation for nonvolatile memories," arXiv:1209.0744, Sep. 2012.
[6] K. A. S. Immink and K. Cai, "Composition check codes," IEEE Trans. Inf. Theory, vol. 64, no. 1, pp. 249–256, Jan. 2018.
[7] K. A. S. Immink and J. H. Weber, "Minimum Pearson distance detection for multilevel channels with gain and/or offset mismatch," IEEE Trans. Inf. Theory, vol. 60, no. 10, pp. 5966–5974, Oct. 2014.
[8] J. H. Weber, K. A. S. Immink, and S. R. Blackburn, "Pearson codes," IEEE Trans. Inf. Theory, vol. 62, no. 1, pp. 131–135, Jan. 2016.
[9] K. A. S. Immink and J. H. Weber, "Hybrid minimum Pearson and Euclidean distance detection," IEEE Trans. Commun., vol. 63, no. 9, pp. 3290–3298, Sep. 2015.
[10] S. R. Blackburn, "Maximum likelihood decoding for multilevel channels with gain and offset mismatch," IEEE Trans. Inf. Theory, vol. 62, no. 3, pp. 1144–1149, Mar. 2016.
[11] J. H. Weber and K. A. S. Immink, "Maximum likelihood decoding for Gaussian noise channels with gain or offset mismatch," IEEE Commun. Lett., vol. 22, no. 6, pp. 1128–1131, Jun. 2018.
[12] M. Grigoriu, Applied Non-Gaussian Processes: Examples, Theory, Simulation, Linear Random Vibration, and MATLAB Solutions. Englewood Cliffs, NJ: PTR Prentice Hall, 1995.
[13] A. D'Onofrio, Bounded Noises in Physics, Biology, and Engineering. New York, NY: Springer, 2013.
[14] J. H. Weber, T. G. Swart, and K. A. S. Immink, "Simple systematic Pearson coding," in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Barcelona, Spain, Jul. 2016, pp. 385–389.