Decoding Criteria and Zero WER Analysis
for Channels with Bounded Noise and Offset
Renfei Bu
Applied Mathematics Dept., Optimization Group
Delft University of Technology
Delft, Netherlands
R.Bu@tudelft.nl
Jos H. Weber
Applied Mathematics Dept., Optimization Group
Delft University of Technology
Delft, Netherlands
J.H.Weber@tudelft.nl
Abstract—Data storage systems may not only be disturbed by
noise. In some cases, the error performance can also be seriously
degraded by offset mismatch. Here, channels are considered for
which both the noise and offset are bounded. For such channels,
Euclidean distance-based decoding, Pearson distance-based decoding, and Maximum Likelihood decoding are considered. In
particular, for each of these decoders, bounds are determined on
the magnitudes of the noise and offset intervals which lead to
a word error rate equal to zero. Case studies with simulation
results are presented confirming the findings.
Index Terms—Flash memory, optical recording, maximum
likelihood decoding, bounded noise, offset mismatch, zero WER
I. INTRODUCTION
With the explosive growth of reliance on information, both
for home and personal use along with business and professional needs, more data are being generated, processed, and
stored. It is necessary to guarantee very high access speeds,
low power consumption, and, most importantly, reliable data
storage systems. In data storage systems, it is usually found
that noise (which leads to unpredictable stochastic errors) is
an important issue, but that also other physical factors may
hamper the reliability of the stored data. For example, in
Flash memories, the number of electrons of a cell decreases
with time and some cells become defective over time [1].
The amount of electron leakage depends on various physical
parameters, such as the device’s temperature, the magnitude
of the charge, and the time elapsed between writing and
reading the data. In digital optical recording, fingerprints and
scratches on the surface of discs result in offset variations of
the retrieved signal [2].
To address these physically induced offset issues, two approaches are usually investigated and applied in storage systems. One approach uses pilot sequences to estimate the unknown channel offset [3]. This method is often considered too expensive with respect to redundancy. The other approach relies on error correcting techniques. Up to now, various coding techniques have been applied to alleviate the detection problem in case of channel mismatch, specifically rank modulation [4], balanced codes [5], and composition check codes [6]. These methods are often considered too expensive in terms of redundancy and complexity.
Since the retrieved data value is offset in such channels, a Euclidean distance measure will be biased or grossly inaccurate. Immink and Weber [7] showed that decoders using the Pearson distance have immunity to offset and/or gain mismatch. Use of the Pearson distance requires that the set of codewords satisfies certain special properties. Such sets are called Pearson codes. In [8], optimal Pearson codes were presented, in the sense of having the largest number of codewords and thus minimum redundancy among all q-ary Pearson codes of fixed length n. Further, in [9] a decoder was proposed based on minimizing a weighted sum of Euclidean and Pearson distances. In [10], Blackburn investigated a maximum likelihood (ML) criterion for channels with Gaussian noise and unknown gain and offset mismatch. In a subsequent study, ML decision criteria were derived for Gaussian noise channels when assuming various distributions for the offset in the absence of gain mismatch [11].
The above-summarized research results are based on the Gaussian noise model, which is a satisfactory description of reality in many cases. An increasing number of studies focus on
another important class of non-Gaussian stochastic processes:
bounded noise, which is motivated by the fact that the
Gaussian stochastic process is an inadequate mathematical
model of the physical world because it is unbounded [12],
[13]. Moreover, in many relevant cases, especially in Flash
memory, the impact of parameters (such as charge leakage)
on the retrieved data value should not be arbitrarily large.
Consequently, not taking into account the bounded nature of
stochastic variations may lead to impracticable model-based
inferences. In this paper, we explore decoding criteria for
channels with bounded noise and bounded offset mismatch.
Specifically, we consider Euclidean distance-based decoding,
Pearson distance-based decoding, and Maximum Likelihood
decoding. Most importantly, we investigate, for each of these
decoders, under which constraints zero Word Error Rate
(WER) performance can be achieved. We should stress that
zero WER performance is achieved without assumptions of
specific distributions for the bounded noise and offset.
The remainder of this paper is organized as follows. We first review the channel with noise and offset and the classical Euclidean and Pearson distance-based decoding criteria in Section II. In Section III, we present an ML decoding method for the case that the noise and offset in the channel are bounded. Simulation results for specific cases are given in Section IV. In the channel
with bounded noise and offset, zero WER is achievable for all
detectors discussed in this paper. Conditions to achieve zero
WER for these decoders are derived in Section V. We conclude
the paper in Section VI.
II. PRELIMINARIES AND CHANNEL MODEL
We consider transmitting a codeword $\mathbf{x} = (x_1, x_2, \ldots, x_n)$ from a codebook $S \subseteq \mathbb{R}^n$, where $n$, the length of $\mathbf{x}$, is a positive integer. In many applications, the received vector may not only be hampered by noise $\mathbf{v} = (v_1, v_2, \ldots, v_n)$, but also by gain $a$ and/or offset $b$. Hence,
$$\mathbf{r} = a(\mathbf{x} + \mathbf{v}) + b\mathbf{1},$$
where $\mathbf{1} = (1, 1, \ldots, 1)$ is the real all-one vector of length $n$. The gain and offset values $a$ and $b$ may change from word to word, but are constant for all transmitted symbols within a codeword, while the noise values vary from symbol to symbol. In the channel model under consideration in this paper, we assume that there is no gain mismatch, i.e., $a = 1$, but there is an offset $b \in \mathbb{R}$, i.e.,
$$\mathbf{r} = \mathbf{x} + \mathbf{v} + b\mathbf{1}. \quad (1)$$
The values $v_i$ in the noise vector $\mathbf{v} = (v_1, v_2, \ldots, v_n)$ are independently and identically distributed with probability density function $\phi$, leading to a probability density function $\chi(\mathbf{v}) = \prod_{i=1}^{n} \phi(v_i)$ for $\mathbf{v}$.
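As an illustration of the channel model (1), the following sketch generates a received word from a codeword under bounded noise and offset. The uniform distributions and the function name `channel` are illustrative choices only; the model itself merely requires the noise and offset to be bounded.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def channel(x, alpha, beta):
    """Model (1): r = x + v + b*1, with i.i.d. noise values v_i in
    (-alpha, alpha) and a single offset b in (-beta, beta) that is
    constant over the whole codeword. Uniform distributions are used
    here purely for illustration."""
    x = np.asarray(x, dtype=float)
    v = rng.uniform(-alpha, alpha, size=x.shape)  # bounded noise, per symbol
    b = rng.uniform(-beta, beta)                  # bounded offset, per word
    return x + v + b * np.ones_like(x)

# Example: r = channel((0, 1, 1), alpha=0.3, beta=0.2)
```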
A well-known decoding criterion upon receipt of the vector $\mathbf{r}$ is to choose a codeword $\hat{\mathbf{x}} \in S$ which minimizes the (squared) Euclidean distance between the received vector $\mathbf{r}$ and codeword $\hat{\mathbf{x}}$, i.e.,
$$\delta_E(\mathbf{r}, \hat{\mathbf{x}}) = \sum_{i=1}^{n} (r_i - \hat{x}_i)^2. \quad (2)$$
It is known to be optimal with regard to handling Gaussian noise.
The Pearson distance measure [7] is used in situations which require resistance towards offset and/or gain mismatch. For any vector $\mathbf{u} \in \mathbb{R}^n$, let
$$\bar{u} = \frac{1}{n} \sum_{i=1}^{n} u_i$$
denote the average symbol value and let
$$\sigma_{\mathbf{u}} = \left( \sum_{i=1}^{n} (u_i - \bar{u})^2 \right)^{1/2}$$
denote the unnormalized symbol standard deviation. The Pearson distance between the received vector $\mathbf{r}$ and a codeword $\hat{\mathbf{x}} \in S$ is defined as
$$\delta_P(\mathbf{r}, \hat{\mathbf{x}}) = 1 - \rho_{\mathbf{r}, \hat{\mathbf{x}}}, \quad (3)$$
where $\rho_{\mathbf{r}, \hat{\mathbf{x}}}$ is the well-known Pearson correlation coefficient,
$$\rho_{\mathbf{r}, \hat{\mathbf{x}}} = \frac{\sum_{i=1}^{n} (r_i - \bar{r})(\hat{x}_i - \bar{\hat{x}})}{\sigma_{\mathbf{r}} \, \sigma_{\hat{\mathbf{x}}}}.$$
A Pearson decoder chooses a codeword minimizing this distance. As shown in [7], a simpler Pearson distance-based criterion leading to the same result in the minimization process reads
$$\delta'_P(\mathbf{r}, \hat{\mathbf{x}}) = \sum_{i=1}^{n} (r_i - \hat{x}_i + \bar{\hat{x}})^2, \quad (4)$$
if there is no gain mismatch, as assumed in this paper.
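The two classical criteria (2) and (4) are simple minimum-distance rules. A minimal sketch, with illustrative function names, could look as follows.

```python
import numpy as np

def decode_euclidean(r, codebook):
    """Choose the codeword minimizing the squared Euclidean distance (2)."""
    r = np.asarray(r, dtype=float)
    dists = [np.sum((r - np.asarray(c, dtype=float)) ** 2) for c in codebook]
    return codebook[int(np.argmin(dists))]

def decode_pearson(r, codebook):
    """Choose the codeword minimizing the modified Pearson criterion (4),
    sum_i (r_i - c_i + c_bar)^2, which is equivalent to minimizing the
    Pearson distance (3) when there is no gain mismatch."""
    r = np.asarray(r, dtype=float)
    dists = []
    for c in codebook:
        c = np.asarray(c, dtype=float)
        dists.append(np.sum((r - c + c.mean()) ** 2))
    return codebook[int(np.argmin(dists))]

# Example with the codebook of Section IV:
# decode_pearson([0.9, 1.4, 0.2], [(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)])
```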
III. MAXIMUM LIKELIHOOD DECODING FOR NOISE AND OFFSET WITH BOUNDED RANGES
We assume that the noise values are restricted to a certain range. More specifically, $\phi$ only takes non-zero values on an interval $(-\alpha, \alpha)$, where $\alpha > 0$. Hence, $-\alpha < v_i < \alpha$ for all $i$. For a codeword $\hat{\mathbf{x}} = (\hat{x}_1, \ldots, \hat{x}_n) \in S$, we define its noise environment as
$$U_{\hat{\mathbf{x}}} = \{\mathbf{u} = (u_1, \ldots, u_n) \in \mathbb{R}^n : \hat{x}_i - \alpha < u_i < \hat{x}_i + \alpha\}. \quad (5)$$
For the offset $b$ we assume that it has a probability density function $\zeta$ which only takes non-zero values on an interval $(\gamma, \eta)$. Hence, $\gamma < b < \eta$. Since the receiver can subtract $\frac{\eta + \gamma}{2}\mathbf{1}$ from $\mathbf{r}$ if the offset range is not symmetric around zero, we may assume without loss of generality that the offset is within the range $(-\beta, \beta)$, where $\beta = (\eta - \gamma)/2$, which we will do throughout the rest of this paper. We define
$$L_{\mathbf{r}} = \{\mathbf{r} - t\mathbf{1} : t \in (-\beta, \beta)\} \quad (6)$$
for a vector $\mathbf{r} \in \mathbb{R}^n$.
In order to achieve ML decoding, we need to choose the codeword of maximum likelihood given the received vector. Assuming all codewords are equally likely, this is equivalent to maximizing the probability density value of the received vector $\mathbf{r}$ given the candidate codeword $\hat{\mathbf{x}}$. Denoting the probability density function of $\mathbf{v} + b\mathbf{1}$ by $\psi$, we find with (1) that we should thus maximize
$$\psi(\mathbf{r} - \hat{\mathbf{x}}) = \int_{-\infty}^{\infty} \chi(\mathbf{r} - \hat{\mathbf{x}} - b\mathbf{1})\, \zeta(b)\, db \quad (7)$$
over all candidate codewords $\hat{\mathbf{x}}$, where $\chi$ and $\zeta$ are the probability density functions of the noise and offset, respectively. $\chi$ and $\zeta$ can be any distribution as long as they are restricted to the indicated intervals. In Section IV, we will show simulation results assuming specific distributions.
Note from (6) and (5) that a point $\mathbf{r} - t\mathbf{1}$ of $L_{\mathbf{r}}$ is in $U_{\hat{\mathbf{x}}}$ if and only if $t$ satisfies
$$r_i - \hat{x}_i - \alpha < t < r_i - \hat{x}_i + \alpha, \quad i = 1, \ldots, n, \qquad -\beta < t < \beta. \quad (8)$$
From this observation, we find that (7) equals
$$\begin{cases} \int_{t_1(\mathbf{r}, \hat{\mathbf{x}})}^{t_0(\mathbf{r}, \hat{\mathbf{x}})} \chi(\mathbf{r} - \hat{\mathbf{x}} - b\mathbf{1})\, \zeta(b)\, db & \text{if } t_0(\mathbf{r}, \hat{\mathbf{x}}) > t_1(\mathbf{r}, \hat{\mathbf{x}}), \\ 0 & \text{otherwise}, \end{cases} \quad (9)$$
where
$$\begin{aligned} t_0(\mathbf{r}, \hat{\mathbf{x}}) &= \min\left(\{r_i - \hat{x}_i + \alpha \mid i = 1, \ldots, n\} \cup \{\beta\}\right), \\ t_1(\mathbf{r}, \hat{\mathbf{x}}) &= \max\left(\{r_i - \hat{x}_i - \alpha \mid i = 1, \ldots, n\} \cup \{-\beta\}\right). \end{aligned} \quad (10)$$
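Computing the integration limits in (10) only involves the componentwise differences $r_i - \hat{x}_i$. A small sketch (the helper name is illustrative) is given below; by (9), a candidate codeword can be ruled out immediately whenever $t_0 \le t_1$.

```python
import numpy as np

def integration_limits(r, c, alpha, beta):
    """Return t0(r, c) and t1(r, c) as in (10). These bound the offset
    values t for which r - t*1 lies in the noise environment (5) of
    codeword c while t stays in (-beta, beta); by (9), the likelihood
    (7) is zero unless t0 > t1."""
    d = np.asarray(r, dtype=float) - np.asarray(c, dtype=float)
    t0 = min(np.min(d) + alpha, beta)
    t1 = max(np.max(d) - alpha, -beta)
    return t0, t1
```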
IV. CASE STUDIES
In this section, we consider several noise and offset distributions. Simulated WER results are shown for the codebook
$$S = \{(0,0,0), (1,1,0), (1,0,1), (0,1,1)\}$$
of length 3 and size 4, in combination with different decoders.
This simple codebook is used to demonstrate some important
WER characteristics. Codebook construction as such is beyond
the scope of this paper. The interested reader is referred to [14].
A. WER for Uniform Noise and Offset
The uniform distribution is the most commonly used for bounded random variables. Let the probability density function of a random variable which is uniformly distributed on the interval $(\tau_1, \tau_2)$ be denoted by $\mathcal{U}(\tau_1, \tau_2)$. The uniform distribution $\mathcal{U}(\tau_1, \tau_2)$ has probability density function
$$U(x) = \begin{cases} \frac{1}{\tau_2 - \tau_1} & \text{if } \tau_1 < x < \tau_2, \\ 0 & \text{otherwise}. \end{cases} \quad (11)$$
Hence, for the noise we assume $v_i \sim \mathcal{U}(-\alpha, \alpha)$ and for the offset $b \sim \mathcal{U}(-\beta, \beta)$.
Note that ML decoding in this case is tantamount to maximizing
$$\max\{0,\, t_0(\mathbf{r}, \hat{\mathbf{x}}) - t_1(\mathbf{r}, \hat{\mathbf{x}})\}, \quad (12)$$
i.e., choosing the codeword $\hat{\mathbf{x}}$ for which the part of the line segment $L_{\mathbf{r}}$ that is within $U_{\hat{\mathbf{x}}}$ is largest.
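Hence, for uniform noise and offset, ML decoding can be implemented by selecting the codeword with the largest overlap $\max\{0, t_0 - t_1\}$, as sketched below. The Monte Carlo WER estimator is only a rough illustration of the kind of simulation reported in this section, not the exact setup used for the figures.

```python
import numpy as np

rng = np.random.default_rng(seed=2)
CODEBOOK = [(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)]

def decode_ml_uniform(r, codebook, alpha, beta):
    """ML decoding for uniform noise and offset: maximize (12),
    i.e. the length of the part of L_r inside the noise environment."""
    best, best_overlap = None, -1.0
    for c in codebook:
        d = np.asarray(r, dtype=float) - np.asarray(c, dtype=float)
        t0 = min(d.min() + alpha, beta)    # t0(r, c) from (10)
        t1 = max(d.max() - alpha, -beta)   # t1(r, c) from (10)
        overlap = max(0.0, t0 - t1)        # criterion (12)
        if overlap > best_overlap:
            best, best_overlap = c, overlap
    return best

def estimate_wer(alpha, beta, trials=100_000):
    """Rough Monte Carlo WER estimate for the example codebook."""
    errors = 0
    for _ in range(trials):
        x = CODEBOOK[rng.integers(len(CODEBOOK))]
        r = (np.asarray(x, dtype=float)
             + rng.uniform(-alpha, alpha, 3)
             + rng.uniform(-beta, beta))
        errors += decode_ml_uniform(r, CODEBOOK, alpha, beta) != x
    return errors / trials
```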
Simulated WER results for the example code $S$ and various values of $\alpha$ and $\beta$ are shown in Figs. 1-3 for Euclidean, Pearson, and ML decoders, respectively.
In Fig. 1, we observe that the performance of the Euclidean decoder gets worse with increasing values of $\alpha$ and/or $\beta$. In Fig. 2, the curves for different values of $\beta$ overlap because of the Pearson decoder's intrinsic immunity to offset mismatch. Note that the performance of the Euclidean decoder is close to ML performance for $\beta = 0.15$ and that the performance of the Pearson decoder is close to ML performance for $\beta = 0.30$ in Fig. 3.
Most interestingly, for Euclidean decoding, WER approaches zero if $\alpha \le 1/2 - \beta$, while for Pearson decoding, it happens when $\alpha < 1/4$. WER approaches zero for ML decoding if $\alpha \le 1/4$ or $\alpha \le 1/2 - \beta$, i.e., $\alpha \le \max\{1/4, 1/2 - \beta\}$. Indeed, we observe in Fig. 3 a zero WER for $\alpha \le 0.35$ if $\beta = 0.15$, for $\alpha \le 0.30$ if $\beta = 0.20$, and for $\alpha \le 0.25$ if $\beta = 0.25$ or $\beta = 0.30$. We will show in Section V that, for all decoders under consideration, a WER of zero is achieved if the magnitudes of the noise and offset intervals satisfy certain conditions.
B. WER for Uniform Noise and Various Offset Distributions
In this subsection, we consider again uniform noise, but various options for the offset distribution. In particular, $v_i \sim \mathcal{U}(-0.3, 0.3)$, while the offset is (i) uniform, i.e., $b \sim \mathcal{U}(-\beta, \beta)$, as in the previous subsection, (ii) triangular, i.e., $b \sim \mathcal{T}(-\beta, 0, \beta)$, as specified next, or (iii) Gaussian with mean zero and variance $\sigma^2$, i.e., $b \sim \mathcal{N}(0, \sigma^2)$.
Fig. 1. Simulated WER of Euclidean distance-based decoding for codebook $S$ on channels with uniform noise $v_i \sim \mathcal{U}(-\alpha, \alpha)$ and uniform offset $b \sim \mathcal{U}(-\beta, \beta)$ (WER versus $\alpha$, curves for $\beta = 0.15, 0.2, 0.25, 0.3$).
Fig. 2. Simulated WER of Pearson distance-based decoding for codebook $S$ on channels with uniform noise $v_i \sim \mathcal{U}(-\alpha, \alpha)$ and uniform offset $b \sim \mathcal{U}(-\beta, \beta)$ (WER versus $\alpha$, curves for $\beta = 0.15, 0.2, 0.25, 0.3$).
Fig. 3. Simulated WER of ML decoding for codebook $S$ on channels with uniform noise $v_i \sim \mathcal{U}(-\alpha, \alpha)$ and uniform offset $b \sim \mathcal{U}(-\beta, \beta)$ (WER versus $\alpha$, curves for $\beta = 0.15, 0.2, 0.25, 0.3$).
Fig. 4. Simulated WER for codebook $S$ on channels with uniform noise $v_i \sim \mathcal{U}(-0.3, 0.3)$ and uniform offset $b \sim \mathcal{U}(-\beta, \beta)$ (WER versus offset standard deviation, curves for Euclidean, Pearson, and ML decoding).
Fig. 5. Simulated WER for codebook $S$ on channels with uniform noise $v_i \sim \mathcal{U}(-0.3, 0.3)$ and triangular offset $b \sim \mathcal{T}(-\beta, 0, \beta)$ (WER versus offset standard deviation, curves for Euclidean, Pearson, and ML decoding).
The last option is included since it is the most important representative of unbounded distributions. The triangular distribution $\mathcal{T}(-\beta, 0, \beta)$ has probability density function
$$T(x) = \begin{cases} \frac{1}{\beta}\left(1 - \frac{1}{\beta}|x|\right) & \text{if } -\beta < x < \beta, \\ 0 & \text{otherwise}. \end{cases} \quad (13)$$
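Since Figs. 4-6 compare the offset options at equal standard deviation, recall that $\mathcal{U}(-\beta, \beta)$ has standard deviation $\beta/\sqrt{3}$ and $\mathcal{T}(-\beta, 0, \beta)$ has standard deviation $\beta/\sqrt{6}$. The sampler below (an illustrative helper, not part of the original scheme) matches the three distributions on this quantity.

```python
import numpy as np

rng = np.random.default_rng(seed=3)

def sample_offset(kind, std, size=1):
    """Sample word offsets with a prescribed standard deviation:
    uniform on (-beta, beta) with beta = std*sqrt(3),
    triangular on (-beta, 0, beta) with beta = std*sqrt(6),
    or zero-mean Gaussian with that standard deviation."""
    if kind == "uniform":
        beta = std * np.sqrt(3.0)
        return rng.uniform(-beta, beta, size)
    if kind == "triangular":
        beta = std * np.sqrt(6.0)
        return rng.triangular(-beta, 0.0, beta, size)
    if kind == "gaussian":
        return rng.normal(0.0, std, size)
    raise ValueError(f"unknown offset distribution: {kind}")
```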
In Figs. 4-6 we present WER results for the example code $S$ for the three offset options under consideration. For
comparison purposes, the WER is presented as a function of
the standard deviation of the offset in each case.
In general, note that the WER of Pearson decoding has the
same constant value for all cases, since it does not depend
on the offset. It is close to ML performance in case of large
standard deviations. The performance of Euclidean decoding
is close to ML performance for small standard deviations. For
medium standard deviations, ML decoding clearly outperforms
both Euclidean and Pearson decoding in all three cases.
Fig. 6. Simulated WER for codebook $S$ on channels with uniform noise $v_i \sim \mathcal{U}(-0.3, 0.3)$ and Gaussian offset $b \sim \mathcal{N}(0, \sigma^2)$ (WER versus offset standard deviation, curves for Euclidean, Pearson, and ML decoding).
We also observe in Fig. 4 that the WERs of Euclidean and ML decoders approach zero if the standard deviation $\beta/\sqrt{3}$ of the uniform offset distribution is at most 0.12, and in Fig. 5 that the WER approaches zero if the standard deviation $\beta/\sqrt{6}$ of the triangular offset distribution is at most 0.08. On the other hand, we see in Fig. 6 that for Gaussian offset zero WER can only be achieved by extremely small noise, as expected, due to the unbounded nature of the Gaussian distribution. In the next section we will analyse the zero WER constraints for different detectors.
V. ZERO ERROR ANALYSIS
In this section, we will show that, for all decoders under
consideration, a WER of zero is achieved if the magnitudes
of the noise and offset intervals satisfy certain conditions.
A. Euclidean Distance-Based Decoding
The Euclidean decoder can achieve zero WER for channels
with bounded noise and offset when $\alpha + \beta$ is sufficiently small,
as shown in the following result.
Theorem 1. If the noise and offset are restricted to the intervals $(-\alpha, \alpha)$ and $(-\beta, \beta)$, respectively, with
$$\alpha + \beta \le \min_{\mathbf{s}, \mathbf{c} \in S,\, \mathbf{s} \ne \mathbf{c}} \frac{\sum_{i=1}^{n} (s_i - c_i)^2}{2 \sum_{i=1}^{n} |s_i - c_i|}, \quad (14)$$
then the Euclidean decoder achieves a WER equal to zero.
Proof. Assume that $\mathbf{x} \in S$ is sent and $\mathbf{r} = \mathbf{x} + \mathbf{v} + b\mathbf{1}$ is received. Then, for all codewords $\hat{\mathbf{x}} \ne \mathbf{x}$, it holds that
$$\begin{aligned}
\delta_E(\mathbf{r}, \hat{\mathbf{x}}) - \delta_E(\mathbf{r}, \mathbf{x})
&= \sum_{i=1}^{n} (r_i - \hat{x}_i)^2 - \sum_{i=1}^{n} (r_i - x_i)^2 \\
&= \sum_{i=1}^{n} \left((r_i - x_i) - (\hat{x}_i - x_i)\right)^2 - \sum_{i=1}^{n} (r_i - x_i)^2 \\
&= \sum_{i=1}^{n} (\hat{x}_i - x_i)^2 - 2\sum_{i=1}^{n} (\hat{x}_i - x_i)(r_i - x_i) \\
&= \sum_{i=1}^{n} (\hat{x}_i - x_i)^2 - 2\sum_{i=1}^{n} (\hat{x}_i - x_i)(v_i + b) \\
&\ge 2(\alpha + \beta) \sum_{i=1}^{n} |\hat{x}_i - x_i| - 2\sum_{i=1}^{n} |\hat{x}_i - x_i|\,|v_i + b| \\
&= 2\sum_{i=1}^{n} |\hat{x}_i - x_i| \left(\alpha + \beta - |v_i + b|\right) \\
&> 0,
\end{aligned}$$
where the fourth equality follows from $r_i = x_i + v_i + b$, the first inequality follows from (14) and the last inequality from the fact that $|v_i + b| \le |v_i| + |b| < \alpha + \beta$ for all $i$. Hence, if decoding is based on minimizing (2), the transmitted codeword is always chosen as the decoding result, leading to a WER equal to zero.
B. Pearson Distance-Based Decoding
Since Pearson distance-based decoding is intrinsically immune to offset mismatch, zero WER performance only requires a bound on $\alpha$, as shown in the next theorem.
Theorem 2. If the noise and offset are restricted to the intervals $(-\alpha, \alpha)$ and $(-\beta, \beta)$, respectively, with
$$\alpha < \min_{\mathbf{s}, \mathbf{c} \in S,\, \mathbf{s} \ne \mathbf{c}} \frac{\sum_{i=1}^{n} (s_i - \bar{s} - c_i + \bar{c})^2}{\frac{4(n-1)}{n} \sum_{i=1}^{n} |s_i - \bar{s} - c_i + \bar{c}|}, \quad (15)$$
then the Pearson decoder achieves a WER equal to zero.
Proof. Assume that $\mathbf{x} \in S$ is sent, and $\mathbf{r} = \mathbf{x} + \mathbf{v} + b\mathbf{1}$ is received. Then, for all codewords $\hat{\mathbf{x}} \ne \mathbf{x}$, it holds that
$$\begin{aligned}
\delta'_P(\mathbf{r}, \hat{\mathbf{x}}) - \delta'_P(\mathbf{r}, \mathbf{x})
&= \sum_{i=1}^{n} (r_i - \hat{x}_i + \bar{\hat{x}})^2 - \sum_{i=1}^{n} (r_i - x_i + \bar{x})^2 \\
&= \sum_{i=1}^{n} (r_i - \hat{x}_i + \bar{\hat{x}} - \bar{r})^2 - \sum_{i=1}^{n} (r_i - x_i + \bar{x} - \bar{r})^2 \\
&= \sum_{i=1}^{n} (x_i - \bar{x} - \hat{x}_i + \bar{\hat{x}})^2 + 2\sum_{i=1}^{n} (x_i - \bar{x} - \hat{x}_i + \bar{\hat{x}})(r_i - x_i + \bar{x} - \bar{r}) \\
&= \sum_{i=1}^{n} (x_i - \bar{x} - \hat{x}_i + \bar{\hat{x}})^2 + 2\sum_{i=1}^{n} (x_i - \bar{x} - \hat{x}_i + \bar{\hat{x}})(v_i - \bar{v}) \\
&> \frac{4(n-1)}{n}\alpha \sum_{i=1}^{n} |x_i - \bar{x} - \hat{x}_i + \bar{\hat{x}}| - 2\sum_{i=1}^{n} |x_i - \bar{x} - \hat{x}_i + \bar{\hat{x}}|\,|v_i - \bar{v}| \\
&= \sum_{i=1}^{n} |x_i - \bar{x} - \hat{x}_i + \bar{\hat{x}}| \left(\frac{4(n-1)}{n}\alpha - 2|v_i - \bar{v}|\right) \\
&\ge 0, \quad (16)
\end{aligned}$$
where the fourth equality follows by substituting $r_i = x_i + v_i + b$ and $\bar{r} = \bar{x} + \bar{v} + b$, the first inequality from (15), and the last inequality from the fact that $|v_i - \bar{v}| < \frac{2(n-1)}{n}\alpha$ for all $i$. Hence, if decoding is based on minimizing (4), the transmitted codeword is always chosen as the decoding result, leading to a WER equal to zero.
C. Maximum Likelihood Decoding
Finally, we show that zero WER for ML decoding is
achieved if $\alpha$ or $\alpha + \beta$ is sufficiently small.
Theorem 3. If the noise and offset are restricted to the intervals $(-\alpha, \alpha)$ and $(-\beta, \beta)$, respectively, with
$$\alpha \le \min_{\mathbf{s}, \mathbf{c} \in S,\, \mathbf{s} \ne \mathbf{c}} \frac{\max_{1 \le i, j \le n} \{(s_i - c_i) - (s_j - c_j)\}}{4} \quad (17)$$
or
$$\alpha + \beta \le \min_{\mathbf{s}, \mathbf{c} \in S,\, \mathbf{s} \ne \mathbf{c}} \frac{\max_{i=1,\ldots,n} |s_i - c_i|}{2}, \quad (18)$$
then the ML decoder achieves a WER equal to zero.
Proof. Assume that $\mathbf{x} \in S$ is sent and $\mathbf{r} = \mathbf{x} + \mathbf{v} + b\mathbf{1}$ is received. We will show that if (17) or (18) holds, then $\psi(\mathbf{r} - \hat{\mathbf{x}}) = 0$ for all codewords $\hat{\mathbf{x}} \ne \mathbf{x}$. First of all, note that
$$\begin{aligned}
t_0(\mathbf{r}, \hat{\mathbf{x}}) - t_1(\mathbf{r}, \hat{\mathbf{x}})
&= \min\left(\{r_i - \hat{x}_i + \alpha \mid i = 1, \ldots, n\} \cup \{\beta\}\right) - \max\left(\{r_i - \hat{x}_i - \alpha \mid i = 1, \ldots, n\} \cup \{-\beta\}\right) \\
&= \min\left(\{r_i - \hat{x}_i + \alpha \mid i = 1, \ldots, n\} \cup \{\beta\}\right) + \min\left(\{-(r_i - \hat{x}_i) + \alpha \mid i = 1, \ldots, n\} \cup \{\beta\}\right) \\
&= \min\Big(\{2\beta\} \cup \Big\{\min_{i=1,\ldots,n}\{-|r_i - \hat{x}_i|\} + \alpha + \beta\Big\} \cup \Big\{\min_{1 \le i, j \le n}\{(r_i - \hat{x}_i) - (r_j - \hat{x}_j)\} + 2\alpha\Big\}\Big). \quad (19)
\end{aligned}$$
Next, we will show that if (17) or (18) holds, this expression is negative whenever $\hat{\mathbf{x}} \ne \mathbf{x}$.
If (17) holds, then
$$\begin{aligned}
\min_{1 \le i, j \le n}\{(r_i - \hat{x}_i) - (r_j - \hat{x}_j)\} + 2\alpha
&= \min_{1 \le i, j \le n}\{(r_i - \hat{x}_i) - (r_j - \hat{x}_j)\} - 2\alpha + 4\alpha \\
&< \min_{1 \le i, j \le n}\{(r_i - \hat{x}_i) - (r_j - \hat{x}_j) - (v_i - v_j)\} + 4\alpha \\
&= \min_{1 \le i, j \le n}\{[(r_i - \hat{x}_i) - (r_j - \hat{x}_j)] - [(r_i - x_i - b) - (r_j - x_j - b)]\} + 4\alpha \\
&= \min_{1 \le i, j \le n}\{(x_i - \hat{x}_i) - (x_j - \hat{x}_j)\} + 4\alpha \\
&= -\max_{1 \le i, j \le n}\{(\hat{x}_i - x_i) - (\hat{x}_j - x_j)\} + 4\alpha \\
&\le 0, \quad (20)
\end{aligned}$$
where the first inequality follows from the fact that $v_i - v_j \le |v_i| + |v_j| < 2\alpha$ and the second inequality from (17).
If (18) holds, then
$$\begin{aligned}
\min_{i=1,\ldots,n}\{-|r_i - \hat{x}_i|\} + \alpha + \beta
&= \min_{i=1,\ldots,n}\{-|r_i - \hat{x}_i|\} - \alpha - \beta + 2(\alpha + \beta) \\
&< \min_{i=1,\ldots,n}\{-|r_i - \hat{x}_i| - |v_i + b|\} + 2(\alpha + \beta) \\
&= \min_{i=1,\ldots,n}\{-|r_i - \hat{x}_i| - |r_i - x_i|\} + 2(\alpha + \beta) \\
&\le \min_{i=1,\ldots,n}\{-|x_i - \hat{x}_i|\} + 2(\alpha + \beta) \\
&= -\max_{i=1,\ldots,n}\{|x_i - \hat{x}_i|\} + 2(\alpha + \beta) \\
&\le 0, \quad (21)
\end{aligned}$$
where the first inequality follows from the fact that $|v_i + b| \le |v_i| + |b| < \alpha + \beta$ and the last inequality from (18).
Combining (19), (20), and (21) with (7) and (9), we find that indeed $\psi(\mathbf{r} - \hat{\mathbf{x}}) = 0$ for all codewords $\hat{\mathbf{x}} \ne \mathbf{x}$, while the probability density value of the received vector $\mathbf{r}$ given the transmitted codeword $\mathbf{x}$ is larger than 0, i.e., $\psi(\mathbf{r} - \mathbf{x}) > 0$. This implies that if decoding is based on maximizing (7), the transmitted codeword is always chosen as the decoding result, leading to a WER equal to zero.
For the codebook $S$, the bound on $\alpha + \beta$ for a Euclidean decoder in (14) is $1/2$, the bound on $\alpha$ for a Pearson decoder in (15) is $3/16$, and the bounds on $\alpha$ and $\alpha + \beta$ for an ML decoder in (17) and (18) are $1/4$ and $1/2$, respectively.
Considering Figs. 1-3, the results from Theorems 1-3 are confirmed. The zero WER of Pearson decoding is indeed achieved if $\alpha < 3/16$. However, the shown results suggest that this may not be the best upper bound for the code under consideration. In addition, for $\alpha = 0.3$ and the example code $S$, Theorems 1 and 3 give that, for both Euclidean and ML decoding, the WER is equal to zero if the offset is restricted to the interval $(-\beta, \beta)$ with $\beta \le 0.5 - 0.3 = 0.2$. This confirms the results from Figs. 4-5: for uniform offset, the zero WER is achieved if the standard deviation is at most $0.2/\sqrt{3} \approx 0.12$; for triangular offset, the zero WER is achieved if the standard deviation is at most $0.2/\sqrt{6} \approx 0.08$.
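The right-hand sides of (14), (15), (17), and (18) depend only on the codebook, so they can be evaluated directly. A small sketch (the function name is illustrative) is shown below; for the example codebook $S$ it reproduces the values $1/2$, $3/16$, $1/4$, and $1/2$ quoted above.

```python
import itertools
import numpy as np

def zero_wer_bounds(codebook):
    """Evaluate the codebook-dependent bounds of Theorems 1-3:
    (14) on alpha+beta, (15) on alpha, (17) on alpha, (18) on alpha+beta."""
    b14 = b15 = b17 = b18 = np.inf
    n = len(codebook[0])
    for s, c in itertools.permutations(codebook, 2):
        s, c = np.asarray(s, dtype=float), np.asarray(c, dtype=float)
        d = s - c
        b14 = min(b14, np.sum(d ** 2) / (2 * np.sum(np.abs(d))))                # (14)
        e = d - d.mean()  # componentwise (s_i - s_bar) - (c_i - c_bar)
        b15 = min(b15, np.sum(e ** 2) / (4 * (n - 1) / n * np.sum(np.abs(e))))  # (15)
        b17 = min(b17, (d.max() - d.min()) / 4)                                 # (17)
        b18 = min(b18, np.max(np.abs(d)) / 2)                                   # (18)
    return b14, b15, b17, b18

# For S = {(0,0,0), (1,1,0), (1,0,1), (0,1,1)} this returns (0.5, 0.1875, 0.25, 0.5).
```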
VI. DISCUSSION AND CONCLUSION
We have investigated Euclidean, Pearson, and ML decoders
for channels which suffer from bounded noise and offset
mismatch. In particular, it has been shown that the WER for
such decoders is equal to zero if the noise and offset ranges
satisfy certain conditions. The findings have been confirmed
by simulation results.
Further investigations into how codebooks satisfying the conditions of Theorems 1-3 can be constructed for given $\alpha$ and $\beta$ will be of interest. For fixed $\alpha$ and $\beta$, it would also be very interesting to fully explore the capacity of this channel and the rates that can be achieved by the three decoding schemes under the zero WER conditions. Another interesting option for future research is to include the possibility of gain mismatch as well, by considering various distributions for this phenomenon.
REFERENCES
[1] D. Ajwani, I. Malinger, U. Meyer, and S. Toledo, “Characterizing
the performance of flash memory storage devices and its impact on
algorithm design,” in Proc. Int. Workshop on Experimental and Efficient
Algorithms. Berlin, Heidelberg: Springer, May 2008, pp. 208–219.
[2] G. Bouwhuis, A. H. J. Braat, J. Pasman, G. van Rosmalen, and K. A. S.
Immink, Principles of Optical Disc Systems. Boston, MA, USA: Adam
Hilger, 1985.
[3] K. A. S. Immink, “Coding schemes for multi-level flash memories
that are intrinsically resistant against unknown gain and/or offset using
reference symbols,” Electron. Lett., vol. 50, no. 1, pp. 20–22, Jan. 2014.
[4] A. Jiang, R. Mateescu, M. Schwartz, and J. Bruck, “Rank modulation for
flash memories,” IEEE Trans. Inf. Theory, vol. 55, no. 6, pp. 2659–2673,
Jun. 2009.
[5] H. Zhou and J. Bruck, “Balanced modulation for nonvolatile memories,”
arXiv: 1209.0744, Sep. 2012.
[6] K. A. S. Immink and K. Cai, “Composition check codes,” IEEE Trans.
Inf. Theory, vol. 64, no. 1, pp. 249–256, Jan. 2018.
[7] K. A. S. Immink and J. H. Weber, “Minimum Pearson distance detection for multilevel channels with gain and/or offset mismatch,” IEEE Trans. Inf. Theory, vol. 60, no. 10, pp. 5966–5974, Oct. 2014.
[8] J. H. Weber, K. A. S. Immink, and S. R. Blackburn, “Pearson codes,” IEEE Trans. Inf. Theory, vol. 62, no. 1, pp. 131–135, Jan. 2016.
[9] K. A. S. Immink and J. H. Weber, “Hybrid minimum Pearson and
Euclidean distance detection,” IEEE Trans. Commun., vol. 63, no. 9,
pp. 3290–3298, Sep. 2015.
[10] S. R. Blackburn, “Maximum likelihood decoding for multilevel channels with gain and offset mismatch,” IEEE Trans. Inf. Theory, vol. 62, no. 3, pp. 1144–1149, Mar. 2016.
[11] J. H. Weber and K. A. S. Immink, “Maximum likelihood decoding for Gaussian noise channels with gain or offset mismatch,” IEEE Commun. Lett., vol. 22, no. 6, pp. 1128–1131, Jun. 2018.
[12] M. Grigoriu, Applied Non-Gaussian Processes: Examples, Theory, Sim-
ulation, Linear Random Vibration, and MATLAB Solutions. Englewood
Cliffs, NJ: PTR Prentice Hall, 1995.
[13] A. D’Onofrio, Bounded Noises in Physics, Biology, and Engineering.
New York, NY: Springer, 2013.
[14] J. H. Weber, T. G. Swart, and K. A. S. Immink, “Simple systematic Pearson coding,” in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Barcelona, Spain, Jul. 2016, pp. 385–389.