
158IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, VOL. 28, NO. 2, FEBRUARY 2010

Analysis of Nonlinear Transition Shift and

Write Precompensation in Perpendicular

Recording Systems

Zheng Wu, Member, IEEE, Paul H. Siegel, Fellow, IEEE, Jack K. Wolf, Fellow, IEEE, and

H. Neal Bertram, Fellow, IEEE

Abstract—In high density perpendicular magnetic record-

ing channels, nonlinear transition shift (NLTS) is one of the

distortions that can degrade the system performance. Write

precompensation is a standard method used to combat the

negative effect of NLTS. In this paper, we present an analysis

of the bit-error-rate (BER) for perpendicular recording systems

with NLTS and write precompensation. Media jitter noise and

additive white Gaussian noise are also considered in the model.

A BER lower bound is derived, as well as a more easily

computed estimate of the bound. The write precompensation

values that numerically minimize the estimate of the BER lower

bound prove to be very close to those found using Monte-Carlo

channel simulation. We then apply these methods to the design of

multilevel precompensation schemes, for which the optimization

of precompensation values by Monte-Carlo channel simulation

is computationally infeasible. The results show that for higher

recording densities subject to increased ISI and noise, the use

of more complex precompensation schemes does not significantly

improve the system performance.

Index Terms—NLTS, write precompensation, jitter noise, per-

pendicular recording

I. INTRODUCTION

IN A HIGH density perpendicular recording system, nonlinear effects can distort the read-back signal and degrade the system performance. Nonlinear transition shift (NLTS) induced by demagnetization from previously written transitions is one example. As in longitudinal recording, the NLTS in a perpendicular recording channel can be measured by time or frequency analysis of the read-back signal corresponding to a carefully chosen input data pattern [1], [2]. The distortion caused by NLTS can be reduced by the use of write precompensation, whereby, for specific data patterns, deterministic offsets are added to the timing of written transitions. A simple and commonly used precompensation scheme is dibit precompensation, which affects the second transition of a pair of adjacent transitions. In practice, the timing offsets in write precompensation are optimized empirically in order to minimize the bit-error-rate (BER).

Manuscript received 15 January 2009; revised 1 August 2009.

Z. Wu was with the Center for Magnetic Recording Research, University of California, San Diego, La Jolla, CA 92093, USA. She is now with Link A Media Devices, Santa Clara, CA 95051, USA (e-mail: zwu@link-a-media.com).

P. H. Siegel, J. K. Wolf and H. N. Bertram are with the Center for Magnetic Recording Research, University of California, San Diego, La Jolla, CA 92093, USA (e-mail: {psiegel,jwolf,nbertram}@ucsd.edu).

H. N. Bertram was also with Hitachi Global Storage Technologies, San Jose, CA 95135, USA. He is now with Western Digital Corporation, San Jose, CA 95138, USA.

Digital Object Identifier 10.1109/JSAC.2010.100204.

There are very few theoretical results on optimal pre-

compensation of NLTS in recording channels to minimize the

BER because of the complex nature of the nonlinear effects.

Lim and Kavčić [3] presented a dynamic programming method

to optimize write precompensation for a longitudinal record-

ing channel with partial erasure, NLTS and additive white

Gaussian noise (AWGN). Their objective was to minimize the

mean-squared error (MSE) between the output signal of the

noisy, nonlinear channel model and that of the noiseless, linear

channel model, rather than to minimize BER. They allowed the

use of a different precompensation value for each transition.

The optimization procedure and the resulting precompensation

scheme in [3] would be too complex to implement in a real

system. In practice, it is typical to use a small number of

different precompensation values corresponding to a specified

subset of data patterns. (We refer to a scheme with more

than one such precompensation value as a multilevel scheme.)

In a previous work [4], we compared the precompensation

values obtained by minimizing the BER to those obtained

by minimizing the MSE for two specific precompensation

schemes. The values were close, though not identical.

In this paper, we will present an analysis of the BER

for systems with NLTS and write precompensation. A lower

bound on the BER is derived, as well as a more easily

computed estimate of the lower bound. We evaluate these

numerically, and compare their BER performance predictions

to the results of Monte-Carlo simulation. We find that the

optimal precompensation values obtained using these BER

estimates are very close to those found using the Monte-Carlo

method. This motivates the application of similar analytic

techniques to the optimization of more complex multilevel

precompensation schemes, for which Monte-Carlo simulation

is computationally impractical.

The paper is organized as follows. Section II presents the

channel model and defines the order-1 and order-2 channel

approximations that we use in our analysis and simulations.

Section III gives the derivation of the pairwise error event

probability for the two channel approximations, as well as

upper and lower bounds on the BER. An easily computed es-

timate of the lower bound is also presented. Section IV shows

BER results obtained by numerical evaluation of the lower

bound estimate for two simple precompensation schemes.

These performance results, as well as the optimized pre-

0733-8716/10/$25.00 © 2010 IEEE


compensation values, are compared to results found by Monte-

Carlo simulation. In Section V, we use the BER analysis to

gain insight into the observed performance benefits of write

precompensation. We then extend the application of the BER

estimation techniques to the optimization of more elaborate

multilevel precompensation algorithms. Section VI concludes

the paper.

II. CHANNEL MODEL AND WRITE PRECOMPENSATION

We consider a channel model with NLTS, jitter noise and

AWGN, the same as in [4]. Let the channel transition response

be

s(t) = V_max · erf(0.954 t / T_50),   (1)

where erf(·) is the error function, defined as

erf(x) = (2/√π) ∫_0^x e^{−t²} dt,

and T_50 is the width of the transition response from −V_max/2 to V_max/2.
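Equation (1) is easy to evaluate numerically; a minimal sketch in Python (the function name and default parameters are ours, not from the paper):

```python
import math

def transition_response(t, v_max=1.0, t50=1.0):
    """Perpendicular-recording transition response s(t) of equation (1)."""
    return v_max * math.erf(0.954 * t / t50)
```

The constant 0.954 makes T_50 the width over which s(t) rises from −V_max/2 to +V_max/2, i.e., s(T_50/2) ≈ V_max/2.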

Let {x_i} be the input binary data sequence to the channel, x_i ∈ {−1, +1}. The induced transition sequence {d_i} is defined by d_i = (x_i − x_{i−1})/2; thus d_i ∈ {−1, 0, +1}. The channel output z(t) can be written as

z(t) = Σ_i d_i s(t + δ_i + a_i − iB) + n_W(t).   (2)

Here, δ_i is the net shift of the transition d_i with respect to its nominal location in the recording medium, a_i is the random position jitter for transition d_i, B is the channel bit spacing (as well as the sampling period), and n_W(t) is the electronics noise. For d_i = 0, we set a_i = 0, whereas for d_i ≠ 0, a_i is a zero-mean Gaussian random variable with variance σ²_J. The jitter values for recorded transitions are mutually independent. The electronics noise n_W(t) is modeled as a zero-mean AWGN process. The variance of the discrete-time AWGN samples n_W(kB) is denoted by σ²_W. We define the signal-to-AWGN ratio to be SNR_W = 10 log_10(V²_max/σ²_W). This SNR definition is taken from [5], where it was introduced in order to facilitate the study of the separate effects of jitter noise and AWGN in channels with NLTS and precompensation.

We can write the net shift as δ_i = τ_i + Δ_i, where τ_i is the NLTS induced by previously recorded transitions, and Δ_i is the precompensation value for the transition d_i. By convention, for d_i = 0, we set δ_i = τ_i = Δ_i = 0. According to the model proposed by Bertram and Nakamoto [6], [7], the NLTS of a transition is determined by the distance from its intended writing location to the actual locations of previously written transitions, assuming a fixed head-media configuration. Therefore, τ_i is a function of the transition sequence d_0, ..., d_i, the net shifts of the previously recorded transitions δ_0, ..., δ_{i−1}, and Δ_i. In perpendicular recording, the NLTS always shifts a transition away from previous transitions. Referring to the sign of δ_i in (2), τ_i is then non-positive and the precompensation value Δ_i is non-negative. Given a head-media configuration, a transition sequence and a precompensation scheme, the net transition shifts of all the recorded transitions are uniquely determined.
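The channel model of equation (2) can be sampled directly. The sketch below assumes an initial magnetization x_{−1} = −1 and takes the transition response as a callable; both conventions, and all function names, are ours:

```python
import math, random

def transition_seq(bits):
    # d_i = (x_i - x_{i-1}) / 2 for x_i in {-1, +1}; x_{-1} = -1 is our assumption
    prev, d = -1, []
    for x in bits:
        d.append((x - prev) // 2)
        prev = x
    return d

def channel_output(bits, s, B=1.0, sigma_j=0.0, sigma_w=0.0, shifts=None):
    """Sample z(kB) per equation (2): z(t) = sum_i d_i s(t + delta_i + a_i - iB) + n_W(t)."""
    d = transition_seq(bits)
    delta = shifts if shifts is not None else [0.0] * len(d)
    # jitter a_i only on actual transitions (a_i = 0 when d_i = 0)
    jitter = [random.gauss(0.0, sigma_j) if di != 0 else 0.0 for di in d]
    z = []
    for k in range(len(bits)):
        t = k * B
        val = sum(di * s(t + delta[i] + jitter[i] - i * B) for i, di in enumerate(d))
        z.append(val + random.gauss(0.0, sigma_w))  # add electronics noise n_W(kB)
    return z
```

With all noise variances set to zero and no shifts, the samples reduce to the noiseless superposition of transition responses.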

The complexity of channel simulation can be reduced if we approximate the channel output by truncating the Taylor series expansion of the transition response. The order-1 channel approximation is obtained by considering terms up to the first derivative term in the expansion of the transition response:

z(t) ≈ Σ_i x_i h(t − iB) + Σ_i d_i (δ_i + a_i) s′(t − iB) + n_W(t),   (3)

where h(t) = (s(t) − s(t − B))/2 is the dipulse response.

The order-2 channel approximation takes into account both the first- and second-derivative terms:

z(t) ≈ Σ_i x_i h(t − iB) + Σ_i d_i (δ_i + a_i) s′(t − iB) + Σ_i d_i ((δ_i + a_i)²/2) s″(t − iB) + n_W(t).   (4)
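The accuracy of the order-1 truncation can be checked numerically: for a small shift δ, s(t + δ) should be close to s(t) + δ s′(t). A sketch, using the closed-form derivative of (1) with V_max = 1 and T_50 = 1 (our normalization):

```python
import math

C = 0.954  # the constant in equation (1)

def s(t, t50=1.0):
    return math.erf(C * t / t50)

def s_prime(t, t50=1.0):
    # d/dt erf(C t / t50) = (2/sqrt(pi)) * (C/t50) * exp(-(C t / t50)^2)
    return (2.0 / math.sqrt(math.pi)) * (C / t50) * math.exp(-(C * t / t50) ** 2)

# Order-1 truncation underlying equation (3): a transition shifted by a small
# delta is modeled as the unshifted response plus delta * s'(t).
delta, t = 0.05, 0.3
exact = s(t + delta)
order1 = s(t) + delta * s_prime(t)
```

The residual `exact - order1` is of second order in δ, which is exactly the term the order-2 approximation of equation (4) restores.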

A discrete-time channel model and system diagram are

shown in Fig. 1. The discrete-time channel output is passed

through the equalizer before it enters the sequence detector.

In this paper, we consider a Viterbi detector that is matched

to the equalizer output target.

In practice, recording systems usually use only a small

number of precompensation levels corresponding to selected

patterns of recently recorded transitions. These transitions

play the most significant role in determining the NLTS value

of the transition being written. In [4], we considered two precompensation schemes: the so-called dibit precompensation scheme, as well as a two-level scheme. The dibit scheme considers only the most recently recorded bit, and applies precompensation only to the second transition in a pair of adjacent transitions, i.e., Δ_i = Δ only if d_i ≠ 0 and d_{i−1} ≠ 0; otherwise, Δ_i = 0. The dibit precompensation scheme can

be thought of as a one-level precompensation. The two-

level precompensation scheme considers two configurations

of transitions in the two preceding bit positions, namely a

transition only in the most recent position or transitions in

both of the preceding positions. In this paper, we examine

two more precompensation schemes that take into account

all possible bit configurations in the preceding two bits or

preceding three bits. We refer to these as the 2-bit look-back

scheme and 3-bit look-back scheme, respectively. The 2-bit

look-back scheme applies possibly different precompensation

values to three distinct bit patterns, while the 3-bit look-back

scheme may use up to seven different values for the seven

distinct bit patterns. Of course, as we increase the number of

patterns to which precompensation is applied, it becomes more

difficult to find the optimal set of precompensation values by

numerical optimization using Monte-Carlo simulation.
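The pattern counts above (three values for 2-bit look-back, seven for 3-bit) follow from counting the transition-presence patterns in the preceding bits, excluding the all-zero pattern, which induces no NLTS and hence gets no offset. A small sketch; `lookback_patterns` and `precomp_value` are hypothetical helpers, not names from the paper:

```python
from itertools import product

def lookback_patterns(n_bits):
    """Transition-presence patterns (1 = transition) in the preceding n bits
    that receive their own precompensation value; the all-zero pattern does not."""
    return [p for p in product((0, 1), repeat=n_bits) if any(p)]

def precomp_value(recent_transitions, table):
    """Look up the write offset for the current transition from the pattern of
    recent transitions. Patterns absent from the table get zero offset."""
    return table.get(tuple(recent_transitions), 0.0)
```

For example, a 2-bit look-back table maps the three patterns (0,1), (1,0) and (1,1) to three independently optimized offsets.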

III. BIT-ERROR-RATE ANALYSIS

The performance analysis of a Viterbi detector for an ISI

channel can be found in several references, such as [8],

[9]. The analysis has been extended to magnetic recording

channels with jitter noise and modified versions of the Viterbi

algorithm [10]–[12]. In this section, we develop an estimate of

the BER at the Viterbi detector output when NLTS is present.


Fig. 1. (a) Discrete-time channel model; (b) discrete-time system diagram.

We first derive the pairwise error probability corresponding

to a given error event. Then, using various union bounds, we

develop upper and lower bounds on the BER. A more easily

computed estimate of the lower bound is then introduced.

A. Pairwise Error Probability Analysis

Let x^{(1)} denote the recorded sequence and x^{(2)} the detected sequence. We denote by ξ_1 and ξ_2 the paths in the Viterbi detector trellis that correspond to x^{(1)} and x^{(2)}, respectively. We say that ξ_1 is the correct path and ξ_2 is the incorrect path. We assume that ξ_1 and ξ_2 diverge at time k and remerge at time k + M. We refer to M as the length of the error event.

The pairwise error probability of the error event ε_M = (ξ_1 → ξ_2) is the probability that ξ_2 is chosen as the detected path instead of ξ_1, given that x^{(1)} is the written sequence. We should note that in our case, the pairwise error probability for event (ξ_1 → ξ_2) may differ from that for event (ξ_2 → ξ_1) because of the data-dependent noise and NLTS.

An error event occurs when the accumulated metric on the incorrect path is smaller than that on the correct path. In a conventional Viterbi detector, the squared Euclidean distance is used as the branch metric. Therefore, the pairwise error probability can be expressed as follows:

Pr(ξ_1 → ξ_2 | x^{(1)}) = Pr( Σ_{i=k}^{k+M−1} (r_i − y^{(1)}_i)² > Σ_{i=k}^{k+M−1} (r_i − y^{(2)}_i)² | x^{(1)} ),   (5)

where r_i is the equalized channel output sample at time i when x^{(1)} is transmitted, and y^{(1)}_i and y^{(2)}_i are the noiseless channel outputs at time i corresponding to the input data sequences x^{(1)} and x^{(2)}, respectively. Assume that the equalization target of the system is g(D) = g_0 + g_1 D + ··· + g_J D^J, where D is a unit delay operator. We can then write the branch output labels as y^{(1)}_i = Σ_{l=0}^{J} g_l x^{(1)}_{i−l} and y^{(2)}_i = Σ_{l=0}^{J} g_l x^{(2)}_{i−l}.

The noise value at time i is given by n_i = r_i − y^{(1)}_i. After some straightforward calculation, we can express the pairwise error probability as

Pr(ξ_1 → ξ_2 | x^{(1)}) = Pr( Σ_{i=k}^{k+M−1} 2 n_i (y^{(1)}_i − y^{(2)}_i) < − Σ_{i=k}^{k+M−1} (y^{(1)}_i − y^{(2)}_i)² | x^{(1)} ).   (6)

In the above equation, the outputs y^{(1)}_i and y^{(2)}_i are completely determined by the data sequence x^{(1)} and the specific error event. Therefore, the pairwise error probability is equal to the probability that a linear combination of the noise samples n_i, i = k, ···, k+M−1, is smaller than a deterministic value. If we can find the distribution of the linear combination of the noise samples, the probability can be calculated directly. We now discuss how to calculate this probability for order-1 and order-2 approximations of the channel with NLTS.

1) Order-1 Channel Approximation: In the order-1 channel approximation, the noise at time i can be written as

n_i = Σ_j x^{(1)}_{i−j} h̃_j − Σ_j x^{(1)}_{i−j} g_j + Σ_j d^{(1)}_{i−j} (δ_{i−j} + a_{i−j}) s̃′_j + w_i,   (7)

(7)

where˜hjand˜s?jdenote the convolution of the FIR equalizer

taps with the samples of the channel dipulse response and

the first derivative of the transition response, respectively. The

equalized sample of the AWGN at time k is denoted by wk.

In perpendicular recording systems,˜hj and˜s?j will vanish

when j goes to +∞ and −∞. Therefore, we can assume

that sequences {˜hj} and {˜s?j} have finite length. Thus,

ni, i = k,··· ,k + M − 1 are nonzero-mean, jointly dis-

tributed Gaussian random variables given x(1). Equation (6)

can be calculated by an evaluation of the Q-function, the tail


probability of the standard Gaussian density:

Pr(ξ_1 → ξ_2 | x^{(1)}) = Q( (λᵀλ + 2λᵀμ) / (2 √(λᵀΣλ)) ),   (8)

where the column vector λ is (y^{(1)}_k − y^{(2)}_k, y^{(1)}_{k+1} − y^{(2)}_{k+1}, ..., y^{(1)}_{k+M−1} − y^{(2)}_{k+M−1})ᵀ, μ is the mean of the random vector n = (n_k, n_{k+1}, ..., n_{k+M−1})ᵀ given x^{(1)}, and Σ is the covariance matrix of n given x^{(1)}. In order to distinguish the noise mean and covariance of the order-1 channel approximation from those of the order-2 channel approximation, we add the subscript 1 to the notation for the former. The mean and the covariance matrix can be expressed as:

μ_1 = H·x^{(1)} − G·x^{(1)} + S′·D^{(1)}·δ^{(1)},   (9)

Σ_1 = σ²_J S′·(D^{(1)})²·S′ᵀ + σ²_W F Fᵀ.   (10)

Here, H, G, S′ and F are Toeplitz matrices formed by the sequences {h̃_i}, {g_i}, {s̃′_i} and the equalizer coefficient sequence. For example, if h̃_i = 0 for i > B or i < −A, then H is an M × (M+A+B) matrix with each row equal to a shifted version of the sequence h̃_i, written in reverse order:

    ⎡ h̃_B   h̃_{B−1}   ···    h̃_{−A}     0      ···       0    ⎤
H = ⎢  0     h̃_B     h̃_{B−1}   ···    h̃_{−A}   ···      0    ⎥   (11)
    ⎢  ⋮              ⋱                                    ⋮    ⎥
    ⎣  0     ···       0       h̃_B    h̃_{B−1}   ···   h̃_{−A} ⎦

The matrices G, S′ and F are constructed similarly. Since the sequences {h̃_i} and {g_i} are generally of different lengths, the data vectors multiplied by H and G will also generally have different lengths. For example, the data bits involved in the multiplication with H are from time index k−B to k+M−1+A, while the data bits involved in the multiplication with G are from time index k−J to k+M−1. To simplify the notation, we omit all the indices of the data bits in equations (9) and (10).

The matrix D^{(1)} is a diagonal matrix whose diagonal elements are the transition values d_i. We use the superscript '(1)' to emphasize that here the transitions are those of the recorded sequence x^{(1)}. The column vector δ^{(1)} contains the net transition shifts for each transition d_i, given that x^{(1)} is recorded. The size of D^{(1)} and the length of δ^{(1)} are determined by the range of s̃′.

We can see from equations (9) and (10) that both the mean and the covariance matrix depend on the transmitted data. However, only the mean is affected by NLTS terms.

2) Order-2 Channel Approximation: In the order-2 channel approximation, the noise at time i can be written as

n_i = Σ_j x^{(1)}_{i−j} h̃_j − Σ_j x^{(1)}_{i−j} g_j + Σ_j d^{(1)}_{i−j} (δ_{i−j} + a_{i−j}) s̃′_j + Σ_j d^{(1)}_{i−j} ((δ_{i−j} + a_{i−j})²/2) s̃″_j + w_i,   (12)

where {s̃″_i} is the sequence of equalized second-derivative samples of the transition response.

Because of the second-derivative term in equation (12), the noise is no longer Gaussian in nature. The joint distribution of the noise is complicated and the exact pairwise error probability cannot be calculated easily. Therefore, we approximate the pairwise error probability using the Q-function as in equation (8). Of course, the mean and the covariance matrix of the noise are different from those corresponding to the order-1 channel approximation. They have the following form:

μ_2 = H·x^{(1)} − G·x^{(1)} + S′D^{(1)}δ^{(1)} + (σ²_J/2) S″D^{(1)}1 + S″D^{(1)}Q^{(1)}δ^{(1)}/2,   (13)

Σ_2 = σ²_J (S′ + S″Q^{(1)})(D^{(1)})²(S′ + S″Q^{(1)})ᵀ + (σ⁴_J/2) S″(D^{(1)})²S″ᵀ + σ²_W F Fᵀ.   (14)

Here, H, G, S′, D^{(1)} and F are the same as in equations (9) and (10). The matrix S″ comes from the second-derivative term. It is a Toeplitz matrix in which each row is a shifted version of the sequence {s̃″_i}, in reverse order. The matrix S″ can be made to have the same size as H and S′, since we can always find A and B such that h̃_j = 0, s̃′_j = 0 and s̃″_j = 0 when j < −A or j > B. The vector 1 in (13) represents the all-ones column vector. The matrix Q^{(1)} in equations (13) and (14) is a diagonal matrix whose diagonal elements are the net transition shifts δ_i for the recorded sequence x^{(1)}.
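One way to realize the Toeplitz construction of equation (11), storing a finite sequence as a dict keyed by its index j (the function name and storage convention are ours):

```python
def toeplitz_rows(seq, A, B, M):
    """Build the M x (M+A+B) matrix of equation (11): row m holds the sequence
    (h_B, ..., h_{-A}), i.e., the taps in reverse index order, shifted m columns right."""
    width = M + A + B
    rows = []
    for m in range(M):
        row = [0.0] * width
        for offset, j in enumerate(range(B, -A - 1, -1)):  # h_B down to h_{-A}
            row[m + offset] = seq[j]
        rows.append(row)
    return rows
```

The matrices G, S′, S″ and F follow by substituting the corresponding sequences.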

Comparing the noise mean and covariance matrix in the

order-1 and order-2 channel approximations, we see that the

NLTS affects only the noise mean in the order-1 channel

approximation, while in the order-2 channel approximation,

NLTS affects both the noise mean and the covariance matrix.

Similarly, the jitter noise variance appears only in the covari-

ance matrix calculation in the order-1 channel approximation

while in the order-2 channel approximation, it appears both

in the mean and the covariance matrix calculation. In both

channel approximations, the AWGN noise variance appears

only in the covariance matrix calculation.
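Once λ, μ and Σ are assembled for either channel approximation, equation (8) reduces to a single Q-function evaluation. A minimal sketch with plain Python lists (function names are ours):

```python
import math

def q_func(x):
    """Tail probability of the standard Gaussian: Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pairwise_error_prob(lam, mu, sigma):
    """Evaluate equation (8): Q((lam'lam + 2 lam'mu) / (2 sqrt(lam' Sigma lam)))."""
    n = len(lam)
    ll = sum(l * l for l in lam)                       # lam' lam
    lm = sum(l * m for l, m in zip(lam, mu))           # lam' mu
    lsl = sum(lam[i] * sigma[i][j] * lam[j]            # lam' Sigma lam
              for i in range(n) for j in range(n))
    return q_func((ll + 2.0 * lm) / (2.0 * math.sqrt(lsl)))
```

With μ = 0 and Σ = I, this collapses to the familiar Q(‖λ‖/2) of the matched-filter bound.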

B. Upper and Lower Bounds on Bit-Error-Rate

For the Viterbi detector, the union bound on the probability

that an error event occurs at time k can be expressed by the

summation of the probabilities of all the possible error events

that start at time k and end at time k + M for any given

recorded data sequence x(1). If we group the error events

according to the error event lengths, the union bound for the

sequence error probability can be written as

P_E < Σ_{M=M_min}^{∞} Σ_{x^{(1)}} Pr(x^{(1)}) Σ_{x^{(2)}: ε_M=(ξ_1→ξ_2)∈E_M} Pr(ε_M | x^{(1)}),   (15)

where Mmin is the minimum length of all the error events,

and EM is the set of error events of length M.

The union bound on the bit error probability can be derived from P_E. Denote by N_b(ε_M) the number of erroneous bits corresponding to the error event ε_M, i.e., the number of bits in which the sequences x^{(1)} and x^{(2)} differ. The bit error probability is the probability that a bit belongs to the set of incorrect bits of an error event. The union bound for the bit error probability is thus given by

P_b < Σ_{M=M_min}^{∞} Σ_{x^{(1)}} Pr(x^{(1)}) Σ_{x^{(2)}: ε_M=(ξ_1→ξ_2)∈E_M} Pr(ε_M | x^{(1)}) N_b(ε_M).   (16)

A lower bound on the BER can be obtained by limiting the summation in the union bound to a set of mutually disjoint error events. The collection of minimum-length error events forms such a set. In a minimum-length error event, the two paths ξ_1 and ξ_2 diverge at time k and remerge after the shortest possible time. This occurs when x^{(2)} differs from x^{(1)} only in the bit at time k. The minimum error event length is therefore determined by the number of trellis states; specifically, M_min = J + 1. For a given recorded sequence x^{(1)}, there is only one such error event, because the inputs are binary and there is only one sequence x^{(2)} that can differ from x^{(1)} only at time k. Therefore, the minimum-length error events are pairwise disjoint, since they correspond to different recorded sequences. It follows that, for each minimum-length error event ε_{M_min}, the number of erroneous bits is N_b(ε_{M_min}) = 1. In the scenarios we considered, the BER is dominated by single-bit errors [4] corresponding to minimum-length error events. We therefore expect the lower bound based upon such events to be fairly good.

Assuming that the recorded data sequences are equiprobable, the lower bound on the bit error probability obtained from minimum-length error events can thus be written as follows:

P_b > (1/2^{L_E}) Σ_{x^{(1)}} Pr(ε_{M_min} = (ξ_1 → ξ_2) | x^{(1)}),   (17)

where L_E is the effective calculation length of the recorded sequence. The effective calculation length is the length of the span within the recorded sequence that figures in the computation of the pairwise error probability. For example, suppose we have two sequences that agree in positions k−T_1 to k+T_2. If the pairwise error probabilities corresponding to a single-bit error at time k with respect to these two sequences differ, then we would need to use an effective calculation length L_E > T_1 + T_2 + 1.

Clearly, the value of LE is determined by the ISI channel

memory, the memory of the noise correlation, the data-

dependent noise memory, and the memory of the NLTS. In

our model of NLTS, the data bits are written sequentially and

the net transition shift of the current transition is affected by

previous transition positions, which in turn have been affected

by the positions of transitions preceding them. Thus, the NLTS

may have, in effect, unbounded memory. This implies that LE

would have to be very large, making the computation of the

lower bound impractical. Consequently, in our calculations we

use a smaller value for LE, resulting in only an estimate of

the lower bound.
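Evaluating the resulting estimate of (17) amounts to averaging the single-bit-error pairwise probability over all 2^{L_E} sequences of length L_E. A sketch, with the per-sequence probability supplied as a callable that encapsulates the Q-function evaluation of (8); this factoring, and the function names, are ours:

```python
from itertools import product

def lower_bound_estimate(L_E, pep):
    """Estimate of the bound in (17): average the minimum-length-event pairwise
    error probability pep(x) over all equiprobable sequences x in {-1,+1}^{L_E}."""
    total = 0.0
    for x in product((-1, 1), repeat=L_E):
        total += pep(x)
    return total / (2 ** L_E)
```

The exponential cost in L_E is why the calculation uses a truncated effective length; as noted above, L_E = 11 already agrees closely with L_E = 15.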

By considering only single-bit error events in the lower bound, we can take advantage of other computational simplifications. For example, the vector λ in this situation is λ = 2x^{(1)}_k (g_0, g_1, ···, g_J)ᵀ. Since x^{(1)}_k is either +1 or −1, we can evaluate in advance vectors such as |λ|ᵀ·H that, when suitably truncated, are used in the calculation of the pairwise error probabilities.

TABLE I
NORMALIZED NLTS FOR FOUR TRANSITION PATTERNS

Transition pattern:   ...001(1)   ...011(1)   ...010(1)   ...000(1)
|NLTS| / B:           20%         12%         8%          0

IV. SIMULATION RESULTS

To compare the BER estimates derived in the previous

section to the Monte-Carlo simulation results presented in [4],

we calculated the estimate of the lower bound for both the

order-1 and order-2 channel approximations, using the same

channel parameters. We use a minimum mean-squared error

(MMSE) equalizer design with monic constraint [13]. The

equalizer is a 21-tap FIR filter and the equalization target has

3 taps. The equalizer output serves as the input to a Viterbi

detector matched to the target.

For the NLTS calculation, we set the medium to soft-underlayer spacing to 20 nm, and the medium thickness is set to 10 nm. The channel bit spacing is 16 nm, corresponding to a linear density of about 1.59 × 10^6 bits/inch. The remanent magnetization to head field gradient ratio is set to 1.5. With these parameters, the NLTS of the isolated dibit pattern is about 20% (absolute value) of the channel bit spacing. In Table I, we list the absolute values of the normalized NLTS for several input patterns.

In Fig. 2, the BER lower bound calculated by equation (17) is shown for the dibit precompensation scheme using the order-1 channel approximation. The x-axis represents the precompensation value normalized by the channel bit spacing. The data sequence x^{(1)} we considered in the calculation spans time k−T_C to k+T_C, where T_C is shown in the legend of Fig. 2. Therefore, the effective calculation length that we use is given by L_E = 2T_C + 1. We can see that the curves for L_E = 11 and L_E = 15 are almost identical. For L_E = 31, because of the computational complexity, we calculated only one point, where no precompensation was used. The result is very close to the results corresponding to L_E = 11 and L_E = 15.

The BER generated by Monte-Carlo simulation is also

shown in the figure. We can see that the lower bound curve for

the order-1 channel approximation is very close to the Monte-

Carlo simulation result. The optimal precompensation value

that minimizes the BER can also be deduced from the lower

bound estimate.

Fig. 3 shows the estimate of the lower bound for the

order-2 channel approximation for dibit precompensation. The

channel parameters are the same as in Fig. 2. The Monte-Carlo

simulation results are also shown.

We can see that the lower bound is not as tight as for

the order-1 channel approximation because of the inaccuracy

introduced by the Gaussianity assumption in calculating the

pairwise error probability. However, the optimal precompen-

sation value obtained by using the estimate is the same as

the one obtained by means of Monte-Carlo simulation. (We

note that, again, the difference between results obtained by

setting LE= 11 and LE= 15 is very small.) Moreover, the