
On iterative equalization, estimation, and decoding

R. Otnes† and M. Tüchler‡

† Norwegian Defence Research Establishment, PO box 25, N-2027 Kjeller, Norway

‡ Institute for Communications Engineering, Munich University of Technology, Arcisstr. 21, D-80290 München, Germany

Abstract—We consider the problem of coded data transmission over an inter-symbol interference (ISI) channel with unknown and possibly time-varying parameters. We propose a low-complexity algorithm for joint equalization, estimation, and decoding using an estimator that is separate from the equalizer. Based on existing techniques for analyzing the convergence of iterative decoding algorithms, we show how to find powerful system configurations. This includes the use of recursive precoders in the transmitter. We derive novel a-posteriori probability equalization algorithms for imprecise knowledge of the channel parameters. We show that the performance loss implied by not knowing the parameters of the ISI channel is entirely a loss in signal-to-noise ratio for which a suitably designed iterative receiver algorithm converges.

I. INTRODUCTION

Many practical communication systems encounter the problem of data transmission over a channel with unknown and possibly time-varying parameters, such as the signal-to-noise ratio (SNR), the delays and phases in a multi-path channel, or the fading amplitude and phase in a wireless channel. We assume a baseband symbol-spaced receiver front-end, where the transmit filter, the channel, and the receive filter are approximated by a discrete-time linear filter with a length-M channel impulse response (CIR). The data is assumed to be protected by an error-correction code (ECC). A standard approach to the arising detection problem in the receiver splits the global problem into the three tasks estimation, equalization, and decoding. Estimation is performed blindly or non-blindly using algorithms such as least-squares (LS), least-mean-square (LMS), recursive-least-squares (RLS), or Kalman estimation (KEST) [1]. The effects of ISI are addressed by equalization, e.g., using a linear equalizer (LE), a decision feedback equalizer (DFE) [2], or a method minimizing the sequence- or bit-error rate (BER), such as the BER-optimal BCJR algorithm [3] maximizing the a-posteriori probabilities (APPs) of the data. A wide range of decoding algorithms for ECCs exists, among which we focus on APP decoding for convolutional codes.

An optimal detection approach would be joint decoding and equalization, which treats the ECC encoder and the ISI channel as a concatenated code. However, the computational burden is most often prohibitive, especially when encoder and ISI channel are separated by an interleaver. A successful approach to approximately perform joint decoding and equalization is iterative (Turbo) equalization and decoding [4], which has been studied quite extensively [5–7] when SNR and CIR are precisely known to the receiver. In case they are not known and/or possibly time-varying, some methods attempt to perform estimation and equalization simultaneously (jointly), e.g., by extending the equalizer trellis [8–10]. Others exploit parametric dependencies specifying how the channel varies, e.g., in the context of joint estimation and equalization alone [11,12] or in the context of Turbo equalization [13]. However, these approaches rely on trellis-based equalization algorithms, i.e., their computational complexity is intractable for large M or for higher-order signal constellations. On the other hand, since (nearly) joint estimation and equalization is performed, they perform very well, in particular for fast-varying channels. We propose in this paper a much simpler algorithm for (suboptimal) joint equalization, estimation, and decoding.

The estimator is separated from the equalizer, such that we can freely choose equalization, estimation, and decoding algorithms with varying complexity. The estimator is allowed to incorporate reliability information communicated between equalizer and decoder, e.g., as in adaptive equalization, where hard decisions from the equalizer are used to track a time-varying channel, or as in [14,15], where hard decisions fed back from the decoder are used for re-estimation. We avoid these hard decisions and perform soft channel estimation, which has already been applied in the context of LMS and RLS estimation [16–18] or KEST [19]. It is shown there that soft channel estimation outperforms approaches using hard decisions, since the latter suffer from error propagation.

We extend these results and show that the equalizer, which processes imperfect channel parameters from the estimator, must account for their noisiness. We outline a novel APP equalization algorithm for imprecise knowledge of the CIR, which can be extended to other equalization algorithms. Based on that, powerful concatenated codes are designed using the EXIT chart tool [20]. We investigate the application of recursive precoding [21] in the transmitter when the CIR must be estimated and show why it significantly improves the performance of iterative equalization and decoding without an increase in complexity.

II. SYSTEM DEFINITION

Consider the communication system in Fig. 1. A block of K data bits is encoded with the "outer" encoder of rate R = K/N to N code bits c = (c_1, ..., c_N), c_n ∈ F_2. The interleaver permutes the bits in c to x = (x_1, ..., x_N). Inserting T training bits t_n into the stream of N code bits x_n according to a predefined schedule yields a block x̃ = (x̃_1, ..., x̃_Ñ) of bits x̃_n, where Ñ = N + T. The following rate-1 mapper maps q-tuples x̃_k = (x̃_{qk−q+1}, ..., x̃_{qk}), k = 1, ..., Ñ/q, of bits x̃_n to symbols y_k ∈ C from the 2^q-ary signal constellation S (with average power P) using the bijective function S(·). The mapper includes a precoder defined by the state-space equations

  s_{k+1} = s_k A + x̃_k B,
  ȳ_k = s_k C^T + x̃_k D,    y_k = S(ȳ_k),

where ȳ_k = (ȳ_{qk−q+1}, ..., ȳ_{qk}). The length-m vector s_k is the precoder state at time step k. The dimensions of A, B, C, D are m×m, q×m, q×m, q×q, respectively. The precoder state is initially zero, i.e., s_0 = (0 0 ··· 0).
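As a sketch, the state-space recursion above can be written out directly over F_2 (all arithmetic mod 2). The matrices below are illustrative placeholders (an accumulator-style recursive precoder), not the matrices used later in the paper; they do satisfy the merge condition m = q, A = C^T, B = D discussed below.

```python
import numpy as np

q, m = 3, 3                        # bits per symbol, precoder memory (m = q)
A = np.eye(m, dtype=int)           # placeholder m x m
B = np.eye(q, m, dtype=int)        # placeholder q x m
C = np.eye(q, m, dtype=int)        # placeholder q x m, so C.T is m x q
D = np.eye(q, dtype=int)           # placeholder q x q

def precode(x_tuples):
    """Run s_{k+1} = s_k A + x_k B, ybar_k = s_k C^T + x_k D over F_2."""
    s = np.zeros(m, dtype=int)     # initial state s_0 = (0 0 ... 0)
    out = []
    for x in x_tuples:
        ybar = (s @ C.T + x @ D) % 2   # precoded bit tuple, later mapped by S(.)
        s = (s @ A + x @ B) % 2        # state update
        out.append(ybar)
    return np.array(out)

bits = np.array([[1, 0, 1], [0, 1, 1], [1, 1, 0]])
print(precode(bits))
```

With these placeholder matrices the precoder reduces to a running mod-2 accumulator, i.e., ȳ_k = ȳ_{k−1} + x̃_k.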

0-7803-7802-4/03/$17.00 (C) 2003 IEEE


Fig. 1. Coded data transmission system applying iterative equalization, estimation, and decoding in the receiver. (Block diagram: encoder, interleaver, and mapper/precoder in the transmitter; ISI channel; equalizer, channel estimator, deinterleaver/interleaver, and decoder in the receiver. The constellation shown is 8PSK with Gray mapping.)

We use the example alphabet S depicted in Fig. 1, an 8PSK constellation, to illustrate our results. We apply precoding to account for the fact that the "inner" encoder in a serially concatenated system should have rate 1 [22,23] (no extra redundancy) and be recursive [22–24] to approach the capacity of the channel as closely as possible. Here, the "inner" encoder is the cascade of the mapper and the ISI channel. Transmitted over the ISI channel are Ñ/q symbols y_k, yielding the total rate

  R_tot = R · N/Ñ = K/(N+T)

of the communication system. The T training bits t_n are chosen such that T/q fixed training symbols are among the Ñ/q symbols y_k, i.e., the t_n depend on the data bits x_n when precoding is applied. Independent and identically distributed (i.i.d.) complex noise samples w_k with probability density function (PDF) f(w) = 1/(πσ²) · exp(−|w|²/σ²), w ∈ C, are added at the receiver front end. Received are the symbols

  z_k = w_k + Σ_{l=0}^{M−1} h_{k,l} y_{k−l},    h_k = (h_{k,0} ··· h_{k,M−1}),    (1)

where h_k is the CIR at time step k. The equalizer and the decoder communicate log-likelihood ratios (LLRs) as reliability information [25]. The equalizer outputs the LLRs Le(x_n), which are used after deinterleaving as a-priori LLRs L(c_n) = ln(Pr{c_n=0}/Pr{c_n=1}) on the c_n by the decoder. An APP-based equalizer outputs Le(x_n) = L(x_n|z, L(x)) − L(x_n), where L(x_n|z, L(x)) is the a-posteriori LLR defined by

  L(x_n|z, L(x)) = ln [Pr{x_n=0|z, L(x)} / Pr{x_n=1|z, L(x)}]
                 = ln [Σ_{x∈X: x_n=0} f(z|x) Pr{x} / Σ_{x∈X: x_n=1} f(z|x) Pr{x}],    (2)

where z = (z_1, ..., z_{Ñ/q}), L(x) = (L(x_1) ··· L(x_N)), and X is the set of valid code words c interleaved to x. In turn, the decoder outputs the LLRs Ld(c_n), which are used after interleaving as a-priori LLRs L(x_n) on the x_n by the equalizer. An APP-based decoder would compute Ld(c_n) = L(c_n|L(c)) − L(c_n). An overview of how to compute Le(x_n) for other types of equalizers is presented for example in [7].

The minimal trellis describing a general length-M ISI channel has q^{M−1} states, where the state at time step k is given by (y_{k−1}, ..., y_{k−M+1}) [3]. An APP equalizer may decode the precode using the same trellis without complexity overhead when s_k is part of the channel state. This is achieved for example with the parameter choice m = q, A = C^T, B = D, such that ȳ_k = ȳ_{k−1} A + x̃_k B holds. We restrict ourselves to such precoders to keep decoding for the precode as simple as possible. Using a linear equalizer and precoding requires an extra pre-decoding operation. This scenario is not addressed here due to space limitations and we focus entirely on APP equalization.
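The channel model of eq. (1) is straightforward to simulate. The sketch below uses BPSK symbols and illustrative parameters in place of the paper's 8PSK system; the CIR is the time-invariant example from the results section.

```python
import numpy as np

def isi_channel(y, h, w):
    """Eq. (1): z_k = w_k + sum_{l=0}^{M-1} h_l * y_{k-l}, y_k = 0 for k < 0."""
    M = len(h)
    return np.array([w[k] + sum(h[l]*y[k-l] for l in range(M) if k >= l)
                     for k in range(len(y))])

h = np.array([0.407, 0.815, 0.407])      # example CIR from the results section
rng = np.random.default_rng(0)
y = rng.choice([1.0, -1.0], size=10)     # placeholder BPSK symbol block
sigma2 = 0.32                             # channel noise variance sigma^2
# complex AWGN with PDF f(w) = 1/(pi*sigma2) * exp(-|w|^2/sigma2)
w = np.sqrt(sigma2/2)*(rng.standard_normal(10) + 1j*rng.standard_normal(10))
z = isi_channel(y, h, w)                 # received symbols z_1 ... z_{N~/q}
```

The loop is simply a truncated convolution, so `np.convolve(y, h)[:len(y)] + w` produces the same block.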

III. ESTIMATION

The equalizer needs estimates ĥ_k of h_k for each k. Let H = (h_1 ··· h_{Ñ/q}) and Ĥ = (ĥ_1 ··· ĥ_{Ñ/q}) be length-MÑ/q vectors containing the CIRs and their estimates for all time steps. For simplicity of the derivations, we assume that the channel noise variance σ² is known to the receiver with sufficient accuracy. We propose to use a separate estimator calculating the ĥ_k from z_k, the known bits t_n, and possibly L(y_n|z, L(x)) from the equalizer or L(c_n|L(c)) from the decoder produced in the previous iteration. The estimator does not need to use the less reliable extrinsic LLRs Le(x_n) or Ld(c_n), which are mandatory for iterating between equalizer and decoder. Suitable soft estimation algorithms using LLRs have been derived in [16,17] (LMS, RLS estimation) and [19] (KEST). They all transfer the bit-oriented LLRs into "soft symbols" (means) of the symbols y_k and not into hard decisions from S. For example, without precoder, i.e., when ȳ_k = x̃_k, the soft symbols are given by

  E{y_k|cond} = Σ_{i=(i_1···i_q)∈F_2^q} S(i) · Pr{ȳ_k = i|cond},

where cond stands for L(y_n|z, L(x)) or L(c_n|L(c)) and Pr{ȳ_k = i|cond} = Π_{j=1}^{q} Pr{ȳ_{kq−q+j} = i_j|cond}. We note that computing E{y_k|L(y_n|z, L(x))} in general and computing E{y_k|L(c_n|L(c))} without precoder is simple, but computing E{y_k|L(c_n|L(c))} given a memory-m precoder is cumbersome. In the results section we avoid this parameter constellation and leave a satisfactory solution to this problem open.

In the first iteration, the estimator has only the training bits t_n available and it may calculate estimates Ĥ with little reliability. The equalizer can support the estimator in this case by producing intermediate hard decisions or LLRs on the y_k within some delay. However, this delay could be as large as the entire block, e.g., for APP equalizers. The precomputation of approximate hard decisions or LLRs may be a solution, but it should be avoided, since wrong hard decisions or incorrect (so-called inconsistent [26]) LLRs cause estimation errors followed by errors in equalization, then in decoding, and finally in the entire iterative receiver algorithm. Therefore, we propose to wait until the iterative decoding algorithm offers consistent LLRs, i.e., first after one iteration. However, there is another threat of inconsistency. The equalizer has to compute Le(x_n) from Ĥ. Assuming that the estimates are correct, i.e., that Ĥ = H, causes erroneous LLRs Le(x_n), since Ĥ is a distorted version of H.

One way to overcome this deficiency is to merge estimator and equalizer as in [8–13] to perform estimation and equalization jointly. However, this usually causes a complexity problem, since these joint estimator-equalizers are trellis-based, requiring more than the q^{M−1} states of the conventional APP

equalizer. Even for reduced-state approximations, it is the additional need for estimation which causes the joint estimator-equalizer trellis to have more states than that of an APP equalizer knowing H precisely. Also, simpler equalization strategies, e.g., linear equalization, cannot be applied. This is why we separate out the estimator and instead specify the statistics

  μ_{k,l} = E_{Ĥ|H}{d_{k,l}}  and  ν_{k,k',l} = E_{Ĥ|H}{d_{k,l} d*_{k',l}} − μ_{k,l} μ*_{k',l}

of the mismatch d_{k,l} = ĥ_{k,l} − h_{k,l} between estimate and true value over the PDF of Ĥ given H. This knowledge is used by the equalizer computing output LLRs Le(x_n) for imprecise (noisy) knowledge of the CIR. Thus, we need to derive a new instance of the APP equalization algorithm.

Both steps outlined above, i.e., waiting for valid LLRs and performing equalization such that the LLRs Le(x_n) are consistent, might degrade the receiver performance for early iterations. However, as shown later, properly designed decoding algorithms do converge after a sufficient number of iterations and outperform systems with a better initial performance. For example, in a receiver using one-time equalization and decoding, a DFE outperforms an LE due to its non-linear (hard-decision-based) processing of past equalized symbols, but the output LLRs are inconsistent. The LE outputs poor but consistent LLRs in early iterations, but it assures convergence of the iterative receiver algorithm, whereas a DFE-based system performs initially better but fails after convergence [7].
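The soft-symbol mapping of Section III (per-bit LLRs to the mean E{y_k|cond}) can be sketched as follows. The 8PSK Gray labeling below is an assumed example and need not match the labeling of Fig. 1; the bit probabilities follow from the LLR convention L = ln(Pr{bit=0}/Pr{bit=1}).

```python
import numpy as np

# assumed 8PSK Gray labeling: adjacent constellation points differ in one bit
labels = [(0,0,0), (0,0,1), (0,1,1), (0,1,0), (1,1,0), (1,1,1), (1,0,1), (1,0,0)]
S = {b: np.exp(2j*np.pi*n/8) for n, b in enumerate(labels)}  # unit-power 8PSK

def soft_symbol(L):
    """E{y_k|cond} = sum_i S(i) * Pr{i|cond}, Pr factorizing over the q bits."""
    p0 = 1.0/(1.0 + np.exp(-np.asarray(L, dtype=float)))     # Pr{bit_j = 0}
    mean = 0.0 + 0.0j
    for bits, s in S.items():
        pr = np.prod([p0[j] if bits[j] == 0 else 1.0 - p0[j] for j in range(3)])
        mean += s*pr
    return mean

print(soft_symbol([0.0, 0.0, 0.0]))   # uninformative LLRs give mean 0
```

With strongly reliable LLRs the soft symbol approaches the corresponding constellation point, while uninformative LLRs average all 8PSK points to zero.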

IV. EQUALIZATION

We derive here an APP equalization algorithm for imprecise channel knowledge. Other algorithms are treated in [17]. For a known CIR, the expressions f(z|x) in (2) are proportional to

  f(z|x) ∼ Π_{k=1}^{Ñ/q} exp(−|z_k − h_k [y_k ··· y_{k−M+1}]^T|² / σ²),

where the y_k are computed given the hypothesis x. Obviously, f(z|x) can be factored into Ñ/q terms depending on at most M symbols y_k, which yields that the APP rule (2) can be performed on a q^{M−1}-state trellis [27]. Note that any factor of f(z|x) not depending on x can be neglected for the APP rule.

A general algorithm for APP equalization given the imprecisely known CIRs ĥ_k is still unknown. That is, we need a rule to compute the quantity f(z|x, Ĥ). The imperfectness of the estimates is specified via their statistics μ_{k,l} and ν_{k,k',l}. The equalizer incorporating this extra knowledge is expected to produce less reliable output LLRs Le(x_n), which is a wanted effect, since it assures that the LLRs Le(x_n) are consistent. Computing f(z|x, Ĥ) requires a detailed derivation, which is beyond the scope of this paper. We present here only partial results from [17] for the case that the ĥ_{k,l} at different taps l are mutually independent. Let r_l = (h_{1,l}, ..., h_{Ñ/q,l}) and r̂_l = (ĥ_{1,l}, ..., ĥ_{Ñ/q,l}) be the l-th tap coefficients of the CIR and its estimates for each time step k. Invoking the independence assumption yields f(r_l, r_{l'}) = f(r_l) f(r_{l'}) for any l ≠ l'. For simplicity of the derivation, we assume that the estimator is unbiased, i.e., μ_{k,l} = 0 for all k and l, and that the correlations ν_{k,k',l} are identical for each tap, i.e., ν_{k,k',0} = ν_{k,k',1} = ... = ν_{k,k',M−1} for each k and k'. We assume that the noise distorting Ĥ is complex Gaussian, which yields the following PDF f(r̂_l|r_l):

  f(r̂_l|r_l) = exp(−(r̂_l − r_l) Σ^{−1} (r̂_l − r_l)^H) / (π^{Ñ/q} det(Σ)),

where ν_{k,k',l} is the entry in the k-th column and k'-th row of Σ. We rewrite f(z|x, Ĥ) to f(z, Ĥ|x)/f(Ĥ), where f(z, Ĥ|x) = ∫ f(z, Ĥ, H|x) dH. Factoring f(z, Ĥ, H|x) yields finally

  f(z|x, Ĥ) ∼ ∫ f(z|x, H) f(Ĥ|H) f(H) dH,    (3)

where f(Ĥ|H) f(H) = Π_{l=0}^{M−1} f(r̂_l|r_l) f(r_l). The PDF f(r_l) contains possibly available information about the distribution of the channel taps, e.g., a uniform, Rayleigh, or Rice distribution. This PDF also governs the correlations of the taps over time.

We consider here only one example, the time-invariant channel with h_1 = h_2 = ... = h_{Ñ/q}. Assuming a uniform distribution on the channel taps, i.e., f(h_{k,l}) = (πc²)^{−1} for |h_{k,l}| ≤ c, and 0 elsewhere, where c is large, we can solve (3) and find

  f(z|x, Ĥ) ∼ exp(−(z − ĥ_e Y) A^{−1} (z − ĥ_e Y)^H) / det(I_M + YY^H/(ω̄σ²)),

where A = σ² I_{Ñ/q} + Y^H Y/ω̄, I_i is the i×i identity matrix, and

  Y = [ y_1   y_2   ···   y_{Ñ/q}
        0     y_1   ···   y_{Ñ/q−1}
        ⋮           ⋱     ⋮
        0     0     ···   y_{Ñ/q−M+1} ].

The weight ω_k is the sum of the Ñ/q entries in the k-th column of Σ^{−1}, and ω̄ = Σ_{k=1}^{Ñ/q} ω_k. The effective estimate ĥ_e = (ĥ_{e,0} ··· ĥ_{e,M−1}) is the weighted sum of all Ñ/q estimates ĥ_k, i.e., ĥ_e = 1/ω̄ · Σ_{k=1}^{Ñ/q} ω_k ĥ_k. Thus, it is suboptimal to perform APP equalization on a trellis where the metrics in the k-th trellis section (at time step k) are computed using the estimates ĥ_k. Instead, ĥ_e should be used for all sections, which is intuitively correct as the channel is time-invariant.

The above expression for f(z|x, Ĥ) can be approximated by replacing YY^H with its average P·Ñ/q·I_M and Y^H Y with its average P·M·I_{Ñ/q} over all q^{Ñ/q} possible sequences x:

  f(z|x, Ĥ) ≈ exp(−‖z − ĥ_e Y‖² / (σ² + PM/ω̄)),

which yields an APP rule implementable on a q^{M−1}-state trellis using ĥ_e and the increased effective channel noise variance σ² + PM/ω̄. Thus, the APP equalizer accounts for the imperfectly known channel parameters by decreasing the reliability of the output LLRs Le(x_n), and it utilizes a diversity effect by averaging over the Ñ/q available estimates ĥ_{k,l} using suitable weights ω_k. With this framework it is also possible to solve (3) for other channel tap distributions [17].

Consider the following example. Suppose we transmit Ñ/q = 58 symbols y_k (T/q = 15 training symbols plus N/q = 43 data symbols) over a time-invariant length M = 3 ISI channel. An RLS-based estimator with forgetting factor λ = 0.99 attempts to estimate the CIR using the known symbols y_k, k = 1, ..., 15, and the means E{y_k|cond}, k = 16, ..., 58. An analytical expression of the tap error variance ν_{k,k,l} per tap ĥ_{k,l} for this estimator was derived in [16]. The variance ν_{k,k,l} is majorly affected by the SNR P/σ² and the average energy Ē_y = 1/43 · Σ_{k=16}^{58} |E{y_k|cond}|² of the soft symbols. Fig. 2 shows the profile of the normalized weights ω_k/ω̄ and the scaled tap error variance 15ν_{k,k,l} versus the time index k for different Ē_y at 5 dB P/σ². The diversity effect by averaging over the ĥ_k is small, since the estimates are strongly correlated. This is because λ approaches 1, for which RLS estimation turns into a non-time-sequential LS estimation algorithm. The effective estimate ĥ_e is merely a combination of ĥ_15 and ĥ_58 obtained using only the training or all 58 symbols, respectively. The weights ω_15 and ω_58 depend on the reliability of ĥ_15 and ĥ_58, i.e., on ν_{15,15,l} and ν_{58,58,l}. Computing the sum ω̄, e.g., ω̄ = 54 (Ē_y = 0.1), ω̄ = 72 (Ē_y = 0.6), ω̄ = 190 (Ē_y = 1.0), reveals that the effective noise variance σ² + PM/ω̄ = 0.32 + 3/ω̄, i.e., 0.37 (Ē_y = 0.1), 0.36 (Ē_y = 0.6), 0.33 (Ē_y = 1.0), has not risen significantly. This analysis can be redone for other estimation algorithms, other estimator parameters, and finally a time-varying channel, too. Because of space limitations we omit results here.
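The weighting step above (column sums of Σ^{−1}, the effective estimate ĥ_e, and the effective noise variance σ² + PM/ω̄) can be sketched numerically. The estimation-error covariance Σ and the estimates below are toy placeholders, not the RLS values of the example.

```python
import numpy as np

K, M = 6, 3                                   # N~/q time steps, CIR length
rng = np.random.default_rng(1)
hhat = rng.standard_normal((K, M))            # placeholder CIR estimates hhat_k
# toy estimation-error covariance over time (assumed identical for every tap l)
Sigma = 0.1*np.eye(K) + 0.05*np.ones((K, K))
omega = np.linalg.inv(Sigma).sum(axis=0)      # omega_k: k-th column sum of Sigma^{-1}
wbar = omega.sum()                            # wbar = sum_k omega_k
hhat_e = (omega @ hhat) / wbar                # effective estimate: weighted average

P, sigma2 = 1.0, 0.32                         # symbol power, channel noise variance
sigma2_eff = sigma2 + P*M/wbar                # increased effective noise variance
print(hhat_e, sigma2_eff)
```

Strongly correlated estimates (large off-diagonal entries in Σ) shrink ω̄ and thus reduce the diversity gain, as observed in the RLS example.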

Fig. 2. Weight distribution for RLS estimation: normalized tap weights and scaled tap error variance versus time index k (training symbols k ≤ 15, data symbols k > 15) for Ē_y = 0.1, 0.6, and 1.0.

V. SYSTEM DESIGN

The LLRs communicated between equalizer and decoder can be modelled as outcomes of the random variables (r.v.'s) Λ_e, modelling the LLRs Le(x_n), and Λ_d, modelling the LLRs Ld(c_n). The outcomes of Λ_d and Λ_e are distributed with the PDFs fd(l|c), conditioned on c_n = c, and fe(l|x), conditioned on x_n = x, respectively, which both vary for each iteration. Unfortunately, analyzing these PDFs is extremely difficult. As a simplification, one could observe only a single parameter of the PDFs after each iteration, e.g., the mutual information Id = I(Λd; C) ∈ [0,1] between Λd and the r.v. C, whose outcomes are the bits c_n [20]. Similarly, Ie = I(Λe; X) is defined on fe(l|x). The evolution of Id and Ie over the iterations is the trajectory of the decoding algorithm. Both Id and Ie can be calculated either via histograms of the output LLRs Le(x_n) or Ld(c_n) as in [20] or, when fd(l|c) and fe(l|x) satisfy certain constraints, via a time average of a function of the output LLRs [23], e.g.,

  Id = 1 − (1/N) Σ_{n=1}^{N} log₂(1 + exp(−μ(c_n) Ld(c_n))),

where μ(0) = +1 and μ(1) = −1. To predict the behavior of the decoding algorithm without actually running the algorithm, equalizer and decoder are analyzed separately via their transfer functions Ie = Te(Iin) and Id = Td(Iin), which map any mutual information Iin ∈ [0,1], specifying a particular input LLR distribution, to Ie or Id. Since the input PDF, which is fd(l|c) or fe(l|x) due to the feedback, is not accessible, a Gaussian distribution yielding the same Iin is used [20]. This analysis is accurate, i.e., the decoders output the same Id or Ie when fed with LLRs distributed either with fd(l|c) and fe(l|x) or with the Gaussian PDF at the same Iin, only for large N or a finite number of iterations, respectively [20].
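The time-average formula for Id can be sketched directly; it requires only the output LLRs and the transmitted bits.

```python
import numpy as np

def mutual_info(llrs, bits):
    """I_d = 1 - (1/N) sum_n log2(1 + exp(-mu(c_n) * Ld(c_n))), mu(0)=+1, mu(1)=-1."""
    mu = np.where(np.asarray(bits) == 0, 1.0, -1.0)
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-mu*np.asarray(llrs, dtype=float))))

print(mutual_info([0.0, 0.0], [0, 1]))     # uninformative LLRs -> 0.0
print(mutual_info([30.0, -30.0], [0, 1]))  # strongly correct LLRs -> close to 1.0
```

Zero LLRs carry no information (I = 0), while large LLRs of the correct sign drive the estimate toward 1.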

We use properties of this method, called the extrinsic information transfer (EXIT) chart, to select system parameters such as the outer code and the number of training symbols T. Let Ad = ∫₀¹ Td(i) di be the area under Td(i) and Ae be the area under Te(i). Moreover, the law ∫₀¹ T(i) di = 1 − ∫₀¹ T^{−1}(i) di holds for any transfer function in the EXIT chart. Given an APP-based decoder for the rate-R outer code, we have approximately Ad = 1 − R [23], a property which was proven under somewhat simplified conditions in [22]. We say that the iterative decoding algorithm converges whenever Te(i) > Td^{−1}(i) for all i ∈ [0, 1−ε), where ε is small. This implies that Ae > (1 − Ad) or Ae > R. In fact, Ae can be related to the capacity of the underlying communication channel [22,23]. We use the latter law to select system parameters by precomputing some Te(i) for a number of parameter constellations and picking those for which Ae is maximal.

VI. RESULTS

We encoded 510 data bits to N = 1024 code bits c_n using a rate-1/2 convolutional code with generator [1+D², 1+D+D²]. The c_n are S-random interleaved (S = 17) and partitioned into 8 blocks of 128 bits. Transmitted are 8 frames of 43 + T/3 symbols y_k mapped from T training and 128 data bits using either no precoder, i.e., ȳ_k = x̃_k, or the precoder given by

  A = C^T = [ 1 0 0        B = D^T = [ 1 0 1
              1 0 0   and              0 0 0
              0 0 0 ]                  1 0 0 ].

The length M = 3 ISI channel is assumed to be time-invariant and given by the impulse response [0.407 0.815 0.407]. The receiver estimates the channel for each frame of 43 + T/3 received symbols using RLS estimation. We chose this example for simplicity. In fact, the derived algorithms exhibit their power especially in scenarios where the CIR is time-varying. Because of space limitations we have to omit results here.

Fig. 3 depicts the equalizer transfer function Te(i) for both precoder types and two receiver strategies (one-time estimation, iterative estimation) at 8 dB Eb/N0, defined as P/(qσ²Rtot), for four different T/3 ∈ {7, 15, 31, 63}. We pick T/3 = 15, since Ae is maximal for both receiver strategies, i.e., this T/3 is the best trade-off between estimate reliability and rate loss. Iterative estimation outperforms one-time estimation in particular for short T, since it utilizes the 128 data bits for estimation as well. Note that Ae is the largest achievable rate R of the outer code such that decoding convergence is achieved.
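The area properties and the convergence condition can be checked numerically. The transfer functions below are synthetic toy curves (with Td_inv the analytic inverse of the toy Td), not measured EXIT characteristics of this system.

```python
import numpy as np

def area(f, x):
    """Trapezoidal approximation of the integral of f over the grid x."""
    return float(np.sum((f[1:] + f[:-1])/2.0 * np.diff(x)))

i = np.linspace(0.0, 1.0, 201)
Te = 0.3 + 0.7*i**2          # toy equalizer transfer function
Td = np.sqrt(i)              # toy decoder transfer function
Td_inv = i**2                # its inverse; law: area(Td) = 1 - area(Td_inv)

Ae = area(Te, i)             # area under Te (largest achievable outer rate R)
Ad = area(Td, i)             # area under Td

eps = 0.05
mask = i < 1.0 - eps
open_tunnel = np.all(Te[mask] > Td_inv[mask])   # Te(i) > Td^{-1}(i): tunnel open?
print(Ae, Ad, open_tunnel)
```

Here Te lies strictly above Td_inv on [0, 1−ε), so the decoding tunnel is open and iterative decoding would converge.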


Fig. 3. Achievable performance of 3 system configurations at 8 dB Eb/N0. (Left: decoder transfer function Td(i) and equalizer transfer functions Te(i) with and without precoding; right: Ae versus T/3 ∈ {7, 15, 31, 63}, example T/3 = 15. No marker: η̄ and h[k] are perfectly known; ×: η̄ and h[k] are estimated once using the T/3 training symbols; ◦: η̄ and h[k] are estimated iteratively.)

For example, for 8 dB Eb/N0 we find that decoding convergence is possible

up to R < 0.63 using iterative estimation with T/q = 15. To

achieve this rate, Td(i) should be matched to Te(i), e.g., using irregular codes [23]. Observe that Ae is unaffected by the chosen precoder, but the error shoulder after decoding convergence disappears with increasing N when a precoder is used, since Te(1) = 1. We note that specifying Te(i) is cumbersome when the CIR is unknown and possibly time-varying. To design a concatenated system, e.g., to select a suitable Td(i), a representative Te(i) averaged over all possible CIRs based on f(H) could be obtained. Also, the transmitter could adjust its parameters using feedback from the receiver. Fig. 4 shows the BER performance of the systems described above for both precoders and both estimation strategies. For iterative estimation, the estimator incorporates the LLRs produced by the equalizer in the previous iteration, i.e., L(y_n|z, L(x)). We see that, first, iterative estimation closely approaches the performance when the CIR is known to the receiver, and, second, no significant error shoulder occurs with precoding. Again, we note that the achievable gains compared to one-time estimation are larger for time-varying channels, but we have to omit results here.

Fig. 4. BER performance of iterative equalization, estimation, and decoding with (bottom plots) and without (upper plots) a precoder in the transmitter. (BER versus Eb/N0 after one-time equalization and decoding and after 5 iterations; curves for known h[k], one-time estimation, and iterative estimation.)

REFERENCES

[1] S. Haykin, Adaptive Filter Theory, 3rd Ed. Upper Saddle River, New Jersey: Prentice Hall, 1996.
[2] J. Proakis, Digital Communications, 3rd Ed. McGraw-Hill, 1995.
[3] L. R. Bahl et al., "Optimal decoding of linear codes for minimizing symbol error rate," IEEE Trans. on IT, vol. 20, pp. 284–287, Mar 1974.
[4] C. Douillard et al., "Iterative correction of intersymbol interference: Turbo equalization," European Trans. on Telecomm., vol. 6, pp. 507–511, Sep/Oct 1995.
[5] A. Anastasopoulos and K. Chugg, "Iterative equalization/decoding for TCM for frequency-selective fading channels," in Proc. 31st Asilomar Conf. on Signals, Systems & Comp., vol. 1, pp. 177–181, Nov 1997.
[6] A. Glavieux, C. Laot, and J. Labat, "Turbo equalization over a frequency selective channel," in Proc. 2nd Intern. Symp. on Turbo Codes, Brest, France, pp. 96–102, Sep 1997.
[7] M. Tüchler, R. Koetter, and A. Singer, "Turbo equalization: principles and new results," IEEE Trans. on Comm., pp. 754–767, May 2002.
[8] L. Davis, I. Collings, and P. Hoeher, "Joint MAP equalization and channel estimation for frequency-selective and frequency-flat fast-fading channels," IEEE Trans. on Comm., vol. 49, pp. 2106–2114, Dec 2001.
[9] X. Cheng and P. Hoeher, "Blind turbo equalization for wireless DPSK systems," in Proc. Intern. ITG Conf. on Source and Channel Coding, pp. 371–378, Jan 2002.
[10] M. Peacock, I. Collings, and I. Land, "Design rules for adaptive turbo equalization in fast-fading channels," in Proc. Intern. Conf. Communication Systems & Networks, Valencia, Spain, Sep 2002.
[11] K. Chugg and A. Polydoros, "MLSE for an unknown channel - part I: optimality considerations," IEEE Trans. on Comm., vol. 44, pp. 836–846, July 1996.
[12] E. Baccarelli and R. Cusani, "Combined channel estimation and data detection using soft statistics for frequency-selective fast-fading digital links," IEEE Trans. on Comm., vol. 46, pp. 424–427, April 1998.
[13] A. Anastasopoulos and K. Chugg, "Adaptive soft-in soft-out algorithms for iterative detection with parametric uncertainty," IEEE Trans. on Comm., vol. 48, pp. 1638–1649, Oct 2000.
[14] N. Nefedov and M. Pukkila, "Turbo equalization and iterative (turbo) estimation techniques for packet data transmission," in Proc. 2nd Intern. Symp. on Turbo Codes, Brest, France, pp. 423–426, 2000.
[15] P. Strauch, C. Luschi, and A. Kuzminskiy, "Iterative channel estimation for EGPRS," in Proc. IEEE Vehicular Techn. Conf. Fall 2000, Boston, Sep 2000.
[16] M. Tüchler, R. Otnes, and A. Schmidbauer, "Performance evaluation of soft iterative channel estimation in turbo equalization," in Proc. IEEE Intern. Conf. on Comm., pp. 53–60, May 2002.
[17] C. Kuhn, "Iterative Kanalschätzung, Entzerrung, und Dekodierung für den EDGE Standard," Master's thesis, Munich University of Technology, 2002; email to: michael.tuechler@ei.tum.de.
[18] R. Otnes and M. Tüchler, "Soft iterative channel estimation for turbo equalization: comparison of channel estimation algorithms," in Proc. IEEE Intern. Conf. on Communication Systems, Singapore, Nov 2002.
[19] S. Song, A. Singer, and K. Sung, "Turbo equalization with an unknown channel," in Proc. IEEE Intern. Conf. on Acoustics, Speech, and Signal Proc., May 2002.
[20] S. ten Brink, "Convergence behaviour of iteratively decoded parallel concatenated codes," IEEE Trans. on Comm., vol. 49, pp. 1727–1737, Oct 2001.
[21] I. Lee, "The effect of a precoder on serially concatenated coding systems with an ISI channel," IEEE Trans. on Comm., vol. 49, pp. 1168–1175, July 2001.
[22] A. Ashikhmin, G. Kramer, and S. ten Brink, "Extrinsic information transfer functions: A model and two properties," in Proc. CISS, Princeton, March 2002.
[23] M. Tüchler and J. Hagenauer, "EXIT charts of irregular codes," in Proc. CISS, Princeton, March 2002.
[24] S. Benedetto et al., "Serial concatenation of interleaved codes: performance analysis, design, and iterative decoding," IEEE Trans. on Information Theory, vol. 44, pp. 909–926, May 1998.
[25] J. Hagenauer, E. Offer, and L. Papke, "Iterative decoding of binary block and convolutional codes," IEEE Trans. on Information Theory, pp. 429–445, March 1996.
[26] M. Tüchler, "Design of serially concatenated systems depending on the block length," in Proc. ICC 2003, Anchorage, U.S.A., May 2003.
[27] S. Aji and R. McEliece, "The generalized distributive law," IEEE Trans. on Information Theory, vol. 46, pp. 325–343, March 2000.
