DOI 10.1007/s11063-006-9020-y
Neural Processing Letters (2006) 24:179–192 © Springer 2006
Soft-Decoding SOM for VQ Over Wireless Channels
CHI-SING LEUNG1,∗, HERBERT CHAN1 and WAI HO MOW2
1Department of Electronic Engineering, City University of Hong Kong, Kowloon Tong, Hong Kong. e-mail: eeleungc@cityu.edu.hk
2Department of Electrical and Electronic Engineering, Hong Kong University of Science and Technology, Hong Kong, Hong Kong
Abstract. A self-organizing map (SOM) approach for vector quantization (VQ) over wireless channels is presented. We introduce a soft-decoding SOM-based robust VQ (RVQ) approach with performance comparable to that of the conventional channel-optimized VQ (COVQ) approach. In particular, our SOM approach avoids the time-consuming index assignment process of traditional RVQs and does not require a reliable feedback channel for COVQ-like training. Simulation results show that our approach can offer a potential performance gain over the conventional COVQ approach. For data sources with a Gaussian distribution, the gain of our approach is demonstrated to be in the range of 1–4 dB. For image data, our approach gives a performance comparable to a sufficiently trained COVQ, and is superior for a similar number of training epochs. To further improve the performance, a SOM-based COVQ approach is also discussed.
Key words. Soft-decoding, self-organizing map, vector quantization, wireless channels
1. Introduction
Vector quantization (VQ) is a robust and efficient source coding method in data compression [1, 2]. In the past, the vector quantizer and the channel signalling scheme were often designed and implemented separately; that is, the VQ codebook was usually designed for a noiseless channel, the so-called source-optimized VQ (SOVQ) approach. The traditional hard-decoding algorithm for VQ over a noisy channel produces an estimate of the transmitted codevector based on the received signal. As the estimate is constrained to be one of the codevectors, the hard-decoding output [3, 4] is suboptimal with respect to the minimum mean square error (MMSE) criterion. As a result, the received data suffer from intense impulsive noise when the communication channel is noisy.
To reduce the effect of channel noise on received data, the soft decoding scheme
[3, 4] could be used. In soft decoding, the estimation output is a weighted sum
of all codevectors and the weighting factors are the unquantized matched filter
outputs of the receiver. Even with soft decoding, the resultant distortion may still
be large if the codebook and the channel signalling scheme are designed sepa-
rately [5, 6]. To further reduce the distortion we should properly create the map-
ping from the VQ codebook to the signal constellation [5–10]. This problem can
∗Corresponding author.
be formulated as an assignment problem. Many heuristic algorithms have been proposed to address this so-called index assignment problem [5–10]. These algorithms may be classified into two categories, namely, the robust VQ (RVQ) [5–7] and the channel-optimized VQ (COVQ) [8–10]. However, most of the existing algorithms require intensive computation, tremendous storage overhead, and channel state information. These requirements incur various practical difficulties in applying the two aforementioned approaches to wireless applications. Specifically, the characteristics of a wireless mobile channel [11] are often fast time-varying due to unpredictable movement, obstacles, and weather. Also, channel state information of sufficient accuracy is typically unavailable for VQ training.
In the RVQ approach, a codebook is trained from the data source. Afterwards,
the issue of sensitivity to channel noise for VQ is formulated as an assignment
problem from the codebook to the signal constellation [5–7]. In these approaches,
the objective function is a function of the system symbol error rate. Zeger and
Gersho [5] introduced an algorithm to construct the assignment of codevectors to
codewords of a binary error-correcting code. The algorithm iteratively exchanges
the positions of two codevectors so as to successively reduce the cost function.
This technique was then applied to encode speech data in [15]. Instead of using a deterministic rule to exchange the positions of two codevectors, Farvardin [6] used a simulated annealing (SA) method to reduce the cost function. In addition to using the average distortion as the cost criterion, Potter and Chiang [7] used a minimax distortion as the cost criterion. In these algorithms, we need to calculate the distances between all pairs of distinct codevectors and to assume knowledge of the expression for the conditional symbol error probabilities. Hence, this approach is not suitable for fast time-varying situations, for example, a wireless channel and a time-varying data source.
In the COVQ approach [8–14], the codebook is typically trained by using a Linde-Buzo-Gray (LBG)-like algorithm over a noisy channel [8]. Hence, after training, the codebook is optimized for the noisy channel. In [9, 10], COVQ was further investigated under various models of noise and channel impairments. In [12, 13], the performance of COVQ with a turbo code was investigated.1 Also, the use of COVQ for the transmission of speech data was investigated in [14].
In the COVQ approach, we assume that a reliable feedback channel for training is available and that the channel variation is slow enough that the channel signal-to-noise ratio (SNR) is almost the same in both the VQ training and operating modes. The number of required training epochs in a COVQ is very large. These limitations incur practical difficulties in applying COVQ to wireless applications. Moreover, although the objective function of COVQ is well defined,
1The channel gain range of the turbo-COVQ is very narrow. This is because the bit error rate curve of the turbo code is very sharp. That is, the turbo code can correct most bit errors when the channel noise is less than a certain value. When the channel noise is greater than this value, the turbo code cannot correct bit errors, and hence there is a sudden drop in the signal-to-reconstruction error ratio (SRER) curve in Figure 2 of [12].
the trained codebook is a sub-optimal one. This is because the training of the codebook is a highly nonlinear optimization problem. Hence, it is of practical interest to find a VQ method that can achieve a performance comparable with COVQ, but without requiring so much knowledge of the wireless channel.
An alternative form of RVQ is the self-organizing map (SOM) approach [16], originally proposed in [17] for hard decoding over additive white Gaussian channels. This approach provides channel robustness by preserving a neighborhood structure, and it avoids the undesirable, time-consuming index assignment process. However, a problem of the SOM approach as presented in [17] is that the VQ decoder uses the hard-decision method and is not an MMSE decoder.
This paper proposes a soft-decoding SOM-based robust VQ approach. We investigate the performance of soft-decoding SOM approaches over a fast time-varying wireless channel, which is modelled as an independent Rayleigh fading channel [11]. A key advantage of the soft-decoding SOM is that, although it does not need any channel state information during training, its performance is comparable to that of COVQ, which takes full advantage of channel state information. Under the assumption of the availability of accurate channel state information, we shall show how a hybrid of the SOM and COVQ approaches, namely, the soft-decoding SOM-COVQ approach, can further improve the performance.
Simulation results show that our approaches are generally superior (or at least comparable) to the conventional COVQ approach. For data sources with a Gaussian distribution, our approaches can result in a channel gain in the range of 1–4 dB compared with the conventional COVQ. For image data, our approaches are better than the conventional COVQ approach for a similar number of training epochs, and are comparable to a sufficiently trained COVQ.
The rest of this paper is organized as follows. In Section 2, some background information is presented. Section 3 describes our SOM-based approaches. Simulation results are presented in Section 4. Section 5 concludes the paper.
2. Background
A VQ model over a wireless channel is shown in Figure 1. We use the common independent Rayleigh fading channel to model the wireless channel [11]. Given a codebook $Y=\{c_1,\ldots,c_M\}$ in $\mathbb{R}^k$, for an input $x$, the output of the VQ process is an index $i^*$, where $c_{i^*}\in Y$ is the closest codevector to the input $x$. In other words, the codebook $Y$ partitions the space $\mathbb{R}^k$ into $M$ regions $\Omega=\{\Omega_1,\ldots,\Omega_M\}$.
Figure 1. The VQ system over a wireless channel.
The mapper interfaces the source coder to the channel. It takes in the source coder's output $i^*$ and produces the channel signal $s_{i^*}\in S=\{s_1,\ldots,s_M\}$. The set of channel signals $S=\{s_1,\ldots,s_M\}$ is called the signal constellation. Two common modulation methods (Figure 2), namely quadrature phase shift keying (QPSK) and quadrature amplitude modulation (QAM), are considered here.
Figure 2. The signal sets of 16 QPSK and 16 QAM.
The channel is assumed to be an independent Rayleigh fading channel. The received signal is given by
$$r = a\, s_{i^*} + n, \qquad (1)$$
where $a$ is the fading factor that randomly scales the signal according to the Rayleigh distribution, and the noise $n$ is a two-dimensional Gaussian random vector with a covariance matrix equal to
$$C_n = \sigma_n^2 \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.$$
Denoting the variance of $a$ as $\sigma_a^2$, the SNR of the channel can be expressed as
$$\mathrm{SNR} = \frac{\sigma_a^2}{2\sigma_n^2}.$$
The symbol detector makes symbol decisions based on the conditional probability density values (likelihood values) $p(r|s_i)$, $i=1,\ldots,M$. In the case of hard decoding, the output $\hat{x}$ is given by $\hat{x}=c_i$, where $p(r|s_i) > p(r|s_{i'})$ for all $i' \neq i$.
When the estimated codevector $c_i$ is not equal to the transmitted codevector, a symbol error occurs. In such a case, the estimated signal $s_i$ is usually close to the transmitted signal $s_{i^*}$, but the estimated codevector $c_i$ may not be close to the transmitted codevector $c_{i^*}$. The distortion from $c_i$ to $c_{i^*}$ depends on the association between the codebook $Y=\{c_1,\ldots,c_M\}$ and $S=\{s_1,\ldots,s_M\}$. If the association is not created in a proper manner, the noise in the received data is impulsive. Such impulsive noise cannot easily be removed by linear or nonlinear filters.
In the case of hard decision, the objective function that measures the sensitivity to channel noise is given by
$$D(Y,\Omega) = \sum_{i=1}^{M} \sum_{j=1}^{M} P(s_j|s_i) \int_{\Omega_i} p(x)\, \|x - c_j\|^2\, dx, \qquad (2)$$
where $p(x)$ is the density function of $x$ and $P(s_j|s_i)$ is the conditional error probability. In [5–7], in order to remove the effect of channel noise, an index assignment procedure is carried out. It assigns each codevector to a signal of the signal constellation. During training, we need to know the symbol error probabilities $P(s_j|s_i)$ of the channel. The main drawback of this approach is that the assignment procedure must be carried out again whenever a new codebook is used or the characteristics of the channel (the symbol error probabilities) vary with time.
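Given a set of source samples and the matrix of conditional symbol error probabilities, the objective of Eq. (2) can be estimated empirically by replacing the integral with a sample average. A minimal NumPy sketch (the function name and this sample-based approximation are ours):

```python
import numpy as np

def hard_decoding_distortion(X, Y, P):
    """Empirical estimate of D(Y, Omega) in Eq. (2).
    X: (n, k) source samples; Y: (M, k) codebook;
    P: (M, M) matrix of conditional symbol error probabilities P(s_j | s_i)."""
    # Nearest-codevector encoding partitions the samples into the regions Omega_i.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)  # (n, M) squared distances
    i_star = d2.argmin(axis=1)                               # encoder index per sample
    # For a sample encoded to index i, the expected distortion is
    # sum_j P(s_j | s_i) * ||x - c_j||^2; average over all samples.
    return (P[i_star] * d2).sum(axis=1).mean()
```

With an identity matrix for P (noiseless channel), this reduces to the ordinary quantization mean squared error.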
In the COVQ approach [8], a codebook is trained to minimize the objective function
$$D(Y,\Omega) = \sum_{i=1}^{M} \int_{\Omega_i} E_{p(r|s_i)}\!\left[\|x - \hat{x}\|^2\right] p(x)\, dx, \qquad (3)$$
where the expectation is taken over the channel noise given the transmitted signals (codevectors). In this approach, the reconstruction vector $\hat{x}$ at the receiver is a weighted sum of all current codevectors, given by $\sum_{i=1}^{M} P(s_i|r)\, c_i$. In this approach, if the channel noise level changes or the data source characteristics change, we need to retrain the codebook. Hence, the approach is not suitable for adaptive environments. Another drawback is that we need an additional reliable channel for training, so that the reconstruction information $\hat{x}$ can be fed back to the transmitter for updating the codebook.
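The codebook-update step of an LBG-like COVQ iteration can be sketched as a posterior-weighted centroid computation. This is one common form of the COVQ centroid rule, offered here as an illustrative sketch rather than the exact algorithm of [8]; the function name and inputs are our assumptions.

```python
import numpy as np

def covq_codebook_update(X, W):
    """One LBG-like COVQ codebook update (sketch).
    X: (n, k) training vectors.
    W: (n, M) posteriors P(s_j | r) observed at the receiver when each
       training vector was transmitted over the noisy channel.
    Each new codevector c_j is the posterior-weighted centroid of the
    training data, which minimizes the expected distortion for that cell."""
    num = W.T @ X                     # (M, k) posterior-weighted sums
    den = W.sum(axis=0)[:, None]      # (M, 1) total posterior mass per codevector
    return num / np.maximum(den, 1e-12)  # guard against empty cells
```

In a full COVQ training loop, this update alternates with re-encoding the training set and re-collecting the receiver's soft-decoding outputs, which is why COVQ requires transmissions over the channel during training.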
3. The SOM approach
3.1. hard som approach
In the SOM learning scheme [16, 17], before training, a neighborhood structure, represented by a graph $G=\{V,E\}$, is imposed on the codebook, where $V=\{v_1,\ldots,v_M\}$ is the set of vertices and $E$ is the set of edges. Each vertex is associated with a codevector.
DEFINITION 1. If $c_i$ is defined to be a neighbor of $c_j$, the two corresponding vertices $v_i$ and $v_j$ are joined by an edge with weight equal to 1.
DEFINITION 2. The neighborhood distance between $c_i$ and $c_j$ is the length of the shortest path between $v_i$ and $v_j$ in the graph $G$. A codevector $c_i$ is a level-$u$ neighbor of $c_j$ if the neighborhood distance between the two codevectors is less than or equal to $u$. The collection of level-$u$ neighbors of a codevector $c_i$ is denoted as $N_i(u)$. The order $u_G$ of a topology is the longest neighborhood distance in $G$.
Figure 3. Neighborhood structure. (a) A 1-D circular structure. (b) A 2-D grid structure.
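Since the edges have unit weight, the neighborhood distance of Definition 2 is an unweighted shortest-path distance and can be computed by breadth-first search. A small sketch (function names are ours):

```python
from collections import deque

def neighborhood_levels(adj, i):
    """Neighborhood distances of Definition 2 by breadth-first search.
    adj: adjacency lists of the graph G.
    Returns dist[j] = length of the shortest path from vertex i to vertex j."""
    dist = {i: 0}
    q = deque([i])
    while q:
        v = q.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                q.append(w)
    return dist

def level_u_neighbors(adj, i, u):
    """The level-u neighbors N_i(u): all vertices within neighborhood distance u."""
    return {j for j, d in neighborhood_levels(adj, i).items() if d <= u}
```

For the 1-D circular structure of Figure 3(a) with, say, M = 4 vertices, the level-1 neighbors of vertex 0 are {0, 1, 3}.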
Figure 3(a) shows a 1-D circular structure and Figure 3(b) shows a two-dimensional grid structure. Given a neighborhood structure, the learning algorithm is summarized as follows.
1. Given the $t$th training vector $x(t)$, calculate the distances $d_i = \|x(t) - c_i(t)\|$ from $x(t)$ to all codevectors.
2. Find the closest codevector $c_{i^*}(t)$, where $d_{i^*} < d_i$ for all $i \neq i^*$.
3. Update the codebook as follows:
$$c_i(t+1) = c_i(t) \quad \forall\, c_i(t) \notin N_{i^*}(u_t), \qquad (4)$$
$$c_i(t+1) = c_i(t) + \alpha_t \left( x(t) - c_i(t) \right) \quad \forall\, c_i(t) \in N_{i^*}(u_t). \qquad (5)$$
The parameter $u_t$ controls which codevectors are updated at training iteration $t$. In practice, it decreases to zero during the learning process [19, 20]. To ensure that the trained SOM has the ordering property, the initial value $u_0$ should be large enough that, at the beginning of the training process, most codevectors can be updated [17]. In our experiments, $u_0 = u_G/4$ gives a satisfactory result [17].
The learning rate $\alpha_t$ is a gain that controls the percentage update of the codevectors. In the theoretical proofs of the convergence and ordering property of a one-dimensional SOM, the analysis [21] usually assumes that the gain $\alpha_t$ satisfies the Robbins–Monro conditions [18], i.e., $\sum_t \alpha_t$ should be infinite while $\sum_t \alpha_t^2$ should be finite. However, with this setting, the convergence is very slow. In practice [19, 20], the learning rate $\alpha_t$ is either a small constant or a sequence decreasing to zero. In this paper, the learning rate $\alpha_t$ decreases linearly from $\alpha_0\ (<1)$ to 0.
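Under the 1-D circular neighborhood of Figure 3(a), the training loop of steps 1–3 with the updates (4)–(5) can be sketched in Python as follows. Only the linearly decreasing learning rate and the choice $u_0 = u_G/4$ are taken from the text; the random initialization and the linear shrinking of $u_t$ are our assumptions.

```python
import numpy as np

def som_train(X, M, epochs=10, alpha0=0.5, rng=None):
    """SOM codebook training (sketch) with a 1-D circular neighborhood
    of M vertices, as used for a QPSK-style constellation."""
    rng = rng or np.random.default_rng(0)
    C = rng.standard_normal((M, X.shape[1]))   # initial codevectors (assumed init)
    u0 = (M // 2) // 4                         # u0 = u_G / 4; u_G = M/2 on a ring
    T = epochs * len(X)
    t = 0
    for _ in range(epochs):
        for x in X:
            alpha = alpha0 * (1 - t / T)       # linearly decreasing gain alpha_t
            u = u0 * (1 - t / T)               # shrinking neighborhood size u_t
            # Step 1-2: find the closest codevector c_{i*}.
            i_star = np.argmin(((x - C) ** 2).sum(axis=1))
            # Circular neighborhood distance on the ring of M vertices.
            d = np.abs(np.arange(M) - i_star)
            d = np.minimum(d, M - d)
            mask = d <= u                      # c_i in N_{i*}(u_t)
            # Step 3: Eq. (5) for neighbors; Eq. (4) leaves the rest unchanged.
            C[mask] += alpha * (x - C[mask])
            t += 1
    return C
```

Because the winner always satisfies d = 0, the closest codevector is updated at every iteration even after $u_t$ has shrunk to zero.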
Figure 4. The trained codevectors in a typical run of SOM: (a) initial codevectors, (b) trained codevectors. To indicate the neighbor structure of the SOM, we use an edge to join two codevectors if they are defined as neighbors of each other.
The ordering-preservation property of a trained SOM is that, when two codevectors are neighbors of each other in the graph, their Euclidean distance in the data space after training is usually small [16, 17]. Figure 4 graphically illustrates the ordering-preservation property.
In communication systems, if the symbol detector makes a wrong estimation (i.e., the estimated signal symbol $\hat{s}$ is not equal to the transmitted signal $s_{i^*}$), the estimated signal symbol $\hat{s}$ is usually a neighbor of the transmitted signal $s_{i^*}$ in the signal space. Hence, we can simply use the neighborhood structure to create the index assignment between the codebook and the signal constellation. Given a signal constellation, we use its neighborhood structure as the neighborhood structure of the SOM. For example, if the signal constellation is 16 QPSK, we can use the 1-D circular structure shown in Figure 3(a) as the neighborhood structure of the codebook. If the signal constellation is 16 QAM, we can use the two-dimensional grid structure shown in Figure 3(b) as the neighborhood structure of the codebook.
With the above SOM assignment scheme, when an error event occurs (the estimated signal $s_i$ is usually a neighbor of the transmitted signal $s_{i^*}$), the ordering-preservation property of the SOM ensures that the estimated codevector $c_i$ is also usually close to the transmitted codevector $c_{i^*}$. Hence, an error event at the receiver causes only a small increase in the overall root mean squared error (RMSE) of the received data.
In summary, the SOM assignment scheme provides channel robustness by preserving a neighborhood structure. It avoids the undesirable, time-consuming index assignment process used in other RVQ approaches [5–7]. Also, unlike the COVQ approach, the scheme does not require a reliable channel for training the codebook over a noisy channel, and it avoids the time-consuming COVQ training process [8, 13].
3.2. soft decoding
In the hard-decoding approach, the output $\hat{x}$ is given by
$$\hat{x} = c_i, \qquad (6)$$
where $p(r|s_i) > p(r|s_{i'})$ for all $i' \neq i$, and the $p(r|s_i)$ are conditional likelihood values. The output of hard decoding is taken from a finite codebook. Such a decision rule causes an irreversible loss of likelihood information; that is, the hard-decoding rule does not utilize all the conditional likelihood values provided by the channel. Hence, it is not an MMSE estimator.
To further improve the SOM approach, we should utilize the detector's outputs (the likelihood density values $p(r|s_j)$). In this case, the output is given by
$$\hat{x} = \sum_{i=1}^{M} P(s_i|r)\, c_i \qquad (7)$$
$$= \frac{\sum_{i=1}^{M} p(r|s_i)\, P(s_i)\, c_i}{\sum_{i=1}^{M} p(r|s_i)\, P(s_i)}, \qquad (8)$$
where $P(s_i|r)$ is the conditional probability that the transmitted signal (codevector) is $s_i$ ($c_i$) given the received signal, and $P(s_i)$ is the a priori probability that $s_i$ is transmitted. In soft decoding (7), the output is a weighted sum of the codevectors. The decision rule utilizes all the likelihood values provided by the channel. Hence, the decision is optimal in the statistical sense.
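For the fading channel of Eq. (1), with the fading factor known at the receiver, the soft-decoding rule of Eqs. (7)–(8) amounts to a posterior-weighted average of the codevectors. A minimal sketch follows; the Gaussian likelihoods are computed up to a common constant factor, which cancels in the normalization of Eq. (8).

```python
import numpy as np

def soft_decode(r, S, C, a, sigma_n2, prior=None):
    """MMSE soft decoding of Eqs. (7)-(8).
    r: received 2-D signal; S: (M, 2) signal constellation;
    C: (M, k) codebook; a: fading factor, assumed known at the receiver;
    sigma_n2: per-component noise variance."""
    M = len(S)
    prior = prior if prior is not None else np.full(M, 1.0 / M)  # P(s_i)
    # Gaussian likelihoods p(r | s_i) up to a common constant factor.
    lik = np.exp(-((r - a * S) ** 2).sum(axis=1) / (2 * sigma_n2))
    w = lik * prior
    w /= w.sum()          # posteriors P(s_i | r), Eq. (8)
    return w @ C          # weighted sum of codevectors, Eq. (7)
```

At high SNR the posteriors concentrate on the detected symbol and the output approaches the hard-decoding output; at low SNR the output is pulled toward the prior mean, which is what makes the estimator MMSE-optimal.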
3.3. som-covq approach
As the COVQ approach is a highly nonlinear optimization algorithm, the initial codebook affects the performance. So, it is interesting to investigate whether a good initial codebook for COVQ can produce a better COVQ codebook. Since the SOM training method produces a codebook with a good neighborhood structure, it is also of interest to investigate a hybrid model of the SOM and COVQ approaches. That is, we use the SOM training method to train a codebook $Y_{\mathrm{SOM}}$ which has good resistance to channel noise. Afterwards, we use the trained codebook $Y_{\mathrm{SOM}}$ as the initial codebook of the COVQ algorithm to obtain a new, better codebook. To distinguish it from the conventional COVQ, we call the hybrid approach SOM-COVQ.
3.4. complexities of som and covq training
For SOM, we sequentially and repeatedly present the samples to the SOM encoder. Equation (5) is used for updating the codevectors. Note that SOM does not need to utilize any channel information for training. Hence, the training complexity for each training example in a training epoch is equal to $O(kM)$.
In the cases of COVQ and SOM-COVQ, we need to train the codebooks over the noisy channel. In COVQ and SOM-COVQ, we need to present all samples to the transmitter and then collect all the soft-decoding outputs at the receiver. Afterwards, we update the codebook. It should be noted that the per-sample complexities of COVQ and SOM are the same in a training epoch, even though the COVQ updates the codebook only once per training epoch. This is because, for COVQ, in a training epoch we need to reconstruct all the estimated output vectors based on the soft-decoding rule (7). Hence, the complexity for each training example in a training epoch is still equal to $O(kM)$.
In summary, the per-sample complexities of SOM and COVQ are the same. Hence, the number of training epochs required for convergence determines the training efficiency.
4. Simulation
The performance of various data protection schemes, COVQ, SOM, and SOM-COVQ, is investigated. Also, the performance without any data protection (an LBG-trained codebook) is presented. It is well known that soft-decoding VQ is better than hard-decoding VQ [3, 4], and the COVQ is a soft-decoding method. Hence, to make a fair comparison, the soft-decoding technique is applied to all the schemes. Two analog sources, a Gaussian source and image data, are used. The fast time-varying wireless channel is modelled as an independent Rayleigh fading channel, and the fading factor is assumed to be perfectly estimated at the receiver.
4.1. gaussian source and sufficiently trained covq
In this section, we investigate the SRER performance of different algorithms when the codebooks are well trained. Three Gaussian sources with different dimensions are considered. Each source contains 1024 $k$-dimensional samples, where $k=2,3,4$. The signal constellation is 16 QAM. Since there are 16 signals in the constellation, we set the number of codevectors equal to 16. That means the code rates are 2, 4/3, and 1 bits per dimension.
For SOM, we sequentially and repeatedly present the samples to the SOM encoder. Equation (5) is used for updating the codevectors. The number of training epochs for SOM is 10, because we find that the SRER does not change significantly (less than 2%) after 10 training epochs.
In the cases of COVQ and SOM-COVQ, we need to train the codebooks over the noisy channel. For COVQ and SOM-COVQ, when we set the number of training epochs to 10, the codebooks do not converge well. Hence, we increase the number of training epochs to 40 so that the codebooks of COVQ and SOM-COVQ are well trained.
The performance, in terms of the SRER of the reconstructed data versus the SNR in the channel, is summarized in Figure 5. From the figure, the performance of the three data protection schemes is better than that of the simple LBG algorithm.
Figure 5. The signal-to-reconstruction error ratio (SRER) in dB versus the SNR in the channel: (a) 2-D Gaussian source, (b) 3-D Gaussian source, (c) 4-D Gaussian source. Curves: SOM, SOM-COVQ, COVQ, and LBG.
The performance of the two SOM approaches (SOM and SOM-COVQ) is better than that of the COVQ approach. Compared with SOM and COVQ, SOM-COVQ further improves the SRER performance. This implies that further performance improvement can be achieved by using a better initial codebook for COVQ.
For a fixed SRER in the reconstruction data, the two SOM approaches can achieve about 1–4 dB channel gain. For example, in the 2-D Gaussian case, to achieve 7 dB in SRER, the channel SNR for the two SOM approaches needs to be only around 10 dB; in the COVQ case, to achieve the same SRER, the SNR in the channel must be around 14 dB.
For a fixed SNR in the channel, the two SOM approaches can achieve about 1–2 dB of SRER gain. For example, in the 2-D Gaussian case, when the SNR in the channel is equal to 10 dB, the SRERs of the two SOM approaches are around 7 dB while the SRER of the COVQ approach is only around 5.5 dB.
4.2. image data and insufficiently trained covq
In this section, we consider the performance of the different schemes on real data. We use three images (256 × 256), Lena, Pepper, and Baboon, as the data sources to compare the performance of LBG, SOM, COVQ and SOM-COVQ. Each image is divided into a number of 4 × 2 blocks. Each block is regarded as an 8-D input vector, and there are 24576 samples. The codebook size is equal to 256, and hence the compression rate is 1 bit per dimension.
With 256 codevectors, we can choose some of the more common modulation schemes in wireless communications, such as 16 QPSK or 8 QPSK. With 256 codevectors, we can use two symbols of 16 QPSK to represent a codevector. For the SOM codebook, the Cartesian product of two 1-D circular graphs is used as the neighborhood structure. To demonstrate the training efficiency of SOM, the number of training epochs for SOM is equal to 10 only, and this codebook is used for all channel SNR values.
In COVQ and SOM-COVQ with a large codebook, we find that if we only present each sample once in a training epoch, the convergence, in terms of training epochs and reconstruction error, is very poor. This is because the COVQ training may not be able to capture enough noise statistics of the channel. So, we consider
the re-transmission of the training samples in each epoch. Therefore, in COVQ and SOM-COVQ, there are two training parameters: one is the number $N$ of training epochs, and the other is the number $T$ of re-transmissions of the training examples in each epoch. Of course, in terms of distortion, it is desirable to have $N$ and $T$ sufficiently large so that the distortion is small and does not decrease significantly further. Note that the additional bandwidth/power/delay requirement associated with the COVQ is proportional to the value of $N \times T$. In practice, the minimum sufficient values of $N$ and $T$ should be used such that the distortion does not decrease significantly further. In the LBG and SOM approaches, there is no re-transmission in any training epoch.
Figure 6 shows the RMSE performance at two channel SNRs (15 and 29 dB); other values of SNR give similar performance. From the figure, we can easily observe that the RMSE performance of SOM (10 training epochs) is comparable to that of the sufficiently trained COVQ and SOM-COVQ.
For COVQ with a small value of $T$ (Figure 6(a) and (b), $T=2$), the convergence is very poor. In particular, for a noisy channel (Figure 6(a), $T=2$ at SNR = 15 dB), the COVQ training process may not converge even after a large number of training epochs. When a large value of $T$ is used, the convergence becomes better (Figure 6(a), $T=32$, and Figure 6(b), $T=8$ and 32). To sum up, the COVQ can achieve a performance similar to the SOM performance only when many training epochs are employed.
For example, at SNR = 29 dB, the required values of $T$ and $N$ for good convergence are 2-to-8 and 16, respectively. That means the COVQ needs about 32–96 training epochs, whereas the SOM needs only 10 epochs. Clearly, the computational load of the SOM is much lower than that of COVQ.
From the figure, the SOM-COVQ can further improve the RMSE and the convergence, at the cost of some additional computation. For example, at SNR = 29 dB, in the SOM-COVQ the required values of $T$ and $N$ for a very good RMSE are about 8 and 2, respectively. That means the SOM-COVQ needs only about 16 training epochs.
Figure 6. The effect of insufficient training at (a) SNR = 15 dB and (b) SNR = 29 dB (RMSE versus the number $N$ of training epochs in COVQ). In the SOM approach, the number of training epochs is equal to 10. In the LBG approach, the number of training epochs is equal to 20. In the COVQ or SOM-COVQ, the effective number of training epochs is equal to $N \times T$, where $T$ is the number of re-transmissions in each epoch.
Figure 7. The reconstructed images at SNR = 29 dB with $T=32$ and $N=32$: (a) LBG, RMSE = 15.24; (b) SOM, RMSE = 9.93; (c) COVQ, RMSE = 9.13; (d) SOM-COVQ, RMSE = 8.85.
In summary, these observations suggest that the COVQ approach is beneficial only if many training epochs are affordable; otherwise the performance may actually degrade. This confirms that the SOM is an excellent RVQ. Note that in COVQ and SOM-COVQ, a different codebook is required for each channel SNR.
Finally, reconstructed "Lena" images at SNR = 29 dB with $T=32$ and $N=32$ are shown in Figure 7. Compared with the conventional LBG without enhanced channel robustness, the SOM approach significantly improves the reconstruction quality of the images. Also, the visual performance of the SOM is comparable to that of the sufficiently trained COVQ and SOM-COVQ.
5. Concluding remarks
In this paper, we study the performance of soft-decoding SOM and COVQ over wireless channels. From our simulations, although the COVQ training has a well-defined objective function to be minimized and takes full advantage of the perfect channel state information available during training, its performance is only comparable to that of our SOM approach, which does not assume any channel state information. This is because the COVQ training is a nonlinear optimization algorithm: even though full channel information is utilized, the trained codebook is not the globally optimal solution. It should be noted that in wireless mobile communications, the channel SNR is typically fast time-varying. Hence it is often unrealistic to assume that the channel SNR during operation is the same as the SNR during training. The resultant SNR mismatch will degrade the performance of COVQ.
On the other hand, the advantages of the SOM approach are that it does not require any channel information during training and that it does not need a reliable feedback channel for training. These implementation advantages make SOM a practically attractive VQ solution for wireless communications. In addition, our simulation results show that the performance of SOM is comparable to that of COVQ. Moreover, the convergence of the COVQ training is very slow, namely, COVQ needs more training epochs. Such a requirement may not be affordable in a time-varying environment.
Our current study is based on some simple modulation schemes. It would be interesting to investigate our SOM approach with more sophisticated coding schemes, such as trellis coded modulation and space-time coding [22, 23].
Acknowledgement
This research was supported by an RGC Competitive Earmarked Research Grant, Hong Kong (Project No. CityU 1122/01E).
References
1. Nasrabadi, N. M. and King, R. A.: Image coding using vector quantization: A review,
IEEE Transactions on Communications,36 (1988), 957–971.
2. Gray, R. M.: Vector quantization, IEEE Acoust., Speech, Signal Processing Mag., (1984)
pp. 4–29.
3. Skoglund, M.: Soft decoding for vector quantization over noisy channels with memory, IEEE Transactions on Information Theory, 45 (1999), 1293–1307.
4. Skoglund, M. and Ottosson, T.: Soft multiuser decoding for vector quantization over a CDMA channel, IEEE Transactions on Communications, 46 (1996), 327–337.
5. Zeger, K. A. and Gersho, A.: Pseudo-Gray coding, IEEE Transactions on Communications, 38 (1990), 2147–2158.
6. Farvardin, N.: A study of vector quantization for noisy channels, IEEE Transactions on
Information Theory,36 (1990), 799–809.
7. Potter, L. C. and Chiang, D. M.: Minimax nonredundancy channel coding, IEEE Transactions on Communications, March 1995, pp. 804–811.
8. Farvardin, N. and Vaishampayan, V.: On the performance and complexity of channel-optimized vector quantizers, IEEE Transactions on Information Theory, 37 (1991), 155–160.
9. Alajaji, F. and Phamdo, N.: Soft-decision COVQ for Rayleigh-fading channels, IEEE Communications Letters, 2 (1998), 162–164.
10. Phamdo, N. and Alajaji, F.: Soft-decision demodulation design for COVQ over white, colored, and ISI Gaussian channels, IEEE Transactions on Communications, 48 (2000), 1499–1506.
11. Patzold, M.: Mobile Fading Channels, Chichester, England: John Wiley & Sons, 2002.
12. Ho, K. P.: Soft-decoding vector quantizer using reliability information from turbo-codes,
IEEE Communications Letters,3(7) (1999), 208–210.
13. Zhu, G. C. and Alajaji, F. I.: Soft-decision COVQ for turbo-coded AWGN and Rayleigh
fading channels, IEEE Communications Letters,5(2001), 257–259.
14. Xiao, H. and Vucetic, B. S.: Combined low bit rate speech coding and channel coding over a Rayleigh fading channel, In: Proc. Global Telecommunications Conference, Vol. 3, 1997, pp. 1524–1528.
15. Ruoppila, V. T. and Ragot, S.: Index assignment for predictive wideband LSF quantization, In: Proc. IEEE Workshop on Speech Coding, 2000, pp. 108–110.
16. Kohonen, T.: Self-organization and associative memory, third edition. Berlin: Springer,
1993.
17. Leung, C. S. and Chan, L. W.: Transmission of Vector Quantized Data over a Noisy
Channel, IEEE Trans. on Neural Networks,8(1997), 582–589.
18. Robbins, H. and Monro, S.: A stochastic approximation method, Ann Math Stat.,22
(1951), 400–407.
19. Flanagan, J. A.: Self-organisation in Kohonen’s SOM, Neural Networks,9(7) (1996),
1185–1197.
20. Liou, C. Y. and Tai, W. P.: Conformal self-organization for continuity on a feature map,
Neural Networks,12 (1999), 893–905.
21. Sum, J. and Chan, L. W.: Convergence of one-dimensional self-organizing map, In:
Proceedings, ISSIPNN ’94, pp. 81–84.
22. Tarokh, V., Naguib, A., Seshadri, N., and Calderbank, A. R.: Space-time codes for high
data rate wireless communication: performance criteria in the presence of channel esti-
mation errors, mobility, and multiple paths, IEEE Transactions on Communications,47
(1999), 199–207.
23. Liu, Y. J., Oka, I., and Biglieri, E.: Error probability for digital transmission over nonlinear channels with application to TCM, IEEE Transactions on Information Theory, 36 (1990), 1101–1110.