Low-Complexity Multiresolution Image Compression
Using Wavelet Lower Trees
Jose Oliver, Member, IEEE, and Manuel P. Malumbres, Member, IEEE
Abstract—In this paper, a new image compression algorithm is
proposed based on the efficient construction of wavelet coefficient
lower trees. The main contribution of the proposed lower-tree
wavelet (LTW) encoder is the utilization of coefficient trees, not
only as an efficient method of grouping coefficients, but also as a
fast way of coding them. Thus, it presents state-of-the-art com-
pression performance, whereas its complexity is lower than the
one presented in other wavelet coders, like SPIHT and JPEG 2000.
Fast execution is achieved by means of a simple two-pass coding
and one-pass decoding algorithm. Moreover, its computation does
not require additional lists or complex data structures, so there
is no memory overhead. A formal description of the algorithm is
provided, while reference software is also given. Numerical results
show that our codec works faster than SPIHT and JPEG 2000
(up to three times faster than SPIHT and fifteen times faster than
JPEG 2000), with similar coding efficiency.
Index Terms—Image compression, low complexity, tree-based
coding, wavelets.
DURING the last decade, several image compression schemes emerged in order to overcome the known limitations of block-based algorithms that use the discrete cosine
transform (DCT). Some of these alternative proposals were
based on more complex techniques, like Vector Quantization
[2] and Fractal image coding [3], whereas others simply pro-
posed the use of a different and more suitable mathematical
transform: the discrete wavelet transform (DWT) [4]. At that
time, there was a general idea: more efficient image coders
could only be achieved by means of sophisticated techniques
with high complexity. The embedded zero-tree wavelet coder (EZW) [5] can be considered the first wavelet image coder that broke that trend. Since then, many wavelet coders have been proposed and, finally, the DWT was included in the JPEG 2000 standard [6] due to its compression efficiency, among other interesting features (scalability, etc.).
All the wavelet-based image coders, and in general all the
transform-based coders, consist of two main stages. In the first
one, the image is transformed from the spatial domain to another one; in the case of the wavelet transform, this is a combined spatial-frequency domain, called the wavelet domain. In the second stage, the transform coefficients are quantized and encoded in an efficient way to achieve high compression efficiency and other features.
Manuscript received March 16, 2005; revised May 30, 2006; accepted May 30, 2006. This work was supported by the Spanish Ministry of Education and Science under Grant TIC2003-00339. A preliminary version of this paper was first presented at the IEEE Data Compression Conference, Snowbird, UT, in March 2003. This paper was recommended by Associate Editor H. Gharavi.
J. Oliver is with the Department of Computer Engineering (DISCA), Polytechnic University of Valencia, 46022 Valencia, Spain (e-mail: joliver@disca.
M. P. Malumbres is with the Department of Physics and Computer Engineering, Miguel Hernandez University, 03202 Elche, Spain (e-mail: mels@umh.
Digital Object Identifier 10.1109/TCSVT.2006.883505
The wavelet transform can be implemented as a regular filter bank; however, several strategies have been proposed to reduce its running time and memory requirements. For example, line-based processing is proposed in [7], whereas an alternative wavelet transform method, called the lifting scheme, is proposed in
coefficients by overwriting the input samples, and reduces the
number of operations required to compute the DWT.
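Although the paper describes the lifting scheme only in prose, its in-place nature can be illustrated with the reversible integer 5/3 transform commonly paired with wavelet coders. The sketch below is illustrative (the function names and the symmetric boundary handling are our assumptions), not the authors' reference software.

```cpp
#include <vector>
#include <cstddef>

// One level of the reversible 5/3 lifting transform on a 1-D signal of even
// length, computed in place: odd samples become high-pass coefficients via a
// predict step, even samples become low-pass coefficients via an update step.
void lift53(std::vector<int>& s) {
    const std::size_t n = s.size();
    // Predict: d[i] = odd - floor((left_even + right_even) / 2)
    for (std::size_t i = 1; i < n; i += 2) {
        int right = (i + 1 < n) ? s[i + 1] : s[i - 1];  // symmetric extension
        s[i] -= (s[i - 1] + right) >> 1;
    }
    // Update: a[i] = even + floor((left_odd + right_odd + 2) / 4)
    for (std::size_t i = 0; i < n; i += 2) {
        int left = (i > 0) ? s[i - 1] : s[i + 1];       // symmetric extension
        int right = (i + 1 < n) ? s[i + 1] : left;
        s[i] += (left + right + 2) >> 2;
    }
}

// Inverse: undo the update, then the predict, with identical neighbor rules,
// so integer reconstruction is exact (no memory beyond the signal itself).
void unlift53(std::vector<int>& s) {
    const std::size_t n = s.size();
    for (std::size_t i = 0; i < n; i += 2) {            // undo update
        int left = (i > 0) ? s[i - 1] : s[i + 1];
        int right = (i + 1 < n) ? s[i + 1] : left;
        s[i] -= (left + right + 2) >> 2;
    }
    for (std::size_t i = 1; i < n; i += 2) {            // undo predict
        int right = (i + 1 < n) ? s[i + 1] : s[i - 1];
        s[i] += (s[i - 1] + right) >> 1;
    }
}
```

Because every step overwrites one sample class using the other, no auxiliary buffer is needed, which is exactly the in-place property exploited in the text.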
On the other hand, the coding pass is not usually improved
in terms of complexity and memory usage. When designing a
new wavelet image encoder, the most important factor to opti-
mize is usually the rate/distortion (R/D) performance, whereas
other features like embedded bit-stream, signal-to-noise ratio
(SNR) scalability, spatial scalability and error resilience are also
considered. In this paper, we propose an algorithm aiming to
achieve state-of-the-art coding efficiency, with very low execu-
tion time. Moreover, due to in-place processing of the coeffi-
cients, there is no memory overhead (it only needs memory to
store the source image). In addition, our algorithm is naturally spatially scalable, and SNR scalability can also be achieved.
The key idea of the proposed algorithm is the use of wavelet
coefficient trees as a fast method of efficiently grouping co-
efficients. Tree-based wavelet coders have been widely used
in the literature [5], [9], [10], [14], presenting good R/D per-
formance. However, their excellent opportunities for fast pro-
cessing of quantized coefficients have not been clearly shown
so far. Wavelet trees are a simple way of grouping coefficients,
reducing the total number of symbols to be coded, which in-
volves not only good compression performance but also fast
processing. Moreover, for a low-complexity implementation,
bit-plane coding, present in many wavelet coders [5], [6], [9],
[11], [12], must be avoided. This way, multiple scans of the transform coefficients, which involve many memory accesses and cause high cache-miss rates, are not performed, at the expense of generating a non-SNR-embedded bitstream.
This paper is organized as follows. In Section II, we analyze
the complexity and coding efficiency of some important wavelet
image coders, focusing on the non-embedded proposals. In Sec-
tion III, the proposed tree-based algorithm is described. Sec-
tion IV describes some implementation details and optimization
considerations. Finally, in Section V, we compare our proposal
with other wavelet image coders using real implementations.
1051-8215/$20.00 © 2006 IEEE

One of the first efficient wavelet image coders reported in the literature is EZW [5]. It is based on the construction of coefficient-trees and successive approximations, which can be implemented with bit-plane coding. Due to its successive-approximation nature, it is SNR scalable, although at the expense of spatial scalability. SPIHT [9] is an advanced version of this algorithm, in which coefficient-trees are processed in a more efficient way. In this coder, coefficient-trees are partitioned depending on the significance of the coefficients belonging to each tree. If a coefficient in a tree is higher than a threshold determined by the current bit plane, the tree is successively divided following some established partitioning rules. Both EZW and SPIHT need the computation of coefficient-trees to search for significant coefficients, and they take several iterations, each focusing on a different bit plane, which involves high computational complexity.
A block-based version of SPIHT, called SPECK, is presented in [11]. The main difference of this new version is that coefficients are grouped and partitioned using rectangular structures instead of coefficient-trees. A low-complexity implementation of SPECK, called SBHP [12], was proposed in the framework of JPEG 2000. In SBHP, an image is first divided into blocks (sized 64 × 64 or 128 × 128) and then each block is processed as in SPECK. To reduce complexity, Huffman coding is used instead of arithmetic coding, which causes a decrease in coding efficiency.
In the final JPEG 2000 standard [6], the proposed algorithm (a modified version of EBCOT [13]) does not use coefficient-trees, but it performs bit-plane coding in code-blocks, with three passes per bit plane, so that the most important information is encoded first. In order to overcome the disadvantage of not using coefficient-trees, it uses an iterative optimization algorithm, based on the Lagrange multiplier method, along with a large number of contexts. JPEG 2000 obtains both spatial and SNR scalability by reordering the encoded image; hence, the final bitstream is very versatile. However, this encoder is complex, mainly due to the use of the time-consuming iterative optimization algorithm, the introduction of a reordering stage, and the use of bit-plane coding with many contexts. In addition, a larger bitstream than the one finally needed is usually encoded.
Many times, wavelet image encoders have features that are
not always needed, but which make them both CPU and memory
intensive. Fast processing is often preferable. Low complexity
may be desirable for high-resolution digital camera shooting,
where high delays could be annoying and even unacceptable
(even more for modern digital cameras, which tend to increase
their resolution). In general, image editing for large images (es-
pecially in GIS applications) cannot be easily tackled with the
complexity of previous encoders.
Since iterative methods and bit-plane coding must be avoided
to reduce complexity, very fast coding can only be achieved
through simpler non-SNR-embedded techniques (like in base-
line JPEG). In these encoders, an image is encoded at a constant
quality after applying a uniform quantization to the coefficients.
A. Non-Embedded Coders
One of the first tree-based non-embedded coders is SFQ [10]. This wavelet encoder uses space quantization by means of a tree-pruning algorithm, which modifies the shape of the trees by pruning their branches, whereas a scalar quantization is applied for frequency quantization. Although it achieves higher compression than SPIHT, the iterative tree-pruning stage makes it about five times slower than SPIHT.
Non-embedded coding was first proposed to reduce complexity in tree-based wavelet coding in [14], where a non-embedded version of SPIHT was introduced. In this modified SPIHT, once a coefficient is found to be significant, all its significant bits are encoded, avoiding the refinement passes (see [9] for details). Note that a non-embedded version of SPECK and SBHP is also possible with the same modifications. Although these non-embedded versions are faster than the original ones, neither the multiple image scans nor the bit-plane processing of the sorting passes (used to find significant coefficients) is avoided, and hence the complexity problem still remains. Note that these modifications of SPIHT and SPECK are neither SNR nor resolution scalable.
Besides the JPEG standard, other DCT-based non-embedded image coders have been proposed; in particular, the intra mode of the H.264 standard [16]. However, due to the use of time-consuming prediction techniques on the encoder side, the coding/decoding processes are very asymmetric, and the resulting encoder is very slow.
For the most part, digital images are represented as a set of pixels. The encoder proposed in this paper is applied to the set of coefficients resulting from a dyadic decomposition of the image. The most commonly used decomposition for image compression is the hierarchical wavelet subband transform [4]; thus, an element of this decomposition is called a transform coefficient. In a wavelet transform, we call HL1, LH1, and HH1 the subbands resulting from the first level of the image decomposition, corresponding to horizontal, vertical, and diagonal frequencies. The rest of the transform is computed with a recursive wavelet decomposition of the remaining low-frequency subband, until a desired decomposition level (N) is achieved (LLN is the remaining low-frequency subband).
In Section II, we mentioned that one of the main drawbacks
in previous wavelet-based image encoders is their high com-
plexity. Many times, that is due to time-consuming iterative
methods, and bit-plane coding used to provide a fully embedded
bitstream. Although embedding is a nice feature in an image
coder, it is not always needed and other alternatives, like spa-
tial scalability, may be more valuable depending on the nal
application. In this section, we propose a tree-based coding al-
gorithm that is able to encode the wavelet coefcients without
performing an image scan per bit plane.
Tree-based wavelet image encoders have been proved to efficiently store the transform coefficients, achieving good performance results. However, in the algorithm proposed in this paper, a tree-based structure is introduced not only to remove redundancy among subbands, but also as a simple and fast way of grouping coefficients.
Fig. 1. Coefficient-trees in the proposed algorithm.

As in the rest of tree-based encoders, coefficients can be logically arranged as trees, as shown in Fig. 1. In this figure, we observe that the coefficients in the wavelet subbands (except the leaves) always have four direct descendants (i.e., four descendants at a distance of one), while the rest of the descendants can be obtained recursively from the direct descendants. On the other hand, in the LLN subband, three out of every 2 × 2 coefficients have four direct descendants, and the remaining coefficient in each 2 × 2 block has no descendants.
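Outside the LL subband, this quad-tree geometry reduces to simple index arithmetic. The coordinate convention below (row-major (r, c), children of (r, c) at (2r, 2c)) is the standard relation used by tree-based wavelet coders, stated here as an assumption rather than code from the paper.

```cpp
#include <array>
#include <utility>

using Coord = std::pair<int, int>;

// The four direct descendants of a coefficient at (r, c) outside the LL band
// form the 2x2 block at (2r, 2c) in the next finer subband.
std::array<Coord, 4> directDescendants(int r, int c) {
    return {{{2 * r, 2 * c}, {2 * r, 2 * c + 1},
             {2 * r + 1, 2 * c}, {2 * r + 1, 2 * c + 1}}};
}

// Conversely, the parent of (r, c) is found by halving both coordinates,
// which is why siblings (same parent) always sit in the same 2x2 block.
Coord parent(int r, int c) { return {r / 2, c / 2}; }
```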
In our proposal, the quantization process is performed with two strategies: one coarser and another finer. The finer one consists in applying a scalar uniform quantization to the coefficients, and it can be applied with the normalization factor in the lifting transform (if the normalization factor in the lifting scheme is K and the quantization factor is Q, only a multiplication by K/Q is needed). The coarser one is based on removing bit planes from the least significant part of the coefficients, and it is performed while the algorithm is applied. Related to this bit-plane quantization, we define rplanes as the number of least significant bits to be removed.
We say that a coefficient is significant if it is different from zero after discarding its rplanes least significant bits; in other words, if its magnitude is not lower than 2^rplanes. In our proposal, significant coefficients are encoded by using arithmetic coding, with a symbol indicating the number of bits needed to encode that coefficient. Then, the significant bits and sign are binary encoded (in other words, raw encoded).
Regarding the insignificant coefficients, a coefficient is called a lower-tree root if this coefficient and all its descendants are insignificant (i.e., lower than 2^rplanes). The set formed by all these coefficients is a lower-tree. We use the LOWER symbol to point out that a coefficient is the root of a lower-tree. The rest of the coefficients in the lower-tree are labeled LOWER_COMPONENT, but are not encoded. On the other hand, if a coefficient is lower than 2^rplanes but does not form or belong to a lower-tree because it has at least one significant descendant, it is considered ISOLATED_LOWER.
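The significance test and the bit-count symbol for significant coefficients can be sketched in a few lines. The function names are ours, not the paper's.

```cpp
#include <cstdlib>

// A coefficient is significant when it is still nonzero after its rplanes
// least significant bits are discarded, i.e. |c| >= 2^rplanes.
bool isSignificant(int c, int rplanes) {
    return (std::abs(c) >> rplanes) != 0;
}

// Number of bits needed to represent |c| (position of its most significant
// set bit); for a significant coefficient, this count is the symbol handed
// to the arithmetic coder.
int nbits(int c) {
    int n = 0;
    for (unsigned v = static_cast<unsigned>(std::abs(c)); v != 0; v >>= 1) ++n;
    return n;
}
```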
A. Lower-Tree Encoder Algorithm
Once we have defined the basic concepts needed to understand the algorithm, we are ready to describe the coding process. It is a two-pass algorithm. During the first pass, the wavelet coefficients are properly labeled according to their significance, and the lower-trees are formed. In the second image pass, the coefficient values are coded by using the labels computed in the first pass.
The coding algorithm is presented in Algorithm 1. Let us describe this algorithm. At the encoder initialization, the number of bits needed to represent the highest coefficient (maxplane) is calculated. This value and the rplanes parameter are output to the decoder. Afterwards, we initialize an adaptive arithmetic encoder that is used to encode the number of bits required by the significant coefficients, along with the LOWER and ISOLATED_LOWER symbols.
In the first image pass, the lower-tree labeling process is performed in a recursive way, by building the lower-trees from the leaves to the root. In the first-level subbands, coefficients are scanned in 2 × 2 blocks and, if the four coefficients are insignificant (i.e., lower than 2^rplanes), they are considered part of the same lower-tree and are labeled LOWER_COMPONENT. Then, when scanning higher level subbands, if a 2 × 2 block has four insignificant coefficients and all their direct descendants are LOWER_COMPONENT, the coefficients in that block are also labeled LOWER_COMPONENT, increasing the size of the lower-tree.
However, when at least one coefficient in the block is significant, the lower-tree cannot continue growing. In that case, an insignificant coefficient in the block is labeled LOWER if all its descendants are LOWER_COMPONENT; otherwise, an insignificant coefficient is labeled ISOLATED_LOWER.
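The labeling rules can be sketched on a simplified complete quad-tree stored in a flat array, where node i has children 4i+1 to 4i+4. This deliberately ignores the special LL-subband geometry and the subband scanning order, so it is an illustrative approximation of the first pass, not the reference software.

```cpp
#include <cstdlib>
#include <vector>

enum Label { SIGNIFICANT, LOWER, ISOLATED_LOWER, LOWER_COMPONENT };

// Bottom-up labeling: returns true iff the subtree rooted at node i is
// entirely insignificant (every coefficient below 2^rplanes).
bool buildLowerTrees(const std::vector<int>& coef, std::vector<Label>& label,
                     int rplanes, std::size_t i = 0) {
    bool insig = (std::abs(coef[i]) >> rplanes) == 0;
    bool all = insig;
    if (4 * i + 4 < coef.size())                       // interior node
        for (std::size_t k = 1; k <= 4; ++k)
            if (!buildLowerTrees(coef, label, rplanes, 4 * i + k))
                all = false;
    if (all)        label[i] = LOWER_COMPONENT;        // whole subtree is low
    else if (insig) label[i] = ISOLATED_LOWER;         // has significant child
    else            label[i] = SIGNIFICANT;
    return all;
}

// A LOWER_COMPONENT whose parent does not belong to the same lower-tree is
// actually the root of its tree and is relabeled LOWER (the emitted symbol).
void promoteRoots(std::vector<Label>& label) {
    if (!label.empty() && label[0] == LOWER_COMPONENT) label[0] = LOWER;
    for (std::size_t i = 1; i < label.size(); ++i) {
        Label p = label[(i - 1) / 4];
        if (label[i] == LOWER_COMPONENT && p != LOWER_COMPONENT && p != LOWER)
            label[i] = LOWER;
    }
}
```

Only SIGNIFICANT, LOWER, and ISOLATED_LOWER nodes produce output in the second pass; LOWER_COMPONENT nodes cost nothing, which is the source of the compression gain.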
In the second pass, all the subbands are explored from the Nth level to the first one, and all their coefficients are scanned in medium-sized blocks (to take advantage of data locality). For each coefficient in a subband, if it is a lower-tree root or an isolated lower coefficient, the corresponding LOWER or ISOLATED_LOWER symbol is encoded. On the other hand, if a coefficient has been labeled LOWER_COMPONENT, no output is needed, because this coefficient is already represented by the lower-tree to which it belongs.
A significant coefficient is coded as follows. A symbol indicating the number of bits required to represent that coefficient is arithmetically coded, and the significant bits and sign are raw coded. However, two types of numeric symbols are used, depending on the direct descendants of that coefficient: (a) a regular numeric symbol, which simply shows the number of bits needed to encode a coefficient, and (b) a special LOWER numeric symbol, which not only indicates the number of bits of the coefficient, but also the fact that its descendants are labeled LOWER_COMPONENT, and thus they belong to a lower-tree not yet codified. This type of symbol is able to efficiently represent some special lower-trees, in which the root coefficient is significant and the rest of the coefficients are insignificant. Note that the number of symbols needed to represent both sets of numeric symbols is 2 × (maxplane − rplanes); therefore, the arithmetic encoder must be initialized to handle at least this amount of symbols, along with two additional symbols: the LOWER and ISOLATED_LOWER symbols. Observe that the first rplanes bits and the most significant nonzero bit are not encoded (the decoder can deduce the most significant nonzero bit from the arithmetic symbol that indicates the number of bits required to encode the coefficient).
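The symbol accounting above can be made concrete. The helper names are ours, and the alphabet-size formula simply restates the count given in the text (numeric symbols span the bit counts rplanes+1 to maxplane, in two variants, plus the two lower-tree symbols).

```cpp
#include <cstdlib>

// Number of bits of |c| (position of its most significant set bit).
int nbits(int c) {
    int n = 0;
    for (unsigned v = static_cast<unsigned>(std::abs(c)); v != 0; v >>= 1) ++n;
    return n;
}

// Alphabet size for the arithmetic coder: 2*(maxplane - rplanes) numeric
// symbols (regular and special variants), plus LOWER and ISOLATED_LOWER.
int alphabetSize(int maxplane, int rplanes) {
    return 2 * (maxplane - rplanes) + 2;
}

// Magnitude bits raw-coded for a significant coefficient: the leading 1 is
// implied by its nbits symbol and the rplanes lowest bits were discarded,
// leaving nbits - rplanes - 1 bits (the sign bit is sent separately).
int rawMagnitudeBits(int c, int rplanes) {
    return nbits(c) - rplanes - 1;
}
```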
An important difference between our tree-based wavelet algorithm and others, like [5] and [9], is how the coefficient-tree building process is detailed. Our algorithm includes a simple and efficient recursive method (within the E2 stage) to determine whether a coefficient has significant descendants in order to form coefficient-trees. However, EZW and SPIHT leave this as an open and implementation-dependent aspect, which drastically increases the algorithm complexity if it is not carefully solved (for example, if searches for significant descendants are independently calculated for every coefficient whenever they are needed). Anyway, an efficient implementation of EZW and SPIHT would increase their memory consumption, due to the need to store the maximum descendant for every coefficient, obtained during a pre-search stage.
B. Lower-Tree Decoder Algorithm
The decoding algorithm performs the reverse process in only one pass, since symbols are directly decoded from the incoming bitstream. All the subbands must be scanned in the order used by the encoder. The granularity of this scan is 2 × 2 coefficient blocks; thus, coefficients that share the same parent (i.e., sibling coefficients) are handled together. Note that all the coefficient blocks have an ascendant except those in the LLN subband.
When an insignificant coefficient is decoded, we set its value to 0, since its order of magnitude is unknown (we only know that it is lower than 2^rplanes). Later, insignificant coefficients in a lower-tree are automatically propagated: when the parent of four sibling coefficients has been set to 0, all the descendant coefficients are also assigned a value of 0, so that lower-trees are recursively generated. However, if an isolated lower coefficient has been decoded, it must not be propagated as a lower-tree; hence, a different value must be assigned. In this case, we keep this coefficient as ISOLATED_LOWER until its 2 × 2 direct descendants are scanned. At that moment, we can safely update its value to 0 without risk of unwanted propagation, because no more direct descendants of this coefficient will be scanned.
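The zero-propagation rule can be sketched on a simplified complete quad-tree stored in a flat array (node i has children 4i+1 to 4i+4); this is illustrative only, since the real decoder propagates zeros incrementally while scanning subbands rather than by explicit recursion.

```cpp
#include <vector>
#include <cstddef>

// When a LOWER symbol is decoded for node i, the whole subtree rooted there
// is known to be insignificant and every coefficient in it is set to 0.
void clearLowerTree(std::vector<int>& coef, std::size_t i) {
    coef[i] = 0;
    if (4 * i + 4 < coef.size())                 // interior node: recurse
        for (std::size_t k = 1; k <= 4; ++k)
            clearLowerTree(coef, 4 * i + k);
}
```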
C. Encoder Features
The proposed algorithm is resolution scalable, due to the selected scanning order and the nature of the wavelet transform. This way, the first subband that the decoder attains is the LLN subband, which is a low-resolution scaled version of the original image. Then, the decoder progressively receives the remaining subbands, from lower frequency subbands to higher ones, which are used as a complement to the low-resolution image to recursively double its size; this is known as the Mallat decomposition [17]. Spatial and SNR scalability are closely related features. Spatial scalability allows us to have different resolution versions of the same image. Through interpolation techniques, all these images could be resized to the original size, so that the larger an image is, the closer to the original it gets. Therefore, this algorithm could also be used for SNR scalability purposes. Other features, like the definition of Regions Of Interest (ROI), can be implemented in this coder by using a specific rplanes parameter for those coefficients in the trees representing that special area.

Fig. 2. A 2-level wavelet transform of an 8 × 8 example image.
As in most non-embedded encoders (like baseline JPEG, H.264 intra mode, non-embedded SPIHT/SPECK, etc.), precise rate control is not feasible in the proposed algorithm, because it is usually achieved with bit-plane coding (as in SPIHT) or iterative methods (as in JPEG 2000 or SFQ), which significantly increase the complexity of the encoder. Nevertheless, rate control is still possible, based on a fast analysis of the transformed image: one or more numerical parameters are extracted and employed to determine the quantization parameters needed to approximate a bit rate (see [15] for details).
D. A Simple Example
Fig. 2 shows a small 8 × 8 image that has been transformed using a 2-level DWT. In this section, we show the result of applying our tree-based encoder to this sample image. These wavelet coefficients have been scalar quantized, and the selected rplanes parameter is 2. Regarding the maxplane parameter, it can be easily computed as the number of bits needed to represent the highest coefficient. When the coarser quantization is applied using rplanes = 2, the values within the interval [−3, 3] are quantized to zero. Thus, all the significant coefficients can be represented by using from 3 to 6 bits, and hence the symbol set needed to represent the significance map consists of the LOWER and ISOLATED_LOWER symbols, the regular numeric symbols 3 to 6, and their special counterparts. In this symbol set, special numeric symbols are marked with a superscript, an ISOLATED_LOWER symbol represents an isolated lower coefficient, and a LOWER symbol indicates a regular lower-tree root. Coefficients belonging to a previously encoded lower-tree (those labeled LOWER_COMPONENT) are not encoded and, in our example, are represented with a star. Fig. 3 shows the symbols resulting from applying our algorithm to the example image and, if we scan the subbands from the highest level (N) to the lowest one (1), in 2 × 2 blocks, from left to right and top to bottom, the resulting bitstream is the one illustrated in Fig. 4. Note that the sign is not necessary for the LL subband, since its coefficients are always positive.
Fig. 3. Symbols resulting from applying our algorithm to the example image.
Implementation details and further adjustments may improve
the performance of a compression algorithm. In this section, we
give a guide for a successful implementation of the LTW algo-
rithm. All the improvements introduced in this section should
preserve fast processing.
Context coding has been widely used to improve the R/D performance in image compression. Although high-order context modeling presents high complexity, simpler context coding can be efficiently employed without a noticeable increase in execution time. We propose the use of two contexts, based on the significance of the left and upper coefficients: if both are insignificant or close to insignificant, a different model is used for coding. Adding a few more models to establish more significance levels (and thus more contexts) would improve compression efficiency. However, it would slow down the algorithm, mainly due to the context-formation evaluation and the higher memory usage.
Recall that maxplane indicates the number of bits needed to represent the highest coefficient in the wavelet decomposition. This value (along with rplanes) determines the number of symbols needed by the arithmetic encoder. However, coefficients in different subbands tend to differ in order of magnitude, so this parameter can be specifically set for every subband level. In this manner, the arithmetic encoder is initialized with exactly the number of symbols needed in every subband, which increases the coding efficiency.
Related to the coarser quantization, consider the case in which three coefficients in a block are insignificant and the fourth value is very close to insignificant. In this case, we can consider the entire block insignificant and label all its coefficients LOWER_COMPONENT. The slight error introduced is compensated by the saving in bit budget.
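A possible form of this heuristic is sketched below. The slack tolerance is our assumption, since the text does not quantify "very close to insignificant"; the function name is also ours.

```cpp
#include <array>
#include <cstdlib>

// Hypothetical sketch: a 2x2 block with at most one significant sibling, and
// that sibling at most `slack` above the significance threshold 2^rplanes,
// is treated as fully insignificant so it can join a lower-tree.
bool treatBlockAsInsignificant(const std::array<int, 4>& block,
                               int rplanes, int slack) {
    const int threshold = 1 << rplanes;          // significance threshold
    int significant = 0, excess = 0;
    for (int v : block)
        if ((std::abs(v) >> rplanes) != 0) {
            ++significant;
            excess = std::abs(v) - threshold;    // distance above threshold
        }
    return significant == 0 || (significant == 1 && excess <= slack);
}
```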
In general, each of these R/D improvements causes a slight
increase in PSNR (from 0.05 to 0.1 dB, depending on the source
image and the target bitrate).
V. Comparison With Other Wavelet Coders
We have implemented the lower-tree wavelet (LTW) coding and decoding algorithms in order to test their performance. They have been implemented using standard C++ language (without using assembly language or platform-dependent features), and the simulation tests have been performed on a regular personal computer (with a 500-MHz Pentium Celeron processor with 256 KB of L2 cache), generating image files that contain the compressed images, including the file headers required for self-contained decompression. The reader can easily perform new tests by using the LTW implementation available at

Fig. 4. Example image encoded using the proposed tree-based algorithm.
In order to compare our algorithm with other wavelet encoders, we have selected the classical Lena and Barbara images (monochrome, 8 bpp, 512 × 512), the VisTex texture database (monochrome, 8 bpp, 512 × 512) from the MIT Media Lab, and the Café and Woman images (monochrome, 8 bpp, 2560 × 2048) from the JPEG 2000 test bed. The widely used Lena image (from the USC) allows us to compare with practically all the published algorithms, since results are commonly expressed using this image. The VisTex set allows testing the behavior of the coders when coding textures. Finally, the Café and Woman images are less blurred and more complex than Lena, and represent pictures taken with a 5-Mpixel high-definition digital camera.
Table I provides R/D performance when using the Lena image. In general, we see that LTW achieves similar or better results than the other coders, including the JPEG 2000 standard, whose results have been obtained using the reference software Jasper [18], an official implementation included in the ISO/IEC 15444-5 standard. For SPIHT, we have used the implementation provided by the authors of the original paper. For H.264 intra-mode coding, results have been obtained with the JM 9.6 reference software. For this coder, the PSNR result for each bitrate has been interpolated from the nearest bitrates, since the quantization granularity is very coarse and an exact bitrate cannot be achieved.
PSNR results for the rest of images are shown in Table II.
In this only SPIHT and Jasper have been compared to LTW,
since the compiled versions of the rest of coders have not been
AND VisTex D
released, or results are not published for these images. We can
observe that R/D performance for Café is still higher using
our algorithm, although Jasper performs similar to LTW. For
Woman, we see that our algorithm exceeds in performance
both SPIHT and Jasper. In the case of the textures from the
VisTex database, we present the average PSNR in this Table,
which shows that, in general, LTW works better than JPEG
2000 and SPIHT for coding of textures. Only a few images
from the set (and only at high-bit rates) exhibit better coding
results for JPEG 2000 (details of the results for each image
in the set are available at
textures.pdf). However, at high bit rates, Jasper encodes high-
frequency images, like Barbara, better than LTW. Two reasons may explain this. First, when high-frequency images are encoded at high bit rates, many coefficients in the high-frequency subbands are significant, and hence our algorithm is not able to build large lower-trees. Second, recall that JPEG 2000 uses many contexts, which yields higher performance for high-frequency images. In our experiments, we have observed that if more than two contexts are used in LTW, the R/D performance for Barbara comes close to the results obtained with Jasper, but at the cost of higher execution time.
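The first reason can be made concrete with the lower-tree condition itself: a coefficient can only root a lower-tree when it and all of its descendants are insignificant against the current threshold, so dense significance in the high-frequency subbands fragments the trees. The sketch below is a simplified stand-in for the actual LTW labeling; the nested-tuple tree representation is an assumption for illustration only:

```python
def is_lower_tree(tree, threshold):
    """A node roots a lower-tree only if its coefficient and every
    descendant are insignificant (|c| < threshold). Here `tree` is a
    nested (coef, [children]) tuple, standing in for the parent-child
    links between wavelet subbands."""
    coef, children = tree
    if abs(coef) >= threshold:
        return False  # one significant coefficient breaks the whole tree
    return all(is_lower_tree(child, threshold) for child in children)

# A smooth region: all descendants insignificant -> one large lower-tree.
smooth = (1, [(2, []), (0, []), (1, [(0, []), (1, [])])])
print(is_lower_tree(smooth, 4))    # True

# A textured region: a single significant descendant prevents grouping.
textured = (1, [(2, []), (9, []), (1, [])])
print(is_lower_tree(textured, 4))  # False
```

This is why highly textured images such as Barbara yield many small trees, reducing the grouping benefit of the coder.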
On the other hand, the reader can perform a subjective evaluation of the images Lena and Barbara encoded at 0.125 bpp with SPIHT, JPEG 2000, and LTW using Fig. 5, which shows a detail of the face of Lena and another detail of Barbara's checked trousers.

Fig. 5. Details of the Lena and Barbara images (at 0.125 bpp) for a subjective evaluation, using (a) SPIHT, (b) JPEG 2000, and (c) LTW.

In the first group of pictures, if we look carefully at Lena's lips, eyes, and nose, we can observe that LTW offers the best results, achieving better definition and sharpness than SPIHT and especially than JPEG 2000. However, if we analyze a highly detailed image such as Barbara, and we focus on a high-frequency area (e.g., the trousers in the Barbara image), we can clearly see that JPEG 2000 gives the best results, while SPIHT, which decompresses an image full of blurred areas, produces the worst results. Both observations are consistent with the PSNR results, and confirm that tree-based algorithms work better with images of moderate to low detail, while JPEG 2000 is a better option for detailed images, for the reasons given above.
The main advantage of the LTW algorithm is its lower complexity. Table III shows that our algorithm greatly outperforms SPIHT and Jasper in terms of execution time. For medium-sized images (512 × 512), our encoder is from 3.25 to 11 times faster than Jasper, whereas the LTW decoder executes from 1.5 to 2.5 times faster than the Jasper decoder, depending on the rate. In the case of SPIHT, our encoder is from 2 to 2.5 times faster, and the decoding process is from 1.7 to 2.1 times faster. With larger images, like Café (2560 × 2048), the advantage is greater. The LTW encoder is from 5 to 16 times faster than Jasper, and the decoder is from 1.5 to 2.5 times faster. With respect to SPIHT, our algorithm encodes Café from 1.7 to 3 times faster, and decodes it from 1.5 to 2.5 times faster.
Note that in these tables we have only evaluated the coding and decoding processes, and not the transform stage, since the wavelet transform used is the same in all cases: the popular Daubechies 9/7 biorthogonal wavelet filter. Other wavelet transforms, like Daubechies 23/25, have shown better compression performance. However, this improvement is achieved with more filter taps, which increases the execution time of the DWT.

Measuring the complexity of algorithms is a hard issue. The execution time of an implementation depends largely on its optimization level. Thus, there are commercial implementations of JPEG 2000, not included in ISO/IEC 15444-5, that are faster than Jasper; however, they are usually implemented using platform-dependent pieces of code (in assembly language) and multimedia SIMD instructions. In our tests, the SPIHT, JPEG 2000, and LTW implementations are, as far as possible, written and compiled under the same conditions, using plain C/C++ and MS Visual C++ 6.0 for all of them.
Other non-embedded encoders, like SFQ and H.264 intra
mode, are much slower than LTW. In particular, SFQ coding is
more than 10 times slower than LTW, while the H.264 encoder
(JM 9.6 implementation) is more than 50 times slower than our
proposal, due to the time-consuming predictive algorithm used
in this encoder.
Besides compression performance and complexity, a third major issue usually considered in an image coder is memory usage. Our wavelet encoder and decoder are able to perform in-place processing of the wavelet coefficients, and thus they do not need to handle additional lists or memory-consuming structures. This way, only 21 Mbytes are needed to encode the Café image using LTW (note that 20 Mbytes are needed to store the image in memory using an integer type), whereas SPIHT and Jasper require 42 and 64 Mbytes, respectively.
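In-place processing of this kind can be sketched by reserving a value outside the coefficient range as a label and overwriting insignificant coefficients directly in the transform buffer, so that coder state needs no auxiliary lists. This is illustrative only; the label value and function below are assumptions, not the actual LTW symbol set:

```python
LOWER_LABEL = -(1 << 15)  # sentinel reserved as the "lower-tree" label
                          # (hypothetical value, outside coefficient range)

def mark_insignificant(coeffs, threshold):
    """Overwrite insignificant coefficients with a label in the same
    buffer, so coder state lives in-place instead of in extra lists."""
    for i, c in enumerate(coeffs):
        if abs(c) < threshold:
            coeffs[i] = LOWER_LABEL
    return coeffs

subband = [3, -1, 8, 0, -6, 2]
mark_insignificant(subband, 4)
print(subband)  # insignificant entries replaced by the in-place label
```

Because the labels reuse the coefficient storage, the working memory stays close to the size of the transformed image itself, which is consistent with the 21-Mbyte figure reported above.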
In this paper, we have presented a new wavelet image encoder based on the construction and efficient coding of wavelet lower-trees (LTW). Its compression performance is within the state of the art, achieving results similar to other popular algorithms (SPIHT is improved by 0.2–0.4 dB, and JPEG 2000 by 0.35 dB on average for Lena).

However, the main contribution of this algorithm is its lower complexity. Depending on the image size and bitrate, it is able to encode an image up to 15 times faster than Jasper and three times faster than SPIHT. Thus, it can be stated that the LTW coder is one of the fastest efficient image coders reported in the literature.

Therefore, due to its lower complexity, its high symmetry, its simple design, and its lack of memory overhead, we think that LTW is a good candidate for real-time interactive multimedia communications, allowing implementations both in hardware and in software.
Memory results were obtained with the Windows XP Task Manager (peak memory usage).
[1] J. Oliver and M. P. Malumbres, "Fast and efficient spatial scalable image compression using wavelet lower trees," in Proc. IEEE Data Compression Conf., Snowbird, UT, Mar. 2003, pp. 133–142.
[2] A. Gersho and R. M. Gray, Vector Quantization and Signal Compression. Norwell, MA: Kluwer, 1991.
[3] A. E. Jacquin, "Image coding based on a fractal theory of iterated contractive image transformation," IEEE Trans. Image Process., vol. 1, pp. 18–30, Jan. 1992.
[4] M. Antonini, M. Barlaud, P. Mathieu, and I. Daubechies, "Image coding using wavelet transform," IEEE Trans. Image Process., vol. 1, no. 2, pp. 205–220, Feb. 1992.
[5] J. M. Shapiro, "Embedded image coding using zerotrees of wavelet coefficients," IEEE Trans. Signal Process., vol. 41, no. 12, pp. 3445–3462, Dec. 1993.
[6] JPEG 2000 Image Coding System, ISO/IEC 15444-1, 2000.
[7] C. Chrysafis and A. Ortega, "Line-based, reduced memory, wavelet image compression," IEEE Trans. Image Process., vol. 9, pp. 378–389, Mar. 2000.
[8] W. Sweldens, "The lifting scheme: A custom-design construction of biorthogonal wavelets," Appl. Comput. Harmon. Anal., vol. 3, pp. 186–200, 1996.
[9] A. Said and W. A. Pearlman, "A new, fast, and efficient image codec based on set partitioning in hierarchical trees," IEEE Trans. Circuits Syst. Video Technol., vol. 6, no. 3, pp. 243–250, Jun. 1996.
[10] Z. Xiong, K. Ramchandran, and M. Orchard, "Space-frequency quantization for wavelet image coding," IEEE Trans. Image Process., vol. 6, no. 5, pp. 677–693, May 1997.
[11] W. A. Pearlman, A. Islam, N. Nagaraj, and A. Said, "Efficient, low-complexity image coding with a set-partitioning embedded block coder," IEEE Trans. Circuits Syst. Video Technol., vol. 14, no. 11, pp. 1219–1235, Nov. 2004.
[12] C. Chrysafis, A. Said, A. Drukarev, A. Islam, and W. A. Pearlman, "SBHP - a low complexity wavelet coder," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., 2000, pp. 2035–2038.
[13] D. Taubman, "High performance scalable image compression with EBCOT," IEEE Trans. Image Process., vol. 9, no. 7, pp. 1158–1170, Jul. 2000.
[14] W. A. Pearlman, "Trends of tree-based, set partitioning compression techniques in still and moving image systems," in Proc. Picture Coding Symp., Apr. 2001, pp. 1–8.
[15] O. Lopez, M. Martinez-Rach, J. Oliver, and M. P. Malumbres, "A heuristic bit rate control for non-embedded wavelet image coders," in Proc. Int. Symp. ELMAR Focused on Multimedia Signal Process., Jun. 2006, pp. 13–16.
[16] A. Al, B. P. Rao, S. S. Kudva, S. Babu, D. Sumam, and A. V. Rao, "Quality and complexity comparison of H.264 intra mode with JPEG2000 and JPEG," in Proc. IEEE ICIP, 2004, pp. 525–528.
[17] S. Mallat, "A theory for multiresolution signal decomposition," IEEE Trans. Pattern Anal. Mach. Intell., vol. 11, no. 7, pp. 669–718, Jul. 1989.
[18] M. Adams, Jasper Software Reference Manual (v1.6), ISO/IEC JTC 1/SC 29/WG 1 N 2415, Oct. 2002.