Conference Paper

Adaptive wavepacket decomposition and quantization in wavelet-based image coding

Dept. of Electrical Engineering, University of Ottawa, Ontario
DOI: 10.1109/MWSCAS.1995.510293
Conference: Proceedings of the 38th Midwest Symposium on Circuits and Systems, 1995, Volume 2
Source: IEEE Xplore

ABSTRACT The wavelet transform has recently emerged as a promising technique for image compression due to its flexibility in the multiresolution representation of image signals. Current wavelet-based image compression techniques use compute-intensive procedures to find the optimal wavepacket decomposition and coefficient quantization for a given image. In this paper, we first propose an efficient wavepacket decomposition algorithm. We then propose a simple technique for finding the quantization step sizes of the various wavelet bands when scalar quantization is employed. The proposed techniques provide good coding performance at substantially reduced complexity.
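The abstract does not spell out either the decomposition rule or the step-size rule, so the Python sketch below only illustrates the general setting: per-band uniform scalar quantization with step sizes derived from simple band statistics. The proportional-to-standard-deviation rule, the quality parameter, and the names band_step_sizes and quantize_band are hypothetical illustrations, not the authors' method.

import numpy as np

def band_step_sizes(bands, quality=0.5):
    # Hypothetical rule: step size proportional to each band's standard
    # deviation, scaled by a global quality factor (not the paper's rule).
    return {name: max(quality * float(np.std(coeffs)), 1e-6)
            for name, coeffs in bands.items()}

def quantize_band(coeffs, step):
    # Mid-tread uniform scalar quantizer: integer indices and reconstruction.
    indices = np.round(coeffs / step).astype(int)
    return indices, indices * step

# Toy stand-ins for two wavelet subbands of an image.
rng = np.random.default_rng(0)
bands = {
    "LL": rng.normal(scale=8.0, size=1024),   # low-pass band: large variance
    "HH": rng.laplace(scale=1.0, size=1024),  # high-pass band: small variance
}
steps = band_step_sizes(bands, quality=0.5)
for name, coeffs in bands.items():
    idx, rec = quantize_band(coeffs, steps[name])
    mse = float(np.mean((coeffs - rec) ** 2))
    print(name, "step=%.3f" % steps[name], "MSE=%.4f" % mse)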

Related publications:

  • ABSTRACT: Amplitude quantization and permutation encoding are two of the many approaches to efficient digitization of analog data. It is shown in this paper that these seemingly different approaches are actually equivalent, in the sense that their optimum rate-versus-distortion performances are identical. Although this equivalence becomes exact only when the quantizer output is perfectly entropy coded and the permutation code block length is infinite, it nonetheless has practical consequences both for quantization and for permutation encoding. In particular, this equivalence permits us to deduce that permutation codes provide a readily implementable block-coding alternative to buffer-instrumented variable-length codes. Moreover, the abundance of methods in the literature for optimizing quantizers with respect to various criteria can be translated directly into algorithms for generating source permutation codes that are optimum for the same purposes. The optimum performance attainable with quantizers (hence, permutation codes) of a fixed entropy rate is explored too. The investigation reveals that quantizers with uniformly spaced thresholds are quasi-optimum with considerable generality, and are truly optimum in the mean-squared sense for data having either an exponential or a Laplacian distribution. An attempt is made to provide some analytical insight into why simple uniform quantization is so good so generally. (A numerical sketch of entropy-coded uniform quantization of Laplacian data appears after this list.)
    IEEE Transactions on Information Theory, 12/1972
  • ABSTRACT: An optimal bit allocation algorithm is presented that is suitable for all practical situations. Each source to be coded is assumed to have its own set of admissible quantizers (either scalar or vector quantizers), whose bit rates need not be integers. The distortion-versus-rate characteristic of each quantizer set may have an arbitrary shape. The algorithm is very simple in structure and can be applied to any practical coding scheme (such as a subband coder) that needs dynamic bit allocation. (A Lagrangian-style allocation sketch appears after this list.)
    Proceedings of the 1988 International Conference on Acoustics, Speech, and Signal Processing (ICASSP-88), 05/1988
  • ABSTRACT: Adapted waveform analysis uses a library of orthonormal bases and an efficiency functional to match a basis to a given signal or family of signals. It permits efficient compression of a variety of signals, such as sound and images. The predefined libraries of modulated waveforms include orthogonal wavelet-packets and localized trigonometric functions, and have reasonably well-controlled time-frequency localization properties. The idea is to build, out of the library functions, an orthonormal basis relative to which the given signal or collection of signals has the lowest information cost. The method relies heavily on the remarkable orthogonality properties of the new libraries: all expansions in a given library conserve energy and are thus comparable. Several cost functionals are useful; one of the most attractive is Shannon entropy, which has a geometric interpretation in this context. (A toy best-basis sketch appears after this list.)
    IEEE Transactions on Information Theory, 04/1992
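As a numerical companion to the first related publication above, here is a minimal Python sketch, assuming a unit-variance Laplacian source and a mid-tread uniform quantizer: it estimates the empirical entropy of the quantizer indices (the rate achievable with ideal entropy coding) and the mean-squared distortion for a few step sizes. It only illustrates the operating points discussed there; it is not the paper's analysis.

import numpy as np

def uniform_quantize(x, step):
    # Mid-tread uniform quantizer: indices and reconstructed values.
    idx = np.round(x / step).astype(int)
    return idx, idx * step

def empirical_entropy(idx):
    # Empirical entropy (bits per sample) of the quantizer output indices.
    _, counts = np.unique(idx, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(1)
# Laplacian source with unit variance (variance of Laplace(b) is 2*b**2).
x = rng.laplace(scale=1.0 / np.sqrt(2.0), size=200000)

for step in (0.25, 0.5, 1.0):
    idx, rec = uniform_quantize(x, step)
    mse = float(np.mean((x - rec) ** 2))
    print("step=%.2f  rate~%.2f bits  MSE=%.4f" % (step, empirical_entropy(idx), mse))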
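For the second related publication, the sketch below shows a generic Lagrangian bit-allocation loop over per-source sets of (rate, distortion) points. It matches the stated setting (scalar or vector quantizers, non-integer rates, arbitrary rate-distortion shapes) but is not necessarily the published algorithm; allocate_bits and the toy data are illustrative.

def allocate_bits(sources, rate_budget, iters=60):
    # `sources` is a list; each entry lists the (rate, distortion) pairs of
    # that source's admissible quantizers.  For a multiplier lam, every
    # source independently picks the quantizer minimizing D + lam * R; lam
    # is bisected so the total rate approaches the budget from below.
    def pick(lam):
        choice = [min(q, key=lambda rd: rd[1] + lam * rd[0]) for q in sources]
        return choice, sum(r for r, _ in choice), sum(d for _, d in choice)

    lo, hi = 0.0, 1e6              # lam = 0 spends rate freely; large lam is frugal
    best = pick(hi)                # start from the minimum-rate choice
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        cand = pick(mid)
        if cand[1] <= rate_budget:
            best, hi = cand, mid   # feasible: try a smaller multiplier
        else:
            lo = mid               # over budget: increase the multiplier
    return best

# Toy example: three subband sources with a few admissible quantizers each.
sources = [
    [(0.0, 10.0), (1.0, 4.0), (2.0, 1.5), (3.0, 0.6)],
    [(0.0, 6.0), (0.5, 3.5), (1.5, 1.2), (2.5, 0.5)],
    [(0.0, 2.0), (0.5, 1.0), (1.0, 0.4)],
]
choice, rate, dist = allocate_bits(sources, rate_budget=3.0)
print("per-source picks:", choice, " total rate:", rate, " total distortion:", dist)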
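For the third related publication, here is a toy best-basis search in the same spirit, assuming a 1-D signal, an orthonormal Haar split, and an additive Shannon-entropy cost: the wavelet-packet split tree is pruned by keeping a node whenever its own cost beats the combined cost of its (recursively optimized) children. The Haar filter, the 1-D setting, and the function names are simplifications for brevity, not the paper's full library of waveforms.

import numpy as np

def entropy_cost(coeffs, eps=1e-12):
    # Additive entropy cost -sum c**2 * log(c**2); assumes the root signal
    # was normalized to unit energy, which the orthonormal splits preserve.
    p = coeffs ** 2
    p = p[p > eps]
    return float(-np.sum(p * np.log(p)))

def haar_split(coeffs):
    # One orthonormal Haar step: half-band "low" and "high" coefficients.
    even, odd = coeffs[0::2], coeffs[1::2]
    return (even + odd) / np.sqrt(2.0), (even - odd) / np.sqrt(2.0)

def best_basis(coeffs, max_level, label="root"):
    # Keep this node if its cost is lower than its two children's best cost.
    keep_cost = entropy_cost(coeffs)
    if max_level == 0 or len(coeffs) < 2:
        return keep_cost, [label]
    low, high = haar_split(coeffs)
    c_lo, leaves_lo = best_basis(low, max_level - 1, label + ".L")
    c_hi, leaves_hi = best_basis(high, max_level - 1, label + ".H")
    if c_lo + c_hi < keep_cost:
        return c_lo + c_hi, leaves_lo + leaves_hi
    return keep_cost, [label]

# Toy 1-D example: a single tone, normalized to unit energy.
t = np.arange(256)
x = np.sin(2.0 * np.pi * 0.23 * t)
x = x / np.linalg.norm(x)
cost, leaves = best_basis(x, max_level=4)
print("chosen subbands:", leaves)
print("entropy cost   :", round(cost, 3))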