Article

Fast vector quantisation encoding algorithm using zero-tree data structure


Abstract

A fast codeword search algorithm based on partial distance search (PDS) and a zero-tree data structure is presented. Before the fast search, the zero-trees of the wavelet coefficients of the codewords and sourcewords are first identified. The PDS is then performed only over the wavelet coefficients which are not inside the zero-trees. The algorithm is well suited to applications where some degradation in average distortion is acceptable in order to achieve very low arithmetic complexity.
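As a rough illustration of the encoding step described in the abstract, the sketch below restricts a partial distance search to the wavelet coefficients that lie outside the zero-trees. The function and argument names are assumptions made here for illustration, not the authors' implementation, and the identification of the zero-trees themselves is assumed to have been done beforehand.

```python
import numpy as np

def pds_search_masked(source, codebook, active_mask):
    """Partial distance search restricted to the wavelet coefficients that
    lie outside the identified zero-trees (illustrative sketch).

    source      : 1-D array of wavelet coefficients of the source word
    codebook    : 2-D array, one codeword (wavelet coefficients) per row
    active_mask : boolean array marking coefficients outside the zero-trees
    """
    active = np.flatnonzero(active_mask)      # only these indices are compared
    best_index, best_dist = -1, np.inf
    for i, codeword in enumerate(codebook):
        partial = 0.0
        for j in active:
            diff = source[j] - codeword[j]
            partial += diff * diff
            if partial >= best_dist:          # PDS rejection: stop early
                break
        else:                                 # codeword survived all coefficients
            best_index, best_dist = i, partial
    return best_index, best_dist
```

The inner break is the PDS rejection: once the running partial sum exceeds the best distance found so far, the rest of that codeword's coefficients are never examined, which is where the arithmetic saving comes from.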


... They have used PDS over wavelet transforms for fast block-matching algorithms for video coding [77] and in variable-rate vector quantisers [78]. For applications where arithmetic complexity is of utmost concern, Hwang and Chen [74] propose the use of a zero-tree data structure of the wavelet coefficients [184,152] allowing controlled degradations in average distortion. [71], and between 9.9% and 12.2% for that of Li and Salari [103]. ...
... For instance, the schemes proposed by Ramasubramanian and Paliwal [137] and Tai et al. [163] are not full-search equivalent although they claim that the resulting performance degradation in terms of the distortion measure is very small when a large training data set is used to create the VQ codebook. Hwang and Chen [74], however, argued that the distortion-complexity relationship is uncertain for these algorithms. Decreasing the size of the training set can increase the average distortion, but it does not necessarily result in lower arithmetic complexity. ...
... Some implementations select candidate matches arbitrarily [127,103,71]. Others select the candidate matches in a manner which follows naturally from the progressive projections, such as the PDS, used in the elimination procedure [181,133,99,7,74]. ...
Article
Full-text available
A fundamental activity common to many image processing, pattern classification, and clustering algorithms involves searching a set of n k-dimensional data items for the one which is nearest to a given target item with respect to a distance function. Our goal is to find fast search algorithms which are full-search equivalent, that is, the resulting match is as good as what we could obtain if we were to search the set exhaustively. We propose a framework made up of three components, namely (i) a technique for obtaining a good initial match, (ii) an inexpensive method for determining whether the current match is a full-search equivalent match, and (iii) an effective technique for improving the current match. Our approach is to consider good solutions for each component in order to find an algorithm which balances the overall complexity of the search. We also propose a technique for hierarchical ordering and cluster elimination using a minimal cost spanning tree. Our experiments on vector quantisation coding of images show that the framework and techniques we proposed can be used to construct suitable algorithms which, for most of our data sets, deliver full-search equivalent matches at an average arithmetic cost of less than O(k log n) while using only O(n) space.
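A minimal sketch of how the three components could fit together under the squared Euclidean distance, using a norm-ordered initial match, the reverse triangle inequality as the inexpensive full-search-equivalence test, and a partial distance search as the improvement step. This is an assumed skeleton for illustration, not the authors' minimal-cost-spanning-tree construction.

```python
import numpy as np

def nearest_codeword(x, codebook):
    """Full-search-equivalent nearest-neighbour search, sketching the
    three-component framework (illustrative assumptions only).

    (i)   initial match: the codeword whose norm is closest to ||x||
    (ii)  equivalence test: (||x|| - ||c||)^2 >= d_best rules c out,
          by the reverse triangle inequality
    (iii) improvement: partial distance search over surviving candidates
    """
    xnorm = np.linalg.norm(x)
    norms = np.linalg.norm(codebook, axis=1)
    order = np.argsort(np.abs(norms - xnorm))   # most promising first

    best = int(order[0])                        # (i) initial match
    best_dist = float(np.sum((x - codebook[best]) ** 2))

    for i in order[1:]:
        if (norms[i] - xnorm) ** 2 >= best_dist:
            break                               # (ii) remaining candidates are no closer
        partial = 0.0                           # (iii) PDS refinement
        for xj, cj in zip(x, codebook[i]):
            partial += (xj - cj) ** 2
            if partial >= best_dist:
                break
        else:
            best, best_dist = int(i), partial
    return best, best_dist
```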
... For example, fast nearest neighbor algorithms have been developed for non-Euclidean metrics [49][50][51], for finding only the nearest neighbor [52], and for finding only the approximate nearest neighbors [53]. Also, some algorithms are designed specifically for images [54][55][56][57] and some approximate algorithms have been developed that reduce computation at the expense of accuracy [13,[51][52][53][58][59][60][61][62][63][64]. ...
Article
Full-text available
Previous studies have shown that local models are among the most accurate methods for predicting chaotic time series. This work discusses a number of improvements to local models that reduce computation, improve the model accuracy, or both. Local models are often criticized because they require much more computation than most global models to calculate the model outputs. Usually, most of this time is taken to find the nearest neighbors in the data set. This work introduces two new nearest neighbor algorithms that drastically reduce this time and enable local models to be evaluated very quickly. The two new algorithms are compared with fifteen other algorithms on a variety of benchmark problems. Local linear models are the most popular, and often the most accurate, type of local model. However, using an appropriate means of regularization to eliminate the effects of an ill-conditioned matrix inverse is crucial to producing accurate predictions. This work describes the two most popular ty...
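As a hedged sketch of the kind of local model discussed above: the k nearest delay vectors are found by brute force and a ridge term regularises the local least-squares fit against the ill-conditioning the abstract mentions. The data layout (rows are consecutive delay vectors with the newest sample last) and all names are assumptions.

```python
import numpy as np

def local_linear_predict(delays, query, k=20, ridge=1e-3):
    """One-step local linear prediction for a scalar time series (sketch).

    delays : 2-D array of consecutive delay vectors, newest sample last in each row
    query  : the current delay vector to predict one step ahead from
    """
    X = delays[:-1]                  # each row predicts ...
    y = delays[1:, -1]               # ... the next sample of the series
    d = np.linalg.norm(X - query, axis=1)
    idx = np.argsort(d)[:k]          # brute-force k nearest neighbours
    Xk = np.hstack([X[idx], np.ones((len(idx), 1))])   # affine local model
    # ridge regularisation keeps the normal matrix well conditioned
    A = Xk.T @ Xk + ridge * np.eye(Xk.shape[1])
    w = np.linalg.solve(A, Xk.T @ y[idx])
    return float(np.append(query, 1.0) @ w)
```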
Article
A new fast codeword search algorithm for vector quantisers is presented. This algorithm performs a fast search in the wavelet domain of the codewords using the partial distance search technique. Simulation results show that the algorithm has only 2% of the arithmetic complexity of the exhaustive search method.
Article
The embedded zerotree wavelet algorithm (EZW) is a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code. The embedded code represents a sequence of binary decisions that distinguish an image from the “null” image. Using an embedded coding algorithm, an encoder can terminate the encoding at any point, thereby allowing a target rate or target distortion metric to be met exactly. Also, given a bit stream, the decoder can cease decoding at any point in the bit stream and still produce exactly the same image that would have been encoded at the bit rate corresponding to the truncated bit stream. In addition to producing a fully embedded bit stream, the EZW consistently produces compression results that are competitive with virtually all known compression algorithms on standard test images. Yet this performance is achieved with a technique that requires absolutely no training, no pre-stored tables or codebooks, and no prior knowledge of the image source. The EZW algorithm is based on four key concepts: (1) a discrete wavelet transform or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, (3) entropy-coded successive-approximation quantization, and (4) universal lossless data compression, which is achieved via adaptive arithmetic coding.
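The sketch below illustrates concept (2), the zerotree prediction, in isolation: a coefficient that is insignificant at the current threshold and has no significant descendant can be coded with a single zerotree-root symbol. The `children` callback, the symbol names, and the data layout are assumptions; a full EZW coder also maintains dominant and subordinate lists and entropy-codes the symbols with an adaptive arithmetic coder.

```python
def classify(coord, threshold, coeffs, children):
    """Classify one wavelet coefficient for an EZW-style dominant pass (sketch).

    coord     : coordinate of the coefficient being classified
    threshold : current significance threshold (halved on each pass)
    coeffs    : mapping from coordinate to coefficient value
    children  : function returning the child coordinates of a coordinate
    """
    value = coeffs[coord]
    if abs(value) >= threshold:
        return 'POSITIVE' if value > 0 else 'NEGATIVE'
    # insignificant here: check whether any descendant is significant
    stack = list(children(coord))
    while stack:
        node = stack.pop()
        if abs(coeffs[node]) >= threshold:
            return 'ISOLATED_ZERO'        # cannot be a zerotree root
        stack.extend(children(node))
    return 'ZEROTREE_ROOT'                # entire descendant tree is insignificant
```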
Article
Fast search algorithms are proposed and studied for vector quantization encoding using the K-dimensional (K-d) tree structure. Here, the emphasis is on the optimal design of the K-d tree for efficient nearest neighbor search in multidimensional space under a bucket-Voronoi intersection search framework. Efficient optimization criteria and procedures are proposed for designing the K-d tree, for the case when the test data distribution is available (as in the vector quantization application, in the form of training data) as well as for the case when the test data distribution is not available and only the Voronoi intersection information is to be used. The criteria and the bucket-Voronoi intersection search procedure are studied in the context of vector quantization encoding of speech waveforms. They are empirically observed to achieve constant search complexity for O(log N) tree depths and are found to be more efficient in reducing the search complexity. A geometric interpretation is given for the maximum product criterion, explaining the reasons for its inefficiency with respect to the optimization criteria.
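A sketch of the encode-time half of a bucket-Voronoi-intersection search, assuming that the K-d tree and the per-bucket candidate lists (the codewords whose Voronoi regions intersect each bucket) were designed offline as described above. The dictionary-based node layout is an assumption for illustration.

```python
import numpy as np

def kd_bucket_encode(x, tree, codebook):
    """Encode one vector with a bucket-Voronoi-intersection K-d tree (sketch).

    Internal nodes: {'dim': d, 'split': s, 'left': node, 'right': node}
    Leaf (bucket):  {'candidates': [indices of codewords whose Voronoi
                     regions intersect the bucket]}
    """
    node = tree
    while 'candidates' not in node:        # descend to the bucket containing x
        node = node['left'] if x[node['dim']] <= node['split'] else node['right']
    cand = node['candidates']
    # exhaustive search only over the bucket's (short) candidate list
    dists = np.sum((codebook[cand] - x) ** 2, axis=1)
    return cand[int(np.argmin(dists))]
```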
Article
In this paper, two efficient codebook searching algorithms for vector quantization (VQ) are presented. The first fast search algorithm utilizes the compactness property of signal energy in the transform domain and the geometrical relations between the input vector and every codevector to eliminate those codevectors that have no chance of being the closest codeword to the input vector. It achieves full-search-equivalent performance. Compared with other fast methods of the same kind, this algorithm requires the fewest multiplications and the smallest total number of distortion measurements. Then, a suboptimal searching method, which sacrifices reconstructed signal quality to speed up the nearest-neighbor search, is presented. This algorithm performs the search on predefined small subcodebooks instead of the whole codebook to find the closest codevector. Experimental results show that this method not only needs less CPU time to encode an image but also suffers less loss of reconstructed signal quality than tree-structured VQ does.
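For the first, full-search-equivalent algorithm, the sketch below uses a single geometric elimination rule of the kind the abstract alludes to: the DC (sum) component gives the bound (sum(x) - sum(c))^2 / k <= ||x - c||^2 for k-dimensional vectors, so codewords are visited in order of that bound and the search stops once the bound exceeds the best distance found so far. The paper combines more than one such relation; this is an assumed, simplified instance.

```python
import numpy as np

def dc_eliminating_search(x, codebook):
    """Full-search-equivalent VQ encoding with a DC-term elimination rule (sketch).

    By the Cauchy-Schwarz inequality, (sum(x) - sum(c))^2 / k <= ||x - c||^2,
    so a codeword whose DC bound already exceeds the best squared distance
    found so far cannot be the nearest codeword.
    """
    k = x.shape[0]
    xsum = float(x.sum())
    sums = codebook.sum(axis=1)
    order = np.argsort(np.abs(sums - xsum))       # most promising DC terms first

    best = int(order[0])
    best_dist = float(np.sum((x - codebook[best]) ** 2))
    for i in order[1:]:
        if (sums[i] - xsum) ** 2 / k >= best_dist:
            break                                 # later bounds only get larger
        dist = float(np.sum((x - codebook[i]) ** 2))
        if dist < best_dist:
            best, best_dist = int(i), dist
    return best
```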