Article

Relating fractal image compression to transform methods [microform].

Abstract

Thesis (M. Math.)--University of Waterloo, 1995. Includes bibliographical references.


Article
Full-text available
Fractal image compression is a technique based on the representation of an image by a contractive transform, on the space of images, for which the fixed point is close to the original image. This broad principle encompasses a very wide variety of coding schemes, many of which have been explored in the rapidly growing body of published research. While certain theoretical aspects of this representation are well established, relatively little attention has been given to the construction of a coherent underlying image model that would justify its use. Most purely fractal-based schemes are not competitive with the current state of the art, but hybrid schemes incorporating fractal compression and alternative techniques have achieved considerably greater success. This review represents a survey of the most significant advances, both practical and theoretical, since the publication of Jacquin's (1990) original fractal coding scheme.
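For orientation, the broad principle admits a one-line formalization (standard fixed-point material, stated here as background rather than quoted from the survey): if $T$ is a transform on the space of images that is contractive with factor $s < 1$, it has a unique fixed point $x_T$, and the collage theorem bounds its distance from the original image $x$:

\[
  x_T = T(x_T), \qquad d(x, x_T) \le \frac{1}{1-s}\, d\bigl(x, T(x)\bigr).
\]

The encoder therefore searches for a $T$ that makes the collage error $d(x, T(x))$ small, and the decoder recovers $x_T$ by iterating $T$ from an arbitrary starting image.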
Conference Paper
An enhanced fractal image coding scheme based on quadtree partitioning is presented in this paper. In comparison with other existing schemes it achieves the lowest bit rate as well as the highest quality; compared with the fixed-size partition scheme, it obtains a higher compression ratio and requires less encoding time with minimal loss in PSNR. Since fractal image modeling is resolution independent, it can lead to significant improvement. Our results show superior performance in terms of both compression ratio and coding time.
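As a rough illustration of the partitioning step only (the split criterion, the max_error threshold, and the variance test below are hypothetical stand-ins, not taken from the paper):

import numpy as np

def quadtree_partition(img, x, y, size, min_size, max_error, blocks):
    """Recursively split (x, y, size) until each block is 'flat enough'."""
    block = img[y:y + size, x:x + size]
    # Hypothetical homogeneity test: variance as a stand-in for the
    # range-domain matching error an actual fractal coder would use.
    if size <= min_size or block.var() <= max_error:
        blocks.append((x, y, size))
        return
    half = size // 2
    for dy in (0, half):
        for dx in (0, half):
            quadtree_partition(img, x + dx, y + dy, half,
                               min_size, max_error, blocks)

img = np.random.rand(256, 256)
blocks = []
quadtree_partition(img, 0, 0, 256, min_size=4, max_error=0.02, blocks=blocks)
print(len(blocks), "range blocks")

Smooth regions end up covered by a few large blocks while detailed regions are refined, which is where the bit-rate advantage over fixed-size partitioning comes from.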
Article
We are concerned with the fractal approximation of multidimensional functions in $L^2$. In particular, we treat a position-dependent approximation with no searching, using orthogonal bases of $L^2$. We describe a framework that establishes a connection between classic orthogonal approximation and fractal approximation. The main theorem allows easy and unambiguous computation of the parameters of the approximating function. From the computational perspective, we can avoid solving the often ill-conditioned linear systems required by earlier fractal approximation techniques. Moreover, using orthogonal bases we obtain the most compact representation of the approximation. As a direct application we show some results on the compression of grayscale digital images. Key words: deterministic fractal geometry, approximation theory, image compression, orthogonal bases. 1 Introduction: Some years ago it was shown that deterministic fractal geometry is capable of producing co...
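The computational advantage can be stated with the standard projection identity (a generic sketch, not the paper's specific theorem): with an orthonormal basis $\{e_i\}$, the best $L^2$ approximation has coefficients that are plain inner products,

\[
  \hat v = \sum_i c_i\, e_i, \qquad c_i = \langle v, e_i \rangle,
\]

whereas a non-orthogonal system requires solving $Gc = b$ with Gram matrix $G_{ij} = \langle e_i, e_j \rangle$, which may be ill conditioned.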
Article
Full-text available
An algorithm and data structure are presented for searching a file containing N records, each described by k real-valued keys, for the m closest matches or nearest neighbors to a given query record. The computation required to organize the file is proportional to kN log N. The expected number of records examined in each search is independent of the file size. The expected computation to perform each search is proportional to log N. Empirical evidence suggests that, except for very small files, this algorithm is considerably faster than other methods.
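The data structure in question is the k-d tree. Below is a minimal nearest-neighbor sketch (single nearest neighbor rather than the paper's m closest matches, and without the optimized splitting rules of the original):

import numpy as np

def build(points, depth=0):
    """Build a k-d tree by splitting on the median along cycling axes."""
    if len(points) == 0:
        return None
    axis = depth % points.shape[1]
    points = points[points[:, axis].argsort()]
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build(points[:mid], depth + 1),
            "right": build(points[mid + 1:], depth + 1)}

def nearest(node, query, best=None):
    if node is None:
        return best
    d = np.linalg.norm(query - node["point"])
    if best is None or d < best[0]:
        best = (d, node["point"])
    diff = query[node["axis"]] - node["point"][node["axis"]]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, query, best)
    if abs(diff) < best[0]:      # search ball crosses the splitting plane
        best = nearest(far, query, best)
    return best

pts = np.random.rand(1000, 4)    # N records with k = 4 real-valued keys
dist, pt = nearest(build(pts), np.random.rand(4))
print(dist)

The pruning test is what keeps the expected work logarithmic: whole subtrees are skipped whenever the current best distance cannot reach across their splitting plane.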
Article
Full-text available
The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
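A toy floating-point coder illustrates the interval-narrowing idea (illustrative only: the model probabilities are arbitrary, and practical coders use integer arithmetic with renormalization rather than Python floats):

def encode(symbols, cum):   # cum: symbol -> (low, high) cumulative range
    lo, hi = 0.0, 1.0
    for s in symbols:
        span = hi - lo
        lo, hi = lo + span * cum[s][0], lo + span * cum[s][1]
    return (lo + hi) / 2     # any number in [lo, hi) identifies the message

def decode(code, n, cum):
    out = []
    lo, hi = 0.0, 1.0
    for _ in range(n):
        span = hi - lo
        for s, (cl, ch) in cum.items():
            if lo + span * cl <= code < lo + span * ch:
                out.append(s)
                lo, hi = lo + span * cl, lo + span * ch
                break
    return out

model = {"a": (0.0, 0.6), "b": (0.6, 0.9), "c": (0.9, 1.0)}
msg = list("abacab")
print(decode(encode(msg, model), len(msg), model) == msg)   # True

Because the final interval width is the product of the symbol probabilities, the code length approaches the entropy of the model, which is the separation of modeling from coding the abstract emphasizes.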
Article
Full-text available
Multiresolution representations are effective for analyzing the information content of images. The properties of the operator which approximates a signal at a given resolution were studied. It is shown that the difference of information between the approximation of a signal at the resolutions $2^{j+1}$ and $2^j$ (where $j$ is an integer) can be extracted by decomposing this signal on a wavelet orthonormal basis of $L^2(\mathbb{R}^n)$, the vector space of measurable, square-integrable $n$-dimensional functions. In $L^2(\mathbb{R})$, a wavelet orthonormal basis is a family of functions which is built by dilating and translating a unique function $\psi(x)$. This decomposition defines an orthogonal multiresolution representation called a wavelet representation. It is computed with a pyramidal algorithm based on convolutions with quadrature mirror filters. The wavelet representation lies between the spatial and Fourier domains. For images, the wavelet representation differentiates several spatial orientations. The application of this representation to data compression in image coding, texture discrimination and fractal analysis is discussed.
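The Haar filter pair gives the shortest possible instance of this pyramidal scheme. The sketch below (not Mallat's implementation) computes one decomposition level of a 2-D image and checks that the orthonormal basis preserves energy:

import numpy as np

def haar_level(img):
    a  = (img[0::2, :] + img[1::2, :]) / np.sqrt(2)   # rows: lowpass
    d  = (img[0::2, :] - img[1::2, :]) / np.sqrt(2)   # rows: highpass
    ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)       # approximation at the coarser scale
    lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)       # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)       # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)       # diagonal detail
    return ll, (lh, hl, hh)

img = np.random.rand(8, 8)
ll, details = haar_level(img)
# Orthonormality: energy is preserved across the decomposition.
print(np.isclose((img**2).sum(),
                 (ll**2).sum() + sum((d**2).sum() for d in details)))

The three detail subbands are exactly the "several spatial orientations" the abstract refers to; recursing on ll yields the full pyramid.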
Article
The embedded zerotree wavelet algorithm (EZW) is a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code. The embedded code represents a sequence of binary decisions that distinguish an image from the "null" image. Using an embedded coding algorithm, an encoder can terminate the encoding at any point, thereby allowing a target rate or target distortion metric to be met exactly. Also, given a bit stream, the decoder can cease decoding at any point in the bit stream and still produce exactly the same image that would have been encoded at the bit rate corresponding to the truncated bit stream. In addition to producing a fully embedded bit stream, EZW consistently produces compression results that are competitive with virtually all known compression algorithms on standard test images. Yet this performance is achieved with a technique that requires absolutely no training, no pre-stored tables or codebooks, and no prior knowledge of the image source. The EZW algorithm is based on four key concepts: 1) a discrete wavelet transform or hierarchical subband decomposition, 2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, 3) entropy-coded successive-approximation quantization, and 4) universal lossless data compression which is achieved via adaptive arithmetic coding.
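The successive-approximation component can be summarized in two lines (a standard paraphrase, not the paper's notation): thresholds halve from pass to pass,

\[
  T_0 = 2^{\lfloor \log_2 \max_i |c_i| \rfloor}, \qquad T_{k+1} = T_k / 2,
\]

and at pass $k$ a coefficient is significant iff $|c_i| \ge T_k$. A coefficient that is insignificant together with all of its descendants across scales is coded with a single zerotree symbol, which is where the cross-scale prediction pays off.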
Article
The author proposes an independent and novel approach to image coding, based on a fractal theory of iterated transformations. The main characteristics of this approach are that i) it relies on the assumption that image redundancy can be efficiently exploited through self-transformability on a block-wise basis, and ii) it approximates an original image by a fractal image. The author refers to the approach as fractal block coding. The coding-decoding system is based on the construction, for an original image to encode, of a specific image transformation--a fractal code--which, when iterated on any initial image, produces a sequence of images that converges to a fractal approximation of the original. It is shown how to design such a system for the coding of monochrome digital images at rates in the range of 0.5-1.0 b/pixel. The fractal block coder has performance comparable to state-of-the-art vector quantizers.
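The core of such a coder is the range-domain matching step. A compressed sketch follows, under assumed simplifications (fixed 8x8 range blocks, a stride-8 domain pool, no block isometries, least-squares fit of the grey-level scaling s and offset o); none of these specifics are claimed to match Jacquin's exact design:

import numpy as np

def shrink(block):          # average 2x2 neighborhoods: domain -> range size
    return (block[0::2, 0::2] + block[1::2, 0::2]
          + block[0::2, 1::2] + block[1::2, 1::2]) / 4.0

def fit(d, r):              # minimize ||s*d + o - r||^2 in closed form
    dm, rm = d.mean(), r.mean()
    var = ((d - dm) ** 2).sum()
    s = 0.0 if var == 0 else ((d - dm) * (r - rm)).sum() / var
    o = rm - s * dm
    err = ((s * d + o - r) ** 2).sum()
    return s, o, err

img, B = np.random.rand(64, 64), 8
code = []
for ry in range(0, 64, B):
    for rx in range(0, 64, B):
        r = img[ry:ry+B, rx:rx+B]
        best = min(
            (fit(shrink(img[dy:dy+2*B, dx:dx+2*B]), r) + (dx, dy)
             for dy in range(0, 64 - 2*B + 1, B)
             for dx in range(0, 64 - 2*B + 1, B)),
            key=lambda t: t[2])
        code.append(best)   # (s, o, err, dx, dy) stored per range block
print(len(code), "range blocks coded")

The stored tuples are the fractal code: iterating the implied transform from any image converges to the fractal approximation described above.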
Article
The processing of spatial patterns by the mammalian visual system shows a number of similarities to the ‘wavelet transforms’ which have recently attracted considerable interest outside of the study of sensory systems. At the level of the primary visual cortex, these visual systems consist of arrays of neurons selective to local regions of space, spatial frequency and orientation. The spatial frequency bandwidths of these neurons increase with frequency, resulting in a set of approximately self-similar “receptive fields”. In this paper, we look at the question of why this strategy of representing the visual environment would evolve. The question is approached by looking at the statistical structure of natural scenes and observing how this structure relates to the visual system’s representation of spatial patterns. It is proposed that natural scenes are approximately scale invariant with regard to both their power spectra and their phase spectra. Principally because of the phase spectra, wavelet-like transforms are capable of producing a sparse, informative representation of these images. It is suggested that self-similar codes like the wavelet are effective for so many natural phenomena because such phenomena show structures similar to those found in these natural scenes.
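The power-spectrum claim is commonly written as an approximate power law (a standard formulation, not a quotation from the paper):

\[
  P(f) \propto f^{-2} \quad \text{(approximately, for ensembles of natural scenes)},
\]

so rescaling a scene changes the amplitude but not the form of the spectrum, which is the sense in which self-similar codes are matched to the signal class.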
Article
Written for mathematicians, engineers, researchers in experimental science, and anyone interested in fractals, this book presents the fundamentals of curve analysis with a new and clear introduction to fractal dimension. It explains the geometrical and analytical properties of trajectories, aggregate contours, geographical coastlines, profiles of rough surfaces, and other curves of finite and fractal length. The approach is through precise definitions from which properties are deduced and applications and computational methods are derived. Written without the traditional heavy symbolism of mathematics texts, this book requires two years of calculus as a prerequisite to understanding. This text also contains material appropriate for graduate coursework in curve analysis and/or fractal dimension.
Article
Multiresolution representations are very effective for analyzing the information content of images. We study the properties of the operator which approximates a signal at a given resolution. We show that the difference of information between the approximation of a signal at the resolutions $2^{j+1}$ and $2^j$ can be extracted by decomposing this signal on a wavelet orthonormal basis of $L^2(\mathbb{R}^n)$. In $L^2(\mathbb{R})$, a wavelet orthonormal basis is a family of functions $(\sqrt{2^j}\,\Psi(2^j x - n))_{(j,n)\in\mathbb{Z}^2}$ which is built by dilating and translating a unique function $\Psi(x)$. This decomposition defines an orthogonal multiresolution representation called a wavelet representation. It is computed with a pyramidal algorithm based on convolutions with quadrature mirror filters. For images, the wavelet representation differentiates several spatial orientations. We study the application of this representation to data compression in image coding, texture discrimination and fractal analysis.
Article
A new family of unitary transforms is introduced. It is shown that the well-known discrete Fourier, cosine, sine, and the Karhunen-Loeve (KL) (for first-order stationary Markov processes) transforms are members of this family. All the member transforms of this family are sinusoidal sequences that are asymptotically equivalent. For finite-length data, these transforms provide different approximations to the KL transform of the said data. From the theory of these transforms some well-known facts about orthogonal transforms are easily explained and some widely misunderstood concepts are brought to light. For example, the near-optimal behavior of the even discrete cosine transform to the KL transform of first-order Markov processes is explained and, at the same time, it is shown that this transform is not always such a good (or near-optimal) approximation to the above-mentioned KL transform. It is also shown that each member of the sinusoidal family is the KL transform of a unique, first-order, non-stationary (in general), Markov process. Asymptotic equivalence and other interesting properties of these transforms can be studied by analyzing the underlying Markov processes.
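The near-optimality claim is easy to check numerically. The sketch below (with an arbitrary illustrative correlation rho = 0.95) builds the covariance of a first-order stationary Markov process and measures how nearly the orthonormal DCT-II diagonalizes it, which is exactly what the KL transform would do:

import numpy as np

N, rho = 8, 0.95
# Covariance of a first-order stationary Markov (AR(1)) process.
R = rho ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))

# Orthonormal DCT-II basis matrix.
k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
C = np.sqrt(2.0 / N) * np.cos(np.pi * k * (2 * n + 1) / (2 * N))
C[0, :] /= np.sqrt(2)

D = C @ R @ C.T                        # should be nearly diagonal
off = D - np.diag(np.diag(D))
print("off-diagonal energy fraction:",
      (off**2).sum() / (D**2).sum())   # small for rho near 1

Repeating the experiment with smaller rho shows the point made in the abstract: the DCT is not always such a good approximation to the KL transform.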
Article
We construct orthonormal bases of compactly supported wavelets, with arbitrarily high regularity. The order of regularity increases linearly with the support width. We start by reviewing the concept of multiresolution analysis as well as several algorithms in vision decomposition and reconstruction. The construction then follows from a synthesis of these different approaches.
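The shortest member of the family is the four-tap D4 filter. A small numerical check of its defining conditions (a verification sketch, not the construction itself):

import numpy as np

s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2.0))

print(np.isclose((h**2).sum(), 1.0))           # unit norm
print(np.isclose(h[0]*h[2] + h[1]*h[3], 0.0))  # orthogonality to even shifts
print(np.isclose(h.sum(), np.sqrt(2.0)))       # lowpass normalization

Longer filters in the family trade support width for regularity, which is the linear relationship the abstract states.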
Conference Paper
We describe an adaptive wavelet-based compression scheme for images. We decompose an image into a set of quantized wavelet coefficients and quantized wavelet subtrees. The vector codebook used for quantizing the subtrees is drawn from the image. Subtrees are quantized to contracted isometries of coarser scale subtrees. This codebook drawn from the contracted image is effective for quantizing locally smooth regions and locally straight edges. We prove that this self-quantization enables us to recover the fine scale wavelet coefficients of an image given its coarse scale coefficients. We show that this self-quantization algorithm is equivalent to a fractal image compression scheme when the wavelet basis is the Haar basis. The wavelet framework places fractal compression schemes in the context of existing wavelet subtree coding schemes. We obtain a simple convergence proof which strengthens existing fractal compression results considerably, derive an improved means of estimating the error incurred in decoding fractal compressed images, and describe a new reconstruction algorithm which requires O(N) operations for an N pixel image
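In symbols, the self-quantization step reads roughly as follows (a paraphrase, not the authors' exact notation): each quantized fine-scale subtree $W_r$ is coded as

\[
  W_r \approx \alpha\, \mathcal{I}(W_d),
\]

where $W_d$ is a subtree rooted one scale coarser and $\mathcal{I}$ is a contracted isometry; with the Haar basis this prediction of fine coefficients from coarse ones reduces to block-based fractal coding, as the paper proves.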
Conference Paper
In fractal image compression the encoding step is computationally expensive. A large number of sequential searches through a list of domains (portions of the image) are carried out while trying to find the best match for another image portion. Our theory developed here shows that this basic procedure of fractal image compression is equivalent to multi-dimensional nearest neighbor search. This result is useful for accelerating the encoding procedure in fractal image compression. The traditional sequential search takes linear time whereas the nearest neighbor search can be organized to require only logarithmic time. The fast search has been integrated into an existing state-of-the-art classification method, thereby accelerating the searches carried out in the individual domain classes. In this case we record acceleration factors from 1.3 up to 11.5 depending on image and domain pool size, with negligible or minor degradation in both image quality and compression ratio. Furthermore, as compared to plain classification, our method is demonstrated to be able to search through larger portions of the domain pool without increasing the computation time.
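A sketch of the reduction (the feature map below, zero-mean unit-norm block vectors queried through SciPy's cKDTree, is a simplified stand-in for the paper's construction and ignores the sign of the grey-level scaling):

import numpy as np
from scipy.spatial import cKDTree

def feature(block):
    # Zero-mean, unit-norm block vector: the least-squares collage error
    # is then closely tied to Euclidean distance between features.
    v = block.ravel() - block.mean()
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

rng = np.random.default_rng(0)
domains = rng.random((5000, 4, 4))    # domain pool, already shrunk to range size
ranges  = rng.random((1000, 4, 4))    # range blocks to be coded

tree = cKDTree(np.array([feature(d) for d in domains]))
dist, idx = tree.query(np.array([feature(r) for r in ranges]))
print(idx[:5])                        # best-matching domain per range block

Replacing the linear scan over 5000 domains with a logarithmic-time tree query is the source of the reported acceleration factors.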
Article
A joint ISO/CCITT committee known as JPEG (Joint Photographic Experts Group) has been working to establish the first international compression standard for continuous-tone still images, both grayscale and color. JPEG's proposed standard aims to be generic, to support a wide variety of applications for continuous-tone images. To meet the differing needs of many applications, the JPEG standard includes two basic compression methods, each with various modes of operation. A DCT (discrete cosine transform)-based method is specified for "lossy" compression, and a predictive method for "lossless" compression. JPEG features a simple lossy technique known as the Baseline method, a subset of the other DCT-based modes of operation. The Baseline method has been by far the most widely implemented JPEG method to date, and is sufficient in its own right for a large number of applications. The author provides an overview of the JPEG standard, and focuses in detail on the Baseline method.
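The lossy core of the Baseline method fits in a few lines: an 8x8 DCT followed by uniform quantization. The sketch below uses the example luminance table from Annex K of the JPEG specification and omits the zig-zag scan and entropy coding:

import numpy as np
from scipy.fft import dctn, idctn

Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99]])

block = np.random.randint(0, 256, (8, 8)).astype(float) - 128  # level shift
coeffs = dctn(block, norm="ortho")
quantized = np.round(coeffs / Q)        # most high-frequency entries become 0
restored = idctn(quantized * Q, norm="ortho") + 128
print(np.abs(restored - (block + 128)).max())   # per-block reconstruction error

Scaling Q up or down is how Baseline implementations trade rate against distortion.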
Article
With the continuing growth of modern communications technology, demand for image transmission and storage is increasing rapidly. Advances in computer technology for mass storage and digital processing have paved the way for implementing advanced data compression techniques to improve the efficiency of transmission and storage of images. In this paper a large variety of algorithms for image data compression are considered. Starting with simple techniques of sampling and pulse code modulation (PCM), state of the art algorithms for two-dimensional data transmission are reviewed. Topics covered include differential PCM (DPCM) and predictive coding, transform coding, hybrid coding, interframe coding, adaptive techniques, and applications. Effects of channel errors and other miscellaneous related topics are also considered. While most of the examples and image models have been specialized for visual images, the techniques discussed here could be easily adapted more generally for multidimensional data compression. Our emphasis here is on fundamentals of the various techniques. A comprehensive bibliography with comments is included for a reader interested in further details of the theoretical and experimental results discussed here.
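Of the techniques surveyed, DPCM is the simplest to make concrete. A minimal one-dimensional sketch (previous-sample predictor and a uniform quantizer with an arbitrary step of 8; not drawn from the paper):

import numpy as np

def dpcm(signal, step=8):
    residuals, recon, prev = [], [], 0.0
    for x in signal:
        e = x - prev                       # prediction error
        q = step * np.round(e / step)      # uniform quantizer
        prev = prev + q                    # decoder-side reconstruction
        residuals.append(int(q // step))   # what would be entropy-coded
        recon.append(prev)
    return np.array(residuals), np.array(recon)

x = np.cumsum(np.random.randn(32)) * 10    # correlated toy signal
codes, recon = dpcm(x)
print(np.abs(x - recon).max() <= 4 + 1e-9) # error bounded by step/2

Because the predictor runs on the reconstructed signal rather than the original, quantization errors do not accumulate, a point the predictive-coding literature reviewed here makes repeatedly.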
Article
In this paper, we provide an algorithm for the construction of IFSM approximations to a target set $v \in L^2_+(X, \mu)$, where $X \subset \mathbb{R}^D$ and $\mu = m^{(D)}$ (Lebesgue measure). The algorithm minimizes the squared "collage distance" $\|v - Tv\|_2^2$. We work with an infinite set of fixed affine IFS maps $w_i : X \to X$ satisfying a certain density and nonoverlapping condition. As such, only an optimization over the grey-level maps $\phi_i : \mathbb{R}_+ \to \mathbb{R}_+$ is required. If affine maps are assumed, i.e. $\phi_i = \alpha_i t + \beta_i$, then the algorithm becomes a quadratic programming (QP) problem in the $\alpha_i$ and $\beta_i$. We can also define a "local IFSM" (LIFSM) which considers the actions of contractive maps $w_i$ on subsets of $X$ to produce smaller subsets. Again, affine $\phi_i$ maps are used, resulting in a QP problem. Some approximations of functions on $[0,1]$ and images in $[0,1]^2$ are presented.
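Schematically, the affine case reads (notation condensed from the abstract, with the nonoverlapping condition assumed):

\[
  \min_{\{\alpha_i, \beta_i\}} \;\Bigl\| v - \sum_i \bigl( \alpha_i\, (v \circ w_i^{-1}) + \beta_i \bigr)\, \chi_{w_i(X)} \Bigr\|_2^2 ,
\]

which is quadratic in the $\alpha_i$ and $\beta_i$ once the maps $w_i$ are fixed, hence a QP rather than a search problem.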
Article
Fractal image compression allows fast decoding but suffers from long encoding times. During encoding, a large number of sequential searches through a list of domains (portions of the image) are carried out while trying to find the best match for another image portion called a range. In this article we review and extend the methods that have been developed to reduce the time complexity of this searching. We also present a new taxonomy of the methods, provide an evaluation, and propose two new techniques. Keywords: image compression, classification, clustering, nearest neighbors, time complexity, fractals, Hilbert curve. 1 Introduction: Fractal image compression began with the visionary conception of M. Barnsley in 1987 and the groundbreaking work of A. Jacquin [1, 2] in 1989. Since then, much work on the topic has been undertaken, giving fractal image compression the power to become a serious competitor to the established compression techniques. Fractal image compression ...
Article
This note is a very basic introduction to wavelets. It starts with an orthogonal basis of piecewise constant functions, constructed by dilation and translation. The ``wavelet transform'' maps each $f(x)$ to its coefficients with respect to this basis. The mathematics is simple and the transform is fast (faster than the Fast Fourier Transform, which we briefly explain), but approximation by piecewise constants is poor. To improve this first wavelet, we are led to dilation equations and their unusual solutions. Higher-order wavelets are constructed, and it is surprisingly quick to compute with them --- always indirectly and recursively. We comment informally on the contest between these transforms in signal processing, especially for video and image compression (including high-definition television). So far the Fourier Transform --- or its 8 by 8 windowed version, the Discrete Cosine Transform --- is often chosen. But wavelets are already competitive, and they are ahead for fingerprints. We present a sample of this developing theory.
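The dilation equation mentioned above has a one-line statement, and the Haar scaling function is its simplest solution:

\[
  \phi(x) = \sum_k c_k\, \phi(2x - k), \qquad \text{Haar:}\ \phi(x) = \phi(2x) + \phi(2x - 1),
\]

with $\phi$ the indicator function of $[0, 1)$; higher-order wavelets arise from longer coefficient sequences $c_k$, whose unusual solutions the note discusses.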
References

Edward H. Adelson and Eero Simoncelli. Orthogonal pyramid transform for image coding. SPIE, 845:50–58, 1987.
Michael Barnsley. Fractal Image Compression. A. K. Peters, Wellesley, Massachusetts, 1993.
F. J. Bingley. Colorimetry in color television. Proc. IRE, 41:838–851, July 1953.
H. Krupnik, D. Malah, and E. Karnin. Fractal representation of images via the discrete wavelet transform. In IEEE 18th Convention of Electrical Engineering in Israel, Tel-Aviv, March 1995.
Salim Salem. Dérivées fractionnaires, dérivées de Lipschitz et dimension fractale (Fractional derivatives, Lipschitz derivatives, and fractal dimension). PhD thesis, École Polytechnique de Montréal, April 1995.