William Pearlman

  • Ph.D. in Electrical Engineering
  • Professor Emeritus at Rensselaer Polytechnic Institute

About

251
Publications
23,180
Reads
12,655
Citations
Current institution
Rensselaer Polytechnic Institute
Current position
  • Professor Emeritus
Additional affiliations
August 1979 - October 2010
Rensselaer Polytechnic Institute
Position
  • Professor
Education
September 1965 - September 1974
Stanford University
Field of study
  • Electrical Engineering

Publications

Publications (251)
Article
Many scenarios require single frame random access and error resilience in coding, transmission, and decoding of image sequences. We propose in this paper a SPIHT coder that encodes a sequence of images one frame at a time and uses two methods together to achieve strong error resilience. The first method groups wavelet coefficients into a number of...
Article
A distributed video coding (DVC) system based on wavelet transform and set partition coding (SPC) is presented in this paper. Conventionally the significance map (sig-map) of SPC is not conducive to Slepian–Wolf (SW) coding, because of the difficulty of generating a side information sig-map and the sensitivity to decoding errors. The proposed DVC s...
Book
This book explains the stages necessary to create a wavelet compression system for images and describes state-of-the-art systems used in image compression standards and current research. It starts with a high level discussion of the properties of the wavelet transform, especially the decomposition into multi-resolution subbands. It continues with a...
Chapter
A good example of applying the methods studied thus far resides in the FBI Fingerprint Image Compression Standard. This standard uses adaptive entropy-coded scalar quantization of wavelet packet subbands. We shall describe briefly the salient parts of this method.
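As a rough illustration of entropy-coded scalar quantization of subbands, the following sketch implements a generic dead-zone uniform quantizer; the step Q and dead-zone width Z are illustrative per-subband parameters, not the values or adaptation rule of the FBI standard itself.

```python
# Illustrative dead-zone uniform scalar quantizer for a wavelet(-packet) subband.
# Q (step size) and Z (dead-zone width) are assumed per-subband parameters; the
# FBI standard's actual values and adaptation rule are not reproduced here.
import numpy as np

def deadzone_quantize(subband, Q, Z):
    s = np.sign(subband)
    mag = np.abs(subband)
    idx = np.where(mag > Z / 2.0, np.floor((mag - Z / 2.0) / Q) + 1, 0.0)
    return (s * idx).astype(np.int64)          # indices passed to the entropy coder

def deadzone_dequantize(indices, Q, Z):
    s = np.sign(indices)
    mag = np.abs(indices)
    return np.where(mag > 0, s * (Z / 2.0 + (mag - 0.5) * Q), 0.0)
```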
Chapter
In this and following sections, we shall describe in detail methods for coding of subbands in which the bit allocation is not statistically based and which attempt to represent individual coefficient amplitudes with their minimal numbers of natural bits. We begin with a set partitioning method that has a quadtree representation and is part of the co...
Chapter
In order to exercise rate control when encoding the wavelet transform in separate blocks, such as we have in JPEG2000, SBHP, and block-oriented SPIHT, we need a method to determine the rate in each block that minimizes the distortion for a given average bit rate target. When the blocks are bit-embedded, their codestreams can simply be truncated to t...
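The premise that bit-embedded block codestreams can simply be truncated suggests the usual Lagrangian (equal-slope) allocation; the sketch below is a generic version of that idea, assuming each block supplies rate-sorted convex-hull truncation points, and is not the book's exact procedure.

```python
# Generic Lagrangian (equal-slope) rate allocation over bit-embedded blocks.
# Each block supplies convex-hull (rate, distortion) truncation points; we bisect
# on the slope lambda until the chosen truncations meet the total rate target.

def best_truncation(points, lam):
    """points: list of (rate, distortion); pick the point minimizing D + lam * R."""
    return min(points, key=lambda rd: rd[1] + lam * rd[0])

def allocate_rate(blocks, rate_target, iters=50):
    lo, hi = 0.0, 1e9                      # assumed-wide slope search interval
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        choice = [best_truncation(b, lam) for b in blocks]
        total_rate = sum(r for r, _ in choice)
        if total_rate > rate_target:
            lo = lam                       # over budget: penalize rate more heavily
        else:
            hi = lam
    return choice                          # per-block (rate, distortion) operating points
```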
Chapter
The generic wavelet transform coding system, regardless of the source, normally starts with subband/wavelet transformation of the source input. There is often then an optional pre-processing step that consists of statistical estimation, segmentation, weighting, and/or classification of the transform coefficients. Then the coefficients are subjected...
Chapter
The basis functions of the wavelet transform are contained within a number of orthogonal or biorthogonal sets of time- or space-shifted waveforms. Within a set, the resolution or scale is the same, but between sets the scale differs by an octave (factor of 2). In fact, the set of waveforms at a given scale is a basis for the basic waveform in the s...
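A minimal, illustrative sketch of the octave (factor-of-2) scale structure described above, using the orthonormal Haar filter pair as a stand-in for a general wavelet:

```python
# One-dimensional dyadic (octave-band) decomposition with orthonormal Haar filters:
# each level splits the current approximation into a coarser approximation and a
# detail band one octave apart in scale.
import numpy as np

def haar_analysis(x, levels):
    approx = np.asarray(x, dtype=float)
    subbands = []
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        subbands.append((even - odd) / np.sqrt(2.0))   # high-pass detail band
        approx = (even + odd) / np.sqrt(2.0)           # low-pass, next coarser scale
    subbands.append(approx)                            # coarsest approximation
    return subbands

# Example: an 8-sample signal split into 3 octave levels plus the coarse band.
bands = haar_analysis([4, 6, 10, 12, 8, 6, 5, 5], levels=3)
```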
Chapter
We have described several block-based wavelet transform coding systems, so now we turn to two algorithms which are tree-based, Set Partitioning In Hierarchical Trees (SPIHT) [41] and EZW [45]. The SPIHT and EZW algorithms operate on trees in the transform domain, because they require the energy compaction properties of transforms to be efficient. T...
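For orientation, the sketch below encodes the standard spatial-orientation-tree parentage that SPIHT-style coders use outside the coarsest subband, together with a descendant-set significance test; it is a schematic illustration, not the full SPIHT algorithm.

```python
# Spatial-orientation-tree parentage used by SPIHT-style coders outside the
# coarsest band: the coefficient at (i, j) has four children at (2i, 2j),
# (2i, 2j+1), (2i+1, 2j), (2i+1, 2j+1) in the next finer scale.

def children(i, j, rows, cols):
    kids = [(2 * i, 2 * j), (2 * i, 2 * j + 1),
            (2 * i + 1, 2 * j), (2 * i + 1, 2 * j + 1)]
    return [(r, c) for r, c in kids if r < rows and c < cols]

def descendants_significant(coeffs, i, j, threshold):
    """Significance of the descendant set of (i, j) against a power-of-two threshold."""
    rows, cols = len(coeffs), len(coeffs[0])
    stack = children(i, j, rows, cols)
    while stack:
        r, c = stack.pop()
        if abs(coeffs[r][c]) >= threshold:
            return True
        stack.extend(children(r, c, rows, cols))
    return False
```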
Chapter
In this book we have attempted to present the principles and practice of image or two-dimensional wavelet compression systems. Besides compression efficiency as an objective, we have shown how to imbue such systems with features of resolution and quality scalability and random access to source regions within the codestream. The simultaneous achieve...
Article
Efficient image sequence coding exploits both intra- and inter-frame correlations. Set partition coding (SPC) is efficient in intra-frame de-correlation for still images. Based on SPC, a novel image sequence coding system, called motion differential SPC (M-D-SPC), is presented in this paper. It removes inter-frame redundancy by re-using the signifi...
Article
Full-text available
Set partition coding (SPC) has shown tremendous success in image compression. Despite its popularity, the lack of error resilience remains a significant challenge to the transmission of images in error-prone environments. In this paper, we propose a novel data representation called the progressive significance map (prog-sig-map) for error-resilient...
Article
Full-text available
This paper presents lossy compression algorithms which build on a state-of-the-art codec, the Set Partitioned Embedded Block Coder (SPECK), by incorporating a lattice vector quantizer codebook, therefore allowing it to process multiple samples at one time. In our tests, we employ scenes derived from standard AVIRIS hyperspectral images, which posse...
Book
With clear and easy-to-understand explanations, this book covers the fundamental concepts and coding methods of signal compression, whilst still retaining technical depth and rigor. It contains a wealth of illustrations, step-by-step descriptions of algorithms, examples and practice problems, which make it an ideal textbook for senior undergraduate...
Conference Paper
Full-text available
Efficient image sequence coding exploits both spatial correlation and temporal correlation. SPIHT is a powerful algorithm in exploiting spatial correlations for still image coding. Based on SPIHT, we present a novel approach to exploit temporal correlations for image sequence coding, while maintaining single frame, random access decoding. The so-ca...
Article
Full-text available
This work investigates a central problem in steganography, that is: How much data can safely be hidden without being detected? To answer this question, a formal definition of steganographic capacity is presented. Once this has been defined, a general formula for the capacity is developed. The formula is applicable to a very broad spectrum of channe...
Conference Paper
Full-text available
Enhanced versions of the LVQ-SPECK algorithm are presented that further exemplify the codec's good performance when dealing with multidimensional data sets. The two different alternatives might, in fact, be considered stepwise improvements over the original codec. First, an extended-range option is implemented, so that the number of spectral bands...
Article
Full-text available
End users of large volume image datasets are often interested only in certain features that can be identified as quickly as possible. For hyperspectral data, these features could reside only in certain ranges of spectral bands and certain spatial areas of the target. The same holds true for volume medical images for a certain volume region of the s...
Conference Paper
Full-text available
We discuss the use of lattice vector quantizers in conjunction with a quadtree-based sorting algorithm for the compression of multidimensional data sets, as encountered, for example, when dealing with hyperspectral imagery. An extension of the SPECK algorithm is presented that deals with vector samples and is used to encode a group of successive sp...
Article
Full-text available
This monograph describes current-day wavelet transform image coding systems. As in the first part [ibid. 2, No. 2, 95–180 (2008; Zbl 1157.94300)], steps of the algorithms are explained thoroughly and set apart. An image coding system consists of several stages: transformation, quantization, set partition or adaptive entropy coding or both, decoding...
Article
Full-text available
Recently, several methods have been proposed that divide the original bitstream of progressive wavelet-based image/video source coders into multiple correlated substreams. The principle behind transmitting multiple independent substreams is to generate multiple descriptions of the source such that graceful degradation is achieved when transmi...
Article
Full-text available
The purpose of this two-part monograph is to present a tutorial on set partition coding, with emphasis and examples on image wavelet transform coding systems, and describe their use in modern image coding systems. Set partition coding is a procedure that recursively splits groups of integer data or transform elements guided by a sequence of thres...
Conference Paper
Lattice vector quantization (LVQ) offers substantial reduction in computational load and design complexity due to the regular structure of the lattice [1]. In this paper, we extended the SPIHT [2] coding algorithm with lattice vector quantization to code hyperspectral images. In the proposed algorithm, multistage lattice vector quantization (MLVQ) is used...
Article
Full-text available
An image coding algorithm, Progressive Resolution Coding (PROGRES), for a high-speed resolution scalable decoding is proposed. The algorithm is designed based on a prediction of the decaying dynamic ranges of wavelet subbands. Most interestingly, because of the syntactic relationship between two coders, the proposed method costs an amount of bits v...
Article
Full-text available
Locating zerotrees in a wavelet transform allows encoding of sets of coefficients with a single symbol. It is an efficient means of coding if the overhead to identify the locations is small compared to the size of the zerotree sets on the average. It is advantageous in this regard to define classes of zerotrees according to the levels from the root...
Conference Paper
Full-text available
This paper proposes a low-complexity wavelet-based method for progressive lossy-to-lossless compression of four dimensional (4-D) medical images. The subband block hierarchical partitioning (SBHP) algorithm is modified and extended to four dimensions, and applied to every code block independently. The resultant algorithm, 4D-SBHP, efficiently encodes...
Book
Set partition coding is a simple, popular, and effective method that has produced some of the best results in image and signal coding. Nevertheless, the literature has not adequately explained the principles of this method that make it so effective. This article, in tandem with Part I, which is available separately, remedies this situation and desc...
Book
Set partition coding is a simple, popular, and effective method that has produced some of the best results in image and signal coding. Nevertheless, the literature has not adequately explained the principles of this method that make it so effective. This article, in tandem with Part II, which is available separately, remedies this situation and des...
Conference Paper
Full-text available
With the increasing volume of remote sensing images, fast access to certain features of an image is becoming critical. This access could be to some part of the spectrum, some area of the image, or a high spatial resolution. An adaptation of the 3D-SPIHT image compression algorithm is presented to allow random access to some part of the image, whether spatial or spectral....
Article
Full-text available
In this paper, we present a two-stage near-lossless compression scheme. It belongs to the class of "lossy plus residual coding" and consists of a wavelet-based lossy layer followed by arithmetic coding of the quantized residual to guarantee a given L∞ error bound in the pixel domain. We focus on the selection of the optimum bit rate for th...
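A minimal sketch of the "lossy plus residual" principle behind such near-lossless schemes: quantizing the pixel-domain residual with step 2*delta + 1 bounds the L∞ error by delta. The function names are illustrative, and the entropy coding of the indices is omitted.

```python
# "Lossy plus residual" near-lossless refinement: quantizing the pixel-domain
# residual with a uniform step of 2*delta + 1 keeps every reconstruction error
# within +/- delta (an L-infinity bound of delta). Entropy coding of q is omitted.
import numpy as np

def near_lossless_refine(original, lossy_recon, delta):
    step = 2 * delta + 1
    residual = original.astype(np.int64) - lossy_recon.astype(np.int64)
    q = np.round(residual / step).astype(np.int64)   # residual indices to be coded
    refined = lossy_recon.astype(np.int64) + q * step
    assert int(np.max(np.abs(original.astype(np.int64) - refined))) <= delta
    return q, refined
```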
Conference Paper
In this paper we consider the problem of faded wireless image and video transmission schemes under energy transmission constraints. An example of such a constrained system experiencing Rayleigh fading is code division multiple access (CDMA) where power control is used. A constraint is imposed on the average energy transmitted per image or image sequence. A c...
Article
Full-text available
A low-complexity three-dimensional image compression algorithm based on wavelet transforms and set-partitioning strategy is presented. The Subband Block Hierarchical Partitioning (SBHP) algorithm is modified and extended to three dimensions, and applied to every code block independently. The resultant algorithm, 3D-SBHP, efficiently encodes 3D image...
Article
Full-text available
The goal of this program was to integrate the error-resilient set partitioning in hierarchical trees (ER-SPIHT) video codec into a system for transmission over an orthogonal frequency-division multiplexing (OFDM) channel with rate adaptation. The bit stream from the video codec should be distributed among the subcarriers for optimum throughput with min...
Chapter
Full-text available
This chapter proposes a three-dimensional set partitioned embedded block coder for hyperspectral image compression. The three-dimensional wavelet transform automatically exploits inter-band dependence. Two versions of the algorithm were implemented. The integer filter implementation enables lossy-to-lossless compression, and the floating point filt...
Conference Paper
Full-text available
A low-complexity three-dimensional image compression algorithm based on wavelet transforms and set-partitioning strategy is presented. The subband block hierarchical partitioning (SBHP) algorithm is modified and extended to three dimensions, and applied to every code block independently. The resultant algorithm, 3D-SBHP, efficiently encodes 3D image...
Conference Paper
Full-text available
In this paper we consider the problem of faded wireless image and video transmission schemes over multi-input multi-output (MIMO) orthogonal frequency division multiplexing (OFDM) channels that are energy and bandwidth constrained. A computationally inexpensive analytic mean square error (MSE) distortion rate (D-R) estimator for progressive wav...
Conference Paper
Full-text available
This paper presents SPIHT and block-wise SPIHT algorithms in which a full depth-first search is used to agglomerate significant bits at each bitplane. Search strategies used for SPIHT to date are more or less based on a breadth-first search. The aim of this work is to minimize the final memory usage without paying additional overhead...
Article
We present a wavelet-based near-lossless coder with L∞-error scalability. The method presented consists of a wavelet-based lossy layer encoder followed by multiple stages of residual refinements for L∞-error scalability. We introduce a successive refinement scheme in L∞-error sense for residual layers reminiscent of popular set-partitioning algorit...
Article
Full-text available
We propose two approaches to scalable lossless coding of motion video. They achieve SNR-scalable bitstream up to lossless reconstruction based upon the subpixel-accurate MCTF-based wavelet video coding. The first approach is based upon a two-stage encoding strategy where a lossy reconstruction layer is augmented by a following residual layer in ord...
Article
Full-text available
One interesting feature of image compression is support of region of interest (ROI) access, in which an image sequence can be encoded only once and then the decoder can directly extract a subset of the bitstream to reconstruct a chosen ROI of required quality. In this paper, we apply Three-dimensional Subband Block Hierarchical Partitioning (3-D SB...
Chapter
Mammography is the most effective method for the early detection of breast cancer. However, the high-resolution requirements associated with preserving small-size or low-contrast lesions have limited the widespread use of digital mammography and have hindered the use of computers as a second opinion in the detection of this disease. Data compression pr...
Conference Paper
Full-text available
In this paper, we elaborate on denoising schemes based on lossy compression. First, we provide an alternative interpretation of the so-called Occam filter and relate it to the complexity-regularized denoising schemes in the literature. Next, we discuss the 'critical distortion' of a noisy source and argue that optimal denoising is achieved...
Conference Paper
Full-text available
A degree-k zero tree model is presented, in order to quantify the coding power of zerotrees in wavelet-based image coding. Based on the model, the coding behaviors of modern zerotree based image coders are clearly explained. Also, we explain why the well-known SPIHT algorithm can code a wider range of zerotrees than EZW. Experimental results suppor...
Article
This paper presents a multilayered protection of embedded video bitstreams over bit errors and packet erasure channels using the Error Resilient and Error Concealment 3-D SPIHT (ERC-SPIHT) algorithm, which is based on the 3-D SPIHT concepts. A robust source coder is created to give error resilience at the source level of the codestream. This robustness is...
Article
Full-text available
We propose resolution progressive Three-Dimensional Set Partitioned Embedded bloCK (3D-SPECK), an embedded wavelet-based algorithm for hyperspectral image compression. The proposed algorithm also supports random Region-Of-Interest (ROI) access. For a hyperspectral image sequence, an integer wavelet transform is applied along all three dimensions. The tra...
Article
Full-text available
We propose an integrated, wavelet based, two-stage coding scheme for lossy, near-lossless and lossless compression of medical volumetric data. The method presented determines the bit-rate while encoding for the lossy layer and without any iteration. It is in the spirit of "lossy plus residual coding" and consists of a wavelet-based lossy layer foll...
Conference Paper
Full-text available
Here we propose scalable Three-Dimensional Set Partitioned Embedded bloCK (3D-SPECK)-an embedded, block-based, wavelet transform coding algorithm of low complexity for hyperspectral image compression. Scalable 3D-SPECK supports both SNR and resolution progressive coding. After wavelet transform, 3D-SPECK treats each subband as a coding block. To ge...
Article
Full-text available
For faster random access of a target image block, a bi-section idea is applied to link image blocks. Conventional methods configure the blocks in a linearly linked way, for which the block seek time entirely depends on the location of the block in the compressed bitstream. The block linkage information is configured such that binary search is possibl...
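One way to read the bi-section idea is sketched below, under the assumption that each block header stores offsets to the blocks that bisect the remaining chain, so a target block is reached in O(log N) header reads instead of a linear scan; the header layout and the read_header callback are hypothetical.

```python
# Hypothetical bi-section block linkage: each header stores the index of its block
# and offsets to the headers that bisect the blocks before and after it, so a
# target block is found in O(log N) header reads rather than a linear scan.

def find_block(read_header, root_offset, target_index):
    """read_header(offset) -> (block_index, left_offset, right_offset); offsets may be None."""
    offset = root_offset
    while offset is not None:
        index, left, right = read_header(offset)
        if index == target_index:
            return offset
        offset = left if target_index < index else right
    raise KeyError("block %d not found" % target_index)
```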
Conference Paper
Full-text available
A very fast, low complexity algorithm for resolution scalable and random access decoding is presented. The algorithm avoids the multiple passes of bit-plane coding for speed improvement. The decrease in dynamic ranges of wavelet coefficients magnitudes is efficiently coded. The hierarchical dynamic range coding naturally enables a resolution scalab...
Conference Paper
Full-text available
Summary form only given. A very fast, low complexity algorithm for resolution-scalable and random access decoding is presented. The algorithm avoids the multiple passes of bit-plane coding for speed improvement. The decrease in dynamic range of wavelet coefficient magnitudes is efficiently coded. The hierarchical dynamic range coding naturally enab...
Conference Paper
Full-text available
An embedded, block-based, wavelet transform coding algorithm of low complexity is proposed. Three-dimensional set partitioned embedded block (3D-SPECK) efficiently encodes hyperspectral volumetric image data by exploiting the dependencies in all dimensions. An integer wavelet transform is applied to enable lossy and lossless decompression from the sa...
Article
Full-text available
In this letter, a novel and computationally inexpensive analytic mean square error (mse) distortion rate (D-R) estimator for SPIHT which generates a nearly exact D-R function for the two- and three-dimensional SPIHT algorithm is presented. Utilizing our D-R estimate, we employ unequal error protection and equal error protection in order to minimize...
Conference Paper
Full-text available
A new coding scheme for lossy-to-lossless compression of volumetric data using a three-dimensional packet wavelet transform is presented. The need for a unitary transform via the integer lifting scheme and a performance comparison of different integer filter kernels are discussed. A state-of-the-art coder, set partitioning in hierarchical trees (SP...
Article
Full-text available
This article presents a lossless compression of volumetric medical images with the improved three-dimensional (3-D) set partitioning in hierarchical tree (SPIHT) algorithm that searches on asymmetric trees. The tree structure links wavelet coefficients produced by 3-D reversible integer wavelet transforms. Experiments show that the lossless compres...
Article
Full-text available
Kernel Fisher discriminants are used to detect the presence of JPEG-based hiding methods. The feature vector for the kernel discriminant is constructed from the quantized DCT coefficient indices. Using methods developed in kernel theory, a classifier is trained in a high-dimensional feature space which is capable of discriminating original f...
Article
Full-text available
This work reduces the computational requirements of the additive noise steganalysis presented by Harmsen and Pearlman. The additive noise model assumes that the stegoimage is created by adding a pseudo-noise to a coverimage. This addition predictably alters the joint histogram of the image. In color images it has been shown that this alteration can...
Article
In this paper we consider the problem of robust image coding and packetization for the purpose of communications over slow fading frequency selective channels and channels with a shaped spectrum like those of digital subscriber lines (DSL). Towards this end, a novel and analytically based joint source channel coding (JSCC) algorithm to assign unequa...
Article
In critical applications of image compression that are sensitive to information loss, lossless compression techniques are usually employed. The compression ratios obtained from lossless techniques are low. Hence we need different schemes that give quantitative guarantees about the type and amount of distortion incurred, viz. near-lossless compressi...
Article
Full-text available
Set Partitioned Embedded bloCK coder (SPECK) has been found to perform comparably to the best-known still grayscale image coders, such as EZW, SPIHT, and JPEG2000. In this paper, we first propose Color-SPECK (CSPECK), a natural extension of SPECK to handle color still images in the YUV 4:2:0 format. Extensions to other YUV formats are also possible. PS...
Article
We investigate and compare the performance of several three-dimensional (3D) embedded wavelet algorithms on lossless 3D image compression. The algorithms are Asymmetric Tree Three-Dimensional Set Partitioning In Hierarchical Trees (AT-3DSPIHT), Three-Dimensional Set Partitioned Embedded bloCK (3D-SPECK), Three-Dimensional Context-Based Embedded Zer...
Conference Paper
In critical applications that are sensitive to information loss, lossless compression techniques are usually employed. The compression ratios obtained from lossless techniques are low. Hence we need different schemes that give quantitative guarantees about the type and amount of distortion incurred, viz. near-lossless compression methods. In this p...
Conference Paper
Full-text available
Technical imaging applications, such as coding "images" of digital elevation maps, require extracting regions of compressed images with pixel values within a pre-defined range, without having to decompress the whole image. Previously, we introduced a class of nonlinear transforms which are small modifications of linear transforms to facilitate a...
Conference Paper
Full-text available
Many technical imaging applications, like coding "images" of digital elevation maps, require extracting regions of compressed images in which the pixel values are within a predefined range, and there is a need for coding methods that allow finding these regions efficiently, without having to decompress the whole image. A series of techniques to sol...
Article
Full-text available
This paper presents a lossless compression of volumetric medical images with the improved 3-D SPIHT algorithm that searches on asymmetric trees. The tree structure links wavelet coefficients produced by three-dimensional reversible integer wavelet transforms. Experiments show that the lossless compression with the improved 3-D SP...
Article
Full-text available
Spatial resolution and contrast sensitivity requirements for some types of medical image techniques, including mammography, delay the implementation of new digital technologies, namely CAD, PACS or teleradiology. In order to reduce transmission time and storage cost, an efficient data compression scheme to reduce digital data without significant de...
Article
Full-text available
Spatial resolution and contrast sensitivity requirements for some types of medical image techniques, including mammography, delay the implementation of new digital technologies, namely, computer-aided diagnosis, picture archiving and communications systems, or teleradiology. In order to reduce transmission time and storage cost, an efficient data-c...
Article
Full-text available
We propose an embedded, block-based, image wavelet transform coding algorithm of low complexity. It uses a recursive set-partitioning procedure to sort subsets of wavelet coefficients by maximum magnitude with respect to thresholds that are integer powers of two. It exploits two fundamental characteristics of an image transform: the well defined h...
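A minimal sketch of the recursive significance test described above: a set is insignificant at bitplane n if its maximum magnitude is below 2**n and is then coded with a single symbol, otherwise it is split and the test recurses. The quadrisection rule here is illustrative and does not reproduce SPECK's exact S/I set partitioning.

```python
# Recursive significance testing of coefficient sets against power-of-two
# thresholds, with quadrisection of significant sets (illustrative, not SPECK's
# exact S/I partitioning). An insignificant set is coded with a single 0.
import numpy as np

def code_set(coeffs, top, left, height, width, n, bits):
    block = np.abs(coeffs[top:top + height, left:left + width])
    if block.size == 0:
        return
    if block.max() < (1 << n):            # whole set insignificant at bitplane n
        bits.append(0)
        return
    bits.append(1)
    if height == 1 and width == 1:        # a single significant coefficient
        return
    h2, w2 = (height + 1) // 2, (width + 1) // 2
    for dt, dl, hh, ww in [(0, 0, h2, w2), (0, w2, h2, width - w2),
                           (h2, 0, height - h2, w2), (h2, w2, height - h2, width - w2)]:
        code_set(coeffs, top + dt, left + dl, hh, ww, n, bits)
```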
Article
Full-text available
This paper first introduces an asymmetric tree structure for utilization with an error-resilient form of 3-D SPIHT, called ERC-SPIHT. Then, we present a fast error concealment scheme, borrowed from Rane et al.'s work with images, for embedded video bitstreams using ER-SPIHT. In addition to using a simple averaging method in the root subband, we detect the pr...
Article
Full-text available
We evaluated some arbitrary-shape ROI (Region of Interest) coding techniques for three-dimensional volumetric images, and from these observations we propose a flexible ROI coding with efficient compression performance. In arbitrary-shape ROI coding, the object in an image is coded with higher fidelity than the rest of the image, together with the s...
Article
Full-text available
The process of information hiding is modeled in the context of additive noise. Under an independence assumption, the histogram of the stegomessage is a convolution of the noise probability mass function (PMF) and the original histogram. In the frequency domain this convolution is viewed as a multiplication of the histogram characteristic function (...
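A common formulation of this frequency-domain view uses the center of mass of the histogram characteristic function (HCF) as a detection feature, since convolution with a noise PMF multiplies the HCF and tends to pull its mass toward low frequencies; the sketch below assumes an 8-bit single-channel image.

```python
# Center of mass of the histogram characteristic function (HCF) of an 8-bit
# channel; convolution of the histogram with a noise PMF multiplies the HCF,
# typically lowering this center of mass, so it can serve as a detection feature.
import numpy as np

def hcf_center_of_mass(channel):
    hist, _ = np.histogram(channel.ravel(), bins=256, range=(0, 256))
    hcf = np.abs(np.fft.fft(hist))[:128]           # one-sided HCF magnitude
    k = np.arange(128)
    return float(np.sum(k * hcf) / np.sum(hcf))
```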
Conference Paper
Full-text available
In this paper, we investigate region-based embedded wavelet compression methods and introduce region-based extensions of the set partitioning in hierarchical trees (SPIHT) and the set partitioning embedded block (SPECK) coding algorithms applied to the breast region of digital mammograms. We have compared these region-based extensions, called OB-SP...
Conference Paper
Hyperspectral images are generated by collecting hundreds of narrow and contiguously spaced spectral bands of data, producing a highly correlated long sequence of images. Some application-specific data compression techniques may be applied advantageously before we process, store, or transmit hyperspectral images. This paper applies asymmetric tree 3D...
Conference Paper
Summary form only given. Hyperspectral images were generated through the collection of hundreds of narrow and contiguously spaced spectral bands of data producing a highly correlated long sequence of images. An investigation and comparison was made on the performance of several three-dimensional embedded wavelet algorithms for compression of hypers...
Conference Paper
The error resilient and error concealment 3-D SPIHT (ERC-SPIHT) coder is introduced, which encodes with an asymmetric tree structure. Next, error resilient video transmission is presented using the ERC-SPIHT algorithm with rate-compatible punctured convolutional (RCPC) and Reed-Solomon codes.
Article
Full-text available
A hyperspectral image is a sequence of images generated by hundreds of detectors. Each detector is sensitive only to a narrow range of wavelengths. One can view such an image sequence as a three-dimensional array of intensity values (pixels) within a rectangular prism. This image prism reveals contiguous spectrum information about the composition of...
Article
The three-dimensional (3-D) SPIHT coder is a scalable or embedded coder that has proved its efficiency and its real-time capability in compression of video. A forward-error-correcting (FEC) channel (RCPC) code combined with a single automatic repeat request (ARQ) proved to be an effective means for protecting the bitstream. There were two problems...
Conference Paper
Error Resilient and Error Concealment 3-D SPIHT (ERC-SPIHT) is a joint source channel coder developed to improve the overall performance against channel bit errors without requiring automatic-repeat-request (ARQ). The objective of this research is to test and validate the properties of two competing video compression algorithms in a wireless enviro...
Article
Full-text available
In this paper, we present a motion compensated two-link chain coding technique to effectively encode 2-D binary shape sequences for object-based video coding. This technique consists of a contour motion estimation and compensation algorithm and a two-link chain coding algorithm. The object contour is defined on a 6-connected contour lattice for a smo...
Article
We investigate and prove the relationships among several commonly used and new performance metrics for embedded image bit streams transmitted over noisy channels. The average of the first error-free run length (FEFRL) is proposed as a simpler performance metric. On binary symmetric channels (BSC), the average FEFRL is obtained in closed form, which...
Article
Full-text available
Compressed video bitstreams require protection from channel errors in a wireless channel. The 3-D set partitioning in hierarchical trees (SPIHT) coder has proved its efficiency and its real-time capability in the compression of video. A forward-error-correcting (FEC) channel (RCPC) code combined with a single automatic-repeat request (ARQ) proved t...
Article
Full-text available
In addition to high compression efficiency, future still and moving image coding systems will require many other features. They include fidelity and resolution scalability, region of interest enhancement, random access decoding, resilience to errors due to channel noise or packet loss, fast encoding and/or decoding speed, and low computational a...
Article
Full-text available
In this paper, we present the OB-SPECK algorithm (object-based set partitioned embedded block coder) for wavelet coding of arbitrary-shaped video objects. Based on the SPECK algorithm introduced in [1], the shape information of the video object in the wavelet domain is integrated into the coding process for efficient coding of a video object of arbitra...
Article
Full-text available
This paper presents a region-based encoding/decoding for image sequences using the Spatio-Temporal Tree Preserving 3-D SPIHT (STTP-SPIHT) algorithm, which is based on the 3-D SPIHT concepts, and then further describes multiresolution decoding of the sequences. We have already proved the efficiency and robustness of the STTP-SPIHT in both noisy and noise...
Conference Paper
Full-text available
This paper presents an embedded video compression with error resilience and error concealment using the three dimensional SPIHT (3-D SPIHT) algorithm. We use a new method for partitioning the wavelet coefficients into spatio-temporal (s-t) blocks to get higher error resilience and to support error concealment. Instead of grouping adjacent coefficients...
Chapter
This chapter is devoted to the exposition of a complete video coding system, which is based on coding of three dimensional (wavelet) subbands with the SPIHT (set partitioning in hierarchical trees) coding algorithm. The SPIHT algorithm, which has proved so successful in still image coding, is also shown to be quite effective in video coding, while...
Article
This paper presents an unequal error protection of embedded video bitstreams using the three dimensional SPIHT (3-D SPIHT) algorithm and the Spatio-Temporal Tree Preserving 3-D SPIHT (STTP-SPIHT) algorithm. We have already proved the efficiency and robustness of the STTP-SPIHT in both noisy and noiseless channels by modifying the 3-D SPIHT algorithm. We...
Article
Compressed video bitstreams require protection from channel errors in a wireless channel. The three-dimensional (3-D) SPIHT coder has proved its efficiency and its real-time capability in compression of video. A forward-error-correcting (FEC) channel (RCPC) code combined with a single ARQ (automatic-repeat-request) proved to be an effective means...
Article
A semi-automatic algorithm to extract the semantic video object in image sequences is proposed. Different schemes are used to get the initial video object in the first frame and other frames of a sequence. In the first frame, two polygons are input by the user to specify the area in which the object boundary is located. Then the video object is ext...
Article
Full-text available
The performance of optimum vector quantizers subject to a conditional entropy constraint is studied in this paper. This new class of vector quantizers was originally suggested by Chou and Lookabaugh. A locally optimal design of this kind of vector quantizer can be accomplished through a generalization of the well known entropy-constrained vector qu...
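For context, the sketch below shows the standard ECVQ encoding rule that the conditional-entropy variant generalizes: each input vector is mapped to the codeword minimizing squared distortion plus lambda times the codeword's ideal code length. The variable names are illustrative.

```python
# Standard entropy-constrained VQ (ECVQ) encoding rule: choose the codeword
# minimizing squared distortion plus lambda times its ideal code length -log2(p).
import numpy as np

def ecvq_encode(x, codebook, probs, lam):
    """x: (d,) input vector; codebook: (K, d); probs: (K,) codeword probabilities."""
    dist = np.sum((codebook - x) ** 2, axis=1)      # squared-error distortion
    length = -np.log2(np.maximum(probs, 1e-12))     # ideal entropy-code lengths
    return int(np.argmin(dist + lam * length))
```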
Article
Full-text available
Embedded zerotree wavelet (EZW) coding, introduced by J. M. Shapiro, is a very effective and computationally simple technique for image compression. Here we offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitu...
Conference Paper
Full-text available
This paper proposes several low complexity algorithmic modifications to the SPIHT (set partitioning in hierarchical trees) image coding method of Said and Pearlman (1996). The modifications exploit universal traits common to real-world images. Approximately 1-2% compression gain (bit rate reduction for a given mean squared error) has been obtai...
Article
Full-text available
We propose a low bit-rate embedded video coding scheme that utilizes a 3-D extension of the set partitioning in hierarchical trees (SPIHT) algorithm which has proved so successful in still image coding. Three-dimensional spatio-temporal orientation trees coupled with powerful SPIHT sorting and refinement renders 3-D SPIHT video coder so efficient t...
Article
Full-text available
Compressed video bitstreams require protection from channel errors in a wireless channel and protection from packet loss in a wired ATM channel. The three-dimensional (3-D) SPIHT coder has proved its efficiency and its real-time capability in compression of video. A forward-error-correcting (FEC) channel (RCPC) code combined with a single ARQ (auto...
