Conference Paper

Signal denoising using wavelet and block hidden Markov model


Abstract

In this paper, we propose a novel wavelet-domain HMM based on blocks, which strikes a balance between improving the spatial adaptability of the contextual HMM (CHMM) and keeping the model reliable. Each wavelet coefficient is modeled as a Gaussian mixture, and the dependencies among wavelet coefficients in each subband are described by a context structure; this structure is then refined by blocks, i.e., connected areas within a scale that share the same context. Before denoising, efficient expectation-maximization (EM) algorithms are developed for fitting the HMM to the observed signal data. The parameters of the trained HMM are used to modify the wavelet coefficients according to the rule of minimizing the mean squared error (MSE) of the signal, and the inverse wavelet transform is then applied to the modified coefficients to reconstruct the denoised signal. Finally, experimental results are given, showing that the block hidden Markov model (BHMM) is a powerful yet simple tool for signal denoising.
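
A minimal sketch of the pipeline this abstract describes, leaving out the block/context structure that the abstract does not detail: each detail subband is fitted with a two-state zero-mean Gaussian mixture by EM, coefficients are shrunk to their posterior mean (the minimum-MSE rule), and the inverse transform reconstructs the signal. The two-state choice, the simplified moment-matching M-step, the median-based noise estimate, and the use of PyWavelets are all our assumptions, not the paper's specification.

```python
# Simplified wavelet-domain GMM/EM denoiser (block/context structure omitted).
import numpy as np
import pywt

def fit_gmm_em(w, sigma_n, n_iter=30):
    """EM fit of a 2-state zero-mean Gaussian mixture to noisy coefficients
    w = s + n, n ~ N(0, sigma_n^2). Returns mixture weights p and the
    *signal* variances tau2 of the two states."""
    p = np.array([0.5, 0.5])
    tau2 = np.array([0.1, 10.0]) * np.var(w)            # small/large state
    for _ in range(n_iter):
        var_k = tau2 + sigma_n**2                        # noisy-coef variance
        lik = p / np.sqrt(2*np.pi*var_k) * np.exp(-w[:, None]**2 / (2*var_k))
        resp = lik / lik.sum(axis=1, keepdims=True)      # E-step
        p = resp.mean(axis=0)                            # M-step (weights)
        # simplified M-step: per-state total variance minus noise variance
        tau2 = np.maximum(
            (resp * w[:, None]**2).sum(0) / resp.sum(0) - sigma_n**2, 1e-12)
    return p, tau2

def mmse_shrink(w, p, tau2, sigma_n):
    """Posterior-mean (minimum-MSE) estimate of the clean coefficients."""
    var_k = tau2 + sigma_n**2
    lik = p / np.sqrt(2*np.pi*var_k) * np.exp(-w[:, None]**2 / (2*var_k))
    resp = lik / lik.sum(axis=1, keepdims=True)
    gain = tau2 / var_k                                  # Wiener gain per state
    return (resp * gain).sum(axis=1) * w

def denoise(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma_n = np.median(np.abs(coeffs[-1])) / 0.6745     # robust noise estimate
    out = [coeffs[0]]                                    # keep scaling coeffs
    for d in coeffs[1:]:
        p, tau2 = fit_gmm_em(d, sigma_n)
        out.append(mmse_shrink(d, p, tau2, sigma_n))
    return pywt.waverec(out, wavelet)
```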


Chapter
In this paper, we propose a statistical scheme for activity level measurement (ALM) based on a wavelet-domain hidden Markov model (WD-HMM) and maximum likelihood (ML). The source images are first decomposed by wavelets, and only the coefficients in the high-frequency (HH) subband are used. Because wavelets are shift-variant, the merged image is assembled from the source images directly. The regions of each source image are obtained by the Hough transform (HT), and their activity levels are decided by the ALM of their HH coefficients according to the maximum-likelihood rule. Finally, two multi-focus images are merged by the new framework, as sketched below. The fusion results show the strong ability of our scheme to preserve edge information and avoid shift-variance artifacts.
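
A hedged reading of this scheme in code: each region's activity is scored from its HH-band coefficient energy (a standard stand-in, since the exact likelihood model is not given here), and each region of the fused image is copied from the more active source. The haar wavelet is chosen so the HH band aligns with a 2x down-sampled region mask; `region_masks` is assumed to be a list of boolean masks produced by the Hough-transform step.

```python
# Region-wise multi-focus fusion driven by HH-band activity (illustrative).
import numpy as np
import pywt

def hh_band(img):
    _, (_, _, hh) = pywt.dwt2(img.astype(float), "haar")  # diagonal (HH) band
    return hh

def alm(hh, mask):
    """Activity of a region: mean energy of its HH coefficients."""
    d = hh[mask]
    return float(np.mean(d**2)) if d.size else 0.0

def fuse_regions(src_a, src_b, region_masks):
    ha, hb = hh_band(src_a), hh_band(src_b)
    fused = np.zeros_like(src_a, dtype=float)
    for m in region_masks:
        mh = m[::2, ::2]            # mask at the HH band's half resolution
        # copy the region from whichever source is more "active" there
        fused[m] = (src_a if alm(ha, mh) >= alm(hb, mh) else src_b)[m]
    return fused
```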
Conference Paper
Preserving edges and details is a very important requirement in blind medical image denoising, since the details help doctors diagnose diseases. The bilateral filter (BF) is well known for preserving details while smoothing. However, its performance in image denoising is unsatisfactory. The main reason is that the bilateral filter uses the gray levels of pixels directly, which causes the propagation of noise (PoN). In this paper, we propose a method, named the context bilateral filter (CBF), which conditions the bilateral filter on the context: the range filter is defined on the context rather than directly on the gray levels. Since the context can be viewed as a smoother version of the image, the PoN is greatly suppressed. Experimental results on real medical images demonstrate the good performance of the method.
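
The core idea translates directly into code: compute a smoothed "context" image, then evaluate the bilateral range kernel on that context instead of on the noisy gray levels. The Gaussian pre-filter standing in for the context, and all kernel widths, are illustrative assumptions rather than the paper's exact choices.

```python
# Context bilateral filter sketch: range weights come from a smoothed image.
import numpy as np
from scipy.ndimage import gaussian_filter

def context_bilateral(img, radius=3, sigma_s=2.0, sigma_r=10.0, sigma_ctx=1.5):
    ctx = gaussian_filter(img.astype(float), sigma_ctx)   # smoother "context"
    H, W = img.shape
    pad = radius
    imgp = np.pad(img.astype(float), pad, mode="reflect")
    ctxp = np.pad(ctx, pad, mode="reflect")
    # the spatial kernel is fixed, so precompute it once
    y, x = np.mgrid[-radius:radius+1, -radius:radius+1]
    ws = np.exp(-(x**2 + y**2) / (2*sigma_s**2))
    out = np.empty((H, W), dtype=float)
    for i in range(H):
        for j in range(W):
            patch = imgp[i:i+2*pad+1, j:j+2*pad+1]
            cpatch = ctxp[i:i+2*pad+1, j:j+2*pad+1]
            # range weights from the CONTEXT, so pixel noise does not
            # propagate into the weights (the "PoN" suppression)
            wr = np.exp(-(cpatch - ctx[i, j])**2 / (2*sigma_r**2))
            w = ws * wr
            out[i, j] = (w * patch).sum() / w.sum()
    return out
```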
Conference Paper
Ultrasound images are affected by severe noise. In this paper, we propose an adaptive denoising method that determines a variable neighborhood for each pixel in an ultrasound image. The real neighbors of a pixel are chosen according to the local energy of the pixel, the distance between the pixel and its nearby pixels, and the requirement of a reliable estimate. All the real neighbors of a pixel form a neighborhood, named the real neighborhood (RN) of the pixel. RNs preserve edges and yield reliable estimates simultaneously, two goals that existing methods cannot meet at the same time and must trade off against each other. Experiments show better performance in both denoising and edge preservation than state-of-the-art techniques.
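
A rough sketch of how such a real neighborhood might be grown, with our own assumptions filling in the unspecified pieces (the 3x3 local-energy measure, the relative similarity test, and the minimum sample count enforcing a reliable estimate): candidates are visited in order of spatial distance, similar-energy pixels are admitted, and the nearest few are admitted unconditionally so every estimate has enough samples.

```python
# Real-neighborhood (RN) denoising sketch: variable neighborhood per pixel.
import numpy as np
from scipy.ndimage import uniform_filter

def rn_denoise(img, radius=5, min_count=9, energy_tol=0.5):
    img = img.astype(float)
    energy = uniform_filter(img**2, size=3)        # local energy per pixel
    pad = radius
    imgp = np.pad(img, pad, mode="reflect")
    engp = np.pad(energy, pad, mode="reflect")
    # candidate offsets sorted by spatial distance to the center
    offs = [(dy, dx) for dy in range(-radius, radius+1)
                     for dx in range(-radius, radius+1)]
    offs.sort(key=lambda o: o[0]**2 + o[1]**2)
    out = np.empty_like(img)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            e0 = energy[i, j]
            vals = []
            for dy, dx in offs:
                e = engp[i+pad+dy, j+pad+dx]
                # admit a neighbor if its local energy is similar (likely on
                # the same side of an edge), or if more samples are still
                # needed for a reliable estimate
                if abs(e - e0) <= energy_tol * (e0 + 1e-6) or len(vals) < min_count:
                    vals.append(imgp[i+pad+dy, j+pad+dx])
            out[i, j] = np.mean(vals)
    return out
```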
Conference Paper
The objective of image fusion is to combine information from multiple images of the same scene into a single fused image that is more suitable for human and machine perception or for further image-processing tasks. In this paper, we propose a novel region-based framework built on wavelet-domain hidden Markov models (WD-HMM) and the Hough transform (HT). Since the shift variance of wavelets leads to artifacts and blurred edges in the fused image, wavelets in our framework are used only to judge the nature of the source images. The framework also provides fusion rules based on maximum-likelihood theory, proposed here for the first time, which places fusion on a systematic statistical footing. The region-based fusion is achieved through the HT. Finally, our scheme is applied to merging two images of two clocks with different focus. The fusion results show the strong ability of our scheme to preserve edge information and avoid shift-variance artifacts.
Conference Paper
Wavelet-domain hidden Markov models (HMMs) have proven to be useful tools for statistical signal and image processing. However, most improvements of wavelet-domain HMMs focus only on imposing an additional dependency structure on the original model to capture further dependencies among wavelet coefficients. Moreover, existing methods do not fully consider the effect of noise in the high-frequency subbands of the wavelet transform: simple schemes, such as dividing a subband of wavelet coefficients into blocks, cannot be carried out reliably on a noisy image. We give a more general framework that simplifies wavelet-domain HMMs using templates. The new model lets us share statistics in real-world noisy images in a more principled way and yields a simple, local, and reliable model. Templates are constructed in the subband of scaling coefficients in order to reduce the effect of image noise while providing powerful yet tractable probabilistic image models. Before images are processed with wavelet-domain HMMs, the model parameters must be estimated by an EM training algorithm that shares statistics according to the templates. Finally, to demonstrate the utility of the new models, we give an image-denoising example using templates and wavelet-domain HMMs; a rough sketch of the template idea follows.
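
One way to read the template construction, sketched under our own assumptions: labels are derived from the relatively noise-free scaling (approximation) coefficients by quantile binning, and detail coefficients that share a label pool their statistics. The quantile-based labeling and the per-label mean/variance pooling are illustrative, not the paper's exact procedure.

```python
# Template-style statistic sharing driven by the scaling coefficients.
import numpy as np
import pywt

def template_labels(approx, n_templates=4):
    """Label each scaling coefficient by the quantile bin of its magnitude;
    co-located detail coefficients inherit the label."""
    mag = np.abs(approx)
    edges = np.quantile(mag, np.linspace(0, 1, n_templates + 1)[1:-1])
    return np.digitize(mag, edges)          # values in 0..n_templates-1

def shared_stats(detail, labels):
    """Per-template mean/variance: all detail coefficients with the same
    label share a single set of statistics."""
    return {int(t): (float(detail[labels == t].mean()),
                     float(detail[labels == t].var()))
            for t in np.unique(labels)}

# usage on a 2-D image (single decomposition level, stand-in data)
img = np.random.randn(64, 64)
approx, (ch, cv, cd) = pywt.dwt2(img, "haar")
print(shared_stats(cd, template_labels(approx)))
```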
Article
The method of wavelet thresholding for removing noise, or denoising, has been researched extensively due to its effectiveness and simplicity. Much of the literature has focused on developing the best uniform threshold or best basis selection. However, not much has been done to make the threshold values adaptive to the spatially changing statistics of images. Such adaptivity can improve the wavelet thresholding performance because it allows additional local information of the image (such as the identification of smooth or edge regions) to be incorporated into the algorithm. This work proposes a spatially adaptive wavelet thresholding method based on context modeling, a common technique used in image compression to adapt the coder to changing image characteristics. Each wavelet coefficient is modeled as a random variable of a generalized Gaussian distribution with an unknown parameter. Context modeling is used to estimate the parameter for each coefficient, which is then used to adapt the thresholding strategy. This spatially adaptive thresholding is extended to the overcomplete wavelet expansion, which yields better results than the orthogonal transform. Experimental results show that spatially adaptive wavelet thresholding yields significantly superior image quality and lower MSE than the best uniform thresholding with the original image assumed known.
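
A compact sketch of context-driven adaptive thresholding in this spirit: the context of a coefficient is a local average of neighboring magnitudes, coefficients are grouped by context value, the signal standard deviation is estimated within each group, and a threshold of the form t = sigma^2 / sigma_x (near-optimal for a generalized-Gaussian prior) is applied per group. The window size, bin count, and soft-thresholding rule follow common practice rather than this exact paper.

```python
# Spatially adaptive thresholding via context grouping (illustrative).
import numpy as np
from scipy.ndimage import uniform_filter

def context_threshold(subband, sigma_n, n_bins=16, win=3):
    ctx = uniform_filter(np.abs(subband), size=win)      # context variable
    flat, cflat = subband.ravel(), ctx.ravel()
    edges = np.quantile(cflat, np.linspace(0, 1, n_bins + 1)[1:-1])
    bins = np.digitize(cflat, edges)                     # group by context
    out = np.empty_like(flat)
    for b in range(n_bins):
        m = bins == b
        if not m.any():
            continue
        # signal std estimated from coefficients sharing this context
        sig_x = np.sqrt(max(flat[m].var() - sigma_n**2, 1e-12))
        t = sigma_n**2 / sig_x                           # adaptive threshold
        out[m] = np.sign(flat[m]) * np.maximum(np.abs(flat[m]) - t, 0)
    return out.reshape(subband.shape)
```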
Article
Wavelet domain hidden Markov models (HMMs) have been proposed and applied to image processing, e.g., image denoising. We develop a new HMM, called local contextual HMM (LCHMM), by introducing the Gaussian mixture field where wavelet coefficients are assumed to locally follow the Gaussian mixture distributions determined by their neighborhoods. The LCHMM can exploit both the local statistics and the intrascale dependencies of wavelet coefficients at a low computational complexity. We show that the LCHMM combined with the "cycle-spinning" technique can achieve state-of-the-art image denoising performance.
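
The "cycle-spinning" step named here is simple to state in code: denoise shifted copies of the image and average the unshifted results, which compensates for the shift variance of the decimated wavelet transform. The plain soft-threshold denoiser below is only a placeholder for the LCHMM, which the abstract does not specify in implementable detail.

```python
# Cycle spinning around an arbitrary denoiser (placeholder soft threshold).
import numpy as np
import pywt

def soft_denoise(img, wavelet="db2", t=0.1):
    cA, (cH, cV, cD) = pywt.dwt2(img, wavelet)
    shrink = lambda c: np.sign(c) * np.maximum(np.abs(c) - t, 0)
    return pywt.idwt2((cA, (shrink(cH), shrink(cV), shrink(cD))), wavelet)

def cycle_spin(img, denoiser, max_shift=4):
    acc = np.zeros_like(img, dtype=float)
    n = 0
    for dy in range(max_shift):
        for dx in range(max_shift):
            shifted = np.roll(np.roll(img, dy, 0), dx, 1)
            den = denoiser(shifted)[:img.shape[0], :img.shape[1]]
            acc += np.roll(np.roll(den, -dy, 0), -dx, 1)  # undo the shift
            n += 1
    return acc / n
```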
Article
When fitting wavelet-based models, shrinkage of the empirical wavelet coefficients is an effective tool for denoising the data. This article outlines a Bayesian approach to shrinkage, obtained by placing priors on the wavelet coefficients. The prior for each coefficient consists of a mixture of two normal distributions with different standard deviations. The simple and intuitive form of the prior allows us to propose automatic choices of the prior parameters. These parameters are chosen adaptively according to the resolution level of the coefficients, typically shrinking high-resolution (frequency) coefficients more heavily. Assuming a good estimate of the background noise level, we obtain closed-form expressions for the posterior means and variances of the unknown wavelet coefficients. The latter may be used to assess uncertainty in the reconstruction. Several examples are used to illustrate the method, and comparisons are made with other shrinkage methods.
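
For a single coefficient this closed form is standard and worth writing out. With observation w = theta + noise, noise level sigma assumed known, and the two-normal prior with mixing weight pi (so pi_1 = pi and pi_2 = 1 - pi):

```latex
w = \theta + \varepsilon, \qquad \varepsilon \sim N(0,\sigma^2), \qquad
\theta \sim \pi\,N(0,\tau_1^2) + (1-\pi)\,N(0,\tau_2^2),

\mathbb{E}[\theta \mid w] \;=\; \sum_{k=1}^{2} p_k(w)\,
  \frac{\tau_k^2}{\tau_k^2+\sigma^2}\, w,
\qquad
p_k(w) \;\propto\; \pi_k \, N\!\left(w;\, 0,\; \tau_k^2+\sigma^2\right).
```

Each coefficient is thus shrunk by an average of two linear (Wiener-style) factors, weighted by the posterior probability of each mixture component, which is how the level-dependent parameters shrink high-resolution coefficients more heavily.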
Article
We attempt to recover a function of unknown smoothness from noisy sampled data. We introduce a procedure, SureShrink, that suppresses noise by thresholding the empirical wavelet coefficients. The thresholding is adaptive: a threshold level is assigned to each dyadic resolution level by the principle of minimizing the Stein unbiased estimate of risk (SURE) for threshold estimates. The computational effort of the overall procedure is of order N · log(N) as a function of the sample size N. SureShrink is smoothness adaptive: if the unknown function contains jumps, then the reconstruction (essentially) does also; if the unknown function has a smooth piece, then the reconstruction is (essentially) as smooth as the mother wavelet will allow. The procedure is in a sense optimally smoothness adaptive: it is near minimax simultaneously over a whole interval of the Besov scale; the size of this interval depends on the choice of mother wavelet. We know from a previous paper by the authors that traditional smoothing methods (kernels, splines, and orthogonal series estimates), even with optimal choices of the smoothing parameter, would be unable to perform in a near-minimax way over many spaces in the Besov scale. Examples of SureShrink are given. The advantages of the method are particularly evident when the underlying function has jump discontinuities on a smooth background.
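
The level-wise threshold selection can be sketched compactly. For coefficients rescaled to unit noise, Stein's unbiased risk estimate of soft thresholding at t is SURE(t) = N - 2 #{i : |w_i| <= t} + sum_i min(|w_i|, t)^2, and it suffices to minimize over the candidate set {|w_1|, ..., |w_N|}. This sketch omits SureShrink's hybrid test, which falls back to the universal threshold on sparse levels.

```python
# Per-level SURE threshold for soft thresholding (sketch).
import numpy as np

def sure_threshold(w, sigma=1.0):
    a = np.sort(np.abs(w / sigma))               # candidates, unit-noise scale
    n = a.size
    cum = np.cumsum(a**2)
    k = np.arange(1, n + 1)
    # SURE at t = a[k-1]: n - 2k + sum_{i<=k} a_i^2 + (n-k) t^2
    sure = n - 2*k + cum + (n - k) * a**2
    return sigma * a[np.argmin(sure)]

def sureshrink_level(w, sigma=1.0):
    t = sure_threshold(w, sigma)
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)   # soft threshold
```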
Article
Contents: The what, why, and how of wavelets; The continuous wavelet transform; Discrete wavelet transforms: frames; Time-frequency density and orthonormal bases; Orthonormal bases of wavelets and multiresolution analysis; Orthonormal bases of compactly supported wavelets; Symmetry for compactly supported wavelet bases; Characterization of functional spaces by means of wavelets; Generalizations and tricks for orthonormal wavelet bases.
Conference Paper
Wavelet-domain hidden Markov models (HMMs) provide a powerful new approach for statistical modeling and processing of wavelet coefficients. In addition to characterizing the statistics of individual wavelet coefficients, HMMs capture some of the key interactions between wavelet coefficients. However, as HMMs model an increasing number of wavelet coefficient interactions, HMM-based signal processing becomes increasingly complicated. In this paper, we propose a new approach to HMMs based on the notion of context. By modeling wavelet coefficient inter-dependencies via contexts, we retain the approximation capabilities of HMMs, yet substantially reduce their complexity. To illustrate the power of this approach, we develop new algorithms for signal estimation and for efficient synthesis of non-Gaussian, long-range-dependent network traffic.
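
A toy illustration of conditioning on context rather than on a full dependency graph, with our own simplifications (a 3x3 neighbor-energy context, quantized to two levels): state statistics are estimated separately for low- and high-context coefficients, so the model stays local and cheap.

```python
# Context conditioning sketch: per-context-group state statistics.
import numpy as np
from scipy.ndimage import uniform_filter

def context_values(subband):
    energy = uniform_filter(subband.astype(float)**2, size=3)
    # remove the center's own contribution: context = mean of 8 neighbors
    return (9*energy - subband.astype(float)**2) / 8

def context_state_priors(subband):
    """Split coefficients into low/high-context groups and estimate a crude
    'large state' probability separately in each group."""
    ctx = context_values(subband)
    hi = ctx > np.median(ctx)
    var_all = subband.var()
    return {name: float(np.mean(subband[mask]**2 > var_all))
            for name, mask in (("low_ctx", ~hi), ("high_ctx", hi))}
```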
Article
Wavelet-based statistical signal processing techniques such as denoising and detection typically model the wavelet coefficients as independent or jointly Gaussian. These models are unrealistic for many real-world signals. We develop a new framework for statistical signal processing based on wavelet-domain hidden Markov models (HMMs) that concisely models the statistical dependencies and non-Gaussian statistics encountered in real-world signals. Wavelet-domain HMMs are designed with the intrinsic properties of the wavelet transform in mind and provide powerful, yet tractable, probabilistic signal models. Efficient expectation maximization algorithms are developed for fitting the HMMs to observational signal data. The new framework is suitable for a wide range of applications, including signal estimation, detection, classification, prediction, and even synthesis. To demonstrate the utility of wavelet-domain HMMs, we develop novel algorithms for signal denoising, classification, and detection.
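
Among the applications listed, synthesis shows the model structure most directly. A toy sketch of a hidden Markov tree, with an illustrative transition matrix and state variances of our choosing: each coefficient's hidden small/large state depends on its parent's state across scale, and the coefficient is drawn from that state's zero-mean Gaussian.

```python
# Hidden-Markov-tree synthesis sketch (binary tree, 2 hidden states).
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1],           # P(child state | parent state):
              [0.3, 0.7]])          # large states persist across scale

def synthesize_tree(levels, tau2=(0.05, 2.0)):
    states = [np.array([rng.integers(2)])]       # root state
    coeffs = []
    for _ in range(levels):
        parent = np.repeat(states[-1], 2)        # each node has 2 children
        s = np.array([rng.choice(2, p=A[p]) for p in parent])
        states.append(s)
        coeffs.append(rng.normal(0, np.sqrt(np.take(tau2, s))))
    return coeffs                                 # list of per-scale arrays
```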
Article
This tutorial provides an overview of the basic theory of hidden Markov models (HMMs) as originated by L.E. Baum and T. Petrie (1966) and gives practical details on methods of implementation of the theory along with a description of selected applications of the theory to distinct problems in speech recognition. Results from a number of original sources are combined to provide a single source of acquiring the background required to pursue further this area of research. The author first reviews the theory of discrete Markov chains and shows how the concept of hidden states, where the observation is a probabilistic function of the state, can be used effectively. The theory is illustrated with two simple examples, namely coin-tossing, and the classic balls-in-urns system. Three fundamental problems of HMMs are noted and several practical techniques for solving these problems are given. The various types of HMMs that have been studied, including ergodic as well as left-right models, are described.
Bayesian tree-structured image modeling using wavelet-domain hidden Markov models
  • J. K. Romberg
  • H. Choi
  • R. Baraniuk