Article

Image Denoising: Pointwise Adaptive Approach

Abstract

The paper is concerned with the problem of image denoising. We consider the case of black-and-white images consisting of a finite number of regions with smooth boundaries, where the image value is assumed to be piecewise constant within each region. A new method of image denoising is proposed which is adaptive (assumption free) with respect to the number of regions and the smoothness properties of the edges. The method is based on pointwise image recovery and relies on an adaptive choice of a smoothing window. It is shown that the attainable quality of estimation depends on the distance from the point of estimation to the closest boundary and on the smoothness properties of this boundary. As a consequence, it turns out that the proposed method provides the optimal rate of edge estimation.

1. Introduction. One of the main problems of image analysis is the reconstruction of an image (a picture) from noisy data. It has been intensively studied in recent years; see e.g. the books of Pratt (1978), Grenander (19...
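The pointwise recovery with an adaptively chosen smoothing window can be illustrated by a small sketch. The Python code below is not the authors' procedure: the dyadic family of square windows, the intersection-of-confidence-intervals stopping rule, and the constant kappa are assumptions made only to convey the idea of enlarging the window until the data contradict local homogeneity.

```python
import numpy as np

def pointwise_adaptive_mean(img, sigma, scales=(1, 2, 4, 8), kappa=2.0):
    """Denoise a piecewise-constant image by a pointwise adaptive choice of a
    square smoothing window (an ICI/Lepski-style stopping rule; illustrative only)."""
    H, W = img.shape
    out = np.empty_like(img, dtype=float)
    for y in range(H):
        for x in range(W):
            est = img[y, x]
            lo, hi = -np.inf, np.inf              # running intersection of confidence intervals
            for h in scales:
                y0, y1 = max(0, y - h), min(H, y + h + 1)
                x0, x1 = max(0, x - h), min(W, x + h + 1)
                block = img[y0:y1, x0:x1]
                m = block.mean()
                half_width = kappa * sigma / np.sqrt(block.size)
                lo, hi = max(lo, m - half_width), min(hi, m + half_width)
                if lo > hi:                        # intervals no longer intersect: keep previous scale
                    break
                est = m
            out[y, x] = est
    return out

# usage on a synthetic piecewise-constant image
rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[:, 32:] = 1.0
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
denoised = pointwise_adaptive_mean(noisy, sigma=0.3)
```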

... The accuracy analysis produced in [38] for estimation at far and near change points shows that the estimates are nearly optimal within the usual log n factor unavoidable for the adaptive estimation convergence rate. A 2D generalization of the algorithm from [38] is proposed for image denoising in [35]. It is assumed that the image intensity is an unknown piecewise constant function. ...
... The theoretical analysis produced for 1D regression in [9] and for multidimensional regression in [33] shows that the ICI adaptive algorithms achieve the best convergence rate and in this way the ICI adaptation is asymptotically optimal. Similar results for different classes of functions and different accuracy criteria are proved for many versions of Lepski's adaptation algorithms [27,28,35]. The introduced MR spectral decomposition transforms the original nonparametric estimation problem into the sequence estimation framework with the sequence of ...
Article
In nonparametric local polynomial regression the adaptive selection of the scale parameter (window size/bandwidth) is a key problem. Recently, new efficient algorithms based on Lepski's approach have been proposed in mathematical statistics for spatially adaptive varying-scale denoising. A common feature of these algorithms is that they form test-estimates differing by the scale h∈H, and special statistical rules are exploited in order to select the estimate with the best pointwise varying scale. In this paper a novel multiresolution (MR) local polynomial regression is proposed. Instead of selecting the estimate with the best scale h, a nonlinear estimate is built using all of the test-estimates. The adaptive estimation consists of two steps. The first step transforms the data into noisy spectrum coefficients (MR analysis). In the second step, this noisy spectrum is filtered by a thresholding procedure and used for estimation (MR synthesis).
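A minimal 1D sketch of the two-step scheme, with moving averages standing in for the local polynomial test-estimates, differences of successive scales standing in for the MR spectrum coefficients, and a generic hard threshold; none of these choices are the paper's exact construction.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def mr_denoise(y, sigma, scales=(1, 2, 4, 8, 16), tau=3.0):
    """Two-step multiresolution sketch: (1) MR analysis -- differences of
    moving-average test-estimates at successive scales; (2) MR synthesis --
    hard-threshold the difference coefficients and add them back to the
    coarsest estimate."""
    y = np.asarray(y, dtype=float)
    ests = [uniform_filter1d(y, size=2 * h + 1, mode='nearest') for h in scales]
    coarse = ests[-1]
    details = [ests[j] - ests[j + 1] for j in range(len(ests) - 1)]   # MR "spectrum"
    thr = tau * sigma / np.sqrt(np.array([2 * h + 1 for h in scales[:-1]]))
    rec = coarse.copy()
    for d, t in zip(details, thr):
        rec += np.where(np.abs(d) > t, d, 0.0)   # keep only significant coefficients
    return rec
```

Without thresholding the sum telescopes back to the finest-scale estimate, so the nonlinearity enters only through which coefficients are judged significant.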
... Observations Y_i with X_i outside the region U(x) are not used when estimating the value θ(x). This kind of localization arises e.g. in the regression tree approach, in change point estimation, Müller (1992) and Spokoiny (1998), and in image denoising, Qiu (1998), Polzehl and Spokoiny (2003), among many others. ...
Article
Full-text available
The paper presents a unified approach to local likelihood estimation for a broad class of nonparametric models, including e.g. the regression, density, Poisson and binary response models. The method extends the adaptive weights smoothing (AWS) procedure introduced in Polzehl and Spokoiny (2000) in the context of image denoising. The main idea of the method is to describe a greatest possible local neighborhood of every design point X_i in which the local parametric assumption is justified by the data. The method is especially powerful for model functions having large homogeneous regions and sharp discontinuities. The performance of the proposed procedure is illustrated by numerical examples for density estimation and classification. We also establish some remarkable theoretical nonasymptotic results on properties of the new algorithm. This includes the "propagation" property which particularly yields the root-n consistency of the resulting estimate in the homogeneous case. We also state an "oracle" result which implies rate optimality of the estimate under usual smoothness conditions and a "separation" result which explains the sensitivity of the method to structural changes.
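A rough 1D sketch of the adaptive-weights idea for the Gaussian regression case: neighborhoods grow over iterations while a statistical penalty, driven by the current local estimates, prevents averaging across discontinuities. The kernel, the penalty form, and the constants are illustrative, not the AWS specification.

```python
import numpy as np

def aws_1d(y, sigma, n_iter=5, lam=3.0, h0=1.0, growth=1.4):
    """Minimal 1D adaptive-weights-smoothing sketch (Gaussian regression case).
    Weights combine a location kernel with growing bandwidth and a statistical
    penalty comparing current local estimates; quadratic cost per iteration,
    intended for short signals only."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    x = np.arange(n, dtype=float)
    theta = y.copy()                                   # current local estimates
    h = h0
    for _ in range(n_iter):
        new = np.empty_like(theta)
        for i in range(n):
            w_loc = np.exp(-0.5 * ((x - x[i]) / h) ** 2)        # location weights
            pen = ((theta - theta[i]) ** 2) * (h / sigma ** 2)  # statistical penalty (proxy)
            w = w_loc * np.exp(-pen / lam)
            new[i] = np.sum(w * y) / np.sum(w)
        theta, h = new, h * growth                     # neighborhood grows each iteration
    return theta
```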
... This method ignores higher order statistics of the noise. Others use a hypothesis test between the empirical distribution of the residual and the true noise distribution[16]for polynomial order selection in regressionbased smoothing. However the exact variance of the noise or its complete distribution is usually not known in practical situations. ...
Conference Paper
Full-text available
Despite the vast body of literature on image denoising, relatively little work has been done in the area of automatically choosing the filter parameters that yield optimal filter performance. The choice of these parameters is crucial for the performance of any filter. In the literature, some independence-based criteria have been proposed, which measure the degree of independence between the denoised image and the residual image (defined as the difference between the noisy image and the denoised one). We contribute to these criteria and point out an important deficiency inherent in all of them. We also propose a new criterion which quantifies the inherent `noiseness' of the residual image without referring to the denoised image, starting with the assumption of an additive and i.i.d. noise model, with a loose lower bound on the noise variance. Several empirical results are demonstrated on two well-known algorithms: NL-means and total variation, on a database of 13 images at six different noise levels, and for three types of noise distributions.
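The paper's exact "noiseness" criterion is not reproduced in this summary; the sketch below only illustrates the general idea of scoring how noise-like the residual is and sweeping a filter parameter against that score. The lag-one autocorrelation proxy and the Gaussian filter are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def residual_whiteness(noisy, denoised):
    """Score how noise-like the residual is via its lag-one spatial
    autocorrelations (i.i.d. noise -> both close to zero). A simple proxy,
    not the criterion proposed in the paper."""
    r = noisy - denoised
    r = r - r.mean()
    v = np.mean(r * r) + 1e-12
    acf_x = np.mean(r[:, 1:] * r[:, :-1]) / v
    acf_y = np.mean(r[1:, :] * r[:-1, :]) / v
    return abs(acf_x) + abs(acf_y)

def pick_parameter(noisy, widths=np.linspace(0.5, 3.0, 11)):
    """Sweep a Gaussian-filter width and keep the one whose residual looks
    most like noise (illustrative parameter-selection loop)."""
    scores = [residual_whiteness(noisy, gaussian_filter(noisy, w)) for w in widths]
    return widths[int(np.argmin(scores))]
```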
Conference Paper
Image restoration tasks are ill-posed problems, typically solved with priors. Since the optimal prior is the exact unknown density of natural images, actual priors are only approximate and typically restricted to small patches. This raises several questions: How much may we hope to improve current restoration results with future sophisticated algorithms? And more fundamentally, even with perfect knowledge of natural image statistics, what is the inherent ambiguity of the problem? In addition, since most current methods are limited to finite support patches or kernels, what is the relation between the patch complexity of natural images, patch size, and restoration errors? Focusing on image denoising, we make several contributions. First, in light of computational constraints, we study the relation between denoising gain and sample size requirements in a nonparametric approach. We present a law of diminishing returns, namely that with increasing patch size, rare patches not only require a much larger dataset, but also gain little from it. This result suggests novel adaptive variable-sized patch schemes for denoising. Second, we study absolute denoising limits, regardless of the algorithm used, and the convergence rate to them as a function of patch size. Scale invariance of natural images plays a key role here and implies both a strictly positive lower bound on denoising and a power law convergence. Extrapolating this parametric law gives a ballpark estimate of the best achievable denoising, suggesting that some improvement, although modest, is still possible.
Conference Paper
The goal of natural image denoising is to estimate a clean version of a given noisy image, utilizing prior knowledge on the statistics of natural images. The problem has been studied intensively with considerable progress made in recent years. However, it seems that image denoising algorithms are starting to converge and recent algorithms improve over previous ones by only fractional dB values. It is thus important to understand how much more natural image denoising algorithms can still be improved and what inherent limits are imposed by the actual statistics of the data. The challenge in evaluating such limits is that constructing proper models of natural image statistics is a long standing and yet unsolved problem. To overcome the absence of accurate image priors, this paper takes a nonparametric approach and represents the distribution of natural images using a huge set of 10^10 patches. We then derive a simple statistical measure which provides a lower bound on the optimal Bayesian minimum mean square error (MMSE). This imposes a limit on the best possible results of denoising algorithms which utilize a fixed support around a denoised pixel and a generic natural image prior. Our findings suggest that for small windows, state of the art denoising algorithms are approaching optimality and cannot be further improved beyond ~0.1 dB values.
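A hedged sketch of the nonparametric construction behind such bounds: with a database of clean patches standing in for the natural-image prior, the Bayesian MMSE estimate of a pixel under additive Gaussian noise is a likelihood-weighted average over the database. The database, patch size, and noise level are inputs assumed by the example.

```python
import numpy as np

def patch_mmse_denoise_pixel(noisy_patch, clean_patches, sigma):
    """Nonparametric MMSE estimate of the central pixel of a noisy patch,
    using a sample of clean patches as a stand-in for the natural-image prior.
    `clean_patches` has shape (N, k, k); `noisy_patch` has shape (k, k)."""
    diffs = clean_patches - noisy_patch                       # (N, k, k)
    log_w = -np.sum(diffs ** 2, axis=(1, 2)) / (2 * sigma ** 2)
    log_w -= log_w.max()                                      # numerical stability
    w = np.exp(log_w)                                         # Gaussian likelihood weights
    k = noisy_patch.shape[0]
    centers = clean_patches[:, k // 2, k // 2]
    return np.sum(w * centers) / np.sum(w)
```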
Article
Full-text available
We review the evolution of nonparametric regression modeling in imaging from the local Nadaraya-Watson kernel estimate to the nonlocal means and further to transform-domain filtering based on nonlocal block-matching. The considered methods are classified mainly according to two main features: local/nonlocal and pointwise/multipoint. Here nonlocal is an alternative to local, and multipoint is an alternative to pointwise. These alternatives, though obvious simplifications, allow us to impose a fruitful and transparent classification of the basic ideas in the advanced techniques. Within this framework, we introduce a novel single- and multiple-model transform-domain nonlocal approach. The Block-Matching and 3-D Filtering (BM3D) algorithm, which is currently one of the best performing denoising algorithms, is treated as a special case of the latter approach.
Article
Since their introduction in image denoising, the family of non-local methods, of which the Non-Local Means (NL-Means) algorithm is the most famous member, has proved its ability to challenge other powerful methods such as wavelet-based approaches or variational techniques. Though simple to implement and efficient in practice, the classical NL-Means algorithm suffers from several limitations: noise artifacts are created around edges, and regions with few repetitions in the image are not treated at all. In this paper, we present an easy-to-implement and time-efficient modification of the NL-Means based on a better reprojection from the patch space to the original pixel space, specially designed to reduce the artifacts due to the rare patch effect. We compare the performance of several reprojection schemes on a toy example and on some classical natural images.
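A minimal sketch of NL-Means with one possible patch-to-pixel reprojection (each pixel averages the estimates of all patches that cover it); this is not necessarily the reprojection advocated in the paper, and the bandwidth and the exhaustive patch search are simplifying assumptions suited only to small images.

```python
import numpy as np
from scipy.spatial.distance import cdist

def nl_means_reprojected(img, sigma, patch=5, h=None):
    """NL-means sketch: every patch gets a similarity-weighted average of all
    other patches, then each pixel averages the estimates of the patches
    covering it (a simple reprojection). Exhaustive O(P^2) comparison."""
    if h is None:
        h = sigma                                    # illustrative bandwidth
    H, W = img.shape
    ys, xs = np.arange(H - patch + 1), np.arange(W - patch + 1)
    patches = np.stack([img[y:y + patch, x:x + patch].ravel()
                        for y in ys for x in xs])    # (P, patch*patch)
    d2 = cdist(patches, patches, 'sqeuclidean') / patches.shape[1]
    w = np.exp(-np.maximum(d2 - 2 * sigma ** 2, 0.0) / (h ** 2))
    est = (w @ patches) / w.sum(axis=1, keepdims=True)   # denoised patches
    out = np.zeros_like(img, dtype=float)
    cnt = np.zeros_like(img, dtype=float)
    k = 0
    for y in ys:
        for x in xs:
            out[y:y + patch, x:x + patch] += est[k].reshape(patch, patch)
            cnt[y:y + patch, x:x + patch] += 1.0
            k += 1
    return out / np.maximum(cnt, 1.0)
```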
Article
Full-text available
A new image denoising algorithm for the additive white Gaussian noise model is given. Like the non-local means method, the filter is based on a weighted average of the observations in a neighborhood, with weights depending on the similarity of local patches. But in contrast to the non-local means filter, instead of using a fixed Gaussian kernel, we propose to choose the weights by minimizing a tight upper bound of the mean squared error. This approach makes it possible to define weights adapted to the function at hand, mimicking the weights of the oracle filter. Under some regularity conditions on the target image, we show that the obtained estimator converges at the usual optimal rate. The proposed algorithm is parameter free in the sense that it automatically calculates the bandwidth of the smoothing kernel; it is fast and its implementation is straightforward. The performance of the new filter is illustrated by numerical simulations.
Conference Paper
Full-text available
We develop a novel statistical model, called the multiscale adaptive regression model (MARM), for spatial and adaptive analysis of neuroimaging data. The primary motivation and application of the proposed methodology is statistical analysis of imaging data on the two-dimensional (2D) surface or in the 3D volume for various neuroimaging studies. The existing voxel-wise approach has several major limitations for the analysis of imaging data, underscoring the great need for methodological development. The voxel-wise approach essentially treats all voxels as independent units, whereas neuroimaging data are spatially correlated in nature and spatially contiguous regions of activation with rather sharp edges are usually expected. The initial smoothing step before the voxel-wise approach often blurs the image data near the edges of activated regions and thus can dramatically increase the numbers of false positives and false negatives. The MARM, which is developed to address these limitations, has three key features in the analysis of imaging data: being spatial, being hierarchical, and being adaptive. The MARM builds a small sphere at each location (called a voxel) and uses these consecutively connected spheres across all voxels to capture spatial dependence among imaging observations. Then, the MARM builds hierarchically nested spheres by increasing the radius of a spherical neighborhood around each voxel and combines all the data in a given radius of each voxel with appropriate weights to adaptively calculate parameter estimates and test statistics. Theoretically, we first establish that the MARM outperforms the classical voxel-wise approach. Simulation studies are used to demonstrate the methodology and examine the finite sample performance of the MARM. We apply our methods to the detection of spatial patterns of brain atrophy in a neuroimaging study of Alzheimer's disease. Our simulation studies with known ground truth confirm that the MARM significantly outperforms the voxel-wise methods.
Article
Full-text available
This paper deals with numerical problems arising when performing maximum likelihood parameter estimation in speckled imagery using small samples. The noise that appears in images obtained with coherent illumination, as is the case of sonar, laser, ultrasound-B, and synthetic aperture radar, is called speckle, and it can neither be assumed Gaussian nor additive. The properties of speckle noise are well described by the multiplicative model, a statistical framework from which stem several important distributions. Amongst these distributions, one is regarded as the universal model for speckled data, namely, the 𝒢⁰ law. This paper deals with amplitude data, so the 𝒢⁰_A distribution will be used. The literature reports that techniques for obtaining estimates (maximum likelihood, based on moments and on order statistics) of the parameters of the 𝒢⁰_A distribution require samples of hundreds, even thousands, of observations in order to obtain sensible values. This is verified for maximum likelihood estimation, and a proposal based on alternated optimization is made to alleviate this situation. The proposal is assessed with real and simulated data, showing that the convergence problems are no longer present. A Monte Carlo experiment is devised to estimate the quality of maximum likelihood estimators in small samples, and real data is successfully analyzed with the proposed alternated procedure. Stylized empirical influence functions are computed and used to choose a strategy for computing maximum likelihood estimates that is resistant to outliers.
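The alternated optimization itself can be sketched generically, without committing to the 𝒢⁰_A density (which is not reproduced here): the two parameters are maximized one at a time until a sweep stabilizes. The log-likelihood function, bounds, and sweep count are supplied by the user and are assumptions of this sketch.

```python
from scipy.optimize import minimize_scalar

def alternated_mle(loglik, z, a0, g0, a_bounds, g_bounds, n_sweeps=20, tol=1e-6):
    """Alternated (coordinate-wise) maximization of a two-parameter
    log-likelihood, the kind of scheme proposed to stabilize ML estimation
    from small samples. `loglik(z, a, g)` must return the total log-likelihood
    of the sample z at parameters (a, g)."""
    a, g = a0, g0
    for _ in range(n_sweeps):
        a_new = minimize_scalar(lambda t: -loglik(z, t, g),
                                bounds=a_bounds, method='bounded').x
        g_new = minimize_scalar(lambda t: -loglik(z, a_new, t),
                                bounds=g_bounds, method='bounded').x
        converged = abs(a_new - a) < tol and abs(g_new - g) < tol
        a, g = a_new, g_new
        if converged:
            break
    return a, g
```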
Article
Full-text available
M-estimators and M-kernel estimators with a redescending ψ-function are not in general consistent. This is often handled by means of coupling the estimator to a consistent one. Coupling the estimator to the (inconsistent) starting point improves the jump preserving properties. However, the consistency depends heavily on the shape of the density of the residuals. This paper shows inconsistency under convenient conditions as well as consistency – even at jump points – under somewhat stronger conditions.
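For concreteness, a standard location M-estimator with Tukey's redescending biweight ψ, computed by iteratively reweighted averaging and started from the median, i.e. coupled to a consistent initial estimate as discussed above. This is a textbook construction, not the specific estimator analyzed in the paper.

```python
import numpy as np

def tukey_location(x, c=4.685, n_iter=50, tol=1e-8):
    """Location M-estimate with Tukey's redescending biweight psi, via
    iteratively reweighted averaging started from the median; c = 4.685 is
    the usual 95%-efficiency tuning constant for Gaussian data."""
    x = np.asarray(x, dtype=float)
    mu = np.median(x)
    s = np.median(np.abs(x - mu)) / 0.6745 + 1e-12    # robust scale (MAD)
    for _ in range(n_iter):
        u = (x - mu) / (c * s)
        w = np.where(np.abs(u) < 1.0, (1.0 - u ** 2) ** 2, 0.0)   # psi(u)/u
        if w.sum() == 0.0:
            break
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            return mu_new
        mu = mu_new
    return mu
```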
Article
Full-text available
We propose a novel nonparametric regression method for deblurring noisy images. The method is based on the local polynomial approximation (LPA) of the image and the paradigm of intersecting confidence intervals (ICI) that is applied to define the adaptive varying scales (window sizes) of the LPA estimators. The LPA-ICI algorithm is nonlinear and spatially adaptive with respect to smoothness and irregularities of the image corrupted by additive noise. Multiresolution wavelet algorithms produce estimates which are combined from different scale projections. In contrast to them, the proposed ICI algorithm gives a varying-scale adaptive estimate defining a single best scale for each pixel. In the new algorithm, the actual filtering is performed in the signal domain while frequency-domain Fourier transform operations are applied only for the calculation of convolutions. The regularized inverse and Wiener inverse filters serve as deblurring operators used jointly with the LPA-design directional kernel filters. Experiments demonstrate the state-of-the-art performance of the new estimators, which visually and quantitatively outperform some of the best existing methods.
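A sketch of the kind of frequency-domain deblurring operator mentioned above (a Tikhonov-regularized inverse filter); the regularization level is an illustrative assumption and the LPA-ICI smoothing stage is omitted.

```python
import numpy as np

def regularized_inverse_deblur(blurred, psf, reg=1e-2):
    """Frequency-domain regularized (Tikhonov) inverse filter. `psf` is
    assumed to be zero-padded to the image size and centered; `reg` controls
    the amount of regularization (illustrative value)."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + reg)        # regularized inverse transfer function
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))
```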
Article
Recently, new efficient algorithms based on Lepski's approach have been proposed for spatially adaptive varying-scale denoising. Special statistical rules are exploited in order to select the estimate with the best pointwise varying scale h from a set of test-estimates ŷ_h(x), h∈H. In this paper, a novel multiresolution (MR) nonparametric regression technique is developed. The adaptive algorithm consists of two steps. The first step transforms the data into noisy spectrum coefficients (MR analysis). In the second step, this noisy spectrum is filtered by a thresholding procedure and exploited for estimation (MR synthesis). This nonlinear estimate is built using the test-estimates ŷ_h(x) of all scales. Simulation confirms the advanced performance of the new denoising algorithms based on MR nonparametric regression.
Article
Let X_i, i ∈ I, be independent random variables defined on a probability space (Ω, 𝒜, P) and with values in a measurable space (𝒳, ℬ), where the index set I = {i = (i_1/n_1, ..., i_d/n_d) : 1 ≤ i_j ≤ n_j ∀ 1 ≤ j ≤ d}, (n_1, ..., n_d ∈ ℕ), is a regular d-dimensional grid in the unit cube [0,1]^d, d ∈ ℕ. Assume there exists a partition [0,1]^d = Γ⁺ ∪ Γ⁻ of the unit cube into two disjoint regions Γ⁺ and Γ⁻ such that X_i has distribution Q⁺ or Q⁻ according to i ∈ Γ⁺ or i ∈ Γ⁻, respectively: P^{X_i} = Q^±, if i ∈ Γ^±. Here Q⁺ and Q⁻ are different and unknown probability measures on (𝒳, ℬ). The problem is to estimate the common (topological) boundary γ = ∂Γ⁺ = ∂Γ⁻ of the two regions Γ⁺ and Γ⁻ based on the data set {X_i : i ∈ I}. For that purpose we use the empirical measures P_M pertaining to {X_i : i ∈ M} for subsets M ⊂ [0,1]^d, P_M := |M|⁻¹ Σ_{i∈M} δ_{X_i}, where δ_x denotes the Dirac measure at the point x ∈ 𝒳 and |M| = card{i ∈ I : i ∈ M} is the number of grid nodes i in M. The sample-based estimate of γ will be selected from a finite collection 𝒯 = 𝒯_I of so-called candidate boundaries, with generic element T. Each candidate T ∈ 𝒯_I induces a partition [0,1]^d = T⁺ ∪ T⁻. Define the weighted difference D_I(T) = |I|⁻¹ |T⁺||T⁻| {P_{T⁺} − P_{T⁻}}, T ∈ 𝒯_I, which has expectation Δ_I(T) = E D_I(T) = λ_I(T)(Q⁺ − Q⁻), where λ_I(T) = |I|⁻¹ {|Γ⁺ ∩ T⁺||T⁻| − |Γ⁺ ∩ T⁻||T⁺|}. If N = N_I is a (possibly random) semi-norm on the set ℳ of all finite signed measures on (𝒳, ℬ), then N[Δ_I(T)] = |λ_I(T)| N[Q⁺ − Q⁻]. (1) The mapping Λ_I(T) = |λ_I(T)| satisfies Λ_I(γ) − Λ_I(T) ≥ κ d(γ, T) ∀ T ∈ 𝒯_I, (2) for some positive κ, where d(γ, T) = |I|⁻¹ |Γ⁺ △ T⁺| with △ denoting the set-theoretic symmetric difference. The pseudometric d can be regarded as a distance between the two boundaries γ and T. Let us assume ...
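A toy instance of the candidate-boundary selection on a regular grid, with vertical cuts as candidate boundaries and the absolute difference of sample means standing in for the semi-norm N; the normalization of the weighted difference is illustrative.

```python
import numpy as np

def estimate_vertical_boundary(X):
    """Choose, among vertical cuts of a regular grid of observations X[i, j],
    the cut maximizing a weighted difference of empirical means on the two
    sides (toy version of selecting the candidate boundary that maximizes
    N[D_I(T)])."""
    n1, n2 = X.shape
    total = n1 * n2
    best_t, best_val = 1, -np.inf
    for t in range(1, n2):                        # cut between columns t-1 and t
        left, right = X[:, :t], X[:, t:]
        val = (left.size * right.size / total) * abs(left.mean() - right.mean())
        if val > best_val:
            best_t, best_val = t, val
    return best_t

# usage: two regions separated at column 40 of an 80-column grid
rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(60, 80)); X[:, 40:] += 1.0
print(estimate_vertical_boundary(X))   # close to 40
```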