IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 6, NO. 6, JUNE 1997 879
Correspondence
A Noise-Filtering Method Using
a Local Information Measure
Azeddine Beghdadi and Ammar Khellaf
Abstract— A nonlinear-noise filtering method based on the entropy
concept is developed and compared to the well-known median filter and
to the center weighted median filter (CWM). The performance of the
proposed method is evaluated through subjective and objective criteria.
It is shown that this method performs better than the classical median
for different types of noise and can perform better than the CWM filter
in some cases.
I. INTRODUCTION
There are two basic approaches for noise filtering, namely, spatial
methods and frequency methods. Most of the spatial smoothing
processes, such as the mean and the median filters, which are widely used
[1]–[5], generally tend to remove noise without explicitly identifying
it. It is, however, possible to selectively filter the noise signal by
comparing the gradient grey level in a neighborhood to a fixed
threshold [2], [6]. The frequency smoothing methods [7], [8] remove
the noise by designing a frequency filter and by adapting a cut-
off frequency when the noise components are decorrelated from the
useful signal in the frequency domain. Unfortunately, these methods
are time consuming and depend on the cut-off frequency and the
filter function behavior. Furthermore, they may produce artificial
frequencies in the processed image. This correspondence introduces a
new nonlinear filter based on the entropy concept. Since the pioneering
work of Frieden [9], the use of entropy [10], [11] in image analysis
has attracted a great number of researchers, especially in image
reconstruction [12], [13] and segmentation [14], [15]. In our previous
work [16] that followed the idea of Shiozaki [17], which consists in
defining a local entropy and thus performing a local treatment, we
have shown that, by amplifying the local entropy of the contrast, one
can obtain a contrast-enhanced image. It has also been suggested that a noise-
filtering treatment can be obtained by decreasing the entropy of the
local contrast in a given neighborhood. In this correspondence, we
develop this point in detail in Section II, and in Section III give
some examples to show the effectiveness of the proposed filtering
method. A comparison with filters of comparable complexity—the
well-known median filter [3] and the CWM filter [18], [19]—followed
by a general discussion is also given. The proposed method is
not compared to the weighted median (WM) filter, which is more
complex, since it requires the optimization of the weights using some
error criterion under certain constraints [19], [21]. Finally, Section IV
is devoted to the conclusion and perspective.
Manuscript received February 17, 1995; revised June 20, 1996. The
associate editor coordinating the review of this manuscript and approving
it for publication was Prof. Moncef Gabbouj.
A. Beghdadi is with the Laboratoire des Propriétés Mécaniques et Thermodynamiques des Matériaux, C.N.R.S. LP 9001, Institut Galilée, Université Paris Nord, 93 430 Villetaneuse, France (e-mail: bab@lpmtm.univ-paris13.fr).
A. Khellaf is with the Groupe d’Analyse d’Images Biologiques, CNAM, Université Paris V, 75 015 Paris, France.
Publisher Item Identifier S 1057-7149(97)03735-4.
II. NOISE FILTERING—A NEW APPROACH
In the present approach, we propose a method of identifying
the noise by using the local contrast entropy. A picture element is
considered as noise when the associated local contrast is very different
from those of its neighboring pixels. Therefore, a local contrast
threshold allowing this discrimination is defined according to the local
contrast entropy. Thus, a pixel is identified as noise according to its
contribution weight to the local contrast entropy. Given a pixel, center of a window $W$, of grey level $x_0$, the associated contrast is defined according to the Weber–Fechner law by

$$C_0 = \frac{|x_0 - \bar{x}|}{\bar{x}} \qquad (1)$$

where $\bar{x}$ is the mean grey level of the surround region of the center pixel in the window and $|x_0 - \bar{x}|$ is the gradient level. In contrast to other contrast definitions [6], [20], this quantity gives the same local contrast for grey levels situated at the same distance from the mean grey level. One can associate to the local contrast $C_i$ of each pixel $i$ of the window the probability

$$P_i = \frac{C_i}{\sum_{j=1}^{n} C_j} \quad \text{or} \quad P_i = \frac{|x_i - \bar{x}|}{\sum_{j=1}^{n} |x_j - \bar{x}|} \qquad (2)$$

where $n$ is the window size. Once the probability of the local contrast
is defined, one can estimate the probability to find a noise point. In
fact, a zero contrast zone, i.e., a homogeneous region, corresponds to
a zero probability. That means that the probability to find a noise point
in a homogeneous region is equal to zero. In an information-theoretic context, we say that the degree of uncertainty of finding a noisy point is minimum and equal to zero. Therefore, to give a measure of the degree of uncertainty that a point in a region is noise, we associate to the given region a local contrast entropy defined as follows:

$$H = -\sum_{i=1}^{n} P_i \log P_i. \qquad (3)$$
Therefore, a noise pixel or isolated point heavily contributes to the
contrast entropy since its probability is high. The basic idea of the
proposed technique is to modify the grey level of the pixel according to its contrast value or its contrast probability. Then, the grey level of the current pixel is transformed with respect to a threshold contrast probability $P_s$ corresponding to a window where all the local contrasts are equally distributed; this results in a maximum entropy. From (2), one can show that this case occurs when all the local contrasts are equal and different from zero. It corresponds to a region composed of two homogeneous subregions of the same area, for example, a well-contrasted sharp symmetrical edge. Thus, a pixel is considered as a noise signal when the corresponding probability is greater than or equal to the critical value $P_s = 1/n$, where $n$ is the size of the analysis window. We give the pixel the mean or, better, the median grey level if the associated probability is greater than or equal to $P_s$. In the following, we quantitatively justify the choice of this critical value for the contrast probability.
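To make the preceding definitions concrete, the following Python sketch computes, for a single analysis window, the local contrasts of (1), the contrast probabilities of (2), the contrast entropy of (3), and the decision of rule (4) with the critical value $P_s = 1/n$. The function name, the use of NumPy, and the convention of measuring every contrast against the mean of the center pixel's surround are our own illustrative assumptions, not a reproduction of the authors' implementation.

import numpy as np

def window_noise_test(window):
    """Contrast probabilities, contrast entropy, and the decision of rule (4)
    for one square analysis window whose central element is the pixel under
    test. Illustrative sketch only; not the authors' reference code."""
    x = window.astype(float).ravel()
    n = x.size                                  # window size n (number of pixels)
    c = n // 2                                  # index of the center pixel
    surround = np.delete(x, c)
    x_bar = surround.mean()                     # mean grey level of the surround

    # Eq. (1): Weber-Fechner contrast |x - x_bar| / x_bar, applied here to every
    # pixel of the window with the same x_bar (an assumption of this sketch).
    contrast = np.abs(x - x_bar) / max(x_bar, np.finfo(float).eps)

    # Eq. (2): probability associated with each local contrast.
    total = contrast.sum()
    prob = contrast / total if total > 0 else np.zeros(n)

    # Eq. (3): local contrast entropy.
    nonzero = prob[prob > 0]
    entropy = -(nonzero * np.log(nonzero)).sum()

    # Rule (4): the center pixel is flagged as noise when its probability
    # reaches the critical value P_s = 1/n.
    is_noise = prob[c] >= 1.0 / n
    return prob, entropy, is_noise

A pixel flagged in this way would then be replaced by the mean or, better, the median grey level of its window, as stated above.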
A. Noise Model—How to Identify the Undesirable Information
It could be noticed that the proposed method is data dependent.
It becomes thus difficult to perform a complete analytical analysis
without assuming some a priori knowledge of the signal and the
noise. In the following, for the sake of simplicity, we choose a size of 3 × 3 pixels for the analysis window, and we consider some typical cases of spatial grey-level distribution in the window.

Fig. 1. (a) Original image. (b) Noisy image (Gaussian and impulsive noise). (c) Median filter. (d) CWM filter (weight 3). (e) CWM filter (weight 5). (f) Our method.

Let us now
analyze the following decision rule: the central pixel of grey level $x_0$ is a noise grey level iff

$$P_0 \geq P_s = \frac{1}{n}. \qquad (4)$$
First Case—A Spot Point: This is the easiest case. It consists of a well-localized spot point with a grey level $x_0$ surrounded by its eight neighbors of grey level $a$ such that $x_0 \neq a$. It is easy to show that $P_0 = 1$ and, consequently, $H = 0$. In other words, the degree of uncertainty in identifying this point as noise is zero.
Second Case—A Homogeneous Region: An example of a homogeneous and noise-free region consists of 3 × 3 pixels having almost the same grey level, say $a$. It can be easily shown that the contrast entropy associated with this case is zero or very low, and thus $P_0 < P_s$. In this case we are sure, with a degree of uncertainty equal to zero, that there is no noise in the given window.
Third Case—Critical Situation: This case corresponds to a sharp transition or a ramp. Indeed, in such a situation, the number of pixels having a grey level greater than the mean grey level is the same as that of the complementary set. It results in a contrast probability for the considered pixel that does not exceed the critical value. It is easy to establish this result from (3): if $n$ is the window size, the probability remains within $1/n$, and the considered pixel is not a noise point.
Fourth Case—A Transition Region: To simplify the analysis, let us consider a window of size $n$, where $n_1$ pixels have nearly the same grey level $a_1$ and $n_2$ pixels have the grey level $a_2$. Let $x_0$ be the grey level associated with the central pixel. One can easily obtain the mean grey level $\bar{x}$ and the local contrasts $C_0$, $C_1$, and $C_2$ corresponding, respectively, to the central point, a pixel of the first class, and a pixel belonging to the second class. It is easy to show that if the grey levels of the two regions 1 and 2 are identical, then it results in a zero probability for the surround pixels. It corresponds to $C_1 = C_2 = 0$ and $P_0 = 1$. This case has already been considered (a spot point).
A similar analysis for the case where $x_0$ coincides with one of the two grey levels $a_1$ or $a_2$ yields the following decision rule. The central point of grey level $x_0$ is considered as noise if

(5)
Now, let us consider the case where $x_0$ is different from $a_1$ and is not equal to $a_2$. The size of the window is $n = n_1 + n_2 + 1$. A similar computation leads to

$$|x_0 - \bar{x}| \geq \frac{2\,n_1 n_2}{(n_1 + n_2)^2}\,|a_1 - a_2|. \qquad (6)$$

It can be noticed that the left side of the inequality is nothing else than the difference between the grey level of the center pixel and the mean grey level of the surround pixels. The term on the right side of the relation is proportional to the interclass variance. Indeed, the interclass variance is

$$\sigma_{\mathrm{inter}}^{2} = \frac{n_1 n_2}{(n_1 + n_2)^2}\,(a_1 - a_2)^2. \qquad (7)$$

Therefore, condition (6) can be written

$$|x_0 - \bar{x}| \geq \frac{2\,\sigma_{\mathrm{inter}}^{2}}{|a_1 - a_2|}. \qquad (8)$$
In summary, conditions (5) and (8), which are equivalent to
condition (4), state that a pixel is considered as a noise element
if its grey level is far from those of its neighbors. This distance is
measured through statistical parameters. This is debatable but so are
other similar noise testing models.
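As a quick numerical check of the first cases above, the hypothetical window_noise_test sketch given at the end of Section II behaves as described; the grey-level values below are illustrative choices of ours, not taken from the paper.

import numpy as np

# Assumes window_noise_test from the sketch after Section II is in scope.
spot = np.full((3, 3), 100.0)
spot[1, 1] = 200.0                      # first case: isolated spot point
flat = np.full((3, 3), 100.0)           # second case: homogeneous region
edge = np.full((3, 3), 100.0)
edge[:, 2] = 200.0                      # a sharp transition next to the center

for name, w in [("spot", spot), ("flat", flat), ("edge", edge)]:
    _, entropy, noisy = window_noise_test(w)
    print(name, "entropy =", round(entropy, 3), "flagged as noise:", noisy)

# With this sketch, only the spot window is flagged; the homogeneous window
# has zero contrast entropy, and the edge pixel stays below the critical
# probability, in line with the case analysis above.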
Fig. 2. (a) Original image. (b) Noisy image (structured noise). (c) Median filter. (d), (e) CWM filter with two different central weights. (f) Our method.

Fifth Case—Actual Noisy Window: This case corresponds to the configuration most frequently encountered in nontextured images. It consists in a noisy point embedded in a quasihomogeneous region. The grey levels of the surrounding points can be written in the form $x_i = \bar{x} + \varepsilon_i$, where $\bar{x}$ is the mean grey level and $\varepsilon_i$ is a small value. In this case, the grey level $x_0$ of the central point is different from the mean grey level; that is, it can be written as $x_0 = \bar{x} + \Delta$, where $|\Delta|$ is much greater than $|\varepsilon_i|$. Using the contrast definition and the associated probability,
one can easily show that the central point gives the major contribution
to the local contrast entropy. The associated probability is greater than the threshold value $P_s = 1/n$. Indeed, from (3), the probability of the center pixel is greater than $1/n$ if the following condition is satisfied:

$$|x_0 - \bar{x}| > \frac{1}{n-1} \sum_{i \neq 0} |x_i - \bar{x}|. \qquad (9)$$

Condition (9), equivalent to the inequality $n\,C_0 > \sum_{i=1}^{n} C_i$, is intuitively appealing. Indeed, this condition states that the examined pixel is considered as a noise point if the corresponding gradient grey level is greater than the average gradient level computed over its neighborhood.
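Condition (9) suggests a particularly simple implementation of the whole filter: a pixel is replaced by the median of its window whenever its gradient level exceeds the average gradient level of its neighbors. The sketch below follows this reading; the function name, the 3 × 3 default window, the reflective border handling, and the use of NumPy are our own assumptions rather than the authors' code.

import numpy as np

def entropy_noise_filter(image, window=3):
    """Sketch of the filtering rule implied by condition (9): replace the
    center pixel with the window median when |x0 - x_bar| exceeds the
    average gradient level of the surrounding pixels."""
    pad = window // 2
    padded = np.pad(image.astype(float), pad, mode="reflect")
    out = image.astype(float).copy()
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            w = padded[r:r + window, c:c + window].ravel()
            center = w[w.size // 2]
            surround = np.delete(w, w.size // 2)
            x_bar = surround.mean()                      # mean of the surround
            grad_center = abs(center - x_bar)            # gradient level of x0
            grad_mean = np.abs(surround - x_bar).mean()  # average neighbor gradient
            if grad_center > grad_mean:                  # condition (9)
                out[r, c] = np.median(w)                 # median replacement
    return out

Only the pixels flagged by the test are modified, which is what preserves contours and fine details in the examples of Section III.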
III. EXPERIMENTAL RESULTS
To test the efficiency of the proposed method, two types of additive noise have been considered. The first one is a mixture of a zero-mean Gaussian noise and an impulsive noise, and the second one is a structured noise. The method is compared to the classical median filter and the CWM filter with a 5 × 5 square window. The degraded signal with the additive structured noise is generated by adding to the original grey level a periodic interference noise of bounded maximum amplitude, switched on and off by an indicator function. With the noise parameter values chosen in the experiments, the fraction of corrupted pixels is 18%. The size of the test images is 256 × 256 pixels quantized with 256 grey levels. For subjective comparison, the visual perception quality is used; for objective comparison, the well-known normalized mean square error (NMSE) and the mean absolute error (MAE) are used, as in [18], [19], and [21].
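For completeness, the two objective measures can be computed as below; the normalization of the NMSE by the energy of the original image is a common convention and is assumed here, since the exact formula is not restated in the text.

import numpy as np

def nmse(original, filtered):
    """Normalized mean square error (assumed normalization by the energy
    of the original image)."""
    o = original.astype(float)
    f = filtered.astype(float)
    return np.sum((o - f) ** 2) / np.sum(o ** 2)

def mae(original, filtered):
    """Mean absolute error between the original and the filtered image."""
    return np.mean(np.abs(original.astype(float) - filtered.astype(float)))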
Fig. 1(a) shows a digitized image of mandrill. This image presents
a typically difficult case for filtering purposes. In fact, many interesting small structures have size and tone values comparable to those of
the noise. Thus, filtering such an image seems to be a rather difficult
task. Fig. 1(b) shows the image of Fig. 1(a) after adding a Gaussian
and impulsive noise. Fig. 1(c) is the result of applying a 5 × 5 median filter, and Fig. 1(d) and (e) correspond to the CWM filter with weights of 3 and 5, respectively. Fig. 1(f) displays the result
obtained with the proposed method. Through these results, it can be
noticed that the median filter smooths out the noise as well as the
image details. This results in a blurred image. This undesirable effect
is less important when using the CWM filter, whereas our method cleans the image without blurring the contours. Furthermore, Table
I clearly shows that the proposed method performs better than the
median filter and the CWM with the lower weight. However, for the higher weight, the CWM yields lower errors (NMSE and MAE) and thus objectively performs better than our method. Nevertheless, a
simple visual comparison clearly shows that the proposed method
preserves better the image contrast and details than the CWM filter.
This disagreement with the quantitative comparison is essentially due
to the fact that the NMSE and MAE measures cannot distinguish
between a few large deviations and many small ones. Consequently,
one has to develop other quantitative measures taking into account
the visual criteria to compare the obtained results. Unfortunately, to
our knowledge, it is difficult to find such measures at the present time.
The second comparison concerns the structured noise. One can
observe in Fig. 2(b) a periodical structure with bands of the same
orientation and size. This noise is easily distinguishable from the
actual structure of the image. Thus, it is easy to follow the filter
effects on the noise. This noise can obviously be smoothed out by
using frequency filtering as described in [7] and [8]. But the frequency
methods require orthogonal transformations to decorrelate the image
components. This approach is time consuming and complicated.
TABLE I
OBJECTIVE COMPARISON—NMSE'S AND MAE'S ASSOCIATED WITH
THE DIFFERENT FILTERS APPLIED TO THE DEGRADED IMAGE
OF FIG. 1(b) (ZERO-MEAN GAUSSIAN NOISE AND IMPULSIVE NOISE)
TABLE II
OBJECTIVE COMPARISON—NMSE’S AND MAE’S
ASSOCIATED WITH THE DIFFERENT FILTERS APPLIED TO
THE DEGRADED IMAGE OF FIG. 2(b) (STRUCTURED NOISE)
Furthermore, the results depend on the filter function behavior, which
may create artificial frequencies at the output.
Fig. 2 shows the obtained results corresponding to the median
filters and to the proposed method for a 5 × 5 window size. The
superiority of our technique is clearly demonstrated on this example.
The median is successful in eliminating the grid, but it blurs the
image, and an undulation effect due to the grid appears in the
processed image of Fig. 2(c). In contrast, the CWM does not blur
the image but the interference noise is not completely removed.
In contrast, our method [see Fig. 2(f)] removes a large fraction of the noise without noticeably modifying the contours and other details.
Furthermore, it is noticed that the classical median filter performs
better than the CWM filter in smoothing out this structured noise. This
result is not surprising since it was shown by Ko and Lee [18] that the
CWM filter tends to preserve lines and more details at the expense
of less noise suppression. Table II confirms these comparison results.
Indeed, the proposed method yields the lowest NMSE and MAE.
IV. CONCLUSION
A simple method for noise filtering has been presented and
compared to the well-known median filter and the CWM filter. The
obtained results on actual images corrupted by two types of noise
confirm the superiority of the proposed technique over the well-
known median filter and the CWM filter. The usefulness of the
entropy concept for image enhancement purposes is demonstrated.
This superiority is justified by the fact that the proposed method is
simple and successful in smoothing out different types of noise. It was shown
that for noise smoothing, the CWM can objectively perform better
than the proposed method for some weights. But the central weight
should be carefully selected depending on both the characteristics
of the original image and the added noise. In the proposed method,
such constraints do not exist. This makes the proposed method more flexible than the median filters, as it does not necessitate sorting the data. The derivation of a detailed analytical analysis taking into account the data-dependent character of the method is under study. Furthermore, the separability of the filter will be considered in the near future, thus making the method faster.
REFERENCES
[1] A. Rosenfeld and A. Kak, Digital Picture Processing. New York:
Academic, 1982.
[2] W. K. Pratt, Digital Image Processing, 2nd ed. New York: Wiley,
1991.
[3] T. S. Huang, Ed., Two Dimensional Digital Signal Processing II: Trans-
forms and Median Filters. New York: Springer-Verlag, 1981.
[4] N. C. Gallagher and G. L. Wise, “A theoretical analysis of the properties
of median filters,” IEEE Trans. Acoust., Speech, Signal Processing, vol.
ASSP-29, pp. 1136–1140, Dec. 1981.
[5] P. K. Sinha and Q. H. Hong, “An improved median filter,” IEEE Trans.
Med. Imag., vol. 9, pp. 345–346, Sept. 1990.
[6] A. Le Négrate, A. Beghdadi, and H. Dupoisot, "An image enhancement technique and its evaluation through bimodality analysis," CVGIP: Graphical Models and Image Processing, vol. 54, pp. 13–22, Jan. 1992.
[7] R. C. Gonzalez and R. E. Woods, Digital Image Processing. New York:
Addison-Wesley, 1992.
[8] E. L. Hall, Computer Image Processing and Recognition. New York:
Academic, 1979.
[9] B. R. Frieden, “Restoring with maximum likelihood and maximum
entropy,” J. Opt. Soc. Amer., vol. 62, pp. 511–518, Apr. 1972.
[10] C. E. Shannon, "A mathematical theory of communication," Bell Syst. Tech. J., vol. 27, pp. 623–656, July 1948.
[11] L. Brillouin, Science and Information Theory. New York: Academic,
1956.
[12] S. F. Burch, S. F. Gull, and J. Skilling, “Image restoration by a powerful
maximum entropy method,” Comput. Vis., Graph. Image Processing,
vol. 23, pp. 113–128, Aug. 1983.
[13] A. Mohammad-Djafari and G. Demoment, "Maximum entropy Fourier
synthesis with application to diffraction tomography,” Appl. Opt., vol.
26, pp. 1745–1754, May 1987.
[14] T. Pun, “Entropic thresholding, a new approach,” Comput. Graph. Image
Processing, vol. 16, pp. 210–239, July 1981.
[15] J. N. Kapur, P. K. Sahoo, and A. K. C. Wong, “A new method for gray-
level picture thresholding using the entropy of the histogram,” Comput.
Vis., Graph. Image Processing, vol. 29, pp. 273–285, Mar. 1985.
[16] A. Khellaf, A. Beghdadi, and H. Dupoisot, “Entropic contrast enhance-
ment,” IEEE Trans. Med. Imag., vol. 10, pp. 589–592, Dec. 1991.
[17] A. Shiozaki, “Edge extraction using entropy operator,” Comput. Vis.
Graphic Image Processing, vol. 36, pp. 1–9, Oct. 1986.
[18] S. J. Ko and Y. H. Lee, “Center weighted median filters and their
applications to image enhancement," IEEE Trans. Circuits Syst., vol. 38, pp. 984–993, Sept. 1991.
[19] T. Sun, M. Gabbouj, and Y. Neuvo, “Center weighted median filter:
some properties and their applications in image processing,” Signal
Processing, vol. 35, pp. 213–229, Feb. 1994.
[20] R. Gordon and R. M. Rangayyan, "Feature enhancement of film mammograms using fixed and adaptive neighborhoods," Appl. Opt., vol. 23, pp. 560–564, Feb. 1984.
[21] L. Yin, R. Yang, and M. Gabbouj, “Weighted median filters: A tutorial,”
IEEE Trans. Circuits Syst. II: Analog Digital Signal Processing, vol. 43,
pp. 157–192, Mar. 1996.
It is well known that the most effective method for comparing images at present is subjective human evaluation. One factor complicating the design and evaluation of a given image treatment is the lack of an objective measure of picture quality. In the present paper a contrast enhancement and noise filtering technique is developed and evaluated through bimodality measure analysis. The proposed algorithm makes the different classes of an image statistically separable without sensibly modifying the contours. This method can be used as an aid in gray-level thresholding.