Article

Relations Between Regularization and Diffusion Filtering

Authors:
Otmar Scherzer, Joachim Weickert

Abstract

Regularization may be regarded as diffusion filtering with an implicit time discretization where one single step is used. Thus, iterated regularization with small regularization parameters approximates a diffusion process. The goal of this paper is to analyse relations between noniterated and iterated regularization and diffusion filtering in image processing. In the linear setting, we show that with iterated Tikhonov regularization noise can be better handled than with noniterated. In the nonlinear framework, two filtering strategies are considered: total variation regularization and the diffusion filter of Perona and Malik. It is established that the Perona-Malik equation decreases the total variation during its evolution. While noniterated and iterated total variation regularization is well-posed, one cannot expect to find a minimizing sequence which converges to a minimizer of the corresponding energy functional for the Perona-Malik filter. To address this shortcoming, a novel regu...
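The one-step view in this abstract translates directly into code. Below is a minimal 1-D NumPy sketch (an editorial illustration, not the authors' implementation): a single Tikhonov step solves the fully implicit diffusion system (I − αL)u = f, and iterating it with small α approximates a linear diffusion evolution.

```python
import numpy as np

# One Tikhonov step  min_u ||u - f||^2 + alpha ||u'||^2  has the
# Euler-Lagrange equation (I - alpha*L) u = f, where L is the discrete
# 1-D Laplacian with reflecting (Neumann) boundaries. This is exactly one
# fully implicit time step of linear diffusion u_t = u_xx with tau = alpha.

def laplacian_1d(n):
    """Discrete 1-D Laplacian with Neumann boundary conditions."""
    L = -2.0 * np.eye(n)
    L += np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    L[0, 0] = L[-1, -1] = -1.0   # reflecting boundaries
    return L

def tikhonov_step(f, alpha):
    """One (noniterated) Tikhonov regularization step."""
    n = f.size
    return np.linalg.solve(np.eye(n) - alpha * laplacian_1d(n), f)

def iterated_tikhonov(f, alpha, k):
    """k iterated steps with parameter alpha each: approximates
    linear diffusion at time t = k * alpha for small alpha."""
    u = f.copy()
    for _ in range(k):
        u = tikhonov_step(u, alpha)
    return u

rng = np.random.default_rng(0)
f = np.sign(np.sin(np.linspace(0, 4 * np.pi, 200))) + 0.2 * rng.standard_normal(200)
u_single = tikhonov_step(f, 1.0)           # one step, large parameter
u_iter   = iterated_tikhonov(f, 0.05, 20)  # 20 small steps, same total "time"
```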


... These properties include well-posedness results, preservation of the average grey value, maximum-minimum principles, Lyapunov functionals, and convergence to a constant steady state. Regularization methods can also be regarded as scale-spaces by interpreting their Euler-Lagrange equations as fully implicit discretizations of a parabolic partial differential equation with a single time step [6]. Radmoser et al. [7] have extended these results to nonstationary iterative regularization methods, where the regularization parameter can vary from iteration to iteration. ...
... Note that (1.5) immediately implies (1.6), and the relation (1.6) shows in which way the density introduced in (1.3) approximates the TV-case from (1.1). ...
... As outlined in [6] or [7], we can introduce Lyapunov functionals for the nonstationary regularization methods considered here, leading to an appropriate version of Theorem 4.1 (a) in [7] with some adjustments resulting from the linear growth of F stated in (A3). In what follows, r : [0, 1] → R is a function such that r ∈ C²([0, 1]) and r″ ≥ 0. With u(t) from Theorem 1.1 we define the Lyapunov functional ...
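The Lyapunov property quoted above is easy to check numerically. The following sketch (an editorial illustration, assuming a stable explicit scheme with reflecting boundaries) verifies that V(u) = Σᵢ r(uᵢ) with convex r decays along a discrete linear diffusion evolution.

```python
import numpy as np

# For an explicit 1-D diffusion scheme with reflecting boundaries and a
# stable step size (tau <= 0.5), V(u) = sum_i r(u_i) with convex r should
# be non-increasing, in line with the Lyapunov theory referenced above.

def diffusion_step(u, tau=0.25):
    up = np.pad(u, 1, mode="edge")             # Neumann boundaries
    return u + tau * (up[2:] - 2 * u + up[:-2])

r = lambda s: (s - 0.5) ** 2                   # a convex r in C^2([0, 1])

rng = np.random.default_rng(1)
u = rng.random(100)                            # grey values in [0, 1]
V = [r(u).sum()]
for _ in range(200):
    u = diffusion_step(u)
    V.append(r(u).sum())

assert all(a >= b - 1e-12 for a, b in zip(V, V[1:]))  # monotone decay
```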
Article
Full-text available
The TV-regularization method due to Rudin, Osher, and Fatemi is widely used in mathematical image analysis. We consider a nonstationary and iterative variant of this approach and provide a mathematical theory that extends the results of Radmoser et al. to the BV setting. While existence and uniqueness, a maximum–minimum principle, and preservation of the average grey value are not hard to prove, we also establish the convergence to a constant steady state and consider a large family of Lyapunov functionals. These properties allow us to interpret the iterated TV-regularization as a time-discrete scale-space representation of the original image.
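To make the nonstationary iteration concrete, here is a small 1-D sketch (an editorial illustration, not the authors' code): each outer step minimizes ½‖u − f_k‖² + α_k TV_ε(u) with a smoothed TV penalty via lagged-diffusivity fixed-point iterations, and its minimizer becomes the data f_{k+1} of the next step.

```python
import numpy as np

# Nonstationary iterated TV regularization in 1-D with the smoothed
# penalty |u'| ~ sqrt(u'^2 + eps^2), solved by lagged diffusivity.

def tv_step(f, alpha, eps=1e-3, inner=30):
    u = f.copy()
    n = f.size
    for _ in range(inner):
        du = np.diff(u)
        g = 1.0 / np.sqrt(du ** 2 + eps ** 2)   # lagged diffusivity on edges
        # assemble (I + alpha * D^T diag(g) D) u = f, Neumann boundaries
        A = np.zeros((n, n))
        for i in range(n - 1):
            A[i, i] += g[i]; A[i + 1, i + 1] += g[i]
            A[i, i + 1] -= g[i]; A[i + 1, i] -= g[i]
        u = np.linalg.solve(np.eye(n) + alpha * A, f)
    return u

def iterated_tv(f, alphas):
    u = f.copy()
    for a in alphas:            # nonstationary: alpha may vary per step
        u = tv_step(u, a)
    return u

rng = np.random.default_rng(2)
f = np.repeat([0.0, 1.0, 0.3], 50) + 0.1 * rng.standard_normal(150)
u = iterated_tv(f, alphas=[0.5] * 10)
```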
... The minimization problem (2) was studied for different functions p, see [23] or [24]. One motivation for the representation (2) is the interpretation of the regularization term in terms of diffusion equations. ...
... Total variation has the advantage of providing satisfactory image restoration without a priori information about the target image. On the other hand, the Perona-Malik model [16] has shown impressive results on edge preservation with a suitable contrast parameter λ and has been extended to inverse problems with applications in image restoration, see [23]. A disadvantage of the Perona-Malik model is its sensitivity to the single parameter λ when the complexity of an image involves many different levels of contrast. ...
... While the latter two were based on partial differential equations, the former were very problem-specific. Another combination of these methods can be found in [23], where the authors analyzed regularizations and diffusion processes for image denoising, that is, the special case of A being the identity operator. In [7], the authors gave conditions on the penalty term to preserve edges while considering a single optimization problem. ...
Preprint
Building on the well-known total-variation (TV), this paper develops a general regularization technique based on nonlinear isotropic diffusion (NID) for inverse problems with piecewise smooth solutions. The novelty of our approach is to be adaptive (we speak of A-NID), i.e., the regularization varies during the iterates in order to incorporate prior information on the edges, deal with the evolution of the reconstruction and circumvent the limitations due to the non-convexity of the proposed functionals. After a detailed analysis of the convergence and well-posedness of the method, the latter is validated by simulations performed on computerized tomography (CT).
... On the other hand, anisotropic diffusion is inherent to Perona-Malik (PM) filtering [28]. It is well known that the diffusion process governed by the PM equation leads to a decrease in the total variation during its evolution [34]. We thus choose the energy functional of the Perona-Malik equation for anisotropic diffusion [28]. ...
... We refer to [6,34] for a general introduction to anisotropic diffusion and a detailed discussion on the PM functional. Further, in [31], the PM model was used in the reconstruction of log-conductivities in AET and it was observed that the reconstructions obtained demonstrated superior contrast and resolution. ...
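The TV-decreasing behaviour of the Perona-Malik evolution mentioned in these excerpts can be observed numerically. A 1-D sketch (an editorial illustration; the step size and contrast parameter are arbitrary choices) with the classical diffusivity g(s²) = 1/(1 + s²/λ²):

```python
import numpy as np

# One explicit Perona-Malik step, monitoring the discrete total
# variation, which should decay during the evolution as the paper
# establishes for the continuous process.

def perona_malik_step(u, lam=0.1, tau=0.2):
    du = np.diff(u)                          # forward differences
    g = 1.0 / (1.0 + (du / lam) ** 2)        # PM diffusivity on edges
    flux = g * du
    div = np.diff(flux, prepend=0.0, append=0.0)  # zero-flux boundaries
    return u + tau * div

rng = np.random.default_rng(3)
u = np.repeat([0.0, 1.0], 100) + 0.05 * rng.standard_normal(200)
tv = [np.abs(np.diff(u)).sum()]
for _ in range(100):
    u = perona_malik_step(u)
    tv.append(np.abs(np.diff(u)).sum())
print(tv[0], tv[-1])   # total variation decreases
```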
Preprint
A new non-linear optimization approach is proposed for the sparse reconstruction of log-conductivities in current density impedance imaging. This framework consists of minimizing an objective functional involving a least squares fit of the interior electric field data corresponding to two boundary voltage measurements, where the conductivity and the electric potential are related through an elliptic PDE arising in electrical impedance tomography. Further, the objective functional consists of an L1 regularization term that promotes sparsity patterns in the conductivity and a Perona-Malik anisotropic diffusion term that enhances the edges to facilitate high contrast and resolution. This framework is motivated by a similar recent approach to solve an inverse problem in acousto-electric tomography. Several numerical experiments and comparison with an existing method demonstrate the effectiveness of the proposed method for superior image reconstructions of a wide variety of log-conductivity patterns.
... We exploit and extend connections between variational models and diffusion processes [65], and their relations to residual networks [2,63]. In contrast with our previous works [2,4] which focussed on the one-dimensional setting and corresponding numerical algorithms, we now concentrate on two-dimensional diffusion models that incorporate different strategies to achieve rotation invariance. ...
... with an increasing regulariser function Ψ which can be connected to the diffusivity g by g = Ψ′ [65]. Comparing the functional (8) to the one in (4), we have now specified the form of the regulariser to be R(u) = Ψ(tr(∇u∇u⊤)). ...
Article
Full-text available
Partial differential equation models and their associated variational energy formulations are often rotationally invariant by design. This ensures that a rotation of the input results in a corresponding rotation of the output, which is desirable in applications such as image analysis. Convolutional neural networks (CNNs) do not share this property, and existing remedies are often complex. The goal of our paper is to investigate how diffusion and variational models achieve rotation invariance and transfer these ideas to neural networks. As a core novelty, we propose activation functions which couple network channels by combining information from several oriented filters. This guarantees rotation invariance within the basic building blocks of the networks while still allowing for directional filtering. The resulting neural architectures are inherently rotationally invariant. With only a few small filters, they can achieve the same invariance as existing techniques which require a fine-grained sampling of orientations. Our findings help to translate diffusion and variational models into mathematically well-founded network architectures and provide novel concepts for model-based CNN design.
... More recently, linear scale-spaces based on pseudodifferential operators have attracted attention, such as the Poisson scale-space [20], α-scale-spaces [19], summed α-scale-spaces [14], and relativistic scale-spaces [11]. Also regularisation methods and related concepts can be interpreted as scale-spaces by considering their Euler-Lagrange equations, both in the linear and the nonlinear setting [10,13,44,50,53]. Since Gaussian scale-space can be described by a linear diffusion equation, it is natural to generalise it also to nonlinear diffusion scale-spaces [49,60]. ...
... So far, our framework requires evolutions that adhere to a semi-group property. This excludes its application to scale-spaces that do not satisfy this requirement, e.g., regularisation scale-spaces [13,44,50,53] and the closely related Bessel scale-space [10]. We will investigate if our concepts can be extended to handle also these processes. ...
Article
Full-text available
It is well known that there are striking analogies between linear shift-invariant systems and morphological systems for image analysis. So far, however, the relations between both system theories are mainly understood on a pure convolution / erosion level. A formal connection on the level of differential or pseudodifferential equations and their induced scale-spaces is still missing. The goal of our paper is to close this gap. We present a simple and fairly general dictionary that allows one to translate any linear shift-invariant evolution equation into its morphological counterpart and vice versa. It is based on a scale-space representation by means of the symbol of its (pseudo)differential operator. Introducing a novel transformation, the Cramér–Fourier transform, puts us in a position to relate the symbol to the structuring function of a morphological scale-space of Hamilton–Jacobi type. As an application of our general theory, we derive the morphological counterparts of many linear shift-invariant scale-spaces, such as the Poisson scale-space, α-scale-spaces, summed α-scale-spaces, relativistic scale-spaces, and their anisotropic variants. Our findings are illustrated by experiments.
... We exploit and extend connections between variational models and diffusion processes [65], and their relations to residual networks [2,63]. In contrast to our previous works [2,4] which focussed on the one-dimensional setting and corresponding numerical algorithms, we now concentrate on two-dimensional diffusion models that incorporate different strategies to achieve rotation invariance. ...
... with an increasing regulariser function Ψ which can be connected to the diffusivity g by g = Ψ′ [65]. The argument of the regulariser is the trace of the so-called structure tensor [26], here without Gaussian regularisation, which reads ...
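For readers unfamiliar with the object in this excerpt, the following sketch (an editorial illustration using SciPy's gaussian_filter) computes the structure tensor J = G_ρ ∗ (∇u∇u⊤); with ρ = 0 its trace reduces to |∇u|², the argument of the regulariser Ψ above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Structure tensor of a 2-D image u: outer product of the gradient,
# componentwise smoothed with a Gaussian of standard deviation rho.

def structure_tensor(u, rho=1.0):
    ux, uy = np.gradient(u)
    j11 = gaussian_filter(ux * ux, rho)
    j12 = gaussian_filter(ux * uy, rho)
    j22 = gaussian_filter(uy * uy, rho)
    return j11, j12, j22

u = np.outer(np.linspace(0, 1, 64), np.ones(64))   # simple test ramp
j11, j12, j22 = structure_tensor(u, rho=2.0)
trace = j11 + j22    # equals |grad(u)|^2 in the unsmoothed case rho = 0
```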
Preprint
Full-text available
Partial differential equation (PDE) models and their associated variational energy formulations are often rotationally invariant by design. This ensures that a rotation of the input results in a corresponding rotation of the output, which is desirable in applications such as image analysis. Convolutional neural networks (CNNs) do not share this property, and existing remedies are often complex. The goal of our paper is to investigate how diffusion and variational models achieve rotation invariance and transfer these ideas to neural networks. As a core novelty we propose activation functions which couple network channels by combining information from several oriented filters. This guarantees rotation invariance within the basic building blocks of the networks while still allowing for directional filtering. The resulting neural architectures are inherently rotationally invariant. With only a few small filters, they can achieve the same invariance as existing techniques which require a fine-grained sampling of orientations. Our findings help to translate diffusion and variational models into mathematically well-founded network architectures, and provide novel concepts for model-based CNN design.
... On the other hand, anisotropic diffusion is inherent to Perona-Malik (PM) filtering [36]. It is well known that the diffusion process governed by the PM equation leads to a decrease in the total variation during its evolution [43]. We thus choose the energy functional of the Perona-Malik equation for anisotropic diffusion [36]. ...
... We refer to [11,43] for a general introduction to anisotropic diffusion and a detailed discussion on the PM functional. Further, in [40], the PM model was used in the reconstruction of log-conductivities in AET and it was observed that the reconstructions obtained demonstrated superior contrast and resolution. ...
Article
Full-text available
A new nonlinear optimization approach is proposed for the sparse reconstruction of log-conductivities in current density impedance imaging. This framework consists of minimizing an objective functional involving a least squares fit of the interior electric field data corresponding to two boundary voltage measurements, where the conductivity and the electric potential are related through an elliptic PDE arising in electrical impedance tomography. Further, the objective functional consists of an L1 regularization term that promotes sparsity patterns in the conductivity and a Perona–Malik anisotropic diffusion term that enhances the edges to facilitate high contrast and resolution. This framework is motivated by a similar recent approach to solve an inverse problem in acousto-electric tomography. Several numerical experiments and comparison with an existing method demonstrate the effectiveness of the proposed method for superior image reconstructions of a wide variety of log-conductivity patterns.
... and results in the diffusion equation [25,42,51]. In the case of isotropic filtering, the diffusion equation has a closed-form solution. ...
... This problem has previously been studied by, e.g., Scherzer and Weickert [51]. Here we tackle the energy functional directly and show that the failing component is the application of Green's identity. ...
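The closed-form solution referred to above is Gaussian convolution: linear diffusion up to time t equals smoothing with a Gaussian of standard deviation σ = √(2t). A two-line sketch (an editorial illustration using SciPy):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Linear (homogeneous) diffusion u_t = Laplace(u) has the closed-form
# solution u(t) = G_sigma * u(0) with sigma = sqrt(2 t), so a single
# Gaussian convolution replaces the whole evolution.

def linear_diffusion(u0, t):
    return gaussian_filter(u0, sigma=np.sqrt(2.0 * t))

rng = np.random.default_rng(4)
u0 = rng.random((64, 64))
u = linear_diffusion(u0, t=4.0)   # equivalent to diffusing until t = 4
```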
Article
Full-text available
In this work, we introduce a novel tensor-based functional for targeted image enhancement and denoising. Via explicit regularization, our formulation incorporates application-dependent and contextual information using first principles. Few works in the literature treat variational models that describe both application-dependent information and contextual knowledge of the denoising problem. We prove the existence of a minimizer and present results on tensor symmetry constraints, convexity, and geometric interpretation of the proposed functional. We show that our framework excels in applications where nonlinear functions are present, such as gamma correction and targeted value range filtering. We also study general denoising performance, where we show results comparable to dedicated PDE-based state-of-the-art methods.
... More recent approaches attempt to fuse the two formulations [Bruhn 2005]. The penalty term plays a key role, as a diffusion operator can act isotropically or anisotropically [Black 1998, Scherzer 2000, Aubert 2006]. A variety of diffusion mechanisms has been proposed so that, e.g., optical flow discontinuities can be preserved depending on velocity field variations or image structures. ...
Thesis
In this thesis, we study the problem of motion estimation in mammals and propose that scaling up models rooted in biology for real-world applications can give us fresh insights into biological vision. Using a classic model that describes the activity of directionally selective neurons in the V1 and MT areas of the macaque brain, we propose a feedforward V1-MT architecture for motion estimation and benchmark it on computer vision datasets (the first publicly available evaluation for this kind of model), revealing interesting shortcomings such as a lack of selectivity at motion boundaries and a lack of spatial association of the flow field. To address these, we propose two extensions: a form-modulated pooling strategy to minimize errors at texture boundaries and a regression-based decoding scheme. These extensions improve estimation accuracy but also reemphasize the debate about the role of different cell types (characterized by their tuning curves) in encoding motion, for example the relative role of pattern cells versus component cells. To understand this, we use a phenomenological neural fields model representative of a population of directionally tuned MT cells to check whether different tuning behaviors can be reproduced by a recurrently interacting population or whether we need different types of cells explicitly. Our results indicate that a variety of tuning behaviors can be reproduced by a minimal network, explaining dynamical changes in tuning with changes of stimuli and leading us to question the high-inhibition regimes typically considered by models in the literature.
... In order to overcome the problems of ADF and TV and combine their benefits, it is helpful to notice the close relation between diffusion filtering and regularization, which was initially studied in [57] for isotropic diffusion. The relation between anisotropic diffusion and TV regularization was studied in [25] via the TRTV model, as follows: ...
Article
Full-text available
Third harmonic generation (THG) microscopy shows great potential for instant pathology of brain tissue during surgery. However, the rich morphologies contained and the associated noise make image restoration, necessary for quantification of the THG images, challenging. Anisotropic diffusion filtering (ADF) has recently been applied to restore THG images of normal brain, but ADF is hard to code, time-consuming, and only reconstructs salient edges. This work overcomes these drawbacks by expressing ADF as a tensor regularized total variation (TRTV) model, which uses the Huber penalty and the L1 norm for tensor regularization and fidelity measurement, respectively. The diffusion tensor is constructed from the structure tensor of ADF, yet the tensor decomposition is performed only in the non-flat areas. The resulting model is solved by an efficient and easy-to-code primal-dual algorithm. Tests on THG brain tumor images show that the proposed model has denoising performance comparable to ADF while it restores weak edges much better and is up to 60% more time efficient.
... For the Rudin-Osher-Fatemi (ROF) regularization [52], also known as total variation regularization, the data term is the squared L2-norm and R(w) = |w|_TV is the total variation semi-norm. Other widely used regularization functionals are sparsity-promoting functionals [22,39], Besov space norms [44,40] and anisotropic regularization norms [45,54]. Aside from various regularization terms, different fidelity terms other than quadratic norm fidelities have also been proposed, like the p-th powers of ℓp- and Lp-norms of the differences of F(w) and v [53,55], maximum entropy [25,26] and Kullback-Leibler divergence [50] (see [48] for some reference work). ...
Preprint
We present an approach for variational regularization of inverse and imaging problems for recovering functions with values in a set of vectors. We introduce regularization functionals, which are derivative-free double integrals of such functions. These regularization functionals are motivated from double integrals, which approximate Sobolev semi-norms of intensity functions. These were introduced in Bourgain, Brézis and Mironescu, "Another Look at Sobolev Spaces". In: Optimal Control and Partial Differential Equations-Innovations and Applications, IOS Press, Amsterdam, 2001. For the proposed regularization functionals we prove existence of minimizers as well as a stability and convergence result for functions with values in a set of vectors.
... For example, regularisation methods and related concepts can be interpreted as scale-spaces by considering their Euler-Lagrange equations, both in the linear and the nonlinear setting [see e.g. Poggio et al., 1988, Scherzer and Weickert, 2000, Burgeth et al., 2005b, Demetz et al., 2012]. Since Gaussian scale-space can be described by a linear diffusion equation, it is natural to generalise it also to nonlinear diffusion scale-spaces [Perona and Malik, 1990, Weickert, 1998]. ...
... For simplicity, we call this process diffusion in both cases, as usual in physics. For details of the very close relation between regularization and diffusion see (Scherzer and Weickert, 1998). ...
Article
Full-text available
In this work we propose a novel non-linear diffusion filtering approach for images based on their channel representation. To derive the diffusion update scheme we formulate a novel energy functional using a soft-histogram representation of image pixel neighborhoods obtained from the channel encoding. The resulting Euler-Lagrange equation yields a non-linear robust diffusion scheme with additional weighting terms stemming from the channel representation which steer the diffusion process. We apply this novel energy formulation to image reconstruction problems, showing good performance in the presence of mixtures of Gaussian and impulse-like noise, e.g. missing data. In denoising experiments of common scalar-valued images our approach performs competitive compared to other diffusion schemes as well as state-of-the-art denoising methods for the considered noise types.
... Interestingly, diffusion filtering can be related to many other types of denoising methods. For instance, Scherzer and Weickert [23] have shown connections between variational methods such as Tikhonov [26] or TV regularisation [22] and fully implicit time discretisations of diffusion filters. Furthermore, a large variety of diffusion filters for denoising can be interpreted as Bayesian denoising models; see e.g. ...
Conference Paper
Full-text available
The filling-in effect of diffusion processes has been successfully used in many image analysis applications. Examples include image reconstructions in inpainting-based compression or dense optic flow computations. As an interesting side effect of diffusion-based inpainting, the interpolated data are smooth, even if the known image data are noisy: Inpainting averages information from noisy sources. Since this effect has not been investigated for denoising purposes so far, we propose a general framework for denoising by inpainting. It averages multiple inpainting results from different selections of known data. We evaluate two concrete implementations of this framework: The first one specifies known data on a shifted regular grid, while the second one employs probabilistic densification to optimise the known pixel locations w.r.t. the inpainting quality. For homogeneous diffusion inpainting, we demonstrate that our regular grid method approximates the quality of its corresponding diffusion filter. The densification algorithm with homogeneous diffusion inpainting, however, shows edge-preserving behaviour. It resembles space-variant diffusion and offers better reconstructions than homogeneous diffusion filters.
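A compact sketch of the regular-grid variant of this denoising-by-inpainting idea (an editorial illustration; grid spacing and iteration counts are arbitrary choices): known pixels are placed on shifted sparse grids, each gap is filled by homogeneous diffusion inpainting, and the results are averaged.

```python
import numpy as np

# Homogeneous diffusion inpainting from several shifted sparse grids of
# known (noisy) pixels, averaged to obtain the denoised result.

def diffusion_inpaint(f, mask, iters=500):
    """Fill unknown pixels (mask == False) by homogeneous diffusion."""
    u = np.where(mask, f, f.mean())
    for _ in range(iters):
        up = np.pad(u, 1, mode="edge")
        avg = 0.25 * (up[:-2, 1:-1] + up[2:, 1:-1]
                      + up[1:-1, :-2] + up[1:-1, 2:])
        u = np.where(mask, f, avg)        # known data stay fixed
    return u

def denoise_by_inpainting(f, step=3):
    acc = np.zeros_like(f)
    for dx in range(step):
        for dy in range(step):
            mask = np.zeros(f.shape, dtype=bool)
            mask[dx::step, dy::step] = True
            acc += diffusion_inpaint(f, mask)
    return acc / step ** 2                # average over all shifted grids

rng = np.random.default_rng(5)
f = np.tile(np.linspace(0, 1, 64), (64, 1)) + 0.1 * rng.standard_normal((64, 64))
u = denoise_by_inpainting(f)
```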
... Originally, the procedure to determine the minimum-norm solutions for ill-posed systems was given by Tikhonov (1963); for regularisation methods, see Charbonnier et al. (1994), Nordstrom (1990), Scherzer (1998), Scherzer and Weickert (2000) and Schnörr (1994). Our exemplar of a variational method for denoising or simplifying an image u0 follows Welk et al. (2005) in minimising the energy functional ...
Article
Image denoising is still a crucial problem for the image processing community and mathematicians alike. This paper proposes a denoising technique based on wavelet coefficients via a diffusion equation. The present work compares different diffusion-based denoising processes. The data are initially corrupted with synthetic noise at various levels σ². To quantify the results, we use the peak signal-to-noise ratio (PSNR) as the metric.
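For reference, the PSNR metric used in this abstract as a minimal helper (an editorial sketch, assuming images scaled to [0, 1]):

```python
import numpy as np

# Peak signal-to-noise ratio in dB; `peak` is the maximum possible
# intensity value (1.0 for images scaled to [0, 1], 255 for 8-bit).

def psnr(clean, noisy, peak=1.0):
    mse = np.mean((clean - noisy) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```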
... Typical representatives of nonlinear scale-spaces are given by nonlinear diffusion scale-spaces [1], the morphological equivalent of Gaussian scale-space [16], and curvature-driven evolutions such as the affine morphological scale-space [17]. Moreover, spatio-temporal scale-spaces have also been considered [18,19], regularisation methods have been identified as scale-spaces [20], and spatially varying dominant scales have been proposed in [21]. Many of these processes exhibit a local behaviour and can be described in terms of partial differential equations (PDEs) or pseudodifferential equations. ...
... Previous works [48], [41] show that in the nonlinear diffusion framework there exist natural relations between reaction-diffusion processes and regularization-based energy functionals. First of all, we can interpret (4) as one gradient descent step at u^{t-1} of a certain energy functional given by ...
Article
Image restoration is a long-standing problem in low-level computer vision with many interesting applications. We describe a flexible learning framework based on the concept of nonlinear reaction diffusion models for various image restoration problems. By embodying recent improvements in nonlinear diffusion models, we propose a dynamic nonlinear reaction diffusion model with time-dependent parameters (i.e., linear filters and influence functions). In contrast to previous nonlinear diffusion models, all the parameters, including the filters and the influence functions, are simultaneously learned from training data through a loss-based approach. We call this approach TNRD: Trainable Nonlinear Reaction Diffusion. The TNRD approach is applicable to a variety of image restoration tasks by incorporating an appropriate reaction force. We demonstrate its capabilities with three representative applications: Gaussian image denoising, single image super resolution and JPEG deblocking. Experiments show that our trained nonlinear diffusion models largely benefit from the training of the parameters and finally lead to the best reported performance on common test datasets for the tested applications. Our trained models preserve the structural simplicity of diffusion models and take only a small number of diffusion steps, and thus are highly efficient. Moreover, they are also well-suited for parallel computation on GPUs, which makes the inference procedure extremely fast.
... where the first term, called the fidelity term, encourages similarity between the noisy input and its denoised version f, and the second term P(f, ∇f), called the penalty term, controls the regularity of the solution. Indeed, it is known that there is a strong relation between regularisation methods and diffusion filters [74]. ...
... This is a fundamental idea in image processing related to the concept of scale spaces, see e.g. [4,11] for useful discussions. However, especially in the context of nonlinear filtering, it is not at all obvious when to stop the diffusion process. ...
Conference Paper
Anisotropic diffusion is a time-dependent process in image processing useful for denoising and related tasks. Applied to an input image, the latter is gradually simplified in such a way that edges tend to be preserved. Meaningful image structures can then be detected depending on the diffusion time and the spatial scale of a structure. One of the most important points in anisotropic diffusion filtering is introducing a stopping criterion for the time evolution, as this determines the quality of the output images. In this paper, we follow the approach of a recent method by Ilyevsky and Turkel for determining a useful stopping time. While following the same basic idea, we simplify the underlying algorithm and at the same time improve the quality of the filtering results significantly. The superiority of our scheme is validated by several numerical experiments with standard test images in the field.
... For simplicity, we call this process diffusion in both cases, as usual in physics. For details of the very close relation between regularization and diffusion see (Scherzer and Weickert, 1998). ...
Conference Paper
Describing persons and their actions is a challenging problem due to variations in pose, scale and viewpoint in real-world images. Recently, the semantic pyramids approach [1] for pose normalization has been shown to provide excellent results for gender and action recognition. The performance of the semantic pyramids approach relies on robust image description and is therefore limited by the use of shallow local features. In the context of object recognition [2] and object detection [3], convolutional neural networks (CNNs), or deep features, have been shown to improve performance over conventional shallow features. We propose deep semantic pyramids for human attributes and action recognition. The method works by constructing spatial pyramids based on CNNs of different part locations. These pyramids are then combined to obtain a single semantic representation. We validate our approach on the Berkeley and 27 Human Attributes datasets for attributes classification. For action recognition, we perform experiments on two challenging datasets: Willow and PASCAL VOC 2010. The proposed deep semantic pyramids provide a significant gain of 17.2%, 13.9%, 24.3% and 22.6% compared to the standard shallow semantic pyramids on the Berkeley, 27 Human Attributes, Willow and PASCAL VOC 2010 datasets, respectively. Our results also show that deep semantic pyramids outperform conventional CNNs based on the full bounding box of the person. Finally, we compare our approach with state-of-the-art methods and show a gain in performance compared to the best methods in the literature.
... Then, for a smooth initial function u^(0) and α → 0, (u_α − u^(0))/α converges to an element of the subgradient ∂R(u^(0)) of R. Performing an iterative minimization, the results are comparable and look rather similar [149]. We expect a similar behavior for the non-convex functional ...
... TV denoising in particular has several very interesting equivalences. It is well known that TV denoising and other, more general first-order denoising methods are equivalent to smoothing with certain nonlinear diffusion models [26], a typical result of writing the equivalent Euler-Lagrange equations. Perhaps discussed less frequently, and most related to the observations in our current work, TV denoising is equivalent to soft-threshold denoising with the highest-frequency basis elements of the Haar wavelets [27,28], in particular with so-called cycle spinning [29]. ...
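The Haar-wavelet equivalence mentioned in the preceding excerpt is easy to prototype. A 1-D sketch (an editorial illustration with an arbitrary threshold): soft shrinkage of the finest-scale Haar detail coefficients, averaged over both pair alignments in the spirit of cycle spinning.

```python
import numpy as np

# Single-level Haar transform, soft shrinkage of the detail
# coefficients, inverse transform, and a two-shift cycle-spinning average.

def haar_soft_threshold(f, thr):
    n = f.size // 2 * 2
    a = (f[0:n:2] + f[1:n:2]) / np.sqrt(2)           # approximation
    d = (f[0:n:2] - f[1:n:2]) / np.sqrt(2)           # finest details
    d = np.sign(d) * np.maximum(np.abs(d) - thr, 0)  # soft shrinkage
    out = f.astype(float).copy()
    out[0:n:2] = (a + d) / np.sqrt(2)
    out[1:n:2] = (a - d) / np.sqrt(2)
    return out

def cycle_spun(f, thr):
    u0 = haar_soft_threshold(f, thr)
    u1 = np.roll(haar_soft_threshold(np.roll(f, 1), thr), -1)
    return 0.5 * (u0 + u1)                           # average both shifts

rng = np.random.default_rng(6)
f = np.repeat([0.0, 1.0], 64) + 0.1 * rng.standard_normal(128)
u = cycle_spun(f, thr=0.15)
```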
Article
Full-text available
In the realm of signal and image denoising and reconstruction, ℓ1 regularization techniques have generated a great deal of attention, with a multitude of variants. In this work, we demonstrate that the ℓ1 formulation can sometimes result in undesirable artifacts that are inconsistent with the desired sparsity-promoting ℓ0 properties that the ℓ1 formulation is intended to approximate. With this as our motivation, we develop a multiscale higher-order total variation (MHOTV) approach, which we show is related to the use of multiscale Daubechies wavelets. The relationship of higher-order regularization methods with wavelets, which we believe has generally gone unrecognized, is shown to hold in several numerical results, although notable improvements are seen with our approach over both wavelets and classical HOTV. These results are presented for 1D signals and 2D images, and we include several examples that highlight the potential of our approach for improving two- and three-dimensional electron microscopy imaging. In developing our approach, we construct the tools necessary for MHOTV computations to be performed efficiently, via operator decomposition and alternatively converting the problem into Fourier space.
... Other widely used regularization functionals are sparsity-promoting functionals [22,41], Besov space norms [42,46] and anisotropic regularization norms [47,56]. Aside from various regularization terms, different fidelity terms other than quadratic norm fidelities have also been proposed, like the p-th powers of ℓp- and Lp-norms of the differences of F(w) and v [55,57], maximum entropy [26,28] and Kullback-Leibler divergence [52] (see [50] for some reference work). ...
Article
Full-text available
We present an approach for variational regularization of inverse and imaging problems for recovering functions with values in a set of vectors. We introduce regularization functionals, which are derivative-free double integrals of such functions. These regularization functionals are motivated from double integrals, which approximate Sobolev semi-norms of intensity functions. These were introduced in Bourgain et al. (Another look at Sobolev spaces. In: Menaldi, Rofman, Sulem (eds) Optimal control and partial differential equations-innovations and applications: in honor of professor Alain Bensoussan’s 60th anniversary, IOS Press, Amsterdam, pp 439–455, 2001). For the proposed regularization functionals, we prove existence of minimizers as well as a stability and convergence result for functions with values in a set of vectors.
... where the penaliser Ψ can be linked to the diffusivity with g = Ψ′ [97]. The penaliser must be increasing, but not necessarily convex. ...
Article
Full-text available
We investigate numerous structural connections between numerical algorithms for partial differential equations (PDEs) and neural architectures. Our goal is to transfer the rich set of mathematical foundations from the world of PDEs to neural networks. Besides structural insights, we provide concrete examples and experimental evaluations of the resulting architectures. Using the example of generalised nonlinear diffusion in 1D, we consider explicit schemes, acceleration strategies thereof, implicit schemes, and multigrid approaches. We connect these concepts to residual networks, recurrent neural networks, and U-net architectures. Our findings inspire a symmetric residual network design with provable stability guarantees and justify the effectiveness of skip connections in neural networks from a numerical perspective. Moreover, we present U-net architectures that implement multigrid techniques for learning efficient solutions of partial differential equation models, and motivate uncommon design choices such as trainable nonmonotone activation functions. Experimental evaluations show that the proposed architectures save half of the trainable parameters and can thus outperform standard ones with the same model complexity. Our considerations serve as a basis for explaining the success of popular neural architectures and provide a blueprint for developing new mathematically well-founded neural building blocks.
... Since its introduction by Perona and Malik, an extensive literature has developed presenting a number of PDE-based anisotropic models which offer diverse modifications in order to attain a steady-state solution and to address the issue of staircase effects [23][24][25][26][27][28][29][30]. The staircase effect presents a visually unpleasant artefactual display in images and is often mistaken for false edges. ...
Article
At the crossing of statistical and functional analysis, there exists a relentless quest for an efficient image denoising algorithm. In terms of greyscale imaging, a plethora of denoising algorithms have been documented in the literature, yet their level of functionality still leaves margin before the desired level of applicability is reached. Quite often, the noise affecting the pixels in an image is Gaussian in nature and uniformly degrades the information the pixels carry. All methods work optimally under some specific set of assumptions; under general conditions, however, they tend to create artefacts and remove fine structural details. This article focuses on classifying and comparing some of the significant works in the field of denoising.
... Integrodifferential models for nonlinear diffusion predominantly involve models with Gaussian-smoothed derivatives. Most of these models [14,7,15,16] have been proposed as regularisations of the Perona-Malik filter [1]. For enhancing coherent structures, one also considers a smoothed structure tensor [17] that captures directional information to steer the diffusion process accordingly [2]. ...
Preprint
Full-text available
We introduce an integrodifferential extension of the edge-enhancing anisotropic diffusion model for image denoising. By accumulating weighted structural information on multiple scales, our model is the first to create anisotropy through multiscale integration. It follows the philosophy of combining the advantages of model-based and data-driven approaches within compact, insightful, and mathematically well-founded models with improved performance. We explore trained results of scale-adaptive weighting and contrast parameters to obtain an explicit modelling by smooth functions. This leads to a transparent model with only three parameters, without significantly decreasing its denoising performance. Experiments demonstrate that it outperforms its diffusion-based predecessors. We show that both multiscale information and anisotropy are crucial for its success.
... Assuming that the approximate solutions u^n_{i,j}, for 1 ≤ i, j ≤ M−1, have been computed, we can approximate the boundary condition by u^n_{0,j} = u^n_{1,j}, u^n_{M,j} = u^n_{M−1,j}, u^n_{i,0} = u^n_{i,1}, u^n_{i,M} = u^n_{i,M−1}, and u^n_{0,0} = u^n_{1,1}, u^n_{0,M} = u^n_{1,M−1}, u^n_{M,0} = u^n_{M−1,1}, u^n_{M,M} = u^n_{M−1,M−1}. As is known, linear diffusion (ID) is well-posed (Yahya et al. 2014), which means that the ID model is stable (Weickert 1998), and the TV model is stable (Scherzer and Weickert 2000). So the new model, which is the combination of these two models, is stable too. ...
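The mirror boundary conditions written out in this excerpt amount to replicating the outermost rows and columns; in NumPy they can be realized with edge padding (an editorial sketch under the same ghost-cell convention):

```python
import numpy as np

# Reflecting (Neumann) boundaries via a one-pixel ghost layer:
# the padded array satisfies u[0,:] = u[1,:], u[-1,:] = u[-2,:], etc.

def with_neumann_boundary(u):
    return np.pad(u, 1, mode="edge")

u = np.arange(16.0).reshape(4, 4)
up = with_neumann_boundary(u)
assert np.all(up[0, 1:-1] == u[0]) and np.all(up[1:-1, 0] == u[:, 0])
```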
Article
One of the fundamental problems in the field of image processing is denoising. The underlying goal of image denoising is to effectively suppress noise while keeping intact the significant features of the image, such as texture and edge information. The image gradient is a well-known feature descriptor in denoising models for distinguishing edges and ramps. If the received signal of an image is very noisy, however, the gradient cannot effectively distinguish between image edges and image ramps. In this paper, based on the difference curvature and the gradient of the image, we introduce a new feature descriptor. To demonstrate the effectiveness of the new feature descriptor, we use it in constructing a new diffusion-based denoising model. Experimental results show the effectiveness of the method.
... where the penaliser Ψ can be linked to the diffusivity with g = Ψ′ [93]. The penaliser must be increasing, but not necessarily convex. ...
Preprint
Full-text available
We investigate numerous structural connections between numerical algorithms for partial differential equations (PDEs) and neural architectures. Our goal is to transfer the rich set of mathematical foundations from the world of PDEs to neural networks. Besides structural insights we provide concrete examples and experimental evaluations of the resulting architectures. Using the example of generalised nonlinear diffusion in 1D, we consider explicit schemes, acceleration strategies thereof, implicit schemes, and multigrid approaches. We connect these concepts to residual networks, recurrent neural networks, and U-net architectures. Our findings inspire a symmetric residual network design with provable stability guarantees and justify the effectiveness of skip connections in neural networks from a numerical perspective. Moreover, we present U-net architectures that implement multigrid techniques for learning efficient solutions of partial differential equation models, and motivate uncommon design choices such as trainable nonmonotone activation functions. Experimental evaluations show that the proposed architectures save half of the trainable parameters and can thus outperform standard ones with the same model complexity. Our considerations serve as a basis for explaining the success of popular neural architectures and provide a blueprint for developing new mathematically well-founded neural building blocks.
... Integrodifferential models for nonlinear diffusion predominantly involve models with Gaussian-smoothed derivatives. Most of these models [14,7,15,16] have been proposed as regularisations of the Perona-Malik filter [1]. For enhancing coherent structures, one also considers a smoothed structure tensor [17] that captures directional information to steer the diffusion process accordingly [2]. ...
... Also, there have been many diffusion-based methods that can remove speckle noise efficiently. Diffusion filtering with a single time-discretization step can be regarded as regularization [26]. Work has also been done on combining wavelet and diffusion methods. ...
Article
Full-text available
Ultrasound (US) images are useful in medical diagnosis. US is preferred over other medical diagnosis techniques because it is non-invasive in nature and has low cost. The presence of speckle noise in US images degrades their usefulness. A method that reduces the speckle noise in US images can help in correct diagnosis. Such a method should also preserve the important structural information in US images while removing the speckle noise. In this paper, a method for removing speckle noise using a combination of wavelet, total variation (TV) and morphological operations is proposed. The proposed method achieves denoising by combining the advantages of wavelet, TV and morphological operations along with an adaptive regularization parameter which controls the amount of smoothing during denoising. The work in this paper is capable of reducing speckle noise while preserving the structural information in the denoised image. The proposed method demonstrates strong denoising for synthetic and real ultrasound images, which is also supported by the results of various quantitative measures and visual inspection.
... An extension by a suitable regularization can help to preserve edges in the reconstruction without the loss of small details, or the introduction of additional noise. One possibility is to use diffusion filtering [65], for example, variants of the Perona-Malik diffusion [66] in this role. Diffusion filtering was also successfully applied as a post-processing step for CT [67]. ...
Article
Full-text available
The reconstruction of computed tomography (CT) images is an active area of research. Following the rise of deep learning methods, many data-driven models have been proposed in recent years. In this work, we present the results of a data challenge that we organized, bringing together algorithm experts from different institutes to jointly work on quantitative evaluation of several data-driven methods on two large, public datasets during a ten-day sprint. We focus on two applications of CT, namely, low-dose CT and sparse-angle CT. This enables us to fairly compare different methods using standardized settings. As a general result, we observe that the deep learning-based methods are able to improve the reconstruction quality metrics in both CT applications, while the top-performing methods show only minor differences in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). We further discuss a number of other important criteria that should be taken into account when selecting a method, such as the availability of training data, the knowledge of the physical measurement model and the reconstruction speed.
... Sparsification scale-spaces have demonstrated that there are viable scale-space concepts beyond the classical ideas that rely on partial differential equations (PDEs) [2,8,15,21] or evolutions according to pseudodifferential operators [5,17]. In the present paper, we show that quantisation methods can also imply scale-spaces and that this leads to practical consequences for inpainting-based compression. ...
Preprint
Full-text available
Recently, sparsification scale-spaces have been obtained as a sequence of inpainted images by gradually removing known image data. Thus, these scale-spaces rely on spatial sparsity. In the present paper, we show that sparsification of the co-domain, the set of admissible grey values, also constitutes scale-spaces with induced hierarchical quantisation techniques. These quantisation scale-spaces are closely tied to information theoretical measures for coding cost, and therefore particularly interesting for inpainting-based compression. Based on this observation, we propose a sparsification algorithm for the grey-value domain that outperforms uniform quantisation as well as classical clustering approaches.
Conference Paper
Tensor-driven anisotropic diffusion and regularisation have been successfully applied to a wide range of image processing and computer vision tasks such as denoising, inpainting, and optical flow. Empirically it has been shown that anisotropic models with a diffusion tensor perform better than their isotropic counterparts with a scalar-valued diffusivity function. However, the reason for this superior performance is not well understood so far. Moreover, the specific modelling of the anisotropy has been carried out in a purely heuristic way. The goal of our paper is to address these problems. To this end, we use the statistics of natural images to derive a unifying framework for eight isotropic and anisotropic diffusion filters that have a corresponding variational formulation. In contrast to previous statistical models, we systematically investigate structure-adaptive statistics by analysing the eigenvalues of the structure tensor. With our findings, we justify existing successful models and assess the relationship between accurate statistical modelling and performance in the context of image denoising.
Conference Paper
We investigate the use of fractional powers of the Laplacian for signal and image simplification. We focus both on the corresponding variational techniques and on parabolic pseudodifferential equations. We perform a detailed study of the regularisation properties of energy functionals where the smoothness term consists of various linear combinations of fractional derivatives. The associated parabolic pseudodifferential equations with constant coefficients provide the link to linear scale-space theory. These encompass the well-known α-scale-spaces, even those with parameter values α > 1 known to violate common maximum-minimum principles. Nevertheless, we show that it is possible to construct positivity-preserving combinations of high- and low-order filters. Numerical experiments in this direction indicate that non-integral orders play an essential role in this construction. The paper reveals the close relation between continuous and semi-discrete filters, and thereby helps to facilitate efficient implementations. In additional numerical experiments we compare the variance decay rates for white noise and edge signals through the action of different filter classes.
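The α-scale-spaces discussed here act diagonally in the Fourier domain: the evolution up to time t multiplies each frequency by exp(−t|ω|^{2α}). A 1-D sketch (an editorial illustration; the FFT implies periodic boundary conditions):

```python
import numpy as np

# alpha = 1 gives Gaussian scale-space (linear diffusion),
# alpha = 1/2 the Poisson scale-space.

def alpha_scale_space(f, t, alpha):
    omega = 2 * np.pi * np.fft.fftfreq(f.size)
    symbol = np.exp(-t * np.abs(omega) ** (2 * alpha))
    return np.real(np.fft.ifft(np.fft.fft(f) * symbol))

rng = np.random.default_rng(7)
f = np.repeat([0.0, 1.0], 64) + 0.1 * rng.standard_normal(128)
u_gauss   = alpha_scale_space(f, t=5.0, alpha=1.0)
u_poisson = alpha_scale_space(f, t=5.0, alpha=0.5)
```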
Conference Paper
Line drawings are especially effective and natural in shape depiction. There are generally two ways to generate line drawings: object space methods and image space methods. Compared with object space methods, image space methods are much faster and independent of the 3D object, but easily affected by small noise in the rendered image. We suggest applying an edge-preserving L0 gradient smoothing step on the rendered image before line extraction. Experimental results show that our method can effectively alleviate unnecessary small scale lines, leading to results comparable to object space methods.
Article
Full-text available
Studies in biological vision have always been a great source of inspiration for the design of computer vision algorithms. In the past, several successful methods were designed with varying degrees of correspondence with biological vision studies, ranging from purely functional inspiration to methods that utilise models primarily developed for explaining biological observations. Even though it seems well recognised that computational models of biological vision can help in the design of computer vision algorithms, it is a non-trivial exercise for a computer vision researcher to mine relevant information from the biological vision literature, as very few studies in biology are organised at a task level. In this paper we aim to bridge this gap by providing a computer vision task-centric presentation of models primarily originating in biological vision studies. Not only do we revisit some of the main features of biological vision and discuss the foundations of existing computational studies modelling biological vision, but we also consider three classical computer vision tasks from a biological perspective: image sensing, segmentation and optical flow. Using this task-centric approach, we discuss well-known biological functional principles and compare them with approaches taken by computer vision. Based on this comparative analysis of computer and biological vision, we present some recent models in biological vision and highlight a few models that we think are promising for future investigations in computer vision. To this extent, this paper provides new insights and a starting point for investigators interested in the design of biology-based computer vision algorithms, and paves the way for much-needed interaction between the two communities, leading to the development of synergistic models of artificial and biological vision.
Conference Paper
Full-text available
The general solution of the anisotropic diffusion model in image denoising has a slow convergence rate. To overcome the problem, a new Newton method is proposed. In the new model, the first and second Gateaux derivatives are derived first. Then two continuous operators are introduced to avoid the error that arises from directly discretizing the iteration equation. To solve the singularity problem of the image and eliminate the impact of the parameter, image geometry features are considered when computing the equation using the lagged fixed point algorithm. The classical Rudin-Osher-Fatemi (ROF) model is taken as an example. In the numerical experiments, the denoising performance of the new Newton method is compared with the gradient descent algorithm and the Newton method proposed by Vogel. The numerical results demonstrate that the new algorithm computes faster while achieving denoising performance similar to the traditional algorithms.
Chapter
Recently, sparsification scale-spaces have been obtained as a sequence of inpainted images by gradually removing known image data. Thus, these scale-spaces rely on spatial sparsity. In the present paper, we show that sparsification of the co-domain, the set of admissible grey values, also constitutes scale-spaces with induced hierarchical quantisation techniques. These quantisation scale-spaces are closely tied to information theoretical measures for coding cost, and therefore particularly interesting for inpainting-based compression. Based on this observation, we propose a sparsification algorithm for the grey-value domain that outperforms uniform quantisation as well as classical clustering approaches.
Article
Full-text available
In this paper, a denoising PDE model based on a combination of the isotropic diffusion and total variation models is presented. The new weighted model adapts in each region in accordance with the image's information: it performs more diffusion in the flat regions of the image and less diffusion at the edges. Compared with total variation, isotropic diffusion, and some other well-known models, the new model restores the image better in terms of peak signal-to-noise ratio and visual quality. Experimental results show that the model is able to suppress the noise effectively while preserving texture features and edge information well.
Conference Paper
Most scale-space evolutions are described in terms of partial differential equations. In recent years, however, nonlocal processes have become an important research topic in image analysis. The goal of our paper is to establish well-posedness and scale-space properties for a class of nonlocal evolutions. They are given by linear integro-differential equations with measures. In analogy to Weickert’s diffusion theory (1998), we prove existence and uniqueness, preservation of the average grey value, a maximum–minimum principle, image simplification properties in terms of Lyapunov functionals, and we establish convergence to a constant steady state. We show that our nonlocal scale-space theory covers nonlocal variants of linear diffusion. Moreover, by choosing specific discrete measures, the classical semidiscrete diffusion framework is identified as a special case of our continuous theory. Last but not least, we introduce two modifications of bilateral filtering. In contrast to previous bilateral filters, our variants create nonlocal scale-spaces that preserve the average grey value and that can be highly robust under noise. While these filters are linear, they can achieve a similar performance as nonlinear and even anisotropic diffusion equations.
Conference Paper
Many image processing methods, such as corner detection, optical flow and iterative enhancement, make use of image tensors. Generally, these tensors are estimated using the structure tensor. In this work we show that the gradient energy tensor can be used as an alternative to the structure tensor in several cases. We apply the gradient energy tensor to common applications such as corner detection, optical flow and image enhancement. Our experimental results suggest that the gradient energy tensor enables real-time tensor-based image enhancement on the graphics processing unit (GPU), where we obtain a 40% increase in frame rate without loss of image quality.
Article
Full-text available
We discuss the resolving power of three geophysical imaging and inversion techniques, and their combination, for the reconstruction of material parameters in the Earth's subsurface. The governing equations are those of Newton and Poisson for gravitational problems, the acoustic wave equation under Hookean elasticity for seismology, and the geodynamics equations of Stokes for incompressible steady-state flow in the mantle. The observables are the gravitational potential, the seismic displacement, and the surface velocity, all measured at the surface. The inversion parameters of interest are the mass density, the acoustic wave speed, and the viscosity. These systems of partial differential equations and their adjoints were implemented in a single Python code using the finite-element library FEniCS. To investigate the shape of the cost functions, we present a grid search in the parameter space for three end-member geological settings: a falling block, a subduction zone, and a mantle plume. The performance of a gradient-based inversion for each single observable separately, and in combination, is presented. We furthermore investigate the performance of a shape-optimizing inverse method, when the material is known, and an inversion that inverts for the material parameters of an anomaly with known shape.
Article
Automatically learning features, especially robust features, has attracted much attention in the machine learning community. In this paper, we propose a new method to learn non-linear robust features by taking advantage of the data manifold structure. We first follow the commonly used trick of the trade, that is, learning robust features with artificially corrupted data, i.e. training samples with manually injected noise. Following the idea of the auto-encoder, we assume features should contain enough information to reconstruct the input well from its corrupted copies. However, merely reconstructing the clean input from its noisy copies can make the data manifold in the feature space noisy. To address this problem, we propose a new method, called Incremental Auto-Encoders, to iteratively denoise the extracted features. We assume the noisy manifold structure is caused by a diffusion process. Consequently, we reverse this specific diffusion process to further contract this noisy manifold, which results in an incremental optimization of the model parameters. Furthermore, we show that these learned non-linear features can be stacked into a hierarchy of features. Experimental results on real-world datasets demonstrate that the proposed method can achieve better classification performance.
Preprint
Full-text available
Convolutional neural networks (CNNs) often perform well, but their stability is poorly understood. To address this problem, we consider the simple prototypical problem of signal denoising, where classical approaches such as nonlinear diffusion, wavelet-based methods and regularisation offer provable stability guarantees. To transfer such guarantees to CNNs, we interpret numerical approximations of these classical methods as a specific residual network (ResNet) architecture. This leads to a dictionary which allows one to translate diffusivities, shrinkage functions, and regularisers into activation functions, and enables direct communication between the four research communities. On the CNN side, it not only inspires new families of nonmonotone activation functions, but also introduces intrinsically stable architectures for an arbitrary number of layers.
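The correspondence can be made concrete in one dimension: an explicit step of nonlinear diffusion is a residual block whose "activation" is the flux function. The sketch below uses a Perona-Malik-type diffusivity purely as an example of a nonmonotone activation; all parameter values are illustrative.

```python
import numpy as np

def diffusion_resnet_layer(u, tau=0.2, lam=0.1):
    # One explicit 1D diffusion step written as a residual block:
    #   u <- u - tau * D^T phi(D u),
    # where D is a forward difference (a fixed "convolution") and
    # phi(s) = g(s^2) s is the flux, acting as the activation.
    Du = np.diff(u)
    phi = Du / (1.0 + (Du / lam)**2)   # nonmonotone "activation"
    # -D^T phi with reflecting boundaries; the flux form preserves sum(u)
    div = np.concatenate(([phi[0]], np.diff(phi), [-phi[-1]]))
    return u + tau * div
```

Here np.diff plays the role of a fixed convolution and its negated adjoint that of the transposed convolution, which mirrors the ResNet reading described in the abstract.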
Article
Full-text available
In this paper, a denoising PDE model based on a combination of the isotropic diffusion and total variation models is presented. The new weighted model adapts to each region in accordance with the image's local information: it performs more diffusion in the flat regions of the image and less diffusion near edges. The new model restores the image better, in terms of peak signal-to-noise ratio and visual quality, than total variation, isotropic diffusion, and some well-known models. Experimental results show that the model suppresses noise effectively while preserving texture features and edge information well.
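A minimal explicit-scheme sketch of such a blended model might look as follows. The weight function and all parameter values are our illustrative choices, not the paper's.

```python
import numpy as np

def blended_step(u, tau=0.02, eps=0.1, k=0.05):
    # forward differences (periodic boundaries for brevity)
    px = np.roll(u, -1, 1) - u
    py = np.roll(u, -1, 0) - u
    norm = np.sqrt(px**2 + py**2 + eps**2)
    # eps-regularised TV flow: divergence of the normalised gradient
    tv = (px / norm - np.roll(px / norm, 1, 1)
          + py / norm - np.roll(py / norm, 1, 0))
    # isotropic (linear) diffusion term
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
           + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
    # weight ~1 in flat regions (more smoothing), ~0 at edges (TV behaviour)
    w = 1.0 / (1.0 + (px**2 + py**2) / k**2)
    # tau must respect the explicit stability bound, which scales with eps
    return u + tau * (w * lap + (1.0 - w) * tv)
```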
Chapter
We introduce a novel scale-space concept that is inspired by inpainting-based lossy image compression and the recent denoising by inpainting method of Adam et al. (2017). In the discrete setting, the main idea behind these so-called sparsification scale-spaces is as follows: Starting with the original image, one successively removes pixels until a single pixel is left. In each removal step the missing data are interpolated with an inpainting method based on a partial differential equation. We demonstrate that under fairly mild assumptions on the inpainting operator this general concept indeed satisfies crucial scale-space properties such as gradual image simplification, a discrete semigroup property or invariances. Moreover, our experiments show that it can be tailored towards specific needs by selecting the inpainting operator and the pixel sparsification strategy in an appropriate way. This may lead either to uncommitted scale-spaces or to highly committed, image-adapted ones.
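In toy form, the construction might be sketched like this, with random pixel removal and homogeneous diffusion inpainting standing in for the more careful choices discussed in the chapter.

```python
import numpy as np

def harmonic_inpaint(u, mask, iters=500):
    # Fill unknown pixels (mask == False) by homogeneous diffusion
    # inpainting: repeated local averaging with known pixels kept fixed.
    v = u.copy()
    for _ in range(iters):
        avg = (np.roll(v, 1, 0) + np.roll(v, -1, 0)
               + np.roll(v, 1, 1) + np.roll(v, -1, 1)) / 4
        v = np.where(mask, u, avg)
    return v

def sparsification_scale_space(u, n_levels=5, seed=0):
    # Toy sparsification scale-space: repeatedly discard a random subset
    # of pixels and inpaint the gaps. Committed variants would select
    # pixels by inpainting error rather than at random.
    rng = np.random.default_rng(seed)
    mask = np.ones(u.shape, dtype=bool)
    levels = []
    for _ in range(n_levels):
        mask &= rng.random(u.shape) < 0.5   # drop half of the known pixels
        levels.append(harmonic_inpaint(u, mask))
    return levels
```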
Article
This paper investigates a novel variational optimization model for image denoising. A bilevel optimization technique with a suitable mathematical background is proposed to automatically detect three crucial parameters: α0, α1 and θ. The parameters α0, α1 control the Total Generalized Variation (TGV) regularization, while the parameter θ is related to the anisotropic diffusion tensor. A proper selection of these parameters is a challenging task; since they govern the approximation of the image gradient and texture, their computation plays a major role in preserving the image features. Analytically, we include results on the approximation of these parameters as well as on the solution of the resulting bilevel problem in a suitable framework. In addition, a modified primal-dual algorithm is proposed to solve the PDE-constrained minimization problem. Finally, numerical results demonstrate that the approach removes noise while preserving fine details and important features, with numerous comparisons showing its performance.
Chapter
Full-text available
So far we have been concerned with the theory of scale-space representation and its application to feature detection in image data. A basic functionality of a computer vision system, however, is the ability to derive information about the three-dimensional shape of objects in the world.
Article
Full-text available
Lagrangian and augmented Lagrangian methods for nondifferentiable optimization problems that arise from the total bounded variation formulation of image restoration problems are analyzed. Conditional convergence of the Uzawa algorithm and unconditional convergence of the first order augmented Lagrangian schemes are discussed. A Newton type method based on an active set strategy defined by means of the dual variables is developed and analyzed. Numerical examples for blocky signals and images perturbed by very high noise are included.
Article
Full-text available
A convergence rate is established for nonstationary iterated Tikhonov regularization, applied to ill-posed problems involving closed, densely defined linear operators, under general conditions on the iteration parameters. It is also shown that an order-optimal accuracy is attained when a certain a posteriori stopping rule is used to determine the iteration number.
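In the finite-dimensional case the iteration takes a simple form. The geometric choice of the parameters below is one admissible example of a nonstationary sequence, and a discrepancy-type stopping rule would replace the fixed iteration count.

```python
import numpy as np

def iterated_tikhonov(A, z, alphas, x0=None):
    # Nonstationary iterated Tikhonov:
    #   x_{k+1} = argmin ||A x - z||^2 + alpha_k ||x - x_k||^2
    #           = x_k + (A^T A + alpha_k I)^{-1} A^T (z - A x_k)
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.copy()
    for a in alphas:
        x = x + np.linalg.solve(A.T @ A + a * np.eye(n), A.T @ (z - A @ x))
    return x

# e.g. alphas = 0.5 ** np.arange(10) gives a geometrically decaying sequence
```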
Article
Full-text available
Spectral theory for bounded linear operators is used to develop a general class of approximation methods for the Moore-Penrose generalized inverse of a closed, densely defined linear operator. Issues of convergence and stability are addressed and the methods are modified to provide a stable class of methods for evaluation of unbounded linear operators.
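For matrices, such approximation methods reduce to spectral filtering of the singular values: 1/σ is replaced by a bounded filter function. The sketch below shows two standard filter families as finite-dimensional instances; the paper's framework treats the general operator case.

```python
import numpy as np

def filtered_pinv_apply(A, z, alpha, method="tikhonov"):
    # Approximate the Moore-Penrose solution A^+ z by spectral filtering.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    if method == "tikhonov":
        f = s / (s**2 + alpha)                 # Tikhonov filter factors
    else:
        f = np.where(s > alpha, 1.0 / s, 0.0)  # truncated SVD
    return Vt.T @ (f * (U.T @ z))
```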
Article
Full-text available
A constrained optimization type of numerical algorithm for removing noise from images is presented. The total variation of the image is minimized subject to constraints involving the statistics of the noise. The constraints are imposed using Lagrange multipliers. The solution is obtained using the gradient-projection method. This amounts to solving a time dependent partial differential equation on a manifold determined by the constraints. As t → ∞ the solution converges to a steady state which is the denoised image. The numerical algorithm is simple and relatively fast. The results appear to be state-of-the-art for very noisy images. The method is noninvasive, yielding sharp edges in the image. The technique could be interpreted as a first step of moving each level set of the image normal to itself with velocity equal to the curvature of the level set divided by the magnitude of the gradient of the image, and a second step which projects the image back onto the constraint set.
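The Lagrangian form of this evolution is easy to sketch with an explicit scheme. The eps-regularisation, periodic boundaries, and fixed lam below are our simplifications; the original method updates the multiplier from the noise statistics and uses a projection step.

```python
import numpy as np

def rof_time_marching(f, lam=0.1, tau=0.02, eps=0.1, steps=200):
    # Explicit time marching for u_t = div(grad u / |grad u|) - lam (u - f);
    # tau must satisfy the explicit stability bound, which scales with eps.
    u = f.astype(float).copy()
    for _ in range(steps):
        px = np.roll(u, -1, 1) - u                 # forward differences
        py = np.roll(u, -1, 0) - u
        norm = np.sqrt(px**2 + py**2 + eps**2)
        div = (px / norm - np.roll(px / norm, 1, 1)
               + py / norm - np.roll(py / norm, 1, 0))
        u = u + tau * (div - lam * (u - f))
    return u
```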
Book
Full-text available
Introduction.- Convex Analysis and the Scalar Case.- Convex Sets and Convex Functions.- Lower Semicontinuity and Existence Theorems.- The one Dimensional Case.- Quasiconvex Analysis and the Vectorial Case.- Polyconvex, Quasiconvex and Rank one Convex Functions.- Polyconvex, Quasiconvex and Rank one Convex Envelopes.- Polyconvex, Quasiconvex and Rank one Convex Sets.- Lower Semi Continuity and Existence Theorems in the Vectorial Case.- Relaxation and Non Convex Problems.- Relaxation Theorems.- Implicit Partial Differential Equations.- Existence of Minima for Non Quasiconvex Integrands.- Miscellaneous.- Function Spaces.- Singular Values.- Some Underdetermined Partial Differential Equations.- Extension of Lipschitz Functions on Banach Spaces.- Bibliography.- Index.- Notations.
Article
Full-text available
Diffusion processes create multi-scale analyses, which enable the generation of simplified pictures, where for increasing scale the image gets sketchier. In many practical applications the "scaled image" can be characterized via a variational formulation as the solution of a minimization problem involving unbounded operators. These unbounded operators can be evaluated by regularization techniques. We show that the theory of stable evaluation of unbounded operators can be applied to efficiently solve these minimization problems.
Article
Full-text available
A reliable and efficient computational algorithm for restoring blurred and noisy images is proposed. The restoration process is based on the minimal total variation principle introduced by Rudin et al. For discrete images, the proposed algorithm minimizes a piecewise linear ℓ1 function (a measure of total variation) subject to a single 2-norm inequality constraint (a measure of data fit). The algorithm starts by finding a feasible point for the inequality constraint using a (partial) conjugate gradient method. This corresponds to a deblurring process. Noise and other artifacts are removed by a subsequent total variation minimization process. The use of the linear ℓ1 objective function for the total variation measurement leads to a simpler computational algorithm. Both the steepest descent and an affine scaling Newton method are considered to solve this constrained piecewise linear ℓ1 minimization problem. The resulting algorithm, when viewed as an image restoration and enhancement process, has the feature that it can be used in an adaptive/interactive manner in situations when knowledge of the noise variance is either unavailable or unreliable. Numerical examples are presented to demonstrate the effectiveness of the proposed iterative image restoration and enhancement process.
Article
Full-text available
Mathematical results on ill-posed and ill-conditioned problems are reviewed and the formal aspects of regularization theory in the linear case are introduced. Specific topics in early vision and their regularization are then analyzed rigorously, characterizing existence, uniqueness, and stability of solutions. A fundamental difficulty that arises in almost every vision problem is scale, that is, the resolution at which to operate. Methods that have been proposed to deal with the problem include scale-space techniques that consider the behavior of the result across a continuum of scales. From the point of view of regularization theory, the concept of scale is related quite directly to the regularization parameter λ. It is suggested that methods used to obtain the optimal value of λ may provide, either directly or after suitable modification, the optimal scale associated with the specific instance of certain problems.
Book
Full-text available
A basic problem when deriving information from measured data, such as images, originates from the fact that objects in the world, and hence image structures, exist as meaningful entities only over certain ranges of scale. "Scale-Space Theory in Computer Vision" describes a formal theory for representing the notion of scale in image data, and shows how this theory applies to essential problems in computer vision such as computation of image features and cues to surface shape. The subjects range from the mathematical foundation to practical computational techniques. The power of the methodology is illustrated by a rich set of examples. This book is the first monograph on scale-space theory. It is intended as an introduction, reference, and inspiration for researchers, students, and system designers in computer vision as well as related fields such as image processing, photogrammetry, medical image analysis, and signal processing in general.
Conference Paper
Full-text available
Computational vision often needs to deal with derivatives of digital images. Such derivatives are not intrinsic properties of digital data; a paradigm is required to make them well-defined. Normally, a linear filtering is applied. This can be formulated in terms of scale-space, functional minimization, or edge detection filters. The main emphasis of this paper is to connect these theories in order to gain insight into their similarities and differences. We take regularization (or functional minimization) as a starting point, and show that it boils down to Gaussian scale-space if we require scale invariance and a semi-group constraint to be satisfied. This regularization implies the minimization of a functional containing terms up to infinite order of differentiation. If the functional is truncated at second order, the Canny-Deriche filter arises.
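In practice, this regularized differentiation amounts to convolving with Gaussian derivatives, for which standard library routines exist. A minimal sketch, where sigma acts as the scale parameter:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_derivative(img, sigma, axis):
    # Well-posed image differentiation: smooth with a Gaussian of width
    # sigma and differentiate along the given axis, realised as a single
    # derivative-of-Gaussian filtering step. Larger sigma gives smoother,
    # more stable derivatives.
    order = [0, 0]
    order[axis] = 1
    return gaussian_filter(img, sigma, order=tuple(order))
```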
Article
Full-text available
We discuss a semidiscrete framework for nonlinear diffusion scale-spaces, where the image is sampled on a finite grid and the scale parameter is continuous. This leads to a system of nonlinear ordinary differential equations. We investigate conditions under which one can guarantee well-posedness properties, an extremum principle, average grey level invariance, smoothing Lyapunov functionals, and convergence to a constant steady-state. These properties are in analogy to previously established results for the continuous setting. Interestingly, this semidiscrete framework helps to explain the so-called Perona-Malik paradox: The Perona-Malik equation is a forward-backward diffusion equation which is widely used in image processing since it combines intraregional smoothing with edge enhancement. Although its continuous formulation is regarded to be ill-posed, it turns out that a spatial discretization is sufficient to create a well-posed semidiscrete diffusion scale-space. We also pro...
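The semidiscrete setting is easy to reproduce numerically: space is sampled, the scale t stays continuous, and a standard ODE solver integrates the system. The sketch below does this for a 1D Perona-Malik discretization; the grid, diffusivity, and contrast parameter are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def pm_rhs(t, u, lam=0.1):
    # Right-hand side of the semidiscrete system du/dt = -D^T(g(|Du|^2) Du)
    # on a 1D grid, with D a forward difference and reflecting boundaries.
    Du = np.diff(u)
    flux = Du / (1.0 + (Du / lam)**2)
    return np.concatenate(([flux[0]], np.diff(flux), [-flux[-1]]))

f = (np.arange(100) >= 50).astype(float) \
    + 0.05 * np.random.default_rng(1).standard_normal(100)
sol = solve_ivp(pm_rhs, (0.0, 5.0), f, t_eval=[0.0, 1.0, 5.0])
# sol.y[:, -1] is the filtered signal at scale t = 5
```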
Book
I: Parametric Minimal Surfaces.- 1. Functions of Bounded Variation and Caccioppoli Sets.- 2. Traces of BV Functions.- 3. The Reduced Boundary.- 4. Regularity of the Reduced Boundary.- 5. Some Inequalities.- 6. Approximation of Minimal Sets (I).- 7. Approximation of Minimal Sets (II).- 8. Regularity of Minimal Surfaces.- 9. Minimal Cones.- 10. The First and Second Variation of the Area.- 11. The Dimension of the Singular Set.- II: Non-Parametric Minimal Surfaces.- 12. Classical Solutions of the Minimal Surface Equation.- 13. The a priori Estimate of the Gradient.- 14. Direct Methods.- 15. Boundary Regularity.- 16. A Further Extension of the Notion of Non-Parametric Minimal Surface.- 17. The Bernstein Problem.- Appendix A.- Appendix B.- Appendix C.
Article
Shock filters for image enhancement are developed. The filters use new nonlinear time dependent partial differential equations and their discretizations. The evolution of the initial image $u_0 (x,y)$ as $t \to \infty $ into a steady state solution $u_\infty (x,y)$ through $u(x,y,t)$, $t > 0$, is the filtering process. The partial differential equations have solutions which satisfy a maximum principle. Moreover the total variation of the solution for any fixed $t > 0$ is the same as that of the initial data. The processed image is piecewise smooth, nonoscillatory, and the jumps occur across zeros of an elliptic operator (edge detector). The algorithm is relatively fast and easy to program.
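A one-dimensional upwind discretization of this shock filter might read as follows; periodic boundaries and step sizes are our choices. Erosion acts where $u_{xx} > 0$ and dilation where $u_{xx} < 0$, which steepens edges into shocks.

```python
import numpy as np

def shock_filter_1d(u, tau=0.25, steps=50):
    # Upwind scheme for u_t = -sign(u_xx) |u_x| with periodic boundaries.
    u = u.astype(float).copy()
    for _ in range(steps):
        dm = u - np.roll(u, 1)      # backward difference
        dp = np.roll(u, -1) - u     # forward difference
        s = np.sign(dp - dm)        # sign of the discrete u_xx
        # monotone upwind gradient magnitudes for erosion / dilation
        ero = np.sqrt(np.maximum(dm, 0)**2 + np.minimum(dp, 0)**2)
        dil = np.sqrt(np.minimum(dm, 0)**2 + np.maximum(dp, 0)**2)
        u = u - tau * ((s > 0) * ero - (s < 0) * dil)
    return u
```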
Article
A new version of the Perona and Malik theory for edge detection and image restoration is proposed. This new version keeps all the improvements of the original model and avoids its drawbacks: it is proved to be stable in the presence of noise, with existence and uniqueness results. Numerical experiments on natural images are presented.
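The decisive modification is that the diffusivity is evaluated on a Gaussian-smoothed gradient. A minimal explicit sketch with a simple finite-difference stencil, periodic boundaries, and illustrative parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def regularized_pm_step(u, tau=0.2, lam=0.05, sigma=1.0):
    # One explicit step of a regularised Perona-Malik model: the
    # diffusivity is computed from the presmoothed image, which is the
    # modification that yields existence and uniqueness.
    us = gaussian_filter(u, sigma)                       # presmoothing
    gx = (np.roll(us, -1, 1) - np.roll(us, 1, 1)) / 2
    gy = (np.roll(us, -1, 0) - np.roll(us, 1, 0)) / 2
    g = 1.0 / (1.0 + (gx**2 + gy**2) / lam**2)           # diffusivity <= 1
    px = np.roll(u, -1, 1) - u                           # forward diffs
    py = np.roll(u, -1, 0) - u
    div = (g * px - np.roll(g * px, 1, 1)
           + g * py - np.roll(g * py, 1, 0))
    return u + tau * div                                 # tau <= 0.25
```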
Article
The authors analyze an initial-boundary value problem for the equation \[u_t = \varphi(u_x)_x + \tau \psi(u_x)_{xt},\] where $\tau$ is a positive parameter, $\varphi, \psi : {\bf R} \to {\bf R}$, $\varphi$ is nonmonotone, $\psi$ is strictly increasing and uniformly bounded in ${\bf R}$, and $|\varphi'(p)| = O(\psi'(p))$ as $p \to \pm \infty$. The equation arises as a (new) model for turbulent heat or mass transfer in stably stratified shear flows, in which case $u_x$ is nonnegative, $\varphi(p) > 0$ for $p > 0$, and $\varphi(0) = \varphi(+\infty) = 0$. Well-posedness is proved and, in the model case, the qualitative behavior of solutions is studied. In particular it is shown that smooth solutions may become discontinuous in finite time, and that such solutions converge to a piecewise constant spatial profile as $t \to \infty$. This behavior is in agreement with experimental observations and numerical computations.
Article
Let A be an operator from a real Banach space into a real Hilbert space. In this paper we study least squares regularization methods for the ill-posed operator equation A(u) = f using nonlinear nondifferentiable penalty functionals. We introduce a notion of distributional approximation, and use constructs of distributional approximations to establish convergence and stability of approximations of bounded variation solutions of the operator equation. We also show that the results provide a framework for a rigorous analysis of numerical methods based on Euler-Lagrange equations to solve the minimization problem. This justifies many of the numerical implementation schemes of bounded variation minimization that have been recently proposed.
Article
The problem of recovering images with sharp edges by total variation denoising has recently received considerable attention in image processing. Numerical difficulties in implementing this nonlinear filter technique are partly due to the fact that it involves the stable evaluations of unbounded operators. To overcome that difficulty we propose to approximate the evaluation of the unbounded operator by a stable approximation. A convergence analysis for this regularized approach is presented.
Thesis
In this work a scale-space framework has been presented which does not require any monotony assumption (comparison principle). We have seen that, besides the fact that many global smoothing scale-space properties are maintained, new possibilities with respect to image restoration appear. Rather than deducing a unique equation from first principles, we have analyzed well-posedness and scale-space properties of a general family of regularized anisotropic diffusion filters. Existence and uniqueness results, continuous dependence of the solution on the initial image, maximum-minimum principles, invariances, Lyapunov functionals, and convergence to a constant steady state have been established. The large class of Lyapunov functionals makes it possible to regard these filters in numerous ways as simplifying, information-reducing transformations. These global smoothing properties do not contradict seemingly opposite local effects such as edge enhancement. For this reason it is possible to design scale-spaces with restoration properties giving segmentation-like results. Prerequisites have been stated under which one can prove well-posedness and scale-space results in the continuous, semidiscrete and discrete setting. Each of these frameworks stands on its own and does not require the others. On the other hand, the prerequisites in all three settings reveal many similarities and, as a consequence, representatives of the semidiscrete class can be obtained by suitable spatial discretizations of the continuous class, while representatives of the discrete class may arise from time discretizations of semidiscrete filters. The degrees of freedom within the proposed class of filters can be used to tailor the filters towards specific restoration tasks. Therefore, these scale-spaces do not need to be uncommitted; they give the user the liberty to incorporate a-priori knowledge, for instance concerning size and contrast of especially interesting features. The analyzed class comprises linear diffusion filtering and the nonlinear isotropic model of Catté, Lions, Morel and Coll, and of Whitaker and Pizer, but also novel approaches have been proposed: the use of diffusion tensors instead of scalar-valued diffusivities puts us in a position to design truly anisotropic diffusion processes which may reveal advantages at noisy edges. Last but not least, the fact that these filters are steered by the structure tensor instead of the regularized gradient makes it possible to adapt them to more sophisticated tasks such as the enhancement of coherent flow-like structures. In view of these results, anisotropic diffusion deserves to be regarded as much more than an ad-hoc strategy for transforming a degraded image into a more pleasant looking one. It is a flexible and mathematically sound class of methods which combines the advantages of two worlds: scale-space analysis and image restoration.
Article
Regularization with functions of bounded variation has been proven to be effective for denoising signals and images. This nonlinear regularization technique, in contrast with linear regularization techniques like Tikhonov regularization, has the advantage that discontinuities in signals and images can be located very precisely. In this paper bounded variation regularization is generalized to functions with higher order derivatives of bounded variation. This concept is applied to locate discontinuities in derivatives, which has important applications in parameter estimation problems.
Article
We study here a classical image denoising technique introduced by L. Rudin and S. Osher a few years ago, namely the constrained minimization of the total variation (TV) of the image. First, we give results of existence and uniqueness and prove the link between the constrained minimization problem and the minimization of an associated Lagrangian functional. Then we describe a relaxation method for computing the solution, and give a proof of convergence. After this, we explain why the TV-based model is well suited to the recovery of some images and not of others. We eventually propose an alternative approach whose purpose is to handle the minimization of the minimum of several convex functionals. We propose for instance a variant of the original TV minimization problem that handles correctly some situations where TV fails.
Conference Paper
We show that regularization methods can be regarded as scale-spaces where the regularization parameter serves as scale. In analogy to nonlinear diffusion filtering we establish continuity with respect to scale, causality in terms of a maximum/minimum principle, simplification properties by means of Lyapunov functionals and convergence to a constant steady-state. We identify nonlinear regularization with a single implicit time step of a diffusion process. This implies that iterated regularization with small regularization parameters is a numerical realization of a diffusion filter. Numerical experiments in two and three space dimensions illustrate the scale-space behaviour of regularization methods.
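This identification is easy to reproduce in 1D: one linear Tikhonov step with parameter alpha solves (I + alpha L) u = u_prev, which is exactly one implicit (backward Euler) step of size alpha for linear diffusion, so iterating traces out a scale-space. A minimal sketch with Neumann-type boundaries:

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import spsolve

def tikhonov_scale_space(f, alpha=1.0, iterations=5):
    # Each step minimises ||u - u_prev||^2 + alpha ||D u||^2, i.e. solves
    # (I + alpha * D^T D) u = u_prev, with D a forward difference.
    n = len(f)
    D = diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n))
    L = D.T @ D                       # negative discrete Laplacian
    u = np.asarray(f, dtype=float)
    out = [u]
    for _ in range(iterations):
        u = spsolve((identity(n) + alpha * L).tocsr(), u)
        out.append(u)
    return out
```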
Article
A global edge detection algorithm based on variational regularization is presented and analysed. The algorithm can also be viewed as an anisotropic diffusion method. These two quite different methods are thereby unified from the original outlook. This puts anisotropic diffusion, as a method in early vision, on more solid grounds; it is just as well founded as the well-accepted standard regularization techniques. The unification also brings the anisotropic diffusion method an appealing sense of optimality, thereby intuitively explaining its extraordinary performance. The algorithm to be presented, moreover, has the following attractive properties:
1. It only requires the solution of a single boundary value problem over the entire image domain, almost always a very simple (rectangular) region.
2. It converges to the solution of interest.
The first of these properties implies very significant advantages over other existing regularization methods; the computation cost is typically cut by an order of magnitude or more. The second property represents considerable advantages over the existing diffusion methods; it removes the problem of deciding when to stop, as well as that of actually stopping the diffusion process.
Conference Paper
Many image processing problems are ill-posed and must be regularized. Usually, a roughness penalty is imposed on the solution. The difficulty is to avoid the smoothing of edges, which are very important attributes of the image. The authors first give sufficient conditions for the design of such an edge-preserving regularization. Under these conditions, it is possible to introduce an auxiliary variable whose role is twofold. Firstly, it marks the discontinuities and ensures their preservation from smoothing. Secondly, it makes the criterion half-quadratic. The optimization is then easier. The authors propose a deterministic strategy, based on alternate minimizations on the image and the auxiliary variable. This yields two algorithms, ARTUR and LEGEND. The authors apply these algorithms to the problem of SPECT reconstruction.
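The mechanism behind the auxiliary variable is the half-quadratic reformulation. In its multiplicative form, for suitable edge-preserving potentials $\varphi$ (with $\psi$ a conjugate-type term; notation ours, not the paper's),
\[
\varphi(t) = \min_{b > 0}\bigl(b\,t^{2} + \psi(b)\bigr),
\qquad
\mathcal{J}(u,b) = \|u - f\|^{2} + \lambda \sum_i \bigl(b_i\,(Du)_i^{2} + \psi(b_i)\bigr),
\]
so that $\mathcal{J}$ is quadratic in $u$ for fixed $b$ and separable in $b$ for fixed $u$. Alternating minimization, with the closed-form update $b = \varphi'(t)/(2t)$, is the template behind ARTUR-type algorithms.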
Article
We propose the minimization of a nonquadratic functional or, equivalently, a nonlinear diffusion model to smooth noisy image functions $g : \Omega \subset {\bf R}^n \to {\bf R}$ while preserving significant transitions of the data. The model is chosen such that important properties of the conventional quadratic-functional approach still hold: (1) existence of a unique solution continuously depending on the data $g$ and (2) stability of approximations using the standard finite-element method. Relations with other global approaches for the segmentation of image data are discussed. Numerical experiments with real data illustrate this approach.
Article
This book contains both a synthesis and a mathematical analysis of a wide set of algorithms and theories whose aim is the automatic segmentation of digital images as well as the understanding of visual perception. A common formalism for these theories and algorithms is obtained in a variational form. Thanks to this formalization, mathematical questions about the soundness of algorithms can be raised and answered. Perception theory has to deal with the complex interaction between regions and "edges" (or boundaries) in an image: in the variational segmentation energies, "edge" terms compete with "region" terms in a way which is supposed to impose regularity on both regions and boundaries. This fact was an experimental guess in perception phenomenology and computer vision until it was proposed as a mathematical conjecture by Mumford and Shah. The third part of the book presents a unified presentation of the evidence in favour of the conjecture. It is proved that the competition of one-dimensional and two-dimensional energy terms in a variational formulation cannot create fractal-like behaviour for the edges. The proof of regularity for the edges of a segmentation constantly involves concepts from geometric measure theory, which proves to be central in image processing theory. The second part of the book provides a fast and self-contained presentation of the classical theory of rectifiable sets (the "edges") and unrectifiable sets ("fractals").
Article
Tikhonov regularization with a modified total variation regularization functional is used to recover an image from noisy, blurred data. This approach is appropriate for image processing in that it does not place a priori smoothness conditions on the solution image. An efficient algorithm is presented for the discretized problem that combines a fixed point iteration to handle nonlinearity with a new, effective preconditioned conjugate gradient iteration for large linear systems. Reconstructions, convergence results, and a direct comparison with a fast linear solver are presented for a satellite image reconstruction application.
Article
We present a blind deconvolution algorithm based on the total variation (TV) minimization method proposed by Acar and Vogel (1994). The motivation for regularizing with the TV norm is that it is extremely effective for recovering edges of images as well as some blurring functions, e.g., motion blur and out-of-focus blur. An alternating minimization (AM) implicit iterative scheme is devised to recover the image and simultaneously identify the point spread function (PSF). Numerical results indicate that the iterative scheme is quite robust, converges very fast (especially for discontinuous blur), and both the image and the PSF can be recovered in the presence of high noise levels. Finally, we remark that PSFs without sharp edges, e.g., Gaussian blur, can also be identified through the TV approach.
Article
One popular method for the recovery of an ideal intensity image from corrupted or indirect measurements is regularization: minimize an objective function that enforces a roughness penalty in addition to coherence with the data. Linear estimates are relatively easy to compute but generally introduce systematic errors; for example, they are incapable of recovering discontinuities and other important image attributes. In contrast, nonlinear estimates are more accurate but are often far less accessible. This is particularly true when the objective function is nonconvex, and the distribution of each data component depends on many image components through a linear operator with broad support. Our approach is based on an auxiliary array and an extended objective function in which the original variables appear quadratically and the auxiliary variables are decoupled. Minimizing over the auxiliary array alone yields the original function so that the original image estimate can be obtained by joint minimization. This can be done efficiently by Monte Carlo methods, for example by FFT-based annealing using a Markov chain that alternates between (global) transitions from one array to the other. Experiments are reported in optical astronomy, with space telescope data, and computed tomography.
Article
A novel method of reconstruction from single-photon emission computerized tomography data is proposed. This method builds on the expectation-maximization (EM) approach to maximum likelihood reconstruction from emission tomography data, but aims instead at maximum posterior probability estimation, which takes account of prior belief about smoothness in the isotope concentration. A novel modification to the EM algorithm yields a practical method. The method is illustrated by an application to data from brain scans.
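For reference, the unmodified EM (MLEM) update that such methods start from can be sketched as follows; the system matrix A and data y are assumed given. Roughly speaking, the paper's smoothness prior enters as an extra term in the denominator of this update, which is not reproduced here.

```python
import numpy as np

def mlem(A, y, iterations=50):
    # Classical EM (MLEM) iteration for emission tomography:
    #   lam <- lam / (A^T 1) * A^T ( y / (A lam) )
    m, n = A.shape
    lam = np.ones(n)
    sens = A.T @ np.ones(m)            # sensitivity image A^T 1
    for _ in range(iterations):
        proj = A @ lam
        lam *= (A.T @ (y / np.maximum(proj, 1e-12))) / np.maximum(sens, 1e-12)
    return lam
```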
Article
The authors describe a model of nonlinear image filtering for noise reduction and edge enhancement using anisotropic diffusion. The method is designed to enhance not only edges, but corners and T junctions as well. The method roughly corresponds to a nonlinear diffusion process with backward heat flow across the strong edges. Such a process is ill-posed, making the results depend strongly on how the algorithm differs from the diffusion process. Two ways of modifying the equations using simulations on a variable grid are studied.
Article
A new definition of scale-space is suggested, and a class of algorithms used to realize a diffusion process is introduced. The diffusion coefficient is chosen to vary spatially in such a way as to encourage intraregion smoothing rather than interregion smoothing. It is shown that the "no new maxima should be generated at coarse scales" property of conventional scale space is preserved. As the region boundaries in the approach remain sharp, a high-quality edge detector which successfully exploits global information is obtained. Experimental results are shown on a number of images. Parallel hardware implementations are made feasible because the algorithm involves elementary, local operations replicated over the image.
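The two diffusivities commonly associated with this model, with a contrast parameter $\lambda$ separating smoothing from enhancement, are (up to the exact normalisation used in the paper)
\[
g_1\bigl(|\nabla u|^2\bigr) = \exp\Bigl(-\frac{|\nabla u|^2}{\lambda^2}\Bigr),
\qquad
g_2\bigl(|\nabla u|^2\bigr) = \frac{1}{1 + |\nabla u|^2/\lambda^2}.
\]
In both cases the flux $s\,g(s^2)$ increases for small gradients, so the equation smooths, and decreases beyond a contrast threshold of order $\lambda$, which produces the edge-enhancing, forward-backward behaviour.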
Article
A computational algorithm is proposed for image enhancement based on total variation minimization with constraints. This constrained minimization problem was introduced by Rudin et al. [13, 14, 15] to enhance blurred and noisy images. Our computational algorithm solves the constrained minimization problem directly by adapting the affine scaling method for the unconstrained ℓ1 problem [3]. The resulting computational scheme, when viewed as an image enhancement process, has the feature that it can be used in an interactive manner in situations where knowledge of the noise level is either unavailable or unreliable. This computational algorithm can be implemented with a conjugate gradient method. It is further demonstrated that the iterative enhancement process is efficient. Key words: image enhancement, image reconstruction, deconvolution, minimal total variation, affine scaling algorithm, projected gradient method.
Article
This paper gives an overview of scale-space and image enhancement techniques which are based on parabolic partial differential equations in divergence form. In the nonlinear setting this filter class makes it possible to integrate a-priori knowledge into the evolution. We sketch basic ideas behind the different filter models, discuss their theoretical foundations and scale-space properties, discrete aspects, suitable algorithms, generalizations, and applications.
Article
This paper presents an abstract analysis of bounded variation (BV) methods for ill-posed operator equations $Au = z$. Let \[T(u) \stackrel{\mathrm{def}}{=} \|Au - z\|^2 + \alpha J(u),\] where the penalty, or "regularization", parameter $\alpha > 0$ and the functional $J(u)$ is the BV norm or seminorm of $u$, also known as the total variation of $u$. Under mild restrictions on the operator $A$ and the functional $J(u)$, it is shown that the functional $T(u)$ has a unique minimizer which is stable with respect to certain perturbations in the data $z$, the operator $A$, the parameter $\alpha$, and the functional $J(u)$. In addition, convergence results are obtained which apply when these perturbations vanish and the regularization parameter is chosen appropriately. Key words: total variation, bounded variation, regularization, compact operator equations, ill-posed problems. AMS(MOS) subject classifications: 49J10, 49J27, 49J45, 49K40, 65R30.
Article
In total variation denoising, one attempts to remove noise from a signal or image by solving a nonlinear minimization problem involving a total variation criterion. Several approaches based on this idea have recently been shown to be very effective, particularly for denoising functions with discontinuities. This paper analyzes the convergence of an iterative method for solving such problems. The iterative method involves a "lagged diffusivity" approach in which a sequence of linear diffusion problems is solved. Global convergence in a finite-dimensional setting is established, and local convergence properties, including rates and their dependence on various parameters, are examined. Key words: denoising, total variation, convergence analysis. AMS(MOS) subject classifications: 49M05, 65K10.
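In 1D the lagged diffusivity iteration can be sketched in a few lines; beta is the usual smoothing of the TV term and all parameter values are illustrative.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import spsolve

def lagged_diffusivity_tv(z, lam=1.0, beta=1e-3, iterations=20):
    # Fixed point iteration for min ||u - z||^2 + lam * sum sqrt((Du)^2 + beta):
    # freeze the diffusivity w = 1/sqrt((Du)^2 + beta) and solve the
    # resulting linear diffusion problem (I + lam D^T W D) u = z.
    n = len(z)
    D = diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1],
              shape=(n - 1, n)).tocsr()
    u = np.asarray(z, dtype=float)
    for _ in range(iterations):
        w = 1.0 / np.sqrt((D @ u)**2 + beta)   # lagged diffusivity
        W = diags(w)
        u = spsolve((identity(n) + lam * D.T @ W @ D).tocsr(), z)
    return u
```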