Article

Digital Image Reconstruction: Deblurring and Denoising

Los Alamos National Laboratory, Los Alamos, New Mexico, United States
Annual Review of Astronomy and Astrophysics (Impact Factor: 24.04). 08/2005; 43(1):139-194. DOI: 10.1146/annurev.astro.43.112904.104850

ABSTRACT: Digital image reconstruction is a robust means by which the underlying images hidden in blurry and noisy data can be revealed. The main challenge is sensitivity to measurement noise in the input data, which can be magnified strongly, resulting in large artifacts in the reconstructed image. The cure is to restrict the permitted images. This review summarizes image reconstruction methods in current use. Progressively more sophisticated image restrictions have been developed, including (a) filtering the input data, (b) regularization by global penalty functions, and (c) spatially adaptive methods that impose a variable degree of restriction across the image. The most reliable reconstruction is the most conservative one, which seeks the simplest underlying image consistent with the input data. Simplicity is context-dependent, but for most imaging applications, the simplest reconstructed image is the smoothest one. Imposing the maximum, spatially adaptive smoothing permitted by the data results in t...
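The review is descriptive and contains no code, but the noise-amplification problem and the simplest global-penalty cure it describes are easy to demonstrate numerically. The sketch below is a minimal illustration, not the authors' method: it compares a nearly unregularized inverse filter with a Tikhonov-penalized, Wiener-like filter. The function name `wiener_deconvolve`, the `reg` penalty weight, and the toy test image are assumptions introduced here for illustration.

```python
import numpy as np


def wiener_deconvolve(blurred, psf, reg=1e-2):
    """Tikhonov-regularized (Wiener-like) inverse filter in the Fourier domain.

    `reg` is the global penalty weight: as reg -> 0 this approaches the naive
    inverse filter, which strongly amplifies noise wherever the PSF is small.
    """
    # Embed the PSF in an image-sized array and center it on the origin so
    # that multiplication in Fourier space implements circular convolution.
    padded = np.zeros_like(blurred, dtype=float)
    padded[:psf.shape[0], :psf.shape[1]] = psf
    padded = np.roll(padded, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))

    H = np.fft.fft2(padded)
    Y = np.fft.fft2(blurred)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + reg)   # regularized inverse
    return np.real(np.fft.ifft2(X))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = np.zeros((64, 64))
    truth[20:30, 25:40] = 1.0                      # simple underlying image
    psf = np.outer(np.hanning(7), np.hanning(7))
    psf /= psf.sum()                               # smooth, normalized blur

    # Forward model: circular convolution with the PSF plus Gaussian noise.
    padded = np.zeros_like(truth)
    padded[:7, :7] = psf
    padded = np.roll(padded, (-3, -3), axis=(0, 1))
    noisy = np.real(np.fft.ifft2(np.fft.fft2(truth) * np.fft.fft2(padded)))
    noisy += 0.01 * rng.standard_normal(truth.shape)

    for reg in (1e-12, 1e-3, 1e-1):                # 1e-12 ~ unregularized
        est = wiener_deconvolve(noisy, psf, reg=reg)
        rms = np.sqrt(np.mean((est - truth) ** 2))
        print(f"reg={reg:g}  rms error={rms:.4f}")
```

Driving `reg` toward zero reproduces the artifact problem described in the abstract: frequencies at which the blur kernel is small amplify the measurement noise and the reconstruction error explodes, while a modest global penalty suppresses the noise at the cost of some smoothing.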

Citations

    • "Methods of the second category, that are essentially some variants of the least squares, only use the deterministic spatial or spectral structures of the image such as smoothness for regularization. For a comprehensive survey of classic deconvolution methods for image restoration and reconstruction we refer the interested readers to [1] and [2]. "
    ABSTRACT: We investigate the problem of reconstructing signals and images from a subsampled convolution of masked snapshots and a known filter. The problem is studied in the context of coded imaging systems, where the diversity provided by the random masks makes the deconvolution problem significantly better conditioned than it would be from a set of direct measurements. We start by studying the conditioning of the forward linear measurement operator that describes the system in terms of the number of masks $K$, the dimension of the image $L$, the number of sensors $N$, and certain characteristics of the blur kernel. We show that stable deconvolution is possible when $KN \geq L\log L$, meaning that the total number of sensor measurements is within a logarithmic factor of the image size. Next, we consider the scenario where the target image is known to be sparse. We show that under mild conditions on the blurring kernel, the linear system satisfies a restricted isometry property when the number of masks is within a logarithmic factor of the number of active components, making the image recoverable using any one of a number of sparse recovery techniques.
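As a rough numerical companion to the conditioning question raised in the abstract above (a sketch under assumed toy dimensions, not the authors' analysis), the snippet below stacks the per-mask measurement blocks D C diag(m_k) for a small 1-D signal and reports how the smallest singular value and condition number of the stacked operator behave as the number of masks K grows. The helper name `coded_snapshot_operator`, the short blur kernel, and the specific sizes are illustrative choices.

```python
import numpy as np


def coded_snapshot_operator(h, masks, keep):
    """Stack the per-mask blocks D C diag(m_k) into one tall measurement matrix.

    h     : known length-L blur kernel (applied by circular convolution)
    masks : array of shape (K, L), one random mask per snapshot
    keep  : indices of the N sensor samples retained from each snapshot
    """
    L = h.size
    # Circulant matrix C implementing circular convolution with h.
    C = np.stack([np.roll(h, j) for j in range(L)], axis=1)
    blocks = [C[keep, :] * m[np.newaxis, :] for m in masks]   # D C diag(m_k)
    return np.vstack(blocks)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    L, N = 64, 16
    # Short blur kernel with a reasonably flat spectrum.
    h = np.zeros(L)
    h[:5] = [0.5, 0.25, 0.12, 0.08, 0.05]
    keep = np.sort(rng.choice(L, size=N, replace=False))      # fixed subsampling

    for K in (4, 8, 16, 32):
        masks = rng.choice([-1.0, 1.0], size=(K, L))          # random +/-1 masks
        A = coded_snapshot_operator(h, masks, keep)
        s = np.linalg.svd(A, compute_uv=False)
        print(f"K={K:2d}  KN={K * N:4d}  sigma_min={s[-1]:.3e}  cond={s[0] / s[-1]:.2e}")
```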
    • "where N represents the number of data points, as discussed in [19], and τ is a fixed positive number. "
    ABSTRACT: In this paper we propose a new statistical stopping rule for constrained maximum-likelihood iterative algorithms applied to ill-posed inverse problems. To this end, we extend the definition of Tikhonov regularization to a statistical framework and prove that applying the proposed stopping rule to the Image Space Reconstruction Algorithm (ISRA) in the Gaussian case and to Expectation Maximization (EM) in the Poisson case leads to well-defined regularization methods according to the given definition. We also prove that, if an inverse problem is genuinely ill-posed in the sense of Tikhonov, the same definition is not satisfied when ISRA and EM are terminated by classical stopping rules such as Morozov's discrepancy principle, Pearson's test, and the Poisson discrepancy principle. The stopping rule is illustrated in the case of image reconstruction from data recorded by the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI). First, using a simulated image consisting of structures analogous to those of a real solar flare, we validate the fidelity and accuracy with which the proposed stopping rule recovers the input image. Second, the robustness of the method is compared with that of the classical stopping rules, and its advantages are shown for real data recorded by RHESSI during two different flaring events.
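For context, the classical iteration and the kind of discrepancy-based stop that this abstract argues against are easy to write down. The sketch below shows the standard EM / Richardson-Lucy iteration for Poisson data with a Morozov-style Poisson-discrepancy stop; it is not the statistical stopping rule proposed in the paper, and the function name `em_poisson` and all problem sizes are assumptions made here.

```python
import numpy as np


def em_poisson(A, y, n_iter=500, tol_factor=1.0):
    """Classical EM / Richardson-Lucy iteration for y ~ Poisson(Ax), x >= 0,
    stopped when the Poisson discrepancy falls to about the number of data
    points (a Morozov-style rule, NOT the statistical rule proposed above).
    """
    x = np.full(A.shape[1], y.mean() / max(A.sum(axis=0).mean(), 1e-12))
    ones_backproj = A.T @ np.ones_like(y)
    for k in range(n_iter):
        Ax = A @ x
        x = x * (A.T @ (y / np.maximum(Ax, 1e-12))) / np.maximum(ones_backproj, 1e-12)
        Ax = A @ x
        # Poisson discrepancy: twice the Kullback-Leibler divergence of y from Ax.
        with np.errstate(divide="ignore", invalid="ignore"):
            term = np.where(y > 0, y * np.log(y / np.maximum(Ax, 1e-12)), 0.0)
        discrepancy = 2.0 * np.sum(term + Ax - y)
        if discrepancy <= tol_factor * y.size:
            break
    return x, k + 1, discrepancy


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Toy 1-D "image" blurred by a smooth kernel and observed with Poisson noise.
    truth = np.zeros(80)
    truth[25:30] = 50.0
    truth[55] = 120.0
    kernel = np.exp(-0.5 * (np.arange(-6, 7) / 2.0) ** 2)
    kernel /= kernel.sum()
    A = np.stack([np.convolve(np.eye(80)[j], kernel, mode="same") for j in range(80)], axis=1)
    y = rng.poisson(A @ truth).astype(float)

    x_hat, iters, d = em_poisson(A, y)
    print(f"stopped after {iters} iterations, discrepancy {d:.1f} vs {y.size} data points")
```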
    • "In this paper, we consider making observations of the form y = Af + n, where f is a discretized version of f , A is a (discrete) linear operator that may not be invertible, and n is additive noise that corrupts our observations. For instance, y might correspond to tomographic projections in tomography [19], interferometric measurements in radar interferometry [33], multiple blurred, low-resolution, dithered snapshots in astronomy [31], or random projections in compressed sensing systems [1] [7] [8] [11] [38]. Our goal in this y = Af + n setting is to perform level set estimation without an intermediate step involving time-consuming reconstruction of f . "
    ABSTRACT: Estimation of the level set of a function (i.e., regions where the function exceeds some value) is an important problem with applications in digital elevation mapping, medical imaging, astronomy, etc. In many applications, the function of interest is not observed directly. Rather, it is acquired through (linear) projection measurements, such as tomographic projections, interferometric measurements, coded-aperture measurements, and random projections associated with compressed sensing. This paper describes a new methodology for rapid and accurate estimation of the level set from such projection measurements. The key defining characteristic of the proposed method, called the projective level set estimator, is its ability to estimate the level set from projection measurements without an intermediate reconstruction step. This leads to significantly faster computation relative to heuristic "plug-in" methods that first estimate the function, typically with an iterative algorithm, and then threshold the result. The paper also includes a rigorous theoretical analysis of the proposed method, which utilizes recent results from the non-asymptotic theory of random matrices and from the literature on concentration of measure, and characterizes the estimator's performance in terms of the geometry of the measurement operator and the 1-norm of the discretized function.
    SIAM Journal on Imaging Sciences (Impact Factor: 2.87). 09/2012; 6(4). DOI: 10.1137/120891927
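To make the contrast concrete, here is a minimal sketch of the heuristic two-step "plug-in" baseline that the abstract says the projective estimator avoids: reconstruct f from y = Af + n (here with a simple ridge-regularized least-squares solve) and then threshold at the level gamma. The helper name `plugin_level_set`, the ridge solver, and the toy random-projection setup are assumptions for illustration, not the paper's method.

```python
import numpy as np


def plugin_level_set(A, y, gamma, ridge=1e-2):
    """Baseline "plug-in" level set estimate: reconstruct f from y = Af + n
    with a ridge-regularized least-squares solve, then threshold at gamma.
    This is the two-step approach the projective estimator is designed to avoid.
    """
    L = A.shape[1]
    f_hat = np.linalg.solve(A.T @ A + ridge * np.eye(L), A.T @ y)
    return f_hat >= gamma          # boolean mask of the estimated level set


if __name__ == "__main__":
    rng = np.random.default_rng(3)
    L, M, gamma = 100, 60, 0.5
    f = np.clip(np.sin(np.linspace(0, 3 * np.pi, L)), 0, None)  # toy elevation map
    A = rng.standard_normal((M, L)) / np.sqrt(M)                # random projections
    y = A @ f + 0.05 * rng.standard_normal(M)

    est = plugin_level_set(A, y, gamma)
    true_set = f >= gamma
    print(f"fraction of locations misclassified: {np.mean(est != true_set):.3f}")
```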