Chapter

Introduction to Inverse Problems in Imaging

Article
Full-text available
This contribution deals with the focusing and time–depth conversion of GPR data in a two-layer scenario. In a layered scenario, the different propagation velocities of the electromagnetic waves in the different layers can distort the reconstruction. In particular, with common data processing it is not possible to focus the targets embedded in the upper and in the deeper layer at the same time. We present a strategy for mitigating this problem. Its strengths are its simplicity and the fact that it only requires a commercial code for the GPR data processing plus a few steps that are easily implemented with custom code. This work extends a contribution by the same authors at the 20th International Conference on Ground Penetrating Radar held in Changchun, China, in June 2024, and the conclusions are updated with three-dimensional numerical results, including depth slices determined directly in the spatial domain. Both numerical and experimental data are shown.
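As a minimal illustration of the underlying geometry (not the authors' processing chain), the two-way travel time can be converted to depth layer by layer, using a different propagation velocity above and below the interface; the velocities and the interface time below are made-up example values.

```python
import numpy as np

def two_layer_depth(t, v1, v2, t1):
    """Convert two-way travel time t [s] to depth [m] in a two-layer medium.

    v1, v2 : propagation velocities in the upper and lower layer [m/s]
    t1     : two-way travel time at the layer interface [s]
    """
    t = np.asarray(t, dtype=float)
    depth_upper = v1 * t / 2.0                          # target in the upper layer
    depth_lower = v1 * t1 / 2.0 + v2 * (t - t1) / 2.0   # interface depth + lower-layer part
    return np.where(t <= t1, depth_upper, depth_lower)

# Example: faster upper layer over a slower, wetter layer (illustrative velocities)
print(two_layer_depth([5e-9, 20e-9], v1=1.2e8, v2=0.7e8, t1=10e-9))
```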
Article
Breast cancer is one of the leading causes of death among women. As an alternative to the gold-standard techniques for breast cancer diagnosis, microwave imaging has been proposed by the research community, and many microwave systems have been designed, mainly to work at low microwave frequencies. Based both on the results of recent dielectric characterization campaigns on ex-vivo human breast tissues up to 50 GHz and on promising feasibility studies of mm-wave imaging systems, in this article we propose and test inverse scattering techniques as an effective tool to process mm-wave data to image breast cancer. Unlike the radar techniques adopted so far in conjunction with mm-wave imaging systems, inverse scattering techniques turn out to be more versatile and robust with respect to a reduction in the amount of data, and are potentially also able to characterize the anomaly in terms of electromagnetic properties. In particular, two image reconstruction techniques, the Linear Sampling Method and the Born Approximation, are proposed and compared against both simulated and experimental data.
Article
Full-text available
This paper investigates the estimation of the interaction function for a class of McKean–Vlasov stochastic differential equations. The estimation is based on observations of the associated particle system at time T, considering the scenario where both the time horizon T and the number of particles N tend to infinity. Our proposed method recovers polynomial rates of convergence for the resulting estimator. This is achieved under the assumption of exponentially decaying tails for the interaction function. Additionally, we conduct a thorough analysis of the transform of the associated invariant density as a complex function, providing essential insights for our main results.
Chapter
Medical imaging problems, such as magnetic resonance imaging, can typically be modeled as inverse problems. A novel methodological approach that has already proven to be highly effective and widely applicable is based on the assumption that most real-life images are intrinsically of low-dimensional nature. This sparsity property can be revealed by representation systems from the area of applied harmonic analysis, such as wavelets or shearlets. The inverse problem itself is then solved by sparse regularization, which in certain situations is referred to as compressed sensing. This chapter serves as an introduction to and a survey of mathematical methods for medical imaging problems, with a specific focus on sparsity-based methods. The effectiveness of the presented methods is demonstrated with a small case study from sparse parallel magnetic resonance imaging.
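As a rough sketch of the sparse-regularization idea discussed here, the following iterative soft-thresholding (ISTA) loop solves an l1-regularized least-squares problem; for brevity it assumes sparsity directly in the coefficient domain rather than in a wavelet or shearlet frame, and the operator, noise level, and regularization weight are illustrative.

```python
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, b, lam, n_iter=200):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 by proximal gradient (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128))         # underdetermined "sensing" operator
x_true = np.zeros(128); x_true[[5, 40, 90]] = [1.0, -2.0, 1.5]
b = A @ x_true + 0.01 * rng.standard_normal(64)
x_hat = ista(A, b, lam=0.1)
print(np.round(x_hat[[5, 40, 90]], 2))     # recovered nonzero coefficients
```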
Article
Full-text available
This paper deals with the imaging problem from data collected by means of a microwave photonics-based distributed radar network. The radar network is based on a centralized architecture composed of one central unit (CU) and two transmitting and receiving dual-band remote radar peripherals (RPs), and it is capable of collecting monostatic and multistatic phase-coherent data. The imaging is herein formulated as a linear inverse scattering problem and solved in a regularized way through the truncated singular value decomposition inversion scheme. Specifically, two different imaging schemes, based on an incoherent fusion of the tomographic images or on fully coherent data processing, are developed and compared. Experimental tests carried out in a port scenario for imaging both a stationary and a moving target are reported to validate the imaging approach.
Article
Full-text available
Motivated by the recent work (Donatelli et al., Electron. Trans. Numer. Anal. 59, 157–178, 2023) on a preconditioned MINRES for flipped linear systems in imaging with Dirichlet boundary conditions (BCs), in this note we extend the scope of that research to include more precise BCs such as reflective and anti-reflective ones. We prove spectral results for the matrix-sequences associated with the original deblurring problem incorporating the considered BCs. More precisely, the resulting matrix-sequences are real quasi-symmetric, i.e., real symmetric up to zero-distributed perturbations, and this justifies the use of MINRES in the current setting. The theoretical spectral analysis is supported by a wide variety of numerical experiments concerning the visualization of the spectra of the original matrices in various ways. We also report numerical tests regarding the convergence speed and regularization features of the associated GMRES and MINRES methods. Conclusions and open problems end the present study.
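A toy sketch of the "flipping" trick behind this line of work: for a nonsymmetric Toeplitz matrix, multiplying by the anti-identity (row flip) produces a Hankel, hence symmetric, matrix, so MINRES becomes applicable. The matrix and right-hand side are illustrative; no boundary-condition structure or preconditioning is modeled.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import minres

n = 200
col = 0.5 ** np.arange(n)          # illustrative nonsymmetric blurring-type Toeplitz matrix
row = 0.8 ** np.arange(n)
A = toeplitz(col, row)

Y = np.flipud(np.eye(n))           # anti-identity ("flip") matrix
YA = Y @ A                         # Hankel, hence symmetric
print("flipped system symmetric:", np.allclose(YA, YA.T))

b = A @ np.ones(n)                 # right-hand side with known solution x = 1
x, info = minres(YA, Y @ b, maxiter=2000)
print("minres info:", info, " residual:", np.linalg.norm(A @ x - b))
```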
Chapter
Large least-squares problems can be sensitive to perturbations in the data, which can be due either to model approximation or to superimposed errors (noise). These unwanted perturbations in the data can lead to substantial errors in the solution, which numerical analysts have been trying to estimate and control over the past decades, resulting in a body of knowledge called numerical linear algebra. One basic characterization of sensitivity to noise in the data is an index called numerical conditioning. In this chapter we discuss the numerical conditioning of linear least-squares problems and relate it to a more general characterization of conditioning of linear inverse problems, of which linear least squares is just a particular example. There are two main avenues to attack ill-conditioned linear least-squares problems, both based on the idea of regularization: one based on quadratic regularization and the other based on non-smooth regularization, called the Lasso. We briefly discuss the merits of these two techniques and conclude with a summary of a third regularization technique based on local regression via local polynomial expansion using splines.
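A small numerical illustration of conditioning and of how quadratic (Tikhonov) regularization tames it, written in terms of SVD filter factors; the test matrix, noise level, and regularization parameter are arbitrary choices, and the Lasso and local-regression alternatives are not shown.

```python
import numpy as np

rng = np.random.default_rng(1)
# Ill-conditioned least-squares matrix: rapidly decaying singular values
U, _ = np.linalg.qr(rng.standard_normal((100, 50)))
V, _ = np.linalg.qr(rng.standard_normal((50, 50)))
s = 10.0 ** np.linspace(0, -8, 50)
A = U @ np.diag(s) @ V.T
b = A @ np.ones(50) + 1e-6 * rng.standard_normal(100)

print("condition number:", np.linalg.cond(A))

# Plain least squares amplifies the noise through the small singular values...
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]
# ...while Tikhonov regularization damps them with filter factors s^2 / (s^2 + lam)
lam = 1e-6
x_tik = V @ (s / (s**2 + lam) * (U.T @ b))
print("error, plain LS: ", np.linalg.norm(x_ls - 1))
print("error, Tikhonov: ", np.linalg.norm(x_tik - 1))
```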
Article
Full-text available
Proximal–gradient methods are widely employed tools in imaging that can be accelerated by adopting variable metrics and/or extrapolation steps. One crucial issue is the inexact computation of the proximal operator, often implemented through a nested primal–dual solver, which represents the main computational bottleneck whenever an increasing accuracy in the computation is required. In this paper, we propose a nested primal–dual method for the efficient solution of regularized convex optimization problems. Our proposed method approximates a variable metric proximal–gradient step with extrapolation by performing a prefixed number of primal–dual iterates, while adjusting the steplength parameter through an appropriate backtracking procedure. Choosing a prefixed number of inner iterations allows the algorithm to keep the computational cost per iteration low. We prove the convergence of the iterates sequence towards a solution of the problem, under a relaxed monotonicity assumption on the scaling matrices and a shrinking condition on the extrapolation parameters. Furthermore, we investigate the numerical performance of our proposed method by equipping it with a scaling matrix inspired by the Iterated Tikhonov method. The numerical results show that the combination of such scaling matrices and Nesterov-like extrapolation parameters yields an effective acceleration towards the solution of the problem.
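A simplified sketch of the nested idea on a 1D TV-regularized least-squares problem: each outer proximal–gradient step with extrapolation calls a prefixed number of inner dual projected-gradient iterations to approximate the TV proximal operator. This is a vanilla illustration under assumed step sizes; the variable metric, backtracking, and the Iterated-Tikhonov-inspired scaling of the paper are not modeled.

```python
import numpy as np

def D(x):      # forward differences
    return np.diff(x)

def Dt(p):     # adjoint of the forward-difference operator
    out = np.zeros(p.size + 1)
    out[0], out[1:-1], out[-1] = -p[0], p[:-1] - p[1:], p[-1]
    return out

def inexact_prox_tv(z, lam, n_inner=10):
    """Approximate prox of lam*||Dx||_1 at z with a few dual projected-gradient steps."""
    p = np.zeros(z.size - 1)
    for _ in range(n_inner):
        p = np.clip(p + 0.25 * D(z - Dt(p)), -lam, lam)   # step 0.25 <= 1/||D||^2
    return z - Dt(p)

def nested_prox_grad(A, b, lam, n_outer=100, n_inner=10):
    L = np.linalg.norm(A, 2) ** 2
    x = x_old = np.zeros(A.shape[1]); t = 1.0
    for _ in range(n_outer):
        t_new = 0.5 * (1 + np.sqrt(1 + 4 * t * t))
        y = x + (t - 1) / t_new * (x - x_old)             # extrapolation step
        x_old, t = x, t_new
        z = y - A.T @ (A @ y - b) / L                     # gradient step
        x = inexact_prox_tv(z, lam / L, n_inner)          # inexact proximal step
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((80, 120))
x_true = np.repeat([0.0, 1.0, -1.0, 0.5], 30)             # piecewise-constant signal
b = A @ x_true + 0.05 * rng.standard_normal(80)
print(np.round(nested_prox_grad(A, b, lam=0.5)[::30], 2))
```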
Article
Full-text available
This letter deals with full 3D imaging from contactless multi-monostatic Ground Penetrating Radar data, focusing on the effect of the measurement configuration on the reconstruction performance. The imaging is formulated as a linear inverse scattering problem, and the Truncated Singular Value Decomposition is adopted to obtain the regularized solution. A criterion to determine a suitable spacing among the measurement points is provided, and the effect of data undersampling on the imaging results is also investigated. Numerical results based on the system point spread function are provided to verify the validity of the proposed criterion for the estimation of the non-redundant measurement spacing. Finally, reconstruction results from a controlled laboratory experimental test validate the imaging approach as well as the derived data sampling criterion.
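A compact sketch of TSVD-regularized inversion of a discretized linear operator; the smoothing kernel below is a generic ill-conditioned stand-in for the actual GPR scattering operator, and the truncation index is chosen by inspecting the singular-value decay.

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated-SVD solution of A x = b keeping the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

rng = np.random.default_rng(3)
n = 60
# Smoothing (severely ill-conditioned) forward operator as a stand-in for the scattering kernel
grid = np.linspace(0, 1, n)
A = np.exp(-((grid[:, None] - grid[None, :]) ** 2) / 0.005)
x_true = (np.abs(grid - 0.3) < 0.05).astype(float)          # compact, point-like target
b = A @ x_true + 1e-3 * rng.standard_normal(n)

s = np.linalg.svd(A, compute_uv=False)
k = int(np.sum(s > 1e-2 * s[0]))                             # truncation from singular-value decay
print("kept", k, "of", n, "singular values")
print("reconstruction error:", np.linalg.norm(tsvd_solve(A, b, k) - x_true))
```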
Article
Full-text available
Reduced-order models (ROMs) have achieved a lot of success in reducing the computational cost of traditional numerical methods across many disciplines. In fluid dynamics, ROMs have been successful in providing efficient and relatively accurate solutions for the numerical simulation of laminar flows. For convection-dominated (e.g., turbulent) flows, however, standard ROMs generally yield inaccurate results, usually affected by spurious oscillations. Thus, ROMs are usually equipped with numerical stabilization or closure models in order to account for the effect of the discarded modes. The literature on ROM closures and stabilizations is large and growing fast. In this paper, instead of reviewing all the ROM closures and stabilizations, we took a more modest step and focused on one particular type of ROM closure and stabilization that is inspired by large eddy simulation (LES), a classical strategy in computational fluid dynamics (CFD). These ROMs, which we call LES-ROMs, are extremely easy to implement, very efficient, and accurate. Indeed, LES-ROMs are modular and generally require minimal modifications to standard (“legacy”) ROM formulations. Furthermore, the computational overhead of these modifications is minimal. Finally, carefully tuned LES-ROMs can accurately capture the average physical quantities of interest in challenging convection-dominated flows in science and engineering applications. LES-ROMs are constructed by leveraging spatial filtering, which is the same principle used to build classical LES models. This ensures a modeling consistency between LES-ROMs and the approaches that generated the data used to train them. It also “bridges” two distinct research fields (LES and ROMs) that have been disconnected until now. This paper is a review of LES-ROMs, with a particular focus on the LES concepts and models that enable the construction of LES-inspired ROMs and the bridging of LES and reduced-order modeling. This paper starts with a description of a versatile LES strategy called evolve–filter–relax (EFR) that has been successfully used as a full-order method for both incompressible and compressible convection-dominated flows. We present evidence of this success. We then show how the EFR strategy, and spatial filtering in general, can be leveraged to construct LES-ROMs (e.g., EFR-ROM). Several applications of LES-ROMs to the numerical simulation of incompressible and compressible convection-dominated flows are presented. Finally, we draw conclusions and outline several research directions and open questions in LES-ROM development. While we do not claim this review to be comprehensive, we certainly hope it serves as a brief and friendly introduction to this exciting research area, which we believe has a lot of potential in the practical numerical simulation of convection-dominated flows in science, engineering, and medicine.
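A bare-bones illustration of one evolve–filter–relax (EFR) step on a 1D periodic advection problem: evolve with an explicit upwind step, filter with a discrete differential filter, and relax by blending the evolved and filtered fields. The grid, filter radius delta, and relaxation parameter chi are arbitrary choices, not values from the paper.

```python
import numpy as np

n, dx, dt, c = 256, 1.0 / 256, 1e-3, 1.0       # grid, time step, advection speed
delta, chi = 2 * dx, 0.1                       # filter radius and relaxation parameter

x = np.linspace(0, 1, n, endpoint=False)
u = np.exp(-200 * (x - 0.5) ** 2)              # initial condition

# Periodic discrete Laplacian and differential filter (I - delta^2 * Lap) u_bar = u
lap = (-2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
       + np.eye(n, k=n - 1) + np.eye(n, k=-(n - 1))) / dx**2
F = np.linalg.inv(np.eye(n) - delta**2 * lap)

def efr_step(u):
    u_evolved = u - dt * c * (u - np.roll(u, 1)) / dx     # evolve: explicit upwind step
    u_filtered = F @ u_evolved                             # filter: differential filter
    return (1 - chi) * u_evolved + chi * u_filtered        # relax: convex combination

for _ in range(100):
    u = efr_step(u)
print(float(u.max()))
```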
Article
In this paper, we establish existence, uniqueness, and convergence results for approximating fixed points using a Krasnosel’skii iterative process for generalized Bianchini mappings in Banach spaces. Additionally, we demonstrate the practical applications of our main fixed point theorems by solving variational inequality problems, split feasibility problems, and certain linear systems of equations.
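A minimal sketch of the Krasnosel'skii iteration x_{n+1} = (1 - lam) x_n + lam T(x_n), here applied to a toy affine map whose fixed point solves a small linear system; the map, lam, and stopping rule are illustrative and unrelated to the specific Bianchini-type mappings studied in the paper.

```python
import numpy as np

def krasnoselskii(T, x0, lam=0.5, tol=1e-10, max_iter=10_000):
    """Fixed-point iteration x_{n+1} = (1 - lam) x_n + lam T(x_n)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = (1 - lam) * x + lam * T(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Toy example: solve A x = b via the fixed point of T(x) = x - A x + b (eigenvalues of A in (0, 2))
A = np.array([[1.0, 0.3], [0.3, 0.8]])
b = np.array([1.0, 2.0])
x_fix = krasnoselskii(lambda x: x - A @ x + b, x0=np.zeros(2))
print(x_fix, np.linalg.solve(A, b))    # the two should agree
```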
Article
Label-free quantitative phase imaging can potentially measure cellular dynamics with minimal perturbation, motivating efforts to develop faster and more sensitive instrumentation. We characterize fast, single-shot quantitative phase gradient microscopy (ss-QPGM) that simultaneously acquires the multiple polarization components required to reconstruct phase images. We integrate a computationally efficient least-squares algorithm to provide real-time, video-rate imaging (up to 75 frames/s). The developed instrument was used to observe changes in cellular morphology and correlate these to molecular measures commonly obtained by staining.
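The instrument reconstructs phase from polarization-split measurements; the core least-squares step of integrating measured phase gradients can be sketched with a periodic FFT solver of the normal equations, as below. The difference kernels, boundary assumptions, and test surface are illustrative and not the authors' exact algorithm.

```python
import numpy as np

def integrate_gradients(gx, gy):
    """Least-squares phase from gradient fields gx, gy (periodic boundary, FFT solver)."""
    ny, nx = gx.shape
    kx = np.fft.fftfreq(nx)[None, :]
    ky = np.fft.fftfreq(ny)[:, None]
    dx_hat = np.exp(2j * np.pi * kx) - 1.0          # transfer function of forward difference in x
    dy_hat = np.exp(2j * np.pi * ky) - 1.0
    denom = np.abs(dx_hat) ** 2 + np.abs(dy_hat) ** 2
    denom[0, 0] = 1.0                               # the mean (DC) of the phase is undetermined
    num = np.conj(dx_hat) * np.fft.fft2(gx) + np.conj(dy_hat) * np.fft.fft2(gy)
    phi_hat = num / denom
    phi_hat[0, 0] = 0.0
    return np.real(np.fft.ifft2(phi_hat))

# Synthetic smooth phase and its forward-difference gradients
y, x = np.mgrid[0:128, 0:128] / 128.0
phi = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)
gx = np.roll(phi, -1, axis=1) - phi
gy = np.roll(phi, -1, axis=0) - phi
phi_rec = integrate_gradients(gx, gy)
print(np.max(np.abs(phi_rec - (phi - phi.mean()))))   # close to machine precision
```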
Article
Full-text available
Microwaves can safely and non-destructively illuminate and penetrate dielectric materials, making them an attractive solution for various medical tasks, including detection, diagnosis, classification, and monitoring. Their inherent electromagnetic properties, portability, cost-effectiveness, and the growth in computing capabilities have encouraged the development of numerous microwave sensing and imaging systems in the medical field, with the potential to complement or even replace current gold-standard methods. This review aims to provide a comprehensive update on the latest advances in medical applications of microwaves, particularly focusing on near-field ones working within the 1–15 GHz frequency range. It specifically examines significant strides in the development of clinical devices for brain stroke diagnosis and classification, breast cancer screening, and continuous blood glucose monitoring. The technical implementation and algorithmic aspects of prototypes and devices are discussed in detail, including the transceiver systems, radiating elements (such as antennas and sensors), and the imaging algorithms. Additionally, it provides an overview of other promising cutting-edge microwave medical applications, such as the detection of knee injuries and colon polyps, torso scanning, and image-based monitoring of thermal therapy interventions. Finally, the review discusses the challenges of achieving clinical engagement with microwave-based technologies and explores future perspectives.
Article
Full-text available
The last years have witnessed significant developments in image acquisition systems and in algorithms for extracting information from them. Nevertheless, in many scenarios, several factors can hinder the recovery of useful data from images. This is especially true and important in forensic applications, where images are often accidentally captured by an imaging system not engineered for that specific acquisition (for example, the surveillance system designed to monitor the entrance of a bank may accidentally capture the license plate of a vehicle passing outside the bank). Therefore, the acquired images often need to be processed to facilitate extracting information from them. When facing a combination of several impairment factors, such as blur and perspective distortion, several image restoration algorithms must be applied. It is then necessary to choose a restoration order, that is, the order in which single restoration algorithms are chained together to obtain the enhanced image. This study aims to understand whether such an order may impact the final result. Of course, there exists a wide variety of image impairments; in this study, we focus on the case of an image affected by a combination of optical/motion blur, perspective distortion, and additive noise, which are all widespread artifacts in forensic image applications. To answer the question about the importance of choosing one restoration workflow over another, we first model each considered defect and its restoration operator, and then analyze and compare the effects of the composition of such operators on the restored output. Such a comparison is made from both a mathematical and an experimental point of view, using both images with synthetically generated impairments and pictures with real degradations. The results show that the restoration order can significantly affect the results, especially when the defects are severe.
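The order dependence can already be seen at the level of the degradation operators themselves: an isotropic blur and a geometric warp do not commute, so undoing them in the wrong order leaves a residual error. A small sketch, with an anisotropic affine warp standing in for the perspective distortion (an assumption made for brevity):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, affine_transform

rng = np.random.default_rng(4)
img = gaussian_filter(rng.standard_normal((128, 128)), 3)    # smooth synthetic test image

warp = np.array([[1.0, 0.3],                                 # shear + anisotropic scaling,
                 [0.0, 0.7]])                                # a crude stand-in for perspective

a = gaussian_filter(affine_transform(img, warp), sigma=2)    # warp first, then blur
b = affine_transform(gaussian_filter(img, sigma=2), warp)    # blur first, then warp

core = (slice(32, 96), slice(32, 96))                        # ignore border effects
rel_diff = np.linalg.norm(a[core] - b[core]) / np.linalg.norm(a[core])
print(f"relative difference between the two orders: {rel_diff:.2%}")
```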
Article
Full-text available
The paper deals with a combined time–depth conversion strategy able to improve the reconstruction of voids embedded in an opaque medium, such as cavities, caves, empty hypogeal rooms, and similar targets. The combined time–depth conversion accounts for the propagation velocity of the electromagnetic waves both in free space and in the embedding medium, and it allows better imaging and interpretation of the underground scenario. To assess the strategy’s effectiveness, ground penetrating radar (GPR) data from an experimental test under controlled conditions are considered and processed by two different approaches to achieve focused images of the scenario under test. The first approach is based on a classical migration algorithm, while the second one formulates the imaging as a linear inverse scattering problem. The results corroborate that the combined time–depth conversion improves the imaging in both cases.
Article
The dielectric properties of biological tissues are key parameters that support the design and usability of a wide range of electromagnetic-based medical applications, including for diagnostics and therapeutics, and allow the determination of safety and health effects due to exposure to electromagnetic fields. While an extensive body of literature exists that reports on values of these properties for different tissue types under different measurement conditions, it is now evident that there are large uncertainties and inconsistencies between measurement reports. Due to varying measurement techniques, limited measurement validation strategies, and lack of metadata reporting and confounder control, reported dielectric properties suffer from a lack of repeatability and questionable accuracy. Recently, the American Society of Mechanical Engineers (ASME) Thermal Medicine Standards Committee was formed, which included a Tissue Properties working group. This effort aims to support the translation and commercialization of medical technologies, through the development of a standard lexicon and standard measurement protocols. In this work, we present initial results from the Electromagnetic Tissue Properties subgroup. Specifically, this paper reports a critical gap analysis facing the standardization pathway for the dielectric measurement of biological tissues. All established measurement techniques are examined and compared, and emerging ones are assessed. Perspectives on the importance and challenges in measurement validation, accuracy calculation, metadata collection, and reporting are also discussed.
Article
Full-text available
Stroke is a leading cause of death and disability worldwide, and early diagnosis and prompt medical intervention are thus crucial. Frequent monitoring of stroke patients is also essential to assess treatment efficacy and detect complications earlier. While computed tomography (CT) and magnetic resonance imaging (MRI) are commonly used for stroke diagnosis, they cannot be easily used onsite, nor for frequent monitoring purposes. To meet those requirements, an electromagnetic imaging (EMI) device, which is portable, non-invasive, and non-ionizing, has been developed. It uses a headset with an antenna array that irradiates the head with a safe low-frequency EM field and captures scattered fields to map the brain using a complementary set of physics-based and data-driven algorithms, enabling quasi-real-time detection, two-dimensional localization, and classification of strokes. This study reports clinical findings from the first time the device was used on stroke patients. The clinical results on 50 patients indicate an overall accuracy of 98% in classification and 80% in two-dimensional quadrant localization. With its lightweight design and potential for use by a single paramedical staff member at the point of care, the device can be used in intensive care units, emergency departments, and by paramedics for onsite diagnosis.
Article
Full-text available
Majorization–minimization (MM) is a versatile optimization technique that operates on surrogate functions satisfying tangency and domination conditions. Our focus is on differentiable optimization using inexact MM with quadratic surrogates, which amounts to approximately solving a sequence of symmetric positive definite systems. We begin by investigating the convergence properties of this process, from subconvergence to R-linear convergence, with emphasis on tame objectives. Then we provide a numerically stable implementation based on truncated conjugate gradient. Applications to multidimensional scaling and regularized inversion are discussed and illustrated through numerical experiments on graph layout and X-ray tomography. In the end, quadratic MM not only offers solid guarantees of convergence and stability, but is robust to the choice of its control parameters.
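A small sketch of inexact quadratic MM on a robust (Huber) regression problem: at each iterate the Huber loss is majorized by a weighted quadratic surrogate, which is then minimized approximately with a few (truncated) conjugate-gradient iterations. The data, Huber threshold, and CG budget are illustrative, and the multidimensional-scaling and tomography applications of the paper are not reproduced.

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def huber_weights(r, delta):
    # Quadratic majorizer of the Huber loss at residual r has curvature w = min(1, delta/|r|)
    return np.minimum(1.0, delta / np.maximum(np.abs(r), 1e-12))

def quadratic_mm(A, b, delta=0.5, n_outer=30, cg_iters=5):
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_outer):
        w = huber_weights(A @ x - b, delta)
        H = LinearOperator((n, n), matvec=lambda v: A.T @ (w * (A @ v)))   # surrogate Hessian
        g = A.T @ (w * (A @ x - b))                                        # surrogate gradient
        d, _ = cg(H, -g, maxiter=cg_iters)                                 # truncated CG step
        x = x + d
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((200, 20))
x_true = rng.standard_normal(20)
b = A @ x_true + 0.05 * rng.standard_normal(200)
b[:10] += 10.0                                    # gross outliers handled by the Huber loss
print("estimation error:", np.linalg.norm(quadratic_mm(A, b) - x_true))
```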
Article
Full-text available
Invisibility was long thought to be exclusively the domain of science fiction and fantasy authors, but in recent years it has been the subject of extensive theoretical and experimental research. In this retrospective we look back on the evolution of invisibility in science, from the earliest hints of invisible objects in the late 19th century up to the modern concepts of cloaking, and some of the connections between them.
Article
Full-text available
The accurate quantitative estimation of the electromagnetic properties of tissues can serve important diagnostic and therapeutic medical purposes. Quantitative microwave tomography is an imaging modality that can provide maps of the in-vivo electromagnetic properties of the imaged tissues, i.e., both the permittivity and the electric conductivity. A multi-step microwave tomography approach is proposed for the accurate retrieval of such spatial maps of biological tissues. The underlying idea behind the new imaging approach is to progressively add details to the maps in a step-wise fashion starting from single-frequency qualitative reconstructions. Multi-frequency microwave data is utilized strategically in the final stage. The approach results in improved accuracy of the reconstructions compared to inversion of the data in a single step. As a case study, the proposed workflow was tested on an experimental microwave data set collected for the imaging of the human forearm. The human forearm is a good test case as it contains several soft tissues as well as bone, exhibiting a wide range of values for the electrical properties.
Article
Full-text available
In the food industry, there is a growing demand for cost-effective methods for the inline inspection of food items able to non-invasively detect small foreign bodies that may have contaminated the product during the production process. Microwave imaging may be a valid alternative to the existing technologies, thanks to its inherently low cost and its capability of sensing low-density contaminants. In this paper, a simple microwave imaging system specifically designed to enable the inspection of a large variety of food products is presented. The system consists of two circularly loaded antipodal Vivaldi antennas with a very large operating band, from 1 to 15 GHz, thus allowing a suitable spatial resolution for different food products, from mostly fatty to high water-content foods. The antennas are arranged in such a way as to collect a signal that can be used to exploit a recently proposed real-time microwave imaging strategy, leveraging the inherent symmetries that usually characterize food items. The system is experimentally characterized, and the achieved results compare favorably with the design specifications and numerical simulations. Relying on these positive results, the first experimental proof of the effectiveness of the entire system is presented, confirming its efficacy.
Article
The commonly adopted route to control a dynamic system and make it follow the desired behavior consists of two steps. First, a model of the system is learned from input–output data, a task known as system identification in the engineering literature. Here, an important point is not only to derive a nominal model of the plant but also confidence bounds around it. The information coming from the first step is then exploited to design a controller that should guarantee a certain performance also under the uncertainty affecting the model. This classical way to control dynamic systems has recently been the subject of new intense research, thanks to an interesting cross-fertilization with the field of machine learning. New system identification and control techniques have been developed with links to function estimation and mathematical foundations in reproducing kernel Hilbert spaces (RKHSs) and Gaussian processes (GPs). This has become known as the Gaussian regression (kernel-based) approach to system identification and control . It is the purpose of this article to give an overview of this development (see “Summary”).
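A compact sketch of the kernel-based (Gaussian-process) estimator surveyed here: the impulse response is given a zero-mean Gaussian prior with a stable-spline-type kernel, and its posterior mean is computed in closed form from input-output data. The data, kernel hyperparameters, and noise variance are made up for illustration.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(6)

# True impulse response of a stable system and simulated input-output data
n_g, N = 50, 300
g_true = 0.8 ** np.arange(n_g) * np.sin(0.5 * np.arange(n_g))
u = rng.standard_normal(N)
Phi = toeplitz(u, np.zeros(n_g))              # regression matrix: y = Phi @ g + noise
y = Phi @ g_true + 0.1 * rng.standard_normal(N)

# Stable-spline-type kernel prior: K[i, j] = alpha ** max(i, j) encodes decay and smoothness
alpha, sigma2 = 0.85, 0.01
i = np.arange(n_g)
K = alpha ** np.maximum(i[:, None], i[None, :])

# Posterior mean (kernel ridge / GP regression estimate of the impulse response)
g_hat = K @ Phi.T @ np.linalg.solve(Phi @ K @ Phi.T + sigma2 * np.eye(N), y)
print("relative error:", np.linalg.norm(g_hat - g_true) / np.linalg.norm(g_true))
```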
Article
Full-text available
One of the main challenges faced by iris recognition systems is to be able to work with people in motion, where the sensor is at an increasing distance (more than 1 m) from the person. The ultimate goal is to make the system less and less intrusive and require less cooperation from the person. When this scenario is implemented using a single static sensor, it is necessary for the sensor to have a wide field of view and for the system to process a large number of frames per second (fps). In such a scenario, many of the captured eye images will not have adequate quality (contrast or resolution). This paper describes the implementation in an MPSoC (multiprocessor system-on-chip) of an eye image detection system that integrates, in the programmable logic (PL) part, a functional block to evaluate the level of defocus blur of the captured images. In this way, the system is able to discard images that do not have the required focus quality in the subsequent processing steps. The proposals were successfully designed using Vitis High Level Synthesis (VHLS) and integrated into an eye detection framework capable of processing over 57 fps working with a 16 Mpixel sensor. Using, for validation, an extended version of the CASIA-Iris-distance V4 database, the experimental evaluation shows that the proposed framework is able to successfully discard unfocused eye images. More importantly, in a real implementation, this proposal allows discarding up to 97% of out-of-focus eye images, which then do not have to be processed by the segmentation and normalised iris pattern extraction blocks.
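The defocus-evaluation block of the paper is implemented in programmable logic; a common software analogue of such a focus measure is the variance of the Laplacian, sketched below purely as an illustration (the threshold and the synthetic images are assumptions, not the paper's hardware metric).

```python
import numpy as np
from scipy.ndimage import laplace, gaussian_filter

def focus_measure(img):
    """Variance of the Laplacian: low values indicate defocus blur."""
    return float(np.var(laplace(img.astype(float))))

rng = np.random.default_rng(7)
sharp = rng.random((240, 320))                    # stand-in for a focused eye image
blurred = gaussian_filter(sharp, sigma=3)         # stand-in for a defocused capture

threshold = 0.01                                  # illustrative acceptance threshold
for name, img in [("sharp", sharp), ("blurred", blurred)]:
    fm = focus_measure(img)
    print(f"{name}: focus={fm:.4f} -> {'keep' if fm > threshold else 'discard'}")
```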
Article
Full-text available
The inverse scattering problem has numerous significant applications, including geophysical exploration, medical imaging, and radar imaging. To achieve better performance of the imaging system, theoretical knowledge of the resolution of the algorithm is required for most of these applications. However, analytical investigations of the resolution are presently inadequate. In order to estimate the achievable resolution, we address the evaluation of the point spread function (PSF) of the scattered field for a single frequency in the multi-view, scalar case, both in the near and the far field, when the angular domains of the incident field and of the observations span a full round angle. Instead of the common free-space condition, an inhomogeneous background medium, consisting of a homogeneous dielectric cylinder with a circular cross-section in free space, is assumed. In addition, since the exact evaluation of the PSF can only be accomplished numerically, an analytical approximation of the resolution is also considered. For comparison, the truncated singular value decomposition (TSVD) algorithm can be used to compute the exact PSF. We show how the behavior of the singular values and the resolution change by varying the permittivity of the background medium. The usefulness of the theoretical discussion is demonstrated by localizing point-like scatterers within a dielectric cylinder, thus mimicking a scenario that may occur in breast cancer imaging. Numerical results are provided to validate the analytical investigations.
Article
We investigate the capabilities of Deep Learning (DL) based on a Convolutional Neural Network (CNN) to improve the solution of an electromagnetic inverse source problem with respect to a classical regularization scheme, in particular the Truncated Singular Value Decomposition (TSVD). We consider a planar, scalar source and a far-zone observation domain, for which the unknown-to-data relation is provided by a 2D Fourier-like operator. The exploited a priori information is weak geometrical information for the TSVD, whereas for the CNN it is the information embedded during the training stage. As long as the objects belong to a subset matching the information used for the training stage, the non-linear processing of the NN outperforms the linear processing of the TSVD by extrapolating out-of-band harmonics. On the other hand, the NN performs poorly when the object does not match the a priori information. The results are of general interest for problems where Fourier inversion is considered.
Article
Full-text available
In linear inverse scattering, the performance of the imaging system is sometimes evaluated in terms of its resolution, i.e., its capability to reconstruct a point-like scatterer. However, there is still a lack of analytical studies on the achievable resolution. To address this, we consider the point spread function (PSF) evaluation of the scattered near field for the single frequency and multi-view/multistatic case in homogeneous medium. Instead of numerically computing the PSF, we propose and discuss an approximate closed form under series expansions according to the angular ranges of both source and receiver location. In order to assess the effectiveness of the proposed approximation, we consider two cases including both full and limited view angles for the incident field and observation ranges. In addition, we provide a localization application to show the usefulness of the theoretical discussion. Numerical results confirmed the analytical investigations.
Article
Full-text available
In this paper, we are concerned with microwave subsurface imaging achieved by inverting the linearized scattering operator arising from the Born approximation. In particular, we consider the important question of reducing the required data to achieve imaging. This can help to reduce the radar system’s cost and complexity and mitigate the imaging algorithm’s computational burden and the needed storage resources. To cope with these issues, in the framework of a multi-monostatic/multi-frequency configuration, we introduce a new spatial sampling scheme, named the warping method, that allows for a significant reduction in spatial measurements compared to other literature approaches. The basic idea is to introduce some variable transformations that “warp” the measurement space so that the reconstruction point-spread function obtained by adjoint inversion is recast as a Fourier-like transformation, which provides insights into how to achieve the sampling. In our previous contributions, we focused on presenting and checking the theoretical background with simple numerical examples. In this contribution, we briefly review the key components of the warping method and present its experimental validation by considering a realistic subsurface scattering scenario for the case of a buried water pipe. Essentially, we show that the latter succeeds in reducing the number of data compared to other approaches in the literature, without significantly affecting the reconstruction results.
Article
Full-text available
This paper presents a parallel implementation of a fixed-point algorithm for wrapped phase denoising. The model, based on total variation, efficiently estimates discontinuous phase maps and further incorporates the Pythagorean trigonometric identity between the real and imaginary parts of the phase map, enhancing the quality of the restored phase. In this work, two parallel C/C++ implementations of the sequential algorithm were developed. The implementations include execution on a multi-core CPU and a GPU using OpenMP and CUDA, respectively. We show performance comparisons of the parallel implementations with advanced methods using synthetic and experimental data. Results show that our parallel implementations achieve speedups over the serial implementation of 12× for the multi-core CPU and 110× for the GPU.
Article
We present a statistical framework to benchmark the performance of reconstruction algorithms for linear inverse problems, in particular, neural-network-based methods that require large quantities of training data. We generate synthetic signals as realizations of sparse stochastic processes, which makes them ideally matched to variational sparsity-promoting techniques. We derive Gibbs sampling schemes to compute the minimum mean-square error estimators for processes with Laplace, Student's t, and Bernoulli-Laplace innovations. These allow our framework to provide quantitative measures of the degree of optimality (in the mean-square-error sense) for any given reconstruction method. We showcase our framework by benchmarking the performance of some well-known variational methods and convolutional neural network architectures that perform direct nonlinear reconstructions in the context of deconvolution and Fourier sampling. Our experimental results support the understanding that, while these neural networks outperform the variational methods and achieve near-optimal results in many settings, their performance deteriorates severely for signals associated with heavy-tailed distributions.
Article
Full-text available
Blind image super-resolution (Blind-SR) is the process of leveraging a low-resolution (LR) image, with unknown degradation, to generate its high-resolution (HR) version. Most existing blind SR techniques use a degradation estimator network to explicitly estimate the blur kernel and guide the SR network, under the supervision of ground-truth (GT) kernels. To avoid this reliance on GT kernels, it is necessary to design an implicit estimator network that can extract a discriminative blur kernel representation without ground-truth supervision. We design a lightweight approach for blind super-resolution (Blind-SR) that estimates the blur kernel and restores the HR image based on a deep convolutional neural network (CNN) and a deep super-resolution residual convolutional generative adversarial network. Since the blur kernel for blind image SR is unknown, following the image formation model of the blind super-resolution problem, we first introduce a neural-network-based model to estimate the blur kernel. This is achieved by (i) a Super Resolver that, from a low-resolution input, generates the corresponding SR image; and (ii) an Estimator Network generating the blur kernel from the input datum. The output of both models is used in a novel loss formulation. The proposed network is end-to-end trainable. The proposed methodology is substantiated by both quantitative and qualitative experiments. Results on benchmarks demonstrate that our computationally efficient approach (12× fewer parameters than the state-of-the-art models) performs favorably with respect to existing approaches and can be used on devices with limited computational capabilities.
Article
Full-text available
In this paper, a deep learning technique for tumor detection in a microwave tomography framework is proposed. Providing an easy and effective imaging technique for breast cancer detection is one of the main focuses of biomedical researchers. Recently, microwave tomography has gained great attention due to its ability to reconstruct the electric property maps of the inner breast tissues exploiting non-ionizing radiation. A major drawback of tomographic approaches is related to the inversion algorithms, since the problem at hand is nonlinear and ill-posed. In recent decades, numerous studies have focused on image reconstruction techniques, in some cases exploiting deep learning. In this study, deep learning is exploited to provide information about the presence of tumors based on tomographic measurements. The proposed approach has been tested with a simulated database, showing promising performance, in particular for scenarios where the tumor mass is particularly small. In these cases, conventional reconstruction techniques fail to identify the presence of suspicious tissues, while our approach correctly identifies these profiles as potentially pathological. Therefore, the proposed method can be exploited for early-diagnosis purposes, where the mass to be detected can be particularly small.
Article
Full-text available
Segmenting outdoor images in the presence of haze, fog, or smog (which fades the colors and diminishes the contrast of the observed objects) has been a challenging task in image processing, with several important applications. In this paper, we propose a new fractional-order variational model that is able to de-haze and segment a given image simultaneously. The proposed method incorporates an atmospheric veil estimation based on the dark channel prior (DCP). This transmission map can significantly reduce edge artifacts and enhance estimation precision in the resulting image. The transmission map is then converted into a high-quality depth map, with which the new fractional-order variational model can be framed to seek the haze-free segmented image, for both grey-scale and color outdoor images. An explicit gradient descent scheme is employed to efficiently find the minimizer of the proposed energy functional. Experimental tests on real-world scenes show that the proposed method can jointly de-haze and segment hazy or foggy images effectively and efficiently.
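A minimal sketch of the dark channel prior step the model builds on: the dark channel is a local minimum over the color channels, and the transmission map follows as t = 1 - omega * dark(I/A). The window size, omega, and the atmospheric-light estimate are standard illustrative choices, not the paper's exact settings.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, window=15):
    """Per-pixel minimum over RGB, followed by a local minimum filter."""
    return minimum_filter(img.min(axis=2), size=window)

def transmission_map(img, omega=0.95, window=15):
    dark = dark_channel(img, window)
    # Atmospheric light A: mean color of the brightest 0.1% of pixels in the dark channel
    flat = dark.ravel()
    idx = np.argsort(flat)[-max(1, flat.size // 1000):]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    return 1.0 - omega * dark_channel(img / A, window)

rng = np.random.default_rng(8)
hazy = np.clip(0.6 * rng.random((120, 160, 3)) + 0.4, 0, 1)   # synthetic hazy-looking image
t = transmission_map(hazy)
print(t.shape, float(t.min()), float(t.max()))
```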
Article
Full-text available
Context. The internal structure of small Solar System bodies (SSSBs) is still poorly understood, although it can provide important information about the formation process of asteroids and comets. Space radars can provide direct observations of this structure. Aims. In this study, we investigate the possibility of inferring the internal structure with a simple and fast inversion procedure applied to radar measurements. We consider a quasi-monostatic configuration with multiple measurements over a wide frequency band, which is the most common configuration for space radars. This is the first part (Paper I) of a joint study considering methods to analyse and invert quasi-monostatic microwave measurements of an asteroid analogue. This paper focuses on the frequency domain, while a separate paper focuses on time-domain methods. Methods. We carried out a laboratory experiment equivalent to the probing of an asteroid using the microwave analogy (multiplying the wavelength and the target dimension by the same factor). Two analogues based on the shape of the asteroid 25143 Itokawa were constructed with different interiors. The electromagnetic interaction with these analogues was measured in an anechoic chamber using a multi-frequency radar and a quasi-monostatic configuration. The electric field was measured at 2372 angular positions (corresponding to a sampling offering complete information). We then inverted these data with two classical imaging procedures, allowing us to recover structural information about the analogues' interior. We also investigated reducing the number of radar measurements used in the imaging procedures, that is, both the number of transmitter-receiver pairs and the number of frequencies. Results. The results show that the 3D map of the analogues can be reconstructed without the need for a reference target. Internal structural differences are distinguishable between the analogues. This imaging can be achieved even with a reduced number of measurements. With only 35 well-selected frequencies over 321 and 1257 transmitter-receiver pairs, the reconstructions are similar to those obtained with the entire frequency band.