Article

Joint deconvolution and unsupervised source separation for data on the sphere

Authors: R. Carloni Gertosio, J. Bobin

Abstract

Tackling unsupervised source separation jointly with an additional inverse problem such as deconvolution is central for the analysis of multi-wavelength data. This becomes highly challenging when applied to large data sampled on the sphere, such as those provided by wide-field observations in astrophysics, whose analysis requires dedicated, robust and yet effective algorithms. We therefore investigate a new joint deconvolution/sparse blind source separation method dedicated to data sampled on the sphere, coined SDecGMCA. It is based on a projected alternate least-squares minimization scheme, whose accuracy is shown to rely strongly on the regularization scheme in the present joint deconvolution/blind source separation setting. To this end, a regularization strategy is introduced that allows the design of a new robust and effective algorithm, which is key to analyzing large spherical data. Numerical experiments are carried out on toy examples and realistic astronomical data.
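To make the scheme concrete, here is a minimal NumPy sketch of a projected alternating least-squares iteration for joint deconvolution and sparse blind source separation, on a 1-D toy problem with channel-dependent circular blurs and sources assumed sparse in the direct domain. The regularization weight `eps` and threshold `lam` are illustrative placeholders, not the regularization strategy of SDecGMCA, which moreover operates on spherical data in the spherical harmonic domain.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_threshold(x, lam):
    """Soft thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Toy model: each of the m channels observes a blurred mixture of n sources.
m, n, t = 8, 2, 256
A_true = rng.standard_normal((m, n))
S_true = soft_threshold(rng.standard_normal((n, t)), 1.5)   # sparse sources
H = rng.uniform(0.2, 1.0, (m, t))     # channel-dependent transfer functions
Y = np.real(np.fft.ifft(H * np.fft.fft(A_true @ S_true, axis=1), axis=1))

A = rng.standard_normal((m, n))       # initial mixing-matrix estimate
eps, lam = 1e-3, 0.1                  # illustrative regularization parameters
Yf = np.fft.fft(Y, axis=1)
for _ in range(50):
    # Source update: per-frequency Tikhonov-regularized least squares
    # (the deconvolution step), followed by a sparsity projection.
    Sf = np.empty((n, t), dtype=complex)
    for k in range(t):
        M = A * H[:, k, None]         # mixing + blur at frequency k
        Sf[:, k] = np.linalg.solve(M.T @ M + eps * np.eye(n), M.T @ Yf[:, k])
    S = soft_threshold(np.real(np.fft.ifft(Sf, axis=1)), lam)
    # Mixing-matrix update: least squares against the re-blurred sources,
    # then projection of the columns onto the unit sphere.
    Sf_cur = np.fft.fft(S, axis=1)
    for i in range(m):
        Bi = np.real(np.fft.ifft(H[i] * Sf_cur, axis=1))
        A[i] = np.linalg.lstsq(Bi.T, Y[i], rcond=None)[0]
    A /= np.linalg.norm(A, axis=0, keepdims=True)
```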


... Indeed, GMCA offers a flexible framework to tackle specific separation subproblems; it has incidentally been the subject of several extensions, e.g. DecGMCA, which tackles joint deconvolution and separation when dealing with inhomogeneous observations [24], including on the sphere [25]. ...
Preprint
Full-text available
Blind source separation (BSS) algorithms are unsupervised methods which are the cornerstone of hyperspectral data analysis by allowing for physically meaningful data decompositions. As BSS problems are ill-posed, their resolution requires efficient regularization schemes to better distinguish between the sources and yield interpretable solutions. For that purpose, we investigate a semi-supervised source separation approach in which we combine a projected alternating least-squares algorithm with a learning-based regularization scheme. In this article, we focus on constraining the mixing matrix to belong to a learned manifold by making use of generative models. Altogether, we show that this allows for an innovative BSS algorithm, with improved accuracy, which provides physically interpretable solutions. The proposed method, coined sGMCA, is tested on realistic hyperspectral astrophysical data in challenging scenarios involving strong noise, highly correlated spectra and unbalanced sources. The results highlight the significant benefit of the learned prior in reducing leakage between the sources, which allows an overall better disentanglement.
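A linear stand-in can illustrate the principle of the learned constraint: the columns of the mixing matrix are pushed towards the range of a model trained on example spectra. The sketch below replaces the article's generative network with an affine PCA subspace; the toy spectra, the subspace dimension and the projection step are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the learned model: an affine PCA subspace fitted to example
# spectra (sGMCA itself uses a trained generative network).
examples = np.cumsum(rng.standard_normal((500, 64)), axis=1)  # smooth toy spectra
mean = examples.mean(axis=0)
U = np.linalg.svd(examples - mean, full_matrices=False)[2][:5].T  # top-5 basis

def project_to_manifold(A):
    """Project each column (a spectrum) of the mixing matrix onto the learned
    affine subspace, the linear analogue of constraining A to the range of a
    generative model."""
    return mean[:, None] + U @ (U.T @ (A - mean[:, None]))

A = rng.standard_normal((64, 3))             # current mixing-matrix estimate
A_reg = project_to_manifold(A)               # constrained estimate
A_reg /= np.linalg.norm(A_reg, axis=0)       # usual unit-norm columns
```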
... Most existing cleaning methods do not directly use any beam information during the component separation process, while our results highlight the need for a more accurate treatment of the beam. More sophisticated strategies are possible, for example performing component separation and deconvolution simultaneously (e.g., Carloni Gertosio & Bobin 2021). ...
Preprint
Full-text available
Neutral Hydrogen Intensity Mapping (HI IM) surveys will be a powerful new probe of cosmology. However, strong astrophysical foregrounds contaminate the signal, and their coupling with instrumental systematics further increases the data-cleaning complexity. In this work, we simulate a realistic single-dish HI IM survey of a 5000 deg^2 patch in the 950-1400 MHz range, with both the MID telescope of the SKA Observatory (SKAO) and MeerKAT, its precursor. We include state-of-the-art HI simulations and explore different foreground models and instrumental effects, such as non-homogeneous thermal noise and beam side-lobes. We perform the first Blind Foreground Subtraction Challenge for HI IM on these synthetic data-cubes, aiming to characterise the performance of available foreground cleaning methods with no prior knowledge of the sky components and noise level. Nine foreground cleaning pipelines joined the Challenge, based on statistical source separation algorithms, blind polynomial fitting, and an astrophysically informed parametric fit to foregrounds. We devise metrics to compare the pipeline performances quantitatively. In general, they can recover the input maps' 2-point statistics within 20 per cent in the range of scales least affected by the telescope beam. However, spurious artefacts appear in the cleaned maps due to interactions between the foreground structure and the beam side-lobes. We conclude that it is fundamental to develop accurate beam deconvolution algorithms and to test data post-processing steps carefully before cleaning. This study was performed as part of SKAO preparatory work by the HI IM Focus Group of the SKA Cosmology Science Working Group.
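For reference, the classic blind baseline in this literature is principal component analysis along the frequency axis: spectrally smooth foregrounds concentrate in a few dominant frequency modes, which are projected out. The sketch below is a generic illustration of that idea with made-up sizes and a made-up power-law foreground; it is not one of the nine Challenge pipelines.

```python
import numpy as np

def pca_clean(cube, n_fg=3):
    """Remove the n_fg leading frequency modes from a (n_freq, n_pix) data
    cube: spectrally smooth foregrounds concentrate in the first principal
    components of the frequency-frequency covariance."""
    X = cube - cube.mean(axis=1, keepdims=True)
    cov = X @ X.T / X.shape[1]
    modes = np.linalg.eigh(cov)[1][:, ::-1][:, :n_fg]   # dominant eigenvectors
    return X - modes @ (modes.T @ X)                    # project foregrounds out

rng = np.random.default_rng(2)
freqs = np.linspace(950.0, 1400.0, 64)                  # MHz, as in the survey above
fg = 1e3 * (freqs[:, None] / 1e3) ** -2.7 * rng.uniform(0.8, 1.2, (1, 4096))
signal = 1e-3 * rng.standard_normal((64, 4096))         # faint HI-like fluctuations
residual = pca_clean(fg + signal, n_fg=2)
```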
Article
Full-text available
In high-energy astronomy, spectro-imaging instruments such as X-ray detectors allow investigation of the spatial and spectral properties of extended sources including galaxy clusters, galaxies, diffuse interstellar medium, supernova remnants, and pulsar wind nebulae. In these sources, each physical component possesses a different spatial and spectral signature, but the components are entangled. Extracting the intrinsic spatial and spectral information of the individual components from these data is a challenging task. Current analysis methods do not fully exploit the 2D-1D (x, y, E) nature of the data, as spatial information is considered separately from spectral information. Here we investigate the application of a blind source separation (BSS) algorithm that jointly exploits the spectral and spatial signatures of each component in order to disentangle them. We explore the capabilities of a new BSS method (general morphological component analysis, GMCA), initially developed to extract an image of the cosmic microwave background from Planck data, in an X-ray context. The performance of the GMCA on X-ray data is tested using Monte-Carlo simulations of supernova remnant toy models designed to represent typical science cases. We find that the GMCA is able to separate highly entangled components in X-ray data even in high-contrast scenarios, and can extract the spectrum and map of each physical component with high accuracy. A modification of the algorithm is proposed in order to improve the spectral fidelity in the case of strongly overlapping spatial components, and we investigate a resampling method to derive realistic uncertainties associated with the results of the algorithm. Applying the modified algorithm to the deep Chandra observations of Cassiopeia A, we are able to produce detailed maps of the synchrotron emission at low energies (0.6-2.2 keV), and of the red- and blueshifted distributions of a number of elements including Si and Fe K.
Article
Full-text available
Blind Source Separation (BSS) is a challenging matrix factorization problem that plays a central role in multichannel imaging science. In a large number of applications, such as astrophysics, current unmixing methods are limited since real-world mixtures are generally affected by extra instrumental effects such as blurring. Therefore, BSS has to be solved jointly with a deconvolution problem, which requires tackling a new inverse problem: deconvolution BSS (DBSS). In this article, we introduce an innovative DBSS approach, called DecGMCA, based on sparse signal modeling and an efficient alternating projected least-squares algorithm. Numerical results demonstrate that the DecGMCA algorithm performs very well on simulations. They further highlight the importance of jointly solving BSS and deconvolution instead of considering these two problems independently. Furthermore, the performance of the proposed DecGMCA algorithm is demonstrated on simulated radio-interferometric data.
Article
Full-text available
Blind source separation (BSS) is a very popular technique to analyze multichannel data. In this context, the data are modeled as the linear combination of sources to be retrieved. For that purpose, standard BSS methods all rely on some discrimination principle, whether it is statistical independence or morphological diversity, to distinguish between the sources. However, dealing with real-world data reveals that such assumptions are rarely valid in practice: the signals of interest are more likely partially correlated, which generally hampers the performance of standard BSS methods. In this article, we introduce a novel sparsity-enforcing BSS method coined Adaptive Morphological Component Analysis (AMCA), which is designed to retrieve sparse and partially correlated sources. More precisely, it exploits an adaptive re-weighting scheme to favor/penalize samples based on their level of correlation. Extensive numerical experiments have been carried out, which show that the proposed method is robust to the partial correlation of sources while standard BSS techniques fail. The AMCA algorithm is evaluated in the field of astrophysics for the separation of physical components from microwave data.
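The re-weighting idea admits a compact sketch: each sample receives a weight measuring how concentrated it is across sources, so that samples where several sources are jointly active, the likely carriers of partial correlation, are penalized in the mixing-matrix update. The weighting function below is an illustrative choice with a hypothetical exponent `p`, not the exact weighting used by AMCA.

```python
import numpy as np

def sample_weights(S, p=0.5, eps=1e-6):
    """Weights close to 1 for samples dominated by a single source, small
    for samples where several sources are active (illustrative exponent p)."""
    U = np.abs(S) / (np.max(np.abs(S), axis=0, keepdims=True) + eps)
    return 1.0 / (eps + np.sum(U ** p, axis=0))

def weighted_mixing_update(Y, S, q):
    """Weighted least-squares update of the mixing matrix A in Y ~ A S:
    sample t contributes to the normal equations with weight q[t]**2."""
    Sw, Yw = S * q, Y * q
    return Yw @ Sw.T @ np.linalg.pinv(Sw @ Sw.T)
```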
Article
Full-text available
This paper considers the problem of efficient computation of the spherical harmonic expansion, or Fourier transform, of functions defined on the two-dimensional sphere, S^2. The resulting algorithms are applied to the efficient computation of convolutions of functions on the sphere. We begin by proving convolution theorems generalizing well-known and useful results from the abelian case. These convolution theorems are then used to develop a sampling theorem on the sphere, which reduces the calculation of Fourier transforms and convolutions of band-limited functions to discrete computations. We show how to perform these efficiently, starting with an O(n (log n)^2) time algorithm for computing the Legendre transform of a function defined on the interval [-1, 1] sampled at n points there. Theoretical and experimental results on the effects of finite-precision arithmetic are presented. The Legendre transform algorithm is then generalized to obtain an algorithm for the Fourier transform, requiring O(n (log n)^2) time, and an algorithm for its inverse in O(n^{1.5}) time, where n is the number of points on the sphere at which the function is sampled. This improves the naive O(n^2) bound, which is the best previously known. These transforms give an O(n^{1.5}) algorithm for convolving two functions on the sphere.
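In practice, the convolution theorem is what makes spherical smoothing and deconvolution tractable: an isotropic convolution is a per-degree multiplication of the harmonic coefficients a_lm by a transfer function b_l. A minimal illustration with the healpy package, which implements fast transforms on the HEALPix grid rather than the Driscoll-Healy sampling of this paper:

```python
import numpy as np
import healpy as hp

nside, lmax = 128, 256
cl = 1.0 / (np.arange(lmax + 1) + 1.0) ** 2         # toy angular power spectrum
m = hp.synfast(cl, nside, lmax=lmax)                # random band-limited map

alm = hp.map2alm(m, lmax=lmax)                      # forward spherical transform
bl = hp.gauss_beam(np.radians(1.0), lmax=lmax)      # 1-degree FWHM Gaussian beam
m_smooth = hp.alm2map(hp.almxfl(alm, bl), nside)    # convolution = a_lm * b_l
```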
Article
Full-text available
This paper considers regularized block multi-convex optimization, where the feasible set and objective function are generally non-convex but convex in each block of variables. We review some of its interesting examples and propose a generalized block coordinate descent method. Under certain conditions, we show that any limit point satisfies the Nash equilibrium conditions. Furthermore, we establish its global convergence and estimate its asymptotic convergence rate by assuming a property based on the Kurdyka-Łojasiewicz inequality. The proposed algorithms are adapted for factorizing nonnegative matrices and tensors, as well as completing them from their incomplete observations. The algorithms were tested on synthetic data, hyperspectral data, as well as image sets from the CBCL and ORL databases. Compared to the existing state-of-the-art algorithms, the proposed algorithms demonstrate superior performance in both speed and solution quality. The Matlab code of nonnegative matrix/tensor decomposition and completion, along with a few demos, is accessible from the authors' homepages.
Article
Full-text available
We introduce a proximal alternating linearized minimization (PALM) algorithm for solving a broad class of nonconvex and nonsmooth minimization problems. Building on the powerful Kurdyka-Łojasiewicz property, we derive a self-contained convergence analysis framework and establish that each bounded sequence generated by PALM globally converges to a critical point. Our approach allows us to analyze various classes of nonconvex, nonsmooth problems and related nonconvex proximal forward-backward algorithms with semi-algebraic problem data, the latter property being shared by many functions arising in a wide variety of fundamental applications. A by-product of our framework also shows that our results are new even in the convex setting. As an illustration of the results, we derive a new and simple globally convergent algorithm for solving the sparse nonnegative matrix factorization problem.
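For concreteness, here is a minimal sketch of PALM applied to sparse NMF, the illustration used in the article: each block update is a gradient step on the smooth coupling term, with step size set by the partial Lipschitz constant, followed by the proximal map of that block's nonsmooth term. The sizes and the value of `lam` are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
Y = np.abs(rng.standard_normal((50, 200)))   # data to factorize, Y ~ W H
r, lam = 5, 0.1                              # rank and sparsity weight (arbitrary)
W = np.abs(rng.standard_normal((50, r)))
H = np.abs(rng.standard_normal((r, 200)))

for _ in range(200):
    # H block: gradient step on 0.5*||Y - WH||_F^2 with step 1/L, then the
    # prox of lam*||H||_1 restricted to the nonnegative orthant.
    L = np.linalg.norm(W.T @ W, 2) + 1e-12
    H = np.maximum(H - (W.T @ (W @ H - Y) + lam) / L, 0.0)
    # W block: same gradient step; the prox is the projection onto W >= 0.
    L = np.linalg.norm(H @ H.T, 2) + 1e-12
    W = np.maximum(W - (W @ H - Y) @ H.T / L, 0.0)
```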
Article
Full-text available
We propose and analyze a new model for Hyperspectral Images (HSI) based on the assumption that the whole signal is composed of a linear combination of a few sources, each of which has a specific spectral signature, and that the spatial abundance maps of these sources are themselves piecewise smooth and therefore efficiently encoded via typical sparse models. We derive new sampling schemes exploiting this assumption and give theoretical lower bounds on the number of measurements required to reconstruct HSI data and recover their source model parameters. This allows us to segment hyperspectral images into their source abundance maps directly from compressed measurements. We also propose efficient optimization algorithms and perform extensive experimentation on synthetic and real datasets, which reveals that our approach can be used to encode HSI with far fewer measurements and less computational effort than traditional CS methods.
Conference Paper
Full-text available
In this paper, sparse representation (factorization) of a data matrix is first discussed. An overcomplete basis matrix is estimated by using the K-means method. We have proved that for the estimated overcomplete basis matrix, the sparse solution (coefficient matrix) with minimum l1 norm is unique with probability one, and that it can be obtained using a linear programming algorithm. Comparisons of the l1-norm solution and the l0-norm solution are also presented, which can be used in recoverability analysis of blind source separation (BSS). Next, we apply the sparse matrix factorization approach to BSS in the overcomplete case. Generally, if the sources are not sufficiently sparse, we perform blind separation in the time-frequency domain after preprocessing the observed data using the wavelet packets transformation. Third, an EEG experimental data analysis example is presented to illustrate the usefulness of the proposed approach and demonstrate its performance. Two almost independent components obtained by the sparse representation method are selected for phase synchronization analysis, and their periods of significant phase synchronization are found which are related to tasks. Finally, concluding remarks review the approach and state areas that require further study.
Conference Paper
Full-text available
In the paper we present new Alternating Least Squares (ALS) algorithms for Nonnegative Matrix Factorization (NMF) and their extensions to 3D Nonnegative Tensor Factorization (NTF) that are robust in the presence of noise and have many potential applications, including multi-way Blind Source Separation (BSS), multi-sensory or multi-dimensional data analysis, and nonnegative neural sparse coding. We propose to use local cost functions whose simultaneous or sequential (one-by-one) minimization leads to a very simple ALS algorithm, which works under some sparsity constraints both for an under-determined model (a system which has fewer sensors than sources) and an over-determined model. The extensive experimental results confirm the validity and high performance of the developed algorithms, especially when the multi-layer hierarchical NMF is used. An extension of the proposed algorithm to multidimensional Sparse Component Analysis and Smooth Component Analysis is also proposed.
Article
Full-text available
This work studies the problem of simultaneously separating and reconstructing signals from compressively sensed linear mixtures. We assume that all source signals share a common sparse representation basis. The approach combines classical Compressive Sensing (CS) theory with a linear mixing model. It allows the mixtures to be sampled independently of each other. If samples are acquired in the time domain, this means that the sensors need not be synchronized. Since Blind Source Separation (BSS) from a linear mixture is only possible up to permutation and scaling, factoring out these ambiguities leads to a minimization problem on the so-called oblique manifold. We develop a geometric conjugate subgradient method that scales to large systems for solving the problem. Numerical results demonstrate the promising performance of the proposed algorithm compared to several state-of-the-art methods.
Article
Full-text available
Nonnegative matrix factorization (NMF) is a data analysis technique used in a great variety of applications such as text mining, image processing, hyperspectral data analysis, computational biology, and clustering. In this letter, we consider two well-known algorithms designed to solve NMF problems: the multiplicative updates of Lee and Seung and the hierarchical alternating least squares of Cichocki et al. We propose a simple way to significantly accelerate these schemes, based on a careful analysis of the computational cost needed at each iteration, while preserving their convergence properties. This acceleration technique can also be applied to other algorithms, which we illustrate on the projected gradient method of Lin. The efficiency of the accelerated algorithms is empirically demonstrated on image and text data sets and compares favorably with a state-of-the-art alternating nonnegative least squares algorithm.
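The acceleration principle can be sketched in a few lines: in the multiplicative updates, the expensive matrix products involving the data are computed once per outer iteration and reused by several cheap inner updates of the same factor. The fixed inner-loop count below is a simplification of the article's adaptive stopping rule.

```python
import numpy as np

def accelerated_mu(Y, r, outer=100, inner=5, eps=1e-12, seed=0):
    """Multiplicative-update NMF where each factor is updated several times
    per outer iteration: the costly products (W.T @ Y, W.T @ W) are computed
    once and reused by the cheap inner updates."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    W = np.abs(rng.standard_normal((m, r)))
    H = np.abs(rng.standard_normal((r, n)))
    for _ in range(outer):
        WtY, WtW = W.T @ Y, W.T @ W
        for _ in range(inner):
            H *= WtY / (WtW @ H + eps)
        YHt, HHt = Y @ H.T, H @ H.T
        for _ in range(inner):
            W *= YHt / (W @ HHt + eps)
    return W, H
```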
Article
Full-text available
We present in this paper new multiscale transforms on the sphere, namely the isotropic undecimated wavelet transform, the pyramidal wavelet transform, the ridgelet transform and the curvelet transform. All of these transforms can be inverted, i.e., we can exactly reconstruct the original data from its coefficients in either representation. Several applications are described. We show how these transforms can be used in denoising and especially in a Combined Filtering Method, which uses both the wavelet and the curvelet transforms, thus benefiting from the advantages of both transforms. An application to component separation from multichannel data mapped to the sphere is also described, in which we take advantage of moving to a wavelet representation.
Article
Full-text available
In this paper, we discuss the evaluation of blind audio source separation (BASS) algorithms. Depending on the exact application, different distortions can be allowed between an estimated source and the wanted true source. We consider four different sets of such allowed distortions, from time-invariant gains to time-varying filters. In each case, we decompose the estimated source into a true source part plus error terms corresponding to interferences, additive noise, and algorithmic artifacts. Then, we derive a global performance measure using an energy ratio, plus a separate performance measure for each error term. These measures are computed and discussed on the results of several BASS problems with various difficulty levels.
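A minimal version of such energy-ratio measures, for the simplest allowed-distortion case (a time-invariant gain) and without a separate noise term, can be written as follows; the full decomposition in the paper is more general.

```python
import numpy as np

def sdr_sir(est, sources, j):
    """Decompose the estimate of source j into target + interference +
    artifacts by orthogonal projections, then form energy ratios in dB.
    est: (T,) estimated source; sources: (n, T) true sources."""
    s = sources[j]
    target = (est @ s) / (s @ s) * s                    # projection on true source
    coeffs = np.linalg.lstsq(sources.T, est, rcond=None)[0]
    in_span = sources.T @ coeffs                        # projection on all sources
    interf, artif = in_span - target, est - in_span
    sdr = 10 * np.log10(target @ target / ((interf + artif) @ (interf + artif)))
    sir = 10 * np.log10(target @ target / (interf @ interf))
    return sdr, sir
```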
Article
Full-text available
Over the last few years, the development of multichannel sensors has motivated interest in methods for the coherent processing of multivariate data. Some specific issues have already been addressed, as testified by the wide literature on the so-called blind source separation (BSS) problem. In this context, as clearly emphasized by previous work, it is fundamental that the sources to be retrieved present some quantitatively measurable diversity. Recently, sparsity and morphological diversity have emerged as a novel and effective source of diversity for BSS. Here, we give some new and essential insights into the use of sparsity in source separation, and we outline the essential role of morphological diversity as a source of diversity or contrast between the sources. This paper introduces a new BSS method coined generalized morphological component analysis (GMCA) that takes advantage of both morphological diversity and sparsity, using recent sparse overcomplete or redundant signal representations. GMCA is a fast and efficient BSS method. We present arguments and a discussion supporting the convergence of the GMCA algorithm. Numerical results in multivariate image and signal processing are given, illustrating the good performance of GMCA and its robustness to noise.
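The core GMCA loop fits in a few lines. The sketch below assumes, for simplicity, sources that are sparse in the direct domain (the actual algorithm thresholds in a sparse signal representation and sets its thresholds from the noise level) and uses a crude linearly decreasing threshold.

```python
import numpy as np

def gmca(Y, n, iters=100, seed=0):
    """Simplified GMCA: alternate least-squares estimates of the sources and
    the mixing matrix, with a sparsity threshold on the sources that
    decreases across iterations."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((Y.shape[0], n))
    S = np.zeros((n, Y.shape[1]))
    for it in range(iters):
        A /= np.linalg.norm(A, axis=0, keepdims=True)    # unit-norm columns
        S = np.linalg.pinv(A) @ Y                        # least-squares sources
        thr = np.percentile(np.abs(S), 99) * (1.0 - it / iters)
        S = np.sign(S) * np.maximum(np.abs(S) - thr, 0)  # soft thresholding
        A = Y @ np.linalg.pinv(S)                        # least-squares mixing
    return A, S
```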
Article
Full-text available
HEALPix -- the Hierarchical Equal Area iso-Latitude Pixelization -- is a versatile data structure with an associated library of computational algorithms and visualization software that supports fast scientific applications executable directly on very large volumes of astronomical data and large area surveys in the form of discretized spherical maps. Originally developed to address the data processing and analysis needs of the present generation of cosmic microwave background (CMB) experiments (e.g. BOOMERanG, WMAP), HEALPix can be expanded to meet many of the profound challenges that will arise in confrontation with the observational output of future missions and experiments, including e.g. Planck, Herschel, SAFIR, and the Beyond Einstein CMB polarization probe. In this paper we consider the requirements and constraints to be met in order to implement a sufficient framework for the efficient discretization and fast analysis/synthesis of functions defined on the sphere, and summarise how they are satisfied by HEALPix.
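A minimal healpy session illustrates the scheme's basic quantities, the equal-area pixelization and the fast harmonic analysis it enables (the nside value is arbitrary):

```python
import numpy as np
import healpy as hp

nside = 64                                        # resolution parameter, a power of 2
npix = hp.nside2npix(nside)                       # 12 * nside**2 equal-area pixels
print(npix, hp.nside2resol(nside, arcmin=True))   # pixel count and size in arcmin

theta, phi = hp.pix2ang(nside, np.arange(npix))   # pixel centers on the sphere
m = np.cos(theta)                                 # simple axisymmetric test map
alm = hp.map2alm(m)                               # fast analysis exploits the iso-latitude rings
```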
Article
Blind Source Separation (BSS) is a key machine learning method, which has been successfully applied to analyze multichannel data in various domains ranging from medical imaging to astrophysics. As BSS is an ill-posed matrix factorization problem, it is necessary to introduce extra regularizing priors on the sources. While using sparsity has led to improved factorization results, the quality of the separation process turns out to be dramatically dependent on the minimization strategy and the regularization parameters. In this context, the Proximal Alternating Linearized Minimization (PALM) algorithm has recently attracted a lot of interest as a generic, fast and highly flexible algorithm. Using PALM for sparse BSS is theoretically well grounded, but getting good empirical results requires a fine tuning of the involved regularization parameters, which might be too computationally expensive with real-world large-scale data, therefore mandating automatic parameter-choice strategies. In this article, we first investigate the empirical limitations of using the PALM algorithm to perform sparse BSS and we explain their origin. Based on this, we further study and justify an alternative two-step algorithmic framework combining PALM with a heuristic approach, namely the Generalized Morphological Component Analysis (GMCA). This method enables an automatic parameter choice for the PALM step. Numerical experiments with comparisons to standard algorithms are carried out on two realistic experiments in spectroscopy and astrophysics.
Article
This article does not present new mathematical results; it solely aims at discussing some numerical experiments with MALDI Imaging data. However, these experiments are based on, and could not be done without, the mathematical results obtained in the UNLocX project. They tackle two obstacles which presently prevent clinical routine applications of MALDI Imaging technology. In the last decade, matrix-assisted laser desorption/ionization imaging mass spectrometry (MALDI-IMS) has developed into a powerful bioanalytical imaging modality. MALDI imaging data consists of a set of mass spectra, which are measured at different locations of a flat tissue sample. Hence, this technology is capable of revealing the full metabolic structure of the sample under investigation. Sampling resolution as well as spectral resolution is constantly increasing; presently a conventional 2D MALDI Imaging dataset requires up to 100 GB. A major challenge towards routine applications of MALDI Imaging in pharmaceutical or medical workflows is the high computational cost for evaluating and visualizing the information content of MALDI imaging data. This becomes even more critical in the near future when considering cohorts or 3D applications. Due to its size and complexity, MALDI Imaging constitutes a challenging test case for high-performance signal processing. In this article we will apply concepts and algorithms, which were developed within the UNLocX project, to MALDI Imaging data. In particular we will discuss a suitable phase space model for such data and report on implementations of the resulting transform coders using GPU technology. Within the MALDI Imaging workflow this leads to an efficient baseline removal and peak picking. The final goal of data processing in MALDI Imaging is the discrimination of regions having different metabolic structures. We introduce and discuss so-called soft-segmentation maps which are obtained by non-negative matrix factorization incorporating sparsity constraints.
Article
A new variant 'PMF' of factor analysis is described. It is assumed that X is a matrix of observed data and σ is the known matrix of standard deviations of elements of X. Both X and σ are of dimensions n × m. The method solves the bilinear matrix problem X = GF + E where G is the unknown left hand factor matrix (scores) of dimensions n × p, F is the unknown right hand factor matrix (loadings) of dimensions p × m, and E is the matrix of residuals. The problem is solved in the weighted least squares sense: G and F are determined so that the Frobenius norm of E divided (element-by-element) by σ is minimized. Furthermore, the solution is constrained so that all the elements of G and F are required to be non-negative. It is shown that the solutions by PMF are usually different from any solutions produced by the customary factor analysis (FA, i.e. principal component analysis (PCA) followed by rotations). Usually PMF produces a better fit to the data than FA. Also, the result of PMF is guaranteed to be non-negative, while the result of FA often cannot be rotated so that all negative entries would be eliminated. Different possible application areas of the new method are briefly discussed. In environmental data, the error estimates of data can be widely varying and non-negativity is often an essential feature of the underlying models. Thus it is concluded that PMF is better suited than FA or PCA in many environmental applications. Examples of successful applications of PMF are shown in companion papers.
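The weighted, nonnegativity-constrained objective can be attacked with generic weighted multiplicative updates, sketched below. Note this is a standard weighted-NMF update used here only to illustrate the objective; Paatero and Tapper's PMF solves the same problem with a different optimization scheme.

```python
import numpy as np

def weighted_mu_step(X, sigma, G, F, eps=1e-12):
    """One multiplicative update of each factor for the weighted problem
    min || (X - G F) / sigma ||_F^2  with G, F >= 0 (element-wise division)."""
    W = 1.0 / sigma ** 2                     # element-wise weights
    G *= ((W * X) @ F.T) / ((W * (G @ F)) @ F.T + eps)
    F *= (G.T @ (W * X)) / (G.T @ (W * (G @ F)) + eps)
    return G, F
```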
Article
It is now well understood that (1) it is possible to reconstruct sparse signals exactly from what appear to be highly incomplete sets of linear measurements and (2) that this can be done by constrained ℓ1 minimization. In this paper, we study a novel method for sparse signal recovery that in many situations outperforms ℓ1 minimization in the sense that substantially fewer measurements are needed for exact recovery. The algorithm consists of solving a sequence of weighted ℓ1-minimization problems where the weights used for the next iteration are computed from the value of the current solution. We present a series of experiments demonstrating the remarkable performance and broad applicability of this algorithm in the areas of sparse signal recovery, statistical estimation, error correction and image processing. Interestingly, superior gains are also achieved when our method is applied to recover signals with assumed near-sparsity in overcomplete representations, not by reweighting the ℓ1 norm of the coefficient sequence as is common, but by reweighting the ℓ1 norm of the transformed object. An immediate consequence is the possibility of highly efficient data acquisition protocols by improving on a technique known as Compressive Sensing.
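Each reweighted iteration is itself a linear program (splitting x into positive and negative parts), and the weights are recomputed from the current solution. A small self-contained example with SciPy, with problem sizes and the stabilizing constant chosen arbitrarily:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
m, n, k = 40, 100, 8
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

w = np.ones(n)                       # the first pass is plain l1 minimization
for _ in range(4):
    # min sum_i w_i |x_i| s.t. Ax = b, as an LP over x = xp - xm, xp, xm >= 0
    res = linprog(np.concatenate([w, w]),
                  A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=[(0, None)] * (2 * n), method="highs")
    x = res.x[:n] - res.x[n:]
    w = 1.0 / (np.abs(x) + 1e-3)     # large weights where x is small
print(np.linalg.norm(x - x_true))    # reweighting typically sharpens the recovery
```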
Article
We study the convergence properties of a (block) coordinate descent method applied to minimize a nondifferentiable (nonconvex) function f(x_1, ..., x_N) with certain separability and regularity properties. Assuming that f is continuous on a compact level set, the subsequence convergence of the iterates to a stationary point is shown when either f is pseudoconvex in every pair of coordinate blocks from among N-1 coordinate blocks or f has at most one minimum in each of N-2 coordinate blocks. If f is quasiconvex and hemivariate in every coordinate block, then the assumptions of continuity of f and compactness of the level set may be relaxed further. These results are applied to derive new (and old) convergence results for the proximal minimization algorithm, an algorithm of Arimoto and Blahut, and an algorithm of Han. They are applied also to a problem of blind source separation.
Article
We propose an efficient, hybrid Fourier-wavelet regularized deconvolution (ForWaRD) algorithm that performs noise regularization via scalar shrinkage in both the Fourier and wavelet domains. The Fourier shrinkage exploits the Fourier transform's economical representation of the colored noise inherent in deconvolution, whereas the wavelet shrinkage exploits the wavelet domain's economical representation of piecewise smooth signals and images. We derive the optimal balance between the amount of Fourier and wavelet regularization by optimizing an approximate mean-squared error (MSE) metric and find that signals with more economical wavelet representations require less Fourier shrinkage. ForWaRD is applicable to all ill-conditioned deconvolution problems, unlike the purely wavelet-based wavelet-vaguelette deconvolution (WVD); moreover, its estimate features minimal ringing, unlike the purely Fourier-based Wiener deconvolution. Even in problems for which the WVD was designed, we prove that ForWaRD's MSE decays with the optimal WVD rate as the number of samples increases. Further, we demonstrate that over a wide range of practical sample lengths, ForWaRD improves on WVD's performance.
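A strongly simplified, two-step sketch conveys the structure of such hybrid Fourier-wavelet deconvolution: a regularized (Wiener-like) inverse filter in Fourier followed by wavelet-domain soft thresholding, here with PyWavelets and hand-picked parameters rather than the MSE-optimized balance derived in the paper.

```python
import numpy as np
import pywt

rng = np.random.default_rng(5)
n = 1024
t = np.linspace(0.0, 1.0, n)
x = (t > 0.3).astype(float) - 0.5 * (t > 0.7)      # piecewise-constant signal
h = np.exp(-0.5 * ((np.arange(n) - n // 2) / 4.0) ** 2)
h /= h.sum()                                       # Gaussian blur kernel
Hf = np.fft.fft(np.roll(h, -n // 2))               # centered -> circular kernel
y = np.real(np.fft.ifft(np.fft.fft(x) * Hf)) + 0.01 * rng.standard_normal(n)

# Step 1 (Fourier shrinkage): regularized inverse filter that damps the
# frequencies where the blur kernel is weak and noise would explode.
alpha = 1e-3                                       # hand-picked balance parameter
x_four = np.real(np.fft.ifft(np.fft.fft(y) * np.conj(Hf) / (np.abs(Hf) ** 2 + alpha)))

# Step 2 (wavelet shrinkage): soft-threshold the wavelet coefficients of the
# Fourier estimate to remove residual colored noise while keeping the edges.
coeffs = pywt.wavedec(x_four, "db4", level=5)
coeffs = [coeffs[0]] + [pywt.threshold(c, 0.02, mode="soft") for c in coeffs[1:]]
x_hat = pywt.waverec(coeffs, "db4")
```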
Article
Shrinkage is a well-known and appealing denoising technique, introduced originally by Donoho and Johnstone in 1994. The use of shrinkage for denoising is known to be optimal for Gaussian white noise, provided that the sparsity of the signal's representation is enforced using a unitary transform. Still, shrinkage is also practiced with nonunitary, and even redundant, representations, typically leading to very satisfactory results. In this correspondence we shed some light on this behavior. The main argument in this work is that such simple shrinkage could be interpreted as the first iteration of an algorithm that solves the basis pursuit denoising (BPDN) problem. While the desired solution of BPDN is hard to obtain in general, we develop a simple iterative procedure for the BPDN minimization that amounts to stepwise shrinkage. We demonstrate how the simple shrinkage emerges as the first iteration of this novel algorithm. Furthermore, we show how shrinkage can be iterated, turning into an effective algorithm that minimizes the BPDN via simple shrinkage steps, in order to further strengthen the denoising effect.
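The iterated version of this idea is what is now commonly called ISTA; a minimal implementation for the BPDN objective makes the stepwise shrinkage explicit:

```python
import numpy as np

def ista(A, y, lam, iters=500):
    """Iterated shrinkage for BPDN: min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    Each iteration is a gradient step on the quadratic term followed by
    soft thresholding; the first iteration is exactly simple shrinkage."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x
```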
P. Comon, C. Jutten, Handbook of Blind Source Separation, Academic Press, 2010.
M. Zibulevsky, Blind source separation with relative Newton method, Proceedings of ICA, Independent Component Analysis, 897-902 (2003).
J. Kobarg, P. Maass, J. Oetjen, O. Tropp, E. Hirsch, C. Sagiv, M. Golbabaee, P. Vandergheynst, Numerical experiments with MALDI Imaging data, Advances in Computational Mathematics 40 (3) (2014).