Table 1 - uploaded by Jean-Luc Starck
Source publication
Blind Source Separation (BSS) is a challenging matrix factorization problem that plays a central role in multichannel imaging science. In a large number of applications, such as astrophysics, current unmixing methods are limited since real-world mixtures are generally affected by extra instrumental effects like blurring. Therefore, BSS has to be so...
Context in source publication
Context 1
... by computing residuals between the estimated and ground-truth sources, Figure 9 displays the error maps of DecGMCA (left column) and ForWaRD+GMCA (right column). We also compare their relative errors, which are reported in Table 1. DecGMCA is very accurate, with relative errors of 0.14%, 0.27% and 0.36% for the three sources respectively, which shows that our estimated sources are in good agreement with the ground truth. ...
Similar publications
Tackling unsupervised source separation jointly with an additional inverse problem such as deconvolution is central for the analysis of multi-wavelength data. This becomes highly challenging when applied to large data sampled on the sphere such as those provided by wide-field observations in astrophysics, whose analysis requires the design of dedic...
Citations
... The authors analyzed the effects of the observation process on the minimum-volume simplex (MVS) enclosing the data and showed that a deconvolution step is necessary to unmix the data correctly. Jiang et al. [32] proposed a new approach, called DecGMCA, to jointly solve the blind source separation (BSS) and deconvolution problems, based on sparse signal modeling and an efficient alternating projected least-squares algorithm. They highlighted the improvement in unmixing obtained by solving BSS and deconvolution jointly instead of treating the two problems independently. ...
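The alternating projected least-squares scheme that DecGMCA builds on can be illustrated with a minimal sketch: a Tikhonov-regularized source update followed by a sparsity-enforcing soft threshold, alternating with a least-squares mixing-matrix update projected onto unit-norm columns. This is a toy version only: the deconvolution step (the per-channel blur handled in Fourier space by DecGMCA) is deliberately omitted, and all function names and parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_threshold(x, lam):
    """Proximal operator of the l1 norm: enforces sparsity of the sources."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def projected_als(Y, n_sources, n_iter=50, lam=0.05, eps=1e-3):
    """Toy projected alternating least-squares for sparse BSS.

    Y : (n_channels, n_pixels) observations, assumed Y ~ A @ S.
    The deconvolution step of DecGMCA is omitted for brevity.
    """
    m, _ = Y.shape
    A = rng.standard_normal((m, n_sources))
    A /= np.linalg.norm(A, axis=0)
    for _ in range(n_iter):
        # Source update: Tikhonov-regularized least squares, then sparsity prox.
        S = np.linalg.solve(A.T @ A + eps * np.eye(n_sources), A.T @ Y)
        S = soft_threshold(S, lam)
        # Mixing-matrix update: least squares, then projection onto unit columns.
        A = Y @ S.T @ np.linalg.pinv(S @ S.T + eps * np.eye(n_sources))
        A /= np.maximum(np.linalg.norm(A, axis=0), 1e-12)
    return A, S
```

With sparse synthetic sources and a random mixing matrix, the product A @ S typically reconstructs Y closely; the scale and permutation ambiguities inherent to BSS of course remain.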
This paper presents novel unmixing and demosaicing methods for snapshot spectral imaging (SSI) systems utilizing Fabry-Perot filters. Unlike conventional approaches that perform unmixing after image restoration or demosaicing, our proposed methods leverage Fabry-Perot filter deconvolution and extend the “pure pixel” framework to the SSI sensor patch level, enabling improved unmixing accuracy and introducing the concept of localized spectral purity. Through extensive experimentation on synthetically generated data and real images captured by SSI cameras, we demonstrate the superiority of our methods over state-of-the-art techniques. Furthermore, our results showcase the effectiveness of the proposed approach over our recently proposed joint unmixing and demosaicing method based on low-rank matrix completion.
... Building on the extensive literature on compressed sensing [84-86], sparsity-based component separation [87,88] and imaging [89-92], we thus developed an algorithm for taking full advantage of both the different and, in the case of the SZ effect, well-constrained spectral behaviour of the measured signals, as well as the information on the different spatial-correlation properties. In particular, we assume the total surface brightness I_ν in a given direction on the plane of sky (x, y) and at a given frequency ν to be described as ... Here I_RS(x, y) is the surface brightness of the radio source computed at the reference frequency ν_0, whereas α(x, y) is the corresponding spatially varying spectral index. ...
Galaxy clusters are the most massive gravitationally bound structures in the Universe, comprising thousands of galaxies and pervaded by a diffuse, hot intracluster medium (ICM) that dominates the baryonic content of these systems. The formation and evolution of the ICM across cosmic time¹ is thought to be driven by the continuous accretion of matter from the large-scale filamentary surroundings and energetic merger events with other clusters or groups. Until now, however, direct observations of the intracluster gas have been limited only to mature clusters in the latter three-quarters of the history of the Universe, and we have been lacking a direct view of the hot, thermalized cluster atmosphere at the epoch when the first massive clusters formed. Here we report the detection (about 6σ) of the thermal Sunyaev–Zeldovich (SZ) effect² in the direction of a protocluster. In fact, the SZ signal reveals the ICM thermal energy in a way that is insensitive to cosmological dimming, making it ideal for tracing the thermal history of cosmic structures³. This result indicates the presence of a nascent ICM within the Spiderweb protocluster at redshift z = 2.156, around 10 billion years ago. The amplitude and morphology of the detected signal show that the SZ effect from the protocluster is lower than expected from dynamical considerations and comparable with that of lower-redshift group-scale systems, consistent with expectations for a dynamically active progenitor of a local galaxy cluster.
... Indeed, GMCA offers a flexible framework to tackle specific separation subproblems; it has incidentally been the subject of several extensions, e.g. DecGMCA, which tackles joint deconvolution and separation when dealing with inhomogeneous observations [24], including on the sphere [25]. ...
Blind source separation (BSS) algorithms are unsupervised methods, which are the cornerstone of hyperspectral data analysis by allowing for physically meaningful data decompositions. Since BSS problems are ill-posed, their resolution requires efficient regularization schemes to better distinguish between the sources and yield interpretable solutions. For that purpose, we investigate a semi-supervised source separation approach in which we combine a projected alternating least-squares algorithm with a learning-based regularization scheme. In this article, we focus on constraining the mixing matrix to belong to a learned manifold by making use of generative models. Altogether, we show that this allows for an innovative BSS algorithm with improved accuracy, which provides physically interpretable solutions. The proposed method, coined sGMCA, is tested on realistic hyperspectral astrophysical data in challenging scenarios involving strong noise, highly correlated spectra and unbalanced sources. The results highlight the significant benefit of the learned prior in reducing the leakage between the sources, which allows an overall better disentanglement.
... Note that, in this model, physical sources with similar spectral behaviour are treated as a single "source" defining one column of the matrix S. Recall that solving for S and H would explicitly imply a source separation problem, which is a non-linear, non-convex problem [66]. ...
Upcoming radio telescopes such as the Square Kilometre Array (SKA) will provide vast amounts of data, allowing large images of the sky to be reconstructed at unprecedented resolution and sensitivity over thousands of frequency channels. In this regard, wideband radio-interferometric imaging consists in recovering a 3D image of the sky from incomplete and noisy Fourier data, which is a highly ill-posed inverse problem. To regularize the inverse problem, advanced prior image models need to be tailored. Moreover, the underlying algorithms should be highly parallelized to scale with the vast data volumes provided and the petabyte image cubes to be reconstructed for SKA. The research developed in this thesis leverages convex optimization techniques to achieve precise and scalable imaging for wideband radio interferometry, and further assesses the degree of confidence in particular 3D structures present in the reconstructed cube.
In the context of image reconstruction, we propose a new approach that decomposes the image cube into regular spatio-spectral facets, each associated with a sophisticated hybrid prior image model. The approach is formulated as an optimization problem with a multitude of facet-based regularization terms and block-specific data-fidelity terms. The underpinning algorithmic structure benefits from well-established convergence guarantees and exhibits interesting functionalities such as preconditioning to accelerate convergence. Furthermore, it allows for parallel processing of all data blocks and image facets over a multiplicity of CPU cores, so that the bottleneck induced by the size of the image and data cubes is efficiently addressed via parallelization. The precision and scalability potential of the proposed approach are confirmed through the reconstruction of a 15 GB image cube of the Cyg A radio galaxy.
In addition, we propose a new method that enables analyzing the degree of confidence in particular 3D structures appearing in the reconstructed cube. This analysis is crucial due to the high ill-posedness of the inverse problem. Besides, it can help in making scientific decisions on the structures under scrutiny (e.g., confirming the existence of a second black hole in the Cyg A galaxy). The proposed method is posed as an optimization problem and solved efficiently with a modern convex optimization algorithm with preconditioning and splitting functionalities. The simulation results showcase the potential of the proposed method to scale to big data regimes.
... In this case, coping with the resulting heterogeneous data requires an extra deconvolution step, thus leading to a joint deconvolution and blind source separation (DBSS) problem. A mathematically similar problem arises when the observations are composed of incomplete measurements, such as interferometric measurements [8,9] or compressive hyperspectral imaging [10,11]. The above mixture model is then substituted with the following: ...
... However, the proposed method is not compatible with our case since it only applies to compressively sensed measurements. More recently, we introduced the first joint DBSS method [9]. The proposed DecGMCA algorithm enforces the sparsity of the sources in some domain, represented by its transfer matrix Φ, by seeking a stationary point of the following cost function: ...
... In contrast to the standard case, analyzing spherical data raises extra difficulties due to the high computational cost of its manipulation, which makes the design of a computationally efficient and reliable algorithm essential. We therefore first aim at extending the DecGMCA algorithm [9] to tackle joint deconvolution and separation problems from spherical data. As described in Section 2, the method is based on a projected alternating least-squares minimization in order to combine speed and accuracy. ...
Tackling unsupervised source separation jointly with an additional inverse problem such as deconvolution is central to the analysis of multi-wavelength data. This becomes highly challenging when applied to large data sampled on the sphere, such as those provided by wide-field observations in astrophysics, whose analysis requires the design of dedicated, robust and yet effective algorithms. We therefore investigate a new joint deconvolution/sparse blind source separation method dedicated to data sampled on the sphere, coined SDecGMCA. It is based on a projected alternating least-squares minimization scheme, whose accuracy is shown to rely strongly on the regularization scheme in the present joint deconvolution/blind source separation setting. To this end, a regularization strategy is introduced that allows the design of a new robust and effective algorithm, which is key to analyzing large spherical data. Numerical experiments are carried out on toy examples and realistic astronomical data.
... In [19], a fast alternating minimization was proposed to deblur images using an augmented Lagrangian structure, with the image transformed into the Fourier domain. In [20], a novel deconvolution blind source separation method, DecGMCA (deconvolved generalized morphological component analysis), was developed based on an alternating projected least-squares method. ...
In this paper, we propose a blind channel deconvolution method based on a sparse reconstruction framework exploiting a wideband dictionary under the (relatively weak) assumption that the transmitted signal may be assumed to be well modelled as a sum of sinusoids. Using a Toeplitz structured formulation of the received signal, we form an iterative blind deconvolution scheme, alternatively estimating the underwater impulse response and the transmitted waveform. The resulting optimization problems are convex, and we formulate a computationally efficient solver using the Alternating Direction Method of Multipliers (ADMM). We illustrate the performance of the resulting estimator using both simulated and measured underwater signals.
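The alternating structure described in this abstract can be conveyed with a stripped-down sketch: a Toeplitz convolution matrix is built from the current channel estimate, the waveform is updated by regularized least squares, and the roles are then swapped. Everything ADMM-specific (the wideband dictionary, the sparsity penalties, the splitting variables) is omitted here; the function names, problem sizes and the ridge parameter are illustrative assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.linalg import toeplitz

def conv_matrix(v, n):
    """Toeplitz matrix T such that T @ x == np.convolve(v, x) for len(x) == n."""
    m = len(v) + n - 1
    col = np.concatenate([v, np.zeros(m - len(v))])
    row = np.zeros(n)
    row[0] = v[0]
    return toeplitz(col, row)

def blind_deconv(y, n_h, n_s, n_iter=100, mu=1e-6, seed=1):
    """Toy alternating least-squares blind deconvolution of y ~ h * s.

    Plain ridge-regularized least squares stands in for the paper's
    ADMM solver; only the alternating Toeplitz structure is retained.
    """
    rng = np.random.default_rng(seed)
    h = rng.standard_normal(n_h)
    for _ in range(n_iter):
        # Fix the channel h, solve for the transmitted waveform s.
        H = conv_matrix(h, n_s)
        s = np.linalg.solve(H.T @ H + mu * np.eye(n_s), H.T @ y)
        # Fix the waveform s, solve for the channel impulse response h.
        S = conv_matrix(s, n_h)
        h = np.linalg.solve(S.T @ S + mu * np.eye(n_h), S.T @ y)
        h /= np.linalg.norm(h)  # remove the inherent scale ambiguity
    # Recompute s so the returned pair is consistent with the final h.
    H = conv_matrix(h, n_s)
    s = np.linalg.solve(H.T @ H + mu * np.eye(n_s), H.T @ y)
    return h, s
```

Each half-step is the exact minimizer given the other factor, so the data-fit residual is non-increasing across iterations; the blind problem still retains a scale (and, in general, shift) ambiguity between h and s, which the normalization only partially fixes.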
... Furthermore, pALS allows for simple and robust heuristics to fix the sparse regularization parameters Λ [8]. Hence, and following the architecture of DecGMCA [7], the proposed algorithm builds upon a sparsity-enforcing pALS, whose iterates are the following: ...
... {ε_{n,l}} are the regularization coefficients, which depend on the frequency l and on the source n. In [7], these parameters were fixed to an ad hoc small value (e.g. 10^{-3}). However, these parameters largely impact the quality of the separation. ...
... • Strategy #1 (naive strategy): the regularization parameters are chosen independently of the frequency l and the source n: ε_{n,l} = c. • Strategy #2 (strategy used in DecGMCA [7]): ε_{n,l} = c λ_max(M[l]), where λ_max(·) returns the greatest eigenvalue. ...
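The two strategies quoted above can be written down directly. The snippet below is a hedged illustration: the per-frequency matrices M[l], the constant c and the function name are toy assumptions (M[l] is taken symmetric, as it is in least-squares normal equations), not the exact quantities of the cited works.

```python
import numpy as np

def regularization_coefficients(M_list, n_sources, c=1e-3, strategy=2):
    """Compute the coefficients eps[n, l] per the two strategies above.

    Strategy 1 (naive): eps[n, l] = c for every source n and frequency l.
    Strategy 2 (DecGMCA-style): eps[n, l] = c * lambda_max(M[l]).
    """
    eps = np.empty((n_sources, len(M_list)))
    for l, M in enumerate(M_list):
        if strategy == 1:
            eps[:, l] = c
        else:
            # eigvalsh assumes M symmetric; eigenvalues come back ascending,
            # so the last one is the greatest eigenvalue lambda_max(M[l]).
            eps[:, l] = c * np.linalg.eigvalsh(M)[-1]
    return eps
```

For instance, with M[l] = k·I the second strategy yields ε_{n,l} = c·k, i.e. the damping automatically scales with the conditioning of each per-frequency system.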
Blind source separation is one of the major analysis tools for extracting relevant information from multichannel data. Despite being central, joint deconvolution and blind source separation (DBSS) methods are scarce. To that end, a DBSS algorithm coined SDecGMCA is proposed. It is designed to process data sampled on the sphere, allowing large-field data analysis in radio astronomy.
... The developed TF-BSS algorithm can accurately determine the number of overlapping sources at each auto-source TF point by applying the Gerschgorin disk estimation (GDE) algorithm to some clustered TF blocks, which helps to recover multipath signals on the basis of the initial mixing matrix. Finally, the least-squares (LS) algorithm [41] is applied to optimize the estimation of the mixing matrix with the recovered sources. The mixing matrix updated in this last step benefits the source recovery, so the estimates of the mixing matrix and sources can be gradually improved by iterating the above process until the predefined conditions are satisfied. ...
... As discussed earlier, because of the strong noise in real wireless communication environments, the initial Â obtained by the JADE algorithm is rough. Inspired by the works in [41,53-55], where an alternating minimization strategy is employed to estimate A and x(t), the estimated sources x̂(t) from the second stage can be employed to further refine the estimation of A through ...
Blind separation of multipath fading signals with impulsive interference and Gaussian noise is a very challenging issue due to multipath effects, which are often encountered in practical scenarios. Since the strong coherence among multipath signals leads to extreme superposition in the time-frequency (TF) domain, this paper proposes an iterative three-stage blind source separation (ITS-BSS) algorithm for the separation of coherent multipath signals in the presence of impulsive and Gaussian noise. Specifically, an initial estimate of the mixing matrix is first obtained by non-TF-based algorithms. Secondly, a subspace-based TF-BSS algorithm is developed to determine the number of sources contributing at each auto-source TF point and then reconstruct the corresponding sources. Thirdly, the sources reconstructed at the current iteration are used to further improve the estimation accuracy of the mixing matrix based on the least-squares (LS) algorithm. The last two stages are repeated by iteratively updating the mixing matrix and sources until satisfactory performance is achieved or a predefined number of iterations is reached. Numerical results on multipath phase-shift keying (PSK) and quadrature amplitude modulation (QAM) signals plus impulsive noise under various signal-to-noise ratio (SNR) conditions are provided to demonstrate the feasibility and effectiveness of the proposed ITS-BSS algorithm.
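The least-squares refinement of the mixing matrix in the third stage admits a one-line closed form, Â = X Ŝ⁺, using the Moore-Penrose pseudo-inverse of the reconstructed sources. The sketch below (function and variable names are illustrative, not from the paper) shows that with perfectly recovered sources this update returns the true mixing matrix.

```python
import numpy as np

def ls_mixing_update(X, S_hat):
    """Least-squares mixing-matrix refinement.

    Solves argmin_A ||X - A @ S_hat||_F in closed form via the
    Moore-Penrose pseudo-inverse of the recovered sources S_hat.
    X     : (n_channels, n_samples) observations
    S_hat : (n_sources, n_samples) sources from the current iteration
    """
    return X @ np.linalg.pinv(S_hat)
```

In the iterative scheme above, this update would alternate with the TF-domain source reconstruction, each stage tightening the other until the stopping conditions are met.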