Figure 6 - uploaded by Jean-Luc Starck
Illustration of masked PSFs (in Fourier space): the resolution ratio is 3 and the percentage of active data is 50%.

Source publication
Article
Full-text available
Blind Source Separation (BSS) is a challenging matrix factorization problem that plays a central role in multichannel imaging science. In a large number of applications, such as astrophysics, current unmixing methods are limited since real-world mixtures are generally affected by extra instrumental effects like blurring. Therefore, BSS has to be so...

Context in source publication

Context 1
... In addition, the noise level is fixed to 60 dB. Figure 6 illustrates two masked PSFs (the best-resolved one and the worst-resolved one) in Fourier space, and Figure 7 gives an example of 1 mixture out of 20. We can see that the sources are drowned in the "dirty" image and mixed with each other. ...
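The masked-PSF setup described in this excerpt can be mimicked with a small numerical sketch: an ideal low-pass filter in Fourier space (its cutoff set by the resolution ratio) multiplied by a binary mask keeping roughly 50% of the frequencies. The grid size, cutoff convention and random mask model are illustrative assumptions, not the paper's exact simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_psf_fourier(n=64, resolution_ratio=3, active_frac=0.5):
    """Toy masked PSF in Fourier space: an ideal low-pass filter whose
    cutoff shrinks with the resolution ratio, multiplied by a random
    binary mask that keeps `active_frac` of the Fourier samples."""
    fx = np.fft.fftfreq(n)
    FX, FY = np.meshgrid(fx, fx)
    radius = np.sqrt(FX**2 + FY**2)
    cutoff = 0.5 / resolution_ratio            # worst-resolved channel
    lowpass = (radius <= cutoff).astype(float)
    mask = rng.random((n, n)) < active_frac    # ~50% active data
    return lowpass * mask

H = masked_psf_fourier()
```

The product of a channel-dependent low-pass response and a sampling mask is what makes the joint deconvolution/separation problem both ill-posed and channel-inhomogeneous.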

Similar publications

Preprint
Full-text available
Tackling unsupervised source separation jointly with an additional inverse problem such as deconvolution is central for the analysis of multi-wavelength data. This becomes highly challenging when applied to large data sampled on the sphere such as those provided by wide-field observations in astrophysics, whose analysis requires the design of dedic...

Citations

... Building on the extensive literature on compressed sensing [84-86], sparsity-based component separation [87,88] and imaging [89-92], we thus developed an algorithm for taking full advantage of both the different and, in the case of the SZ effect, well-constrained spectral behaviour of the measured signals, as well as the information on the different spatial-correlation properties. In particular, we assume the total surface brightness I_ν in a given direction on the plane of the sky (x, y) and at a given frequency ν to be described as [equation omitted from excerpt]. Here, I_RS(x, y) is the surface brightness of the radio source computed at the reference frequency ν_0, whereas α(x, y) is the corresponding spatially varying spectral index. ...
Article
Full-text available
Galaxy clusters are the most massive gravitationally bound structures in the Universe, comprising thousands of galaxies and pervaded by a diffuse, hot intracluster medium (ICM) that dominates the baryonic content of these systems. The formation and evolution of the ICM across cosmic time¹ is thought to be driven by the continuous accretion of matter from the large-scale filamentary surroundings and energetic merger events with other clusters or groups. Until now, however, direct observations of the intracluster gas have been limited only to mature clusters in the latter three-quarters of the history of the Universe, and we have been lacking a direct view of the hot, thermalized cluster atmosphere at the epoch when the first massive clusters formed. Here we report the detection (about 6σ) of the thermal Sunyaev–Zeldovich (SZ) effect² in the direction of a protocluster. In fact, the SZ signal reveals the ICM thermal energy in a way that is insensitive to cosmological dimming, making it ideal for tracing the thermal history of cosmic structures³. This result indicates the presence of a nascent ICM within the Spiderweb protocluster at redshift z = 2.156, around 10 billion years ago. The amplitude and morphology of the detected signal show that the SZ effect from the protocluster is lower than expected from dynamical considerations and comparable with that of lower-redshift group-scale systems, consistent with expectations for a dynamically active progenitor of a local galaxy cluster.
... Indeed, GMCA offers a flexible framework to tackle specific separation subproblems; it has incidentally been the subject of several extensions, e.g. DecGMCA, which tackles joint deconvolution and separation when dealing with inhomogeneous observations [24], including on the sphere [25]. ...
Preprint
Full-text available
Blind source separation (BSS) algorithms are unsupervised methods that are the cornerstone of hyperspectral data analysis, allowing for physically meaningful data decompositions. Since BSS problems are ill-posed, their resolution requires efficient regularization schemes to better distinguish between the sources and yield interpretable solutions. For that purpose, we investigate a semi-supervised source separation approach in which we combine a projected alternating least-squares algorithm with a learning-based regularization scheme. In this article, we focus on constraining the mixing matrix to belong to a learned manifold by making use of generative models. Altogether, we show that this yields an innovative BSS algorithm with improved accuracy, which provides physically interpretable solutions. The proposed method, coined sGMCA, is tested on realistic hyperspectral astrophysical data in challenging scenarios involving strong noise, highly correlated spectra and unbalanced sources. The results highlight the significant benefit of the learned prior in reducing leakage between the sources, which allows an overall better disentanglement.
... Note that, in this model, physical sources with similar spectral behaviour are considered as a single "source" defining one column of the matrix S. Recall that solving for S and H would explicitly imply a source separation problem, that is, a non-linear, non-convex problem [66]. ...
Thesis
Full-text available
Upcoming radio telescopes such as the Square Kilometre Array (SKA) will provide vast amounts of data, allowing large images of the sky to be reconstructed at an unprecedented resolution and sensitivity over thousands of frequency channels. In this regard, wideband radio-interferometric imaging consists in recovering a 3D image of the sky from incomplete and noisy Fourier data, which is a highly ill-posed inverse problem. To regularize the inverse problem, advanced prior image models need to be tailored. Moreover, the underlying algorithms should be highly parallelized to scale with the vast data volumes provided and the petabyte image cubes to be reconstructed for SKA. The research developed in this thesis leverages convex optimization techniques to achieve precise and scalable imaging for wideband radio interferometry and further assess the degree of confidence in particular 3D structures present in the reconstructed cube. In the context of image reconstruction, we propose a new approach that decomposes the image cube into regular spatio-spectral facets, each associated with a sophisticated hybrid prior image model. The approach is formulated as an optimization problem with a multitude of facet-based regularization terms and block-specific data-fidelity terms. The underpinning algorithmic structure benefits from well-established convergence guarantees and exhibits useful functionalities such as preconditioning to accelerate convergence. Furthermore, it allows for parallel processing of all data blocks and image facets over a multiplicity of CPU cores, so that the bottleneck induced by the size of the image and data cubes is efficiently addressed via parallelization. The precision and scalability potential of the proposed approach are confirmed through the reconstruction of a 15 GB image cube of the Cyg A radio galaxy.
In addition, we propose a new method that enables analyzing the degree of confidence in particular 3D structures appearing in the reconstructed cube. This analysis is crucial due to the severe ill-posedness of the inverse problem. Moreover, it can help in making scientific decisions on the structures under scrutiny (e.g., confirming the existence of a second black hole in the Cyg A galaxy). The proposed method is posed as an optimization problem and solved efficiently with a modern convex optimization algorithm featuring preconditioning and splitting functionalities. The simulation results showcase the potential of the proposed method to scale to big-data regimes.
... In this case, coping with the now-heterogeneous data requires tackling an extra deconvolution step, thus leading to a joint deconvolution and blind source separation (DBSS) problem. A mathematically similar problem arises when the observations are composed of incomplete measurements, such as interferometric measurements [8,9] or compressive hyperspectral imaging [10,11]. The above mixture model is then substituted with the following: ...
... However, the proposed method is not applicable in our case since it only applies to compressively sensed measurements. More recently, we introduced the first joint DBSS method [9]. The proposed DecGMCA algorithm enforces the sparsity of the sources in some domain, represented by its transfer matrix Φ, by seeking a stationary point of the following cost function: ...
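Although the cost function itself is truncated in the excerpt above, the flavour of such a sparsity-enforcing alternating scheme can be sketched: a least-squares fit of the sources followed by soft-thresholding. This is an illustrative simplification (Φ is taken as the identity, i.e. the sources are assumed sparse in the direct domain, and the deconvolution step is omitted), not the exact DecGMCA update.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the l1 norm (sparsity-enforcing step)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sources_update(X, A, lam):
    """One sparsity-regularized source update in the spirit of a
    projected alternating least-squares scheme: a least-squares fit
    of A S ≈ X, followed by soft-thresholding of the sources."""
    S_ls, *_ = np.linalg.lstsq(A, X, rcond=None)
    return soft_threshold(S_ls, lam)

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 2))                            # mixing matrix
S_true = soft_threshold(rng.standard_normal((2, 100)), 1.0)  # sparse sources
X = A @ S_true                                             # noiseless mixtures
S_hat = sources_update(X, A, 0.1)
```

In the full algorithm, this source step alternates with a mixing-matrix update, and the threshold plays the role of the sparse regularization parameter.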
... In contrast to the standard case, analyzing spherical data raises extra difficulties due to the high computational cost of manipulating such data, which makes the design of a computationally efficient and reliable algorithm essential. We therefore first aim at extending the DecGMCA algorithm [9] to tackle joint deconvolution and separation problems on spherical data. As described in Section 2, the method is based on a projected alternating least-squares minimization in order to combine speed and precision. ...
Preprint
Full-text available
Tackling unsupervised source separation jointly with an additional inverse problem such as deconvolution is central to the analysis of multi-wavelength data. This becomes highly challenging when applied to large data sampled on the sphere, such as those provided by wide-field observations in astrophysics, whose analysis requires the design of dedicated, robust and yet effective algorithms. We therefore investigate a new joint deconvolution/sparse blind source separation method dedicated to data sampled on the sphere, coined SDecGMCA. It is based on a projected alternating least-squares minimization scheme, whose accuracy is shown to rely strongly on the regularization scheme in the present joint deconvolution/blind source separation setting. To this end, a regularization strategy is introduced that allows the design of a new robust and effective algorithm, which is key to analyzing large spherical data. Numerical experiments are carried out on toy examples and realistic astronomical data.
... In [19], a fast alternating minimization was proposed to deblur images using an augmented Lagrangian structure, with the image transformed into the Fourier domain. In [20], a novel deconvolution blind source separation method, DecGMCA (deconvolved generalized morphological component analysis), was developed based on a projected alternating least-squares method. ...
Article
In this paper, we propose a blind channel deconvolution method based on a sparse reconstruction framework exploiting a wideband dictionary, under the (relatively weak) assumption that the transmitted signal is well modelled as a sum of sinusoids. Using a Toeplitz-structured formulation of the received signal, we form an iterative blind deconvolution scheme, alternately estimating the underwater impulse response and the transmitted waveform. The resulting optimization problems are convex, and we formulate a computationally efficient solver using the Alternating Direction Method of Multipliers (ADMM). We illustrate the performance of the resulting estimator using both simulated and measured underwater signals.
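The Toeplitz-structured formulation and the alternating estimation can be illustrated with a minimal sketch: the received signal is written as a convolution matrix times the unknown waveform, and with one factor held fixed the other is a linear least-squares problem. This uses ordinary least squares standing in for the paper's ADMM solver, and the impulse response and waveform below are made up for illustration.

```python
import numpy as np

def conv_matrix(h, n):
    """Toeplitz convolution matrix T such that T @ s equals
    np.convolve(h, s) for any length-n input s."""
    m = len(h) + n - 1
    T = np.zeros((m, n))
    for j in range(n):
        T[j:j + len(h), j] = h
    return T

h = np.array([1.0, 0.5, 0.25])               # toy impulse response
s = np.array([1.0, -1.0, 2.0, 0.0, 1.0])     # toy transmitted waveform
y = conv_matrix(h, len(s)) @ s               # received signal

# One alternating step: with the waveform fixed, the channel estimate
# is a linear least-squares problem (convolution commutes, so the
# roles of h and s can be swapped in the Toeplitz structure).
h_hat, *_ = np.linalg.lstsq(conv_matrix(s, len(h)), y, rcond=None)
```

Alternating this step with the corresponding waveform update (each convex on its own) is the basic structure behind such blind deconvolution schemes.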
... Furthermore, pALS allows for simple and robust heuristics to fix the sparse regularization parameters Λ [8]. Hence, and following the architecture of DecGMCA [7], the proposed algorithm will build upon a sparsity-enforcing pALS, whose iterates are the following: ...
... {ε_{n,l}} are the regularization coefficients, which depend on the frequency l and on the source n. In [7], these parameters were fixed to an ad hoc small value (e.g. 10⁻³). However, these parameters largely impact the quality of the separation. ...
... • Strategy #1 (naive strategy): the regularization parameters are chosen independently of the frequency l and the source n: ε_{n,l} = c.
• Strategy #2 (the strategy used in DecGMCA [7]): ε_{n,l} = c λ_max(M[l]), where λ_max(·) returns the greatest eigenvalue. ...
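The two strategies quoted above can be written down directly. Here M[l] is taken to be the normal matrix A[l]ᵀA[l] at frequency l, which is an assumption about the excerpt's notation; the dimensions are made up for illustration.

```python
import numpy as np

def eps_strategy_1(c, n_sources, n_freqs):
    """Naive strategy: one constant for every source n and frequency l."""
    return np.full((n_sources, n_freqs), c)

def eps_strategy_2(c, M_list):
    """DecGMCA-style strategy: eps_l = c * lambda_max(M[l]); the value
    is shared across sources at a given frequency.  eigvalsh returns
    eigenvalues in ascending order, so [-1] is the greatest one."""
    return c * np.array([np.linalg.eigvalsh(M)[-1] for M in M_list])

rng = np.random.default_rng(2)
A_per_freq = rng.standard_normal((3, 4, 2))   # hypothetical mixing matrix per frequency
M_list = [A.T @ A for A in A_per_freq]        # M[l] = A[l]^T A[l]
eps_l = eps_strategy_2(1e-3, M_list)
```

Scaling the regularization by λ_max(M[l]) adapts it to the conditioning of each frequency's subproblem, which the excerpt identifies as critical for separation quality.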
Preprint
Blind source separation is one of the major analysis tools for extracting relevant information from multichannel data. While being central, joint deconvolution and blind source separation (DBSS) methods are scarce. To that end, a DBSS algorithm coined SDecGMCA is proposed. It is designed to process data sampled on the sphere, allowing large-field data analysis in radio astronomy.
... This developed TF-BSS algorithm can accurately determine the number of overlapping sources at each auto-source TF point by applying the Gerschgorin disk estimation (GDE) algorithm to some clustered TF blocks, which helps to recover multipath signals on the basis of the initial mixing matrix. Finally, the least-squares (LS) algorithm [41] is applied to optimize the estimation of the mixing matrix with the recovered sources. The updated mixing matrix from the last step benefits the source recovery, so the estimates of the mixing matrix and sources can be gradually improved by iteratively implementing the above process until the predefined conditions are satisfied. ...
... As discussed earlier, because of the strong noise in real wireless communication environments, the initial estimate Â obtained by the JADE algorithm is rough. Inspired by the works in [41,53-55], where an alternating minimization strategy is employed to estimate A and x(t), the estimated sources x̂(t) from the second stage can be employed to further refine the estimation of A through ...
Article
Blind separation of multipath fading signals with impulsive interference and Gaussian noise is a very challenging issue due to multipath effects, which are often encountered in practical scenarios. Since the strong coherence among multipath signals leads to extreme superposition in the time-frequency (TF) domain, this paper proposes an iterative three-stage blind source separation (ITS-BSS) algorithm for the separation of coherent multipath signals in the presence of impulsive and Gaussian noise. Specifically, an initial estimate of the mixing matrix is first obtained by some non-TF-based algorithms. Secondly, a subspace-based TF-BSS algorithm is developed to determine the number of sources contributing at each auto-source TF point and then reconstruct the corresponding sources. Thirdly, the reconstructed sources at the current iteration are used to further improve the estimation accuracy of the mixing matrix based on the least-squares (LS) algorithm. The last two stages are repeated by iteratively updating the mixing matrix and sources until satisfactory performance is achieved or a predefined number of iterations is reached. Numerical results on multipath phase-shift keying (PSK) and quadrature amplitude modulation (QAM) signals plus impulsive noise under various signal-to-noise ratio (SNR) conditions are provided to demonstrate the feasibility and effectiveness of the proposed ITS-BSS algorithm.
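The least-squares refinement of the mixing matrix used in the third stage has a simple closed form. A minimal sketch, with made-up dimensions and noiseless data, assuming the recovered sources S and the mixtures X are stacked row-wise:

```python
import numpy as np

def ls_mixing_update(X, S):
    """Least-squares mixing-matrix refinement given recovered sources:
    argmin_A ||X - A S||_F^2  =>  A = X S^T (S S^T)^{-1}."""
    return X @ S.T @ np.linalg.inv(S @ S.T)

rng = np.random.default_rng(3)
A_true = rng.standard_normal((4, 2))       # 4 channels, 2 sources
S = rng.standard_normal((2, 200))          # recovered source waveforms
X = A_true @ S                             # noiseless mixtures
A_hat = ls_mixing_update(X, S)
```

Iterating this update with the source-reconstruction stage is what lets the mixing-matrix and source estimates improve each other across iterations.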
... Ultimately, this further deconvolution entails a loss of the smallest-scale angular information available in the observed maps. We do not discuss this issue here, since a version of GMCA that performs the beam deconvolution at the same time as the blind source separation has been tested on two-dimensional data (DecGMCA, Jiang et al. 2017) and efforts are ongoing to extend DecGMCA to data sampled on the sphere (Carloni-Gertosio et al. 2020); thus, the results of this paper would generically hold for a DecGMCA application, with the advantage of retaining the fully available small-scale information. ...
... Concerning the beam and noise choices, in this work we consider a single-dish experiment with the characteristics of a radio telescope like MeerKAT. Nevertheless, our analysis is meaningful for other experimental set-ups, including interferometry-driven 21 cm intensity mapping experiments such as CHIME, Tianlai, HIRAX or the proposed PUMA: as the DecGMCA version of the algorithm performs deconvolution at the same time as the source separation (Jiang et al. 2017; Carloni-Gertosio et al. 2020), it is possible to work directly with the visibility data. This constitutes another interesting line of work. ...
Preprint
21 cm intensity mapping has emerged as a promising technique to map the large-scale structure of the Universe. However, the presence of foregrounds with amplitudes orders of magnitude larger than the cosmological signal constitutes a critical challenge. Here, we test the sparsity-based algorithm Generalised Morphological Component Analysis (GMCA) as a blind component separation technique for this class of experiments. We test the GMCA performance against realistic full-sky mock temperature maps that include, besides astrophysical foregrounds, also a fraction of the polarized part of the signal leaked into the unpolarized one, a very troublesome foreground to subtract, usually referred to as polarization leakage. To our knowledge, this is the first time the removal of such a component is performed with no prior assumption. We assess the success of the cleaning by comparing the true and recovered power spectra, in the angular and radial directions. In the best scenario considered, GMCA is able to recover the input angular (radial) power spectrum with an average bias of ~5% for ℓ > 25 (20-30% for k_∥ ≳ 0.02 h⁻¹ Mpc), in the presence of polarization leakage. Our results are robust even when up to 40% of the channels are missing, mimicking Radio Frequency Interference (RFI) flagging of the data. In perspective, we endorse improvements in both the cleaning methods and the data simulations, the latter becoming more and more realistic and challenging the former, to make 21 cm intensity mapping competitive.