Underdetermined Sparse Blind Source Separation of Nonnegative and Partially Overlapped Data.
ABSTRACT: We study the solvability of sparse blind separation of $n$ nonnegative sources from $m$ linear mixtures in the underdetermined regime $m<n$. The geometric properties of the mixing matrix and the sparseness structure of the source matrix are closely tied to the identifiability of the mixing matrix. We first illustrate and establish necessary and sufficient conditions for unique separation in the case of $m$ mixtures and $m+1$ sources, and develop a novel algorithm based on data geometry, source sparseness, and $\ell_1$ minimization. We then extend the results to any order $m\times n$, $3\leq m<n$, based on the degree of degeneracy of the columns of the mixing matrix. Numerical results substantiate the proposed solvability conditions and show satisfactory performance of our approach.
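The $\ell_1$ minimization step can be made concrete. For a nonnegative source vector $s$, $\|s\|_1 = \sum_i s_i$, so recovering the sparsest nonnegative source consistent with a mixture sample $x = As$ reduces to a linear program. A minimal sketch under these assumptions (the function name and the toy $2\times 3$ mixing matrix are illustrative, not the paper's setup), using SciPy's `linprog`:

```python
import numpy as np
from scipy.optimize import linprog

def l1_separate(A, x):
    """Recover a nonnegative sparse source vector s from one mixture
    sample x = A @ s by l1 minimization. Since s >= 0 implies
    ||s||_1 = sum(s), this is the linear program:
        min 1^T s   s.t.   A s = x,  s >= 0.
    """
    _, n = A.shape
    res = linprog(c=np.ones(n), A_eq=A, b_eq=x, bounds=[(0, None)] * n)
    return res.x

# Toy underdetermined example: m = 2 mixtures, n = 3 sources.
A = np.array([[1.0, 0.0, 0.6],
              [0.0, 1.0, 0.6]])
s_true = np.array([0.0, 0.0, 2.0])   # sparse nonnegative source
x = A @ s_true                       # observed mixture sample
s_hat = l1_separate(A, x)
```

Here the $\ell_1$ objective selects the sparse solution $[0,0,2]$ over denser nonnegative alternatives that reproduce the same mixture.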
Source available from: arXiv
ABSTRACT: In this paper, we develop a novel blind source separation (BSS) method for nonnegative and correlated data, particularly for nearly degenerate data. The motivation lies in nuclear magnetic resonance (NMR) spectroscopy, where multiple NMR spectra of mixtures are recorded to identify chemical compounds with similar structures (degeneracy). A number of successful approaches solve BSS problems by exploiting the nature of the source signals. For instance, independent component analysis (ICA) separates statistically independent (orthogonal) source signals. However, signal orthogonality is not guaranteed in many real-world problems. The new BSS method developed here deals with nonorthogonal signals. The independence assumption is replaced by a condition requiring that each source signal have dominant interval(s) (DI) over the others. Additionally, the mixing matrix is assumed to be nearly singular. The method first estimates the mixing matrix by exploiting geometry in data clustering. Due to the degeneracy of the data, a small deviation in this estimate may introduce errors (spurious peaks of negative values, in most cases) in the output. To resolve this challenging problem and improve the robustness of the separation, we develop methods in two directions. One technique finds a better estimate of the mixing matrix by allowing a constrained perturbation of the clustering output, which is achieved by quadratic programming. The other seeks sparse source signals by exploiting the DI condition, which solves an $\ell_1$ optimization problem. We present numerical results on NMR data to show the performance and reliability of the method in applications arising in NMR spectroscopy. (10/2011)
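The geometric idea behind the clustering step can be sketched in the simplest $2\times 2$ case: under the DI condition, at samples where only one source is active the data column is proportional to a column of the mixing matrix, so the extreme directions of the data scatter identify those columns (up to scaling and permutation). A toy sketch under that assumption (the function name and the angle-based selection are illustrative, not the paper's clustering algorithm):

```python
import numpy as np

def estimate_mixing_2d(X):
    """Estimate a 2x2 mixing matrix from nonnegative mixtures X (2 x T),
    assuming each source has a dominant interval (DI): some samples where
    only that source is active. At such samples the data column points
    along a column of A, so the extreme directions of the scatter
    recover A up to column scaling and permutation."""
    ang = np.arctan2(X[1], X[0])          # direction of each data column
    a1 = X[:, np.argmin(ang)]             # one extreme ray
    a2 = X[:, np.argmax(ang)]             # the other extreme ray
    A_hat = np.column_stack([a1, a2])
    return A_hat / A_hat.sum(axis=0)      # normalize columns to unit sum

# Toy data: two sources, each with a sample where it is active alone.
A_true = np.array([[0.9, 0.2],
                   [0.1, 0.8]])           # columns sum to 1
S = np.array([[1.0, 0.0, 0.3],
              [0.0, 1.0, 0.5]])           # samples 0 and 1 satisfy DI
X = A_true @ S
A_hat = estimate_mixing_2d(X)
```

When the data are nearly degenerate (the columns of $A$ almost parallel), small errors in these extreme rays are what the paper's quadratic-programming perturbation step is designed to correct.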
ABSTRACT: Nonnegative matrix factorization (NMF) has become a very popular technique in machine learning because it automatically extracts meaningful features through a sparse and part-based representation. However, NMF has the drawback of being highly ill-posed; that is, there typically exist many different but equivalent factorizations. In this paper, we introduce a completely new way of obtaining more well-posed NMF problems whose solutions are sparser. Our technique is based on preprocessing the nonnegative input data matrix, and relies on the theory of M-matrices and the geometric interpretation of NMF. This approach provably leads to optimal and sparse solutions under the separability assumption of Donoho and Stodden (NIPS, 2003) and, for rank-three matrices, makes the number of exact factorizations finite. We illustrate the effectiveness of our technique on several image datasets. (04/2012)
ABSTRACT: In this paper, we study the nonnegative matrix factorization problem under the separability assumption (that is, a small subset of the columns of the nonnegative input data matrix spans a cone containing all of its columns), which is equivalent to the hyperspectral unmixing problem under the linear mixing model and the pure-pixel assumption. We present a family of fast recursive algorithms and prove that they are robust to small perturbations of the input data matrix. This family generalizes several existing hyperspectral unmixing algorithms and hence provides, for the first time, a theoretical justification of their better practical performance. IEEE Transactions on Pattern Analysis and Machine Intelligence, 08/2012.
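A classical recursive algorithm of the kind this family covers is the successive projection algorithm (SPA): under separability, greedily pick the column of largest norm as a generator of the cone, project all columns onto its orthogonal complement, and repeat. A minimal sketch (illustrative only, not the authors' robust variant):

```python
import numpy as np

def spa(X, r):
    """Successive Projection Algorithm sketch. Under separability,
    r columns of X generate a cone containing all columns of X.
    Greedily select the column of largest Euclidean norm, then
    project the data onto the orthogonal complement of that column;
    repeat r times. Returns the selected column indices."""
    R = X.astype(float).copy()
    idx = []
    for _ in range(r):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))
        idx.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R = R - np.outer(u, u @ R)        # project out direction u
    return idx

# Toy separable data: columns 0 and 3 are the pure (extreme) columns,
# columns 1 and 2 are nonnegative mixtures of them.
X = np.array([[2.0, 1.0, 0.5, 0.0],
              [0.0, 1.0, 1.5, 2.0]])
picked = spa(X, 2)
```

In the hyperspectral interpretation, the selected indices correspond to pure pixels; the robustness results in the abstract concern how such greedy selections behave when `X` is perturbed by noise.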