Chapter · PDF Available

Recent Advances in Hyperspectral Unmixing Using Sparse Techniques and Deep Learning


Abstract

Spectral unmixing is an important technique for remotely sensed hyperspectral image interpretation that expresses each (possibly mixed) pixel vector as a combination of pure spectral signatures (endmembers) and their fractional abundances. Recently, sparse unmixing and deep learning have emerged as two powerful approaches for spectral unmixing. In this chapter, we focus on two particularly innovative contributions. First, we provide an overview of recent advances in semi-supervised sparse unmixing algorithms, with particular emphasis on techniques that incorporate spatial–contextual information for improved scene interpretation. These algorithms require a spectral library of signatures, available a priori, to conduct the unmixing. Then, we describe new developments in the use of deep learning for spectral unmixing, focusing on a new fully unsupervised deep autoencoder network (DAEN) method. Our experiments with simulated and real hyperspectral datasets demonstrate the competitive advantages of these innovative approaches over well-established unmixing methods, showing that they are currently at the forefront of hyperspectral unmixing.
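For readers unfamiliar with the underlying model, the linear mixing model assumed by both families of methods can be stated as follows (a textbook formulation, not taken from the chapter itself):

```latex
% Linear mixing model: an observed pixel y (L bands) is a convex
% combination of p endmember signatures plus noise.
\[
  \mathbf{y} \;=\; \mathbf{E}\,\mathbf{a} + \mathbf{n},
  \qquad
  \mathbf{a} \ge \mathbf{0},
  \qquad
  \mathbf{1}^{\top}\mathbf{a} = 1,
\]
% E in R^{L x p} holds the endmembers column-wise, a holds the fractional
% abundances, and n models noise; the two constraints are the abundance
% nonnegativity (ANC) and sum-to-one (ASC) conditions.
```

Sparse unmixing replaces E with a large spectral library known a priori and seeks a sparse a, while the deep learning methods discussed below learn E and a jointly from the data.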
... Autoencoder (AE) neural networks have recently emerged as a framework to enhance the precision of hyperspectral unmixing in remote sensing, spurred by the availability of standardized benchmark datasets for model evaluation (e.g., Urban, Samson, AVIRIS Cuprite) [50,51,52,53,54]. Yet, despite initial explorations [55,56], the utility of unmixing AEs for Raman spectroscopy data remains largely unexplored. ...
Preprint
Full-text available
Raman spectroscopy is widely used across scientific domains to characterize the chemical composition of samples in a non-destructive, label-free manner. Many applications entail the unmixing of signals from mixtures of molecular species to identify the individual components present and their proportions, yet conventional methods for chemometrics often struggle with complex mixture scenarios encountered in practice. Here, we develop hyperspectral unmixing algorithms based on autoencoder neural networks, and we systematically validate them using both synthetic and experimental benchmark datasets created in-house. Our results demonstrate that unmixing autoencoders provide improved accuracy, robustness and efficiency compared to standard unmixing methods. We also showcase the applicability of autoencoders to complex biological settings by showing improved biochemical characterization of volumetric Raman imaging data from a monocytic cell.
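To make the autoencoder unmixing idea concrete, the following is a minimal illustrative PyTorch sketch (our own simplification with hypothetical sizes, not the authors' architecture): the bottleneck activations act as abundances and the bias-free linear decoder's weight matrix acts as the endmember signatures.

```python
import torch
import torch.nn as nn

class UnmixingAE(nn.Module):
    """Minimal linear-mixing autoencoder: the bottleneck activations act as
    abundances and the bias-free decoder weights act as endmembers."""
    def __init__(self, n_bands: int, n_endmembers: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_bands, n_endmembers),
            nn.Softmax(dim=-1),  # enforces nonnegativity and sum-to-one
        )
        self.decoder = nn.Linear(n_endmembers, n_bands, bias=False)

    def forward(self, x):
        abundances = self.encoder(x)
        return self.decoder(abundances), abundances

model = UnmixingAE(n_bands=200, n_endmembers=4)   # hypothetical sizes
spectra = torch.rand(32, 200)                     # a batch of 32 mixed pixels
recon, abundances = model(spectra)
loss = nn.functional.mse_loss(recon, spectra)     # train by minimizing this
# After training, model.decoder.weight (shape 200 x 4) holds the endmembers.
```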
... In this context, many tools have been developed to automate the programming and execution of deep learning algorithms on GPU-based architectures, among which TensorFlow is the most popular option [108]. This has contributed to the extensive use of GPUs for deep learning applied to RS in many operations [109], [110], including, for example, object detection [111] and classification [112]–[116]. ...
Article
The High-Performance and Disruptive Computing in Remote Sensing (HDCRS) Working Group (WG) was recently established under the IEEE Geoscience and Remote Sensing Society (GRSS) Earth Science Informatics (ESI) Technical Committee to connect a community of interdisciplinary researchers in remote sensing (RS) who specialize in advanced computing technologies, parallel programming models, and scalable algorithms. HDCRS focuses on three major research topics in the context of RS: 1) supercomputing and distributed computing, 2) specialized hardware computing, and 3) quantum computing (QC). This article presents these computing technologies, as they play a major role in the development of RS applications. HDCRS also disseminates information and knowledge through educational events and publication activities, which are likewise introduced in this article.
Article
Hyperspectral unmixing (HU) is a fundamental and critical task in various hyperspectral image (HSI) applications. Over the past few years, the linear mixing model (LMM) has received wide attention for its high efficiency, clear physical meaning, and amenability to mathematical treatment. Among the various linear unmixing methods, autoencoder unmixing networks have achieved superior performance and shown significant potential thanks to their powerful data-fitting ability and deep feature extraction. However, autoencoder unmixing networks focus on pixel-level associations and ignore the overall distribution and long-range dependencies of materials. Inspired by the receptive field mechanism and the effectiveness of multi-stage frameworks, we propose a multi-stage convolutional autoencoder network for hyperspectral linear unmixing, called MSNet. MSNet learns broad contextual information without losing detailed features by unmixing progressively across multiple stages. Compared with conventional single-stage unmixing methods, the multi-stage framework is more robust in solving the ill-posed unmixing problem. Comparison experiments on synthetic and real hyperspectral datasets show that the proposed MSNet performs more effectively and competitively than state-of-the-art algorithms. The source code is available at https://github.com/yuyang95/JAG-MSNet.
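The progressive refinement idea can be sketched as a cascade of unmixing stages in which each later stage sees the input spectrum together with the current abundance estimate. This is our simplified, fully connected reading of a multi-stage design; the actual MSNet is convolutional, so consult the linked repository for the real code.

```python
import torch
import torch.nn as nn

class Stage(nn.Module):
    """One unmixing stage: maps its input to an abundance estimate."""
    def __init__(self, in_dim: int, n_endmembers: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, n_endmembers), nn.Softmax(dim=-1),
        )

    def forward(self, x):
        return self.net(x)

class MultiStageUnmixer(nn.Module):
    """Each later stage sees the spectrum plus the current abundance
    estimate, progressively refining it (simplified multi-stage sketch)."""
    def __init__(self, n_bands: int, n_endmembers: int, n_stages: int = 3):
        super().__init__()
        self.first = Stage(n_bands, n_endmembers)
        self.rest = nn.ModuleList(
            [Stage(n_bands + n_endmembers, n_endmembers)
             for _ in range(n_stages - 1)])

    def forward(self, x):
        a = self.first(x)
        for stage in self.rest:
            a = stage(torch.cat([x, a], dim=-1))  # refine the estimate
        return a

abund = MultiStageUnmixer(n_bands=200, n_endmembers=4)(torch.rand(8, 200))
```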
Article
Full-text available
Spectral unmixing is a technique for remotely sensed image interpretation that expresses each (possibly mixed) pixel as a combination of pure spectral signatures (endmembers) and their fractional abundances. In this paper, we develop a new technique for unsupervised unmixing which is based on a deep autoencoder network (DAEN). Our newly developed DAEN consists of two parts. The first part of the network adopts stacked autoencoders (SAEs) to learn spectral signatures, so as to generate a good initialization for the unmixing process. In the second part of the network, a variational autoencoder (VAE) is employed to perform blind source separation, aimed at obtaining the endmember signatures and abundance fractions simultaneously. By taking advantage of the SAEs, the robustness of the proposed approach is remarkable, as it can unmix data sets with outliers and low signal-to-noise ratio. Moreover, the multiple hidden layers of the VAE ensure the required constraints (nonnegativity and sum-to-one) when estimating the abundances. The effectiveness of the proposed method is evaluated using both synthetic and real hyperspectral data. When compared with other unmixing methods, the proposed approach demonstrates very competitive performance.
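For context, the objective such a variational autoencoder maximizes is the standard evidence lower bound, shown here in textbook form (the paper's exact loss and constraint handling differ):

```latex
\[
  \mathcal{L}(\theta, \phi; \mathbf{y})
  = \mathbb{E}_{q_{\phi}(\mathbf{a}\mid \mathbf{y})}
      \!\left[ \log p_{\theta}(\mathbf{y}\mid \mathbf{a}) \right]
  - \mathrm{KL}\!\left( q_{\phi}(\mathbf{a}\mid \mathbf{y}) \,\Vert\, p(\mathbf{a}) \right),
\]
% q_phi is the encoder (inference network), p_theta the decoder
% (reconstruction via the mixing model); the reconstruction term couples
% the latent abundances a to the observed spectra y, while the KL term
% regularizes the latent distribution toward the prior p(a).
```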
Article
Linear spectral unmixing is the practice of decomposing a mixed pixel into a linear combination of the constituent endmembers and their estimated abundances. This paper focuses on unsupervised spectral unmixing, where the endmembers are unknown a priori. Conventional approaches use either geometrical- or statistical-based methods. In this paper, we address the challenges of spectral unmixing with unsupervised deep learning models, specifically autoencoder models, where the decoder serves as the endmembers and the hidden layer output serves as the abundances. In several recent attempts, part-based autoencoders have been designed to solve the unsupervised spectral unmixing problem, but their performance has not been satisfactory. We first discuss some important findings on issues with part-based autoencoders. By proof of counterexample, we show that all existing part-based autoencoder networks with nonnegative and tied encoder and decoder are inherently defective because of these inappropriate assumptions on the network structure; as a result, they are not suitable for solving the spectral unmixing problem. We propose a so-called untied denoising autoencoder with sparsity, in which the encoder and decoder of the network are independent and only the decoder is enforced to be nonnegative. Furthermore, we make two critical additions to the network design. First, since denoising is an essential step for spectral unmixing, we incorporate the denoising capacity into the network optimization in the form of a denoising constraint, rather than cascading another denoising preprocessor, in order to avoid introducing additional reconstruction error. Second, to be more robust to inaccurate estimation of the number of endmembers, we adopt an ℓ₂,₁-norm on the encoder of the network to reduce redundant endmembers while simultaneously decreasing the reconstruction error. The experimental results demonstrate that the proposed approach outperforms several state-of-the-art methods, especially for highly noisy data.
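A minimal sketch of the untied design is given below (our reading of the idea, with hypothetical sizes and a simple projected-gradient step; the published network is more elaborate): the encoder and decoder weights are independent, only the decoder is projected onto the nonnegative orthant, and an ℓ₂,₁ penalty on the hidden activations suppresses whole redundant endmember columns.

```python
import torch
import torch.nn as nn

class UntiedSparseAE(nn.Module):
    """Untied autoencoder: encoder and decoder weights are independent,
    and only the decoder (the endmember matrix) is kept nonnegative."""
    def __init__(self, n_bands: int, n_endmembers: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_bands, n_endmembers), nn.ReLU())
        self.decoder = nn.Linear(n_endmembers, n_bands, bias=False)

    def forward(self, x):
        a = self.encoder(x)
        return self.decoder(a), a

def l21_penalty(a):
    # l_{2,1} norm: l2 across the batch for each endmember column, then
    # summed; drives entire redundant endmember columns toward zero.
    return a.norm(dim=0).sum()

model = UntiedSparseAE(200, 6)                    # hypothetical sizes
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(128, 200)
recon, a = model(x)
loss = nn.functional.mse_loss(recon, x) + 1e-3 * l21_penalty(a)
loss.backward()
opt.step()
with torch.no_grad():                             # projection step keeps the
    model.decoder.weight.clamp_(min=0.0)          # endmembers nonnegative
```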
Article
Spectral unmixing aims at estimating the fractional abundances of a set of pure spectral materials (endmembers) in each pixel of a hyperspectral image. The wide availability of large spectral libraries has fostered the role of sparse regression techniques in the task of characterizing mixed pixels in remotely sensed hyperspectral images. A general solution for sparse unmixing methods consists of using the ℓ₁ regularizer to control the sparsity, resulting in very promising performance but also suffering from sensitivity to large and small sparse coefficients. A recent trend to address this issue is to introduce weighting factors that penalize the nonzero coefficients in the unmixing solution. While most methods for this purpose analyze the hyperspectral data by considering the pixels as independent entities, it is known that there exists a strong spatial correlation among features in hyperspectral images. This information can be naturally exploited to improve the representation of pixels in the scene. In order to take advantage of the spatial information for hyperspectral unmixing, in this paper, we develop a new spectral–spatial weighted sparse unmixing (S²WSU) framework, which uses both spectral and spatial weighting factors, further imposing sparsity on the solution. Our experimental results, conducted using both simulated and real hyperspectral data sets, illustrate the good potential of the proposed S²WSU, which can greatly improve the abundance estimation results when compared with other advanced spectral unmixing methods.
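Schematically, weighted sparse unmixing of this kind solves a problem of the following form (our generic rendering; the paper's exact spectral and spatial weight updates are more involved):

```latex
\[
  \min_{\mathbf{X} \,\ge\, 0}\;
  \tfrac{1}{2}\,\lVert \mathbf{A}\mathbf{X} - \mathbf{Y} \rVert_F^2
  \;+\; \lambda\,\lVert \mathbf{W} \odot \mathbf{X} \rVert_{1},
\]
% Y: observed pixels, A: spectral library, X: abundance matrix, W: weights
% recomputed at each iteration from spectral sparsity and spatial
% neighborhoods so that nonzero coefficients are penalized adaptively.
```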
Article
Changes in mangrove forests are mostly caused by natural disasters, anthropogenic forces, and uncontrolled population growth. These phenomena often lead to strong competition among existing mangrove species for survival. Due to its enhanced spectral resolution, remotely sensed hyperspectral image data can play an important role in the task of mapping and monitoring changes in mangrove forests. In this paper, we use EO-1 Hyperion hyperspectral data to model the abundance of pure and mixed mangrove species at subpixel levels. This information is then used to analyze the interspecies competition in a study area over a period of time. Different characteristics, including the rate of growth, rate of reproduction, mortality rate, and other processes that control the coexistence of plant species are discussed. The obtained results, verified through field visits, illustrate that the proposed approach can interpret the dominance of certain mangrove species and provide insights about their state of equilibrium or disequilibrium over a fixed time frame.
Article
Most blind hyperspectral unmixing methods exploit the convex geometry properties of hyperspectral data. The minimum volume simplex analysis (MVSA) is one such method, which, like many others, estimates the minimum volume (MV) simplex where the measured vectors live. MVSA was conceived to circumvent the matrix factorization step often implemented by MV-based algorithms and also to cope with outliers, which compromise the results produced by MV algorithms. Inspired by the recently proposed robust MV enclosing simplex (RMVES) algorithm, we herein introduce the robust MVSA (RMVSA), a version of MVSA that is robust to noise. As in RMVES, the robustness is achieved by employing chance constraints, which control the volume of the resulting simplex. RMVSA differs substantially from RMVES, however, in the way the optimization is carried out. In this paper, we develop a linearization relaxation of the nonlinear chance constraints, which can greatly reduce the computational complexity of chance-constrained problems. The effectiveness of RMVSA is illustrated by comparing its performance with the state of the art.
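In its simplest form, the minimum-volume problem behind MVSA-type methods can be written as follows (a standard statement of the formulation; RMVSA's chance constraints then replace the hard nonnegativity constraint with a probabilistic one):

```latex
\[
  \max_{\mathbf{Q}} \; \log\lvert \det \mathbf{Q} \rvert
  \quad \text{s.t.} \quad
  \mathbf{Q}\mathbf{Y} \ge \mathbf{0},\;\;
  \mathbf{1}^{\top}\mathbf{Q}\mathbf{Y} = \mathbf{1}^{\top},
\]
% Q = M^{-1}, where the columns of M are the endmembers: since the simplex
% volume is proportional to |det M|, maximizing |det Q| minimizes that
% volume, while QY >= 0 forces the (dimension-reduced) data Y to remain
% inside the simplex; noise and outliers make this hard constraint brittle,
% motivating the chance-constrained relaxation.
```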
Article
Spectral unmixing is an important technique in hyperspectral image applications. Recently, sparse regression has been widely used in hyperspectral unmixing, but its performance is limited by the high mutual coherence of spectral libraries. To address this issue, a new sparse unmixing algorithm, called double reweighted sparse unmixing and total variation (TV), is proposed in this letter. Specifically, the proposed algorithm enhances the sparsity of the fractional abundances in both the spectral and spatial domains through the use of double weights, where one weight enhances the sparsity of endmembers in the spectral library and the other improves the sparsity of the fractional abundances. Moreover, a TV-based regularization is further adopted to exploit the spatial-contextual information. As such, the simultaneous use of double reweighted ℓ₁ minimization and the TV regularizer can significantly improve sparse unmixing performance. Experimental results on both synthetic and real hyperspectral data sets demonstrate the effectiveness of the proposed algorithm, both visually and quantitatively.
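Putting the described terms together, the letter's model can be summarized schematically as follows (our rendering with hypothetical weight names, not the authors' exact notation):

```latex
\[
  \min_{\mathbf{X} \,\ge\, 0}\;
  \tfrac{1}{2}\,\lVert \mathbf{A}\mathbf{X} - \mathbf{Y} \rVert_F^2
  \;+\; \lambda\,\lVert \mathbf{W}_{\mathrm{lib}} \odot \mathbf{W}_{\mathrm{ab}} \odot \mathbf{X} \rVert_{1}
  \;+\; \lambda_{\mathrm{TV}}\, \mathrm{TV}(\mathbf{X}),
\]
% W_lib reweights library signatures (spectral-domain sparsity), W_ab
% reweights the fractional abundances (spatial-domain sparsity), and TV(X)
% is the total-variation term promoting piecewise-smooth abundance maps,
% i.e., the spatial-contextual information mentioned above.
```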