Conference Paper

Semi-Supervised Hyperspectral Image Classification using Spatial-Spectral Features and Superpixel-Based Sparse Codes

... According to the constraint of the picture fuzzy set, all the grades depend on each other, which makes it impossible to assign the values of these grades independently from [0, 1]. Considering these limitations, Andekah et al. [30] presented the new idea of a spherical fuzzy set (SFS) with the constraint 0 ≤ T² + Ȗ² + K² ≤ 1. To remove the restrictions of SFS, they established another structure, named the T-spherical fuzzy set (T-SFS), satisfying the constraint 0 ≤ Tⁿ + Ȗⁿ + Kⁿ ≤ 1. ...
... Fuzzy systems and IFSs cannot be extended to certain decision-making difficulties. The notion of picture fuzzy sets (PFS) was suggested by Cuong et al. [29], Andekah et al. [30] and Perić et al. [31] to eliminate this downside. Like its predecessors, this framework is close to human reasoning and addresses real-life scenarios. ...
... Optical image processing can be very helpful in image enhancement, image reconstruction, image compression, and image detection. Research on various image recognition systems includes medical imaging [8,9], radar image analysis [16][17][18], biometric and iris recognition [19,20], and human detection [30,31]. ...
Article
Full-text available
The existing concepts of picture fuzzy sets (PFS), spherical fuzzy sets (SFSs), T-spherical fuzzy sets (T-SFSs) and neutrosophic sets (NSs) have numerous applications in decision-making problems, but they have various strict limitations for their satisfaction, dissatisfaction, abstain or refusal grades. To relax these strict constraints, we introduce the concept of spherical linear Diophantine fuzzy sets (SLDFSs) with the inclusion of reference or control parameters. An SLDFS with a parameterization process is very helpful for modeling uncertainties in the multi-criteria decision-making (MCDM) process. SLDFSs can classify a physical system with the help of reference parameters. We discuss various real-life applications of SLDFSs towards digital image processing, network systems, vote casting, electrical engineering, medication, and selection of optimal choice. We show some drawbacks of operations of picture fuzzy sets and their corresponding aggregation operators. Some new operations on picture fuzzy sets are also introduced. Some fundamental operations on SLDFSs and different types of score functions of spherical linear Diophantine fuzzy numbers (SLDFNs) are proposed. New aggregation operators named spherical linear Diophantine fuzzy weighted geometric aggregation (SLDFWGA) and spherical linear Diophantine fuzzy weighted average aggregation (SLDFWAA) operators are developed for a robust MCDM approach. An application of the proposed methodology with SLDF information is illustrated. The comparison analysis of the final ranking is also given to demonstrate the validity, feasibility, and efficiency of the proposed MCDM approach.
... In a number of studies, researchers used NIR spectrum data through the vegetation index for monitoring and studying the temporal and spatial variations of vegetation [14]. For example, in separate works, Kadhim [15], Konstantinidis [16], Huang [17], Manandhar [18], Alasvand Andekah [19] and Ok [20] took advantage of the normalized difference vegetation index (NDVI), computed from a comparison of the red and NIR channels. In another study, Iovan et al. [21] used infra-red data and the three RGB color channels for training an SVM classifier. ...
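The NDVI mentioned in this excerpt is a simple band ratio; a minimal sketch, assuming the red and NIR channels are available as floating-point arrays (the threshold shown in the comment is only an illustrative value):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    # eps avoids division by zero on dark pixels
    return (nir - red) / (nir + red + eps)

# Vegetation pixels are then typically isolated by thresholding, e.g. ndvi_map > 0.3.
```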
... As aforementioned, the proposed algorithm uses only RGB channels for vegetation detection. For further comparison, Table II presents the overall quality of the algorithms developed by Zhao et al. [10], Li et al. [4], and Andekah et al. [19]. The first two methods used IR channel in addition to RGB data while the last one employed hyperspectral images with 224 channels. ...
... Andekah et al. [19] ...
Conference Paper
Evaluation of vegetation cover by using the remote sensing data can provide enhanced results with less time and expense. In this paper, we propose a new automatic algorithm for detection of vegetation regions in high-resolution satellite/aerial images. It uses only color channels of the image and involves two modeling and evaluation phases. In the modeling phase, after extracting color and texture features for the pixels within a few manually-indicated vegetation areas, a model is obtained based on principal component analysis (PCA). In the evaluation phase, the same color and texture features are primarily computed for every pixel of the specified image. Then, an error value is computed for each pixel which determines the model mismatch score. Finally, by thresholding the error image consisting of all error values, the vegetation regions can be distinguished from the background. Experimental results demonstrated outstanding solution quality for the proposed algorithm (so-called PCAM, standing for PCA-based modeling) compared to a number of counterpart methods.
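A minimal sketch of the modeling/evaluation idea described above, assuming per-pixel color and texture features have already been extracted into NumPy arrays; the number of components and the threshold are illustrative, not the authors' settings:

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_vegetation_model(train_features: np.ndarray, n_components: int = 3) -> PCA:
    """Fit a PCA model on features sampled from manually indicated vegetation areas."""
    pca = PCA(n_components=n_components)
    pca.fit(train_features)
    return pca

def vegetation_mask(pca: PCA, features: np.ndarray, shape, threshold: float = 0.5) -> np.ndarray:
    """Model-mismatch score = PCA reconstruction error; low error means the pixel fits the model."""
    recon = pca.inverse_transform(pca.transform(features))
    error = np.linalg.norm(features - recon, axis=1)   # one error value per pixel
    return (error < threshold).reshape(shape)           # threshold the error image
```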
... All of the grades are reliant on each other, in consonance with the restriction of the picture fuzzy set, and we are unable to assign the values of these grades separately from zero to one. Andekah et al. [14] and Mahmood et al. [15,16] introduced the novel concept of an SFS with the restriction 0 ≤ m² + a² + n² ≤ 1 to overcome the drawbacks of PFS. To address the limitations of SFS, they devised a new structure called T-SFS, which satisfies the requirement 0 ≤ mᵗ + aᵗ + nᵗ ≤ 1, where t = 1, 2, 3, ... ...
Article
Full-text available
The theory of spherical linear Diophantine fuzzy sets (SLDFS) boasts several advantages over existing fuzzy set (FS) theories such as Picture fuzzy sets (PFS), spherical fuzzy sets (SFS), and T-spherical fuzzy sets (T-SFS). Notably, SLDFS offers a significantly larger portrayal space for acceptable triplets, enabling it to encompass a wider range of ambiguous and uncertain knowledge data sets. This paper delves into the regularity of spherical linear Diophantine fuzzy graphs (SLDFGs), establishing their fundamental concepts. We provide a geometrical interpretation of SLDFGs within a spherical context and define the operations of complement, union, and join, accompanied by illustrative examples. Additionally, we introduce the novel concept of a spherical linear Diophantine isomorphic fuzzy graph and showcase its application through a social network scenario. Furthermore, we explore how this amplified depiction space can be utilized for the study of various graph theoretical topics.
... As a very effective solution, semi-supervised learning (SSL) has attracted great attention for HSI classification [7]. Relevant studies have shown the effectiveness of spectral-spatial information-based SSL methods [8][9][10][11]. ...
Article
Full-text available
For both traditional classification and current popular deep learning methods, the limited sample classification problem is very challenging, and the lack of samples is an important factor affecting the classification performance. Our work includes two aspects. First, the unsupervised data augmentation for all hyperspectral samples not only improves the classification accuracy greatly with the newly added training samples, but also further improves the classification accuracy of the classifier by optimizing the augmented test samples. Second, an effective spectral structure extraction method is designed, and the effective spectral structure features have a better classification accuracy than the true spectral features.
... Tan et al. [35] presented a semi-supervised method based on spatial neighborhood information and classifier combination. Andekah et al. [36] used spectral-spatial features and superpixel-based sparse codes for classification. Recently, deep learning methods have also been used for semi-supervised classification. ...
Article
Full-text available
Convolutional neural networks (CNNs) have achieved excellent performance on the hyperspectral image (HSI) classification problem due to their ability to extract spectral and spatial information. However, CNNs can only perform convolution on Euclidean data. To solve this problem, the graph convolutional network (GCN) was recently proposed and can be applied to the semi-supervised HSI classification problem. However, the GCN is a transductive learning method, which requires all nodes to participate in the training process to obtain the node embeddings. This may bring a great computational burden for hyperspectral data with a large number of pixels. Therefore, in this paper, we propose an inductive learning method to solve the problem. It constructs the graph by sampling and aggregating (GraphSAGE) features from a node's local neighbourhood, which can greatly reduce the space complexity. Moreover, to enhance the classification performance, we also construct the graph using spectral and spatial information (spectral-spatial GraphSAGE). Experiments on several hyperspectral image datasets show that the proposed method can achieve better classification performance compared with state-of-the-art HSI classification methods.
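A toy illustration of the sampling-and-aggregation step that GraphSAGE-style methods rely on (one hop, mean aggregator); the adjacency list, weight matrices, and sample size here are placeholders rather than the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_neighbors(adj_list, node, k):
    """Sample up to k neighbors of a node from its (spectral-spatial) adjacency list."""
    nbrs = adj_list[node]
    if len(nbrs) <= k:
        return nbrs
    return list(rng.choice(nbrs, size=k, replace=False))

def sage_layer(features, adj_list, W_self, W_nbr, k=5):
    """One inductive GraphSAGE-style layer: aggregate sampled neighbor features, combine with self features."""
    out = np.empty((features.shape[0], W_self.shape[1]))
    for v in range(features.shape[0]):
        nbrs = sample_neighbors(adj_list, v, k)
        agg = features[nbrs].mean(axis=0) if nbrs else np.zeros_like(features[v])
        out[v] = np.maximum(features[v] @ W_self + agg @ W_nbr, 0.0)  # ReLU
    return out
```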
... Hyperspectral images (HSI) contain rich spectral information that can be used in various fields of earth science, such as identifying different materials, geology, and forests, and investigating changes in the Earth's surface land cover. Due to the high dimensionality of the HSI (large number of spectral bands) and the limited size of the training samples, supervised classification of the HSI is challenging and commonly leads to the curse of dimensionality [1]. Besides, inter-class spectral variability and intra-class spectral similarity may deteriorate the classification accuracy. ...
Article
Previous studies show that the incorporation of spatial features in the classification process of hyperspectral images (HSI) improves classification accuracy. Although different spatial-spectral methods have been proposed in the literature for the classification of the HSI, most of them have a slow, complex, and parameter-dependent structure. This paper proposes a simple, fast, and efficient two-stage spatial-spectral method for the classification of the HSI based on extended morphological profiles (EMP) and the guided filter. The proposed method consists of four major stages. In the first stage, principal component analysis (PCA) is used to smooth the HSI and extract the low-dimensional informative features. In the second stage, the EMP is produced from the first three PCs. Stacked feature vectors, consisting of the PCs and the EMP, are classified via support vector machines (SVM) in the third stage. Finally, a post-processing stage based on a guided filter is applied to the classified maps to further improve the classification accuracy and to refine noisy classified pixels. Experimental results on two famous hyperspectral images, Indian Pines and Pavia University, in a very small training sample size situation show that the proposed method can reach high accuracies that are superior to some recent state-of-the-art methods.
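An illustrative outline of the stages described above (PCA smoothing, EMP from the first PCs, stacked features into an SVM), using scikit-image and scikit-learn; the structuring-element sizes and SVM settings are assumptions, and the guided-filter post-processing step is only indicated by a comment:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from skimage.morphology import opening, closing, disk

def extended_morphological_profile(pc_image, radii=(1, 3, 5)):
    """Stack openings and closings of each principal-component band at several scales."""
    bands = []
    for b in range(pc_image.shape[2]):
        band = pc_image[:, :, b]
        for r in radii:
            bands.append(opening(band, disk(r)))
            bands.append(closing(band, disk(r)))
    return np.dstack(bands)

def classify(hsi, train_mask, train_labels, n_pcs=3):
    h, w, d = hsi.shape
    pcs = PCA(n_components=n_pcs).fit_transform(hsi.reshape(-1, d)).reshape(h, w, n_pcs)
    emp = extended_morphological_profile(pcs)
    feats = np.dstack([pcs, emp]).reshape(h * w, -1)   # stacked PC + EMP feature vectors
    svm = SVC(kernel="rbf").fit(feats[train_mask.ravel()], train_labels)
    labels = svm.predict(feats).reshape(h, w)
    # A guided-filter post-processing pass would refine this classification map further.
    return labels
```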
... Similarly, for image compression, the metric can be used to indicate whether the compression is overdone. Not limited to processing natural images [41], the metric also has important applications in the processing of images from different fields, such as borehole images [39], hyper-spectral images [4,5] and SAR images [3]. ...
Article
Full-text available
The image quality degradation due to the loss of high-frequency components of images is often seen in real scenarios, such as artifacts caused by image compression and image blur caused by camera shake or out of focus. Quantifying such degradation is very useful for many tasks that are related to image quality. In this paper, an effective approach is proposed for the image quality assessment on images with blur as well as images with compression artifacts. Based on the relation between the dictionaries of the degraded image and the reference image, we build up a hybrid dictionary learning model to characterize the space of patches of the reference image as well as that of the degraded image. The image quality is then measured by the difference between the two resulting dictionaries. Combined with a simple sparse-coding-based metric, the proposed method shows competitive performance on five benchmark datasets, which demonstrates its effectiveness.
... Typical feature extraction methods include principal component analysis (PCA) [11], independent component analysis [12], Fisher discriminant analysis [13], local linear embedding [14] and so on. In [15], the authors proposed a semisupervised support vector machine to classify the HSI, where sparse coding and dictionary learning were used to extract discriminant features. In [16], a hybrid feature extraction method has been proposed for synthetic aperture radar image registration. ...
Article
Full-text available
The deep convolutional neural network (CNN) has recently attracted researchers for the classification of hyperspectral remote sensing images. The CNN mainly consists of convolution layers, pooling layers and fully connected layers. Pooling is a regularisation technique that improves the performance of the CNN while reducing the computation time. Various pooling strategies have been developed in the literature. This study shows the effect of the pooling strategy on the performance of a deep CNN for classification of hyperspectral remote sensing images. The authors have compared the performance of various pooling strategies such as max pooling, average pooling, stochastic pooling, rank-based average pooling and rank-based weighted pooling. The experiments were performed on three well-known hyperspectral remote sensing datasets: Indian Pines, University of Pavia and Kennedy Space Center. The experimental results show that max pooling produced better results for all three considered datasets.
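A small NumPy comparison of two of the pooling strategies discussed above (max and average pooling over non-overlapping windows); the window size and input are arbitrary examples:

```python
import numpy as np

def pool2d(x: np.ndarray, k: int, mode: str = "max") -> np.ndarray:
    """Non-overlapping k x k pooling of a 2-D feature map (max or average)."""
    h, w = x.shape
    h2, w2 = (h // k) * k, (w // k) * k
    x = x[:h2, :w2]                                   # crop to a multiple of k
    blocks = x.reshape(h2 // k, k, w2 // k, k)
    return blocks.max(axis=(1, 3)) if mode == "max" else blocks.mean(axis=(1, 3))

fmap = np.arange(16, dtype=float).reshape(4, 4)
print(pool2d(fmap, 2, "max"))   # [[ 5.  7.] [13. 15.]]
print(pool2d(fmap, 2, "avg"))   # [[ 2.5  4.5] [10.5 12.5]]
```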
... The preprocessing is done pixel by pixel [14] or based on superpixels (groups of connected pixels with similar gray levels) [34]. Here, the bilateral filter is used as a preprocessing algorithm to enhance the edges of the striatum. ...
Article
The non-degenerative variant called scans without evidence of dopaminergic deficit (SWEDD) is clinically analyzed and wrongly understood as Parkinson's disease (PD), which results in ineffective diagnosis of PD in the early stage. The present work is designed to improve the diagnostic accuracy in distinguishing early-stage PD from SWEDD and healthy control (HC). Volume-rendered image slices are used as a novel means to achieve better diagnostic accuracy. These image slices are chosen from the single-photon emission computed tomography (SPECT) images based on their striatal uptake region, which contributes valuable information on the shape of the striatum. Features related to the surface and the shape of the striatum are calculated from the segmented region of the chosen image slices to capture the variations among early PD, SWEDD, and HC. Among these feature sets, the most optimized feature is selected using a genetic algorithm. The performance of classifiers such as the linear and radial basis function support vector machine (RBF-SVM), extreme learning machine (ELM) activation functions, and RBF-ELM is investigated and compared based on the most optimized feature. It is noted that the RBF-ELM offers better performance, with an accuracy of 98.23%, than the other classifiers. This also indicates that the present work improves on previous studies. Hence, the proposed approach could act as an aid in the detection of the early stage of PD.
... Also, unsupervised algorithm using Gabor filter bank and spectral regression [3] for texture-based SAR image segmentation has been implemented. Several other improved algorithms for feature extraction [4], hyperspectral image classification [5] and segmentation [6] have made more progress in SAR images processing. ...
Article
Some spectroscopic measurements are limited by multiple scattering, which appears as a restricting factor, particularly in the determination of the extinction coefficient for optically dense solutions. The development of experimental and computational strategies to deal with this problem seems extremely important. This could have positive effects in disciplines such as medicine, where the accurate determination of optical properties in biological tissues helps in-situ diagnostics. The work reported in this paper involves a double structuring of the incident radiation by diffraction gratings, thus forming moiré patterns, which are then applied to an existing technique known as Structured Laser Illuminating Planar Imaging (SLIPI). After image processing based on Fourier transforms, these strategies made it possible to optimize standard SLIPI measurements by suppressing more of the multiple-scattering photons for accurate calculation of the extinction coefficients of dense chlorophyll solutions probed at 450 nm and 638 nm.
... Image classification as a technique of image processing has also received much attention. In [9], to increase classification accuracy, Andekah et al. proposed an appropriate method for acquiring more discriminant features by using sparse coding and dictionary learning. In addition, there were many new image processing techniques such as level set [7], spectral clustering [10], cellular learning automata and adaptive chains [11], ensemble clustering [12], MLPNN/PSO [5], and so on. ...
Article
Full-text available
The technology of image recovery, as a part of image processing, is becoming more and more important. Robust principal component analysis (RPCA) serves as a key problem for low-rank matrix recovery. However, the existing methods for solving the RPCA are mostly based on nuclear norm minimisation. These methods have the disadvantage that the matrix rank cannot be well approximated because they minimise all singular values. Moreover, these methods will lead to instability when images are highly correlated. In this study, the authors set up a novel model which combines the truncated nuclear norm with the Frobenius norm to enhance solution accuracy and stability. Following the idea of the elastic net, which combines the l1 norm and the l2 norm, the model applies an elastic net to the singular values, composed of the truncated nuclear norm and the square of the Frobenius norm. To solve this problem, the TNNF algorithm is proposed based on the alternating direction method of multipliers. Compared with traditional methods, the numerical simulation results show that the proposed TNNF can achieve higher accuracy and better stability.
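A sketch of the singular-value shrinkage that truncated-nuclear-norm models of this kind build on: the r leading singular values are kept intact while the rest are soft-thresholded, which is one proximal step inside an ADMM solver. This is illustrative only, not the authors' full TNNF algorithm, and the parameters are example values:

```python
import numpy as np

def truncated_svt(M: np.ndarray, r: int, tau: float) -> np.ndarray:
    """Keep the r leading singular values, soft-threshold the remaining ones by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = s.copy()
    s_shrunk[r:] = np.maximum(s[r:] - tau, 0.0)   # only the tail is penalised (truncated nuclear norm)
    return (U * s_shrunk) @ Vt

# Inside an ADMM loop, this step would alternate with a sparse-error update and a dual update.
```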
... This is due to the fact that labeling data points is tedious and sometimes costly work. This problem has also received attention in the hyperspectral imaging community, and various techniques have been proposed in order to solve it [Wu et al. (2016), Zhang and Crawford (2016), Jamshidpour et al. (2016), Alhichri et al. (2015), Wu and Prasad (2018), Xu et al. (2018), Fang et al. (2018b), Liu et al. (2017b), Andekah et al. (2017), Zhan et al. (2018)]. A lot of those techniques make use of the unlabeled training data by pretraining in an unsupervised setting and afterwards performing supervised learning on the labeled examples. ...
Technical Report
We used the Ladder Network [Rasmus et al. (2015)] to perform Hyperspectral Image Classification in a semi-supervised setting. The Ladder Network distinguishes itself from other semi-supervised methods by jointly optimizing a supervised and unsupervised cost. In many settings this has proven to be more successful than other semi-supervised techniques, such as pretraining using unlabeled data. We furthermore show that the convolutional Ladder Network outperforms most of the current techniques used in hyperspectral image classification and achieves new state-of-the-art performance on the Pavia University dataset given only 5 labeled data points per class.
... [4] In addition, the learning content of EW is always changing in scope because of recent research conducted in the field of learning. In recent times, efficient and appropriate ways of learning such as machine vision [5], image processing applications [6][7][8][9][10][11][12][13][14] and remote sensing analysis [15,16] have been incorporated as some of the emerging issues in the field of electronics. All these studies have influenced the learning content of the EW program, designed to reflect a functional philosophy of basic electronic engineering education. ...
Article
The purpose of this study was to investigate the effectiveness of problem-based and lecture-based learning environments on students' achievement in electronic works. The design was a randomized-subjects pretest-posttest control group design. The participants (N = 148) were randomized to treatment and control conditions. Repeated measures analysis of variance and univariate analysis of variance were conducted by the researchers to compare changes across the treatment and control group participants. To test for differences in categorical data representing characteristics of the participants, the researchers used the Chi-square (χ²) statistic. Results show that the experimental group achieved higher scores than the control group on the electronic works achievement test at the posttest and follow-up test stages. Furthermore, the study found that there was no significant difference (P > 0.05) in the achievement of students across the different ability levels and genders after the treatment. Hence, problem-based learning was advocated for teachers of electronic works in Nigeria.
... During the past decades, machine vision has greatly become a new research field of electronics in which different applications are performed including medical image processing and diagnosis [1,2], radar image analysis [3][4][5][6][7][8][9][10][11][12][13][14][15][16][17], biometrics and iris recognition [18,19], human detection [20,21]. There are different levels of image processing in a machine vision system including low-, medium-and high-level processing [22]. ...
Article
Full-text available
Human opinion cannot be restricted to yes or no as depicted by the conventional fuzzy set (FS) and intuitionistic fuzzy set (IFS), but it can be yes, abstain, no and refusal, as explained by the picture fuzzy set (PFS). In this article, the concepts of the spherical fuzzy set (SFS) and T-spherical fuzzy set (T-SFS) are introduced as generalizations of FS, IFS and PFS. The novelty of SFS and T-SFS is shown by examples and graphical comparison with earlier established concepts. Some operations of SFSs and T-SFSs along with spherical fuzzy relations are defined, and related results are presented. Medical diagnostics and decision-making problems are discussed in the environment of SFSs and T-SFSs as practical applications.
... Statistical methods used for defining image intensity include Fourier power spectra and Markov random fields, as well as multi-resolution filtering techniques such as Gabor and wavelet transforms [26]. Recent works in image processing and computer vision widely employ texture analysis [27][28][29][30][31][32][33][34][35][36]. Features based on shape: the most common shape-descriptor techniques are rectilinear shapes, polygonal approximation, finite element models, and Fourier-based, region-based and 3D shape methods [22,26]. ...
Article
Full-text available
Protecting personal patient information in distributed health infrastructures is a crucial task. As a solution to this issue, image watermarking is widely used to secure content and prevent its alteration. Moreover, it is necessary to find a new blind watermarking technique that could decrease latency and preserve the quality of medical images. This paper presents a new watermarking technique for medical images. Our technique combines the DCT transform, Weber descriptors (WDs) and the Arnold chaotic map. This combination involves three significant steps. First, the watermark image is scrambled using the Arnold chaotic map. Second, the DCT is performed on each medical image block, and the watermark data are embedded in the DCT middle-band coefficients of each block. Finally, a new embedding and extraction technique is proposed, based on WDs, without any loss, by selecting the right coefficients. We improve the robustness of the proposed algorithm against several attack scenarios such as noising, filtering and JPEG compression. The obtained results show that the algorithm is practical.
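The Arnold chaotic (cat) map used above to scramble the watermark is a simple pixel permutation of an N x N image; a minimal sketch (the iteration count is an example parameter, not the paper's setting):

```python
import numpy as np

def arnold_scramble(img: np.ndarray, iterations: int = 5) -> np.ndarray:
    """Apply the Arnold cat map (x, y) -> ((x + y) mod N, (x + 2y) mod N) repeatedly."""
    n = img.shape[0]                     # assumes a square N x N watermark image
    out = img.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

# The map is periodic, so unscrambling amounts to continuing the iteration up to the period
# (or applying the inverse map the same number of times).
```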
Article
Full-text available
Classical linear discriminant analysis (LDA) has been applied to machine learning and pattern recognition successfully, and many variants based on LDA are proposed. However, the traditional LDA has several disadvantages as follows: Firstly, since the features selected by feature selection have good interpretability, LDA has poor performance in feature selection. Secondly, there are many redundant features or noisy data in the original data, but LDA has poor robustness to noisy data and outliers. Lastly, LDA only utilizes the global discriminant information, without consideration for the local discriminant structure. In order to overcome the above problems, we present a robust sparse manifold discriminant analysis (RSMDA) method. In RSMDA, by introducing the L2,1 norm, the most discriminant features can be selected for discriminant analysis. Meanwhile, the local manifold structure is used to capture the local discriminant information of the original data. Due to the introduction of L2,1 constraints and local discriminant information, the proposed method has excellent robustness to noisy data and has the potential to perform better than other methods. A large number of experiments on different data sets have proved the good effectiveness of RSMDA.
Article
Full-text available
Modeling uncertainties with spherical linear Diophantine fuzzy sets (SLDFSs) is a robust approach towards engineering, information management, medicine, multi-criteria decision-making (MCDM) applications. The existing concepts of neutrosophic sets (NSs), picture fuzzy sets (PFSs), and spherical fuzzy sets (SFSs) are strong models for MCDM. Nevertheless, these models have certain limitations for three indexes, satisfaction (membership), dissatisfaction (non-membership), refusal/abstain (indeterminacy) grades. A SLDFS with the use of reference parameters becomes an advanced approach to deal with uncertainties in MCDM and to remove strict limitations of above grades. In this approach the decision makers (DMs) have the freedom for the selection of above three indexes in [0,1]. The addition of reference parameters with three index/grades is a more effective approach to analyze DMs opinion. We discuss the concept of spherical linear Diophantine fuzzy numbers (SLDFNs) and certain properties of SLDFSs and SLDFNs. These concepts are illustrated by examples and graphical representation. Some score functions for comparison of LDFNs are developed. We introduce the novel concepts of spherical linear Diophantine fuzzy soft rough set (SLDFSRS) and spherical linear Diophantine fuzzy soft approximation space. The proposed model of SLDFSRS is a robust hybrid model of SLDFS, soft set, and rough set. We develop new algorithms for MCDM of suitable clean energy technology. We use the concepts of score functions, reduct, and core for the optimal decision. A brief comparative analysis of the proposed approach with some existing techniques is established to indicate the validity, flexibility, and superiority of the suggested MCDM approach.
Article
Full-text available
Hyperspectral image (HSI) classification is an important research topic in detailed analysis of the Earth’s surface. However, the performance of the classification is often hampered by the high-dimensionality features and limited training samples of the HSIs which has fostered research about semi-supervised learning (SSL). In this paper, we propose a shape adaptive neighborhood information (SANI) based SSL (SANI-SSL) method that takes full advantage of the adaptive spatial information to select valuable unlabeled samples in order to improve the classification ability. The improvement of the classification mainly relies on two aspects: (1) the improvement of the feature discriminability, which is accomplished by exploiting spectral-spatial information, and (2) the improvement of the training samples’ representativeness which is accomplished by exploiting the SANI for both labeled and unlabeled samples. First, the SANI of labeled samples is extracted, and the breaking ties (BT) method is used in order to select valuable unlabeled samples from the labeled samples’ neighborhood. Second, the SANI of unlabeled samples are also used to find more valuable samples, with the classifier combination method being used as a strategy to ensure confidence and the adaptive interval method used as a strategy to ensure informativeness. The experimental comparison results tested on three benchmark HSI datasets have demonstrated the significantly superior performance of our proposed method.
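The breaking-ties (BT) criterion used above for selecting valuable unlabeled samples is simply the gap between the two largest class posteriors; a minimal sketch, assuming a probabilistic classifier already provides per-sample class probabilities:

```python
import numpy as np

def breaking_ties_selection(proba: np.ndarray, n_select: int) -> np.ndarray:
    """Select the samples whose two highest class probabilities are closest (most ambiguous)."""
    top2 = np.sort(proba, axis=1)[:, -2:]   # two largest posteriors per sample
    bt = top2[:, 1] - top2[:, 0]            # a small gap marks an informative, uncertain sample
    return np.argsort(bt)[:n_select]        # indices of the most valuable unlabeled samples
```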
Article
Low-cost, convenient tracking and authentication is an important issue nowadays, as opposed to using costly specialized hardware and a complex calibration process. In this paper, the focus is divided into two parts: (i) eye tracking and (ii) parallelism in execution. The first part is subdivided as follows. First, the human face is taken from a video sequence, which is divided into frames, and eye regions are recognized from the image frames. Second, the iris region and pupils (central points) are identified; in identifying the pupil, energy intensity and edge strength are taken into account together. The iris and eye corners are considered as tracking points. The well-known sinusoidal head model is used for the 3-D head shape, and an adaptive probability density function (APDF) is proposed for estimating pose in facial feature extraction. Third, iris (pupil) tracking is completed by integrating the eye vector and the head movement information gained from the APDF. The second part focuses on reducing execution time (ET) by applying a pipeline architecture (PA), so the entire process is subdivided into three phases. Phase I tracks the eyes and pupils and prepares an eye vector. Phase II completes the calibration process. Phase III performs matching and feature extraction. The PA improves performance in terms of ET because the three phases run in parallel. The experiment is based on the CK+ image sets. Indeed, by using the PA, the ET is reduced by two-thirds (66.68%). The average tracking accuracy (91% for the left eye pupil (LEP) and 93.6% for the right eye pupil (REP)) is robust.
Article
Satellite images are vital nowadays, and they are used in remote sensing, weather forecasting, monitoring day-to-day national security issues, etc. Due to the large capacity and resolution of Earth-imaging sensors and image acquisition techniques, image storage and retrieval are becoming a problem. Images have been retrieved using text-based image retrieval, which is unable to provide accurate results. Content-based image retrieval (CBIR) for remote information systems provides a solution to the retrieval problem, but existing CBIR systems based on low-level feature extraction techniques such as color, shape, and texture remain an open problem. In this research, we propose and implement a new fusion technique for extracting image features, based on a hybrid extraction model that includes sparse auto-encoding of gray-level co-occurrence matrix-based features from principal component analysis and discrete wavelet transform derived coefficients, with prediction of information using a neural network and a fuzzy inference system approach for the classification of the retrieved data. Results obtained from the proposed technique are compared with the existing techniques: gray-level co-occurrence matrix, scale-invariant feature transform, and speeded-up robust features. The experimental results show that the performance of the proposed system is better in terms of accuracy, precision, and error in comparison with previous techniques.
Article
An adaptive geometric filter approach based on the average brightness of the image and discrete cosine transform (DCT) coefficient adjustment is proposed in this paper for gray and color image enhancement. The adaptive geometric filter is employed to smooth the histogram peaks before histogram equalization, and DCT coefficient adjustment is applied to enhance the fine details of the image. The proposed method is compared with other state-of-the-art contrast enhancement techniques. Experimental results show that the proposed technique outperforms the other compared methods both visually and quantitatively. Three metrics are evaluated for numerical assessment: entropy, quality index and measure of enhancement (EME). A color quality enhancement metric is also tested for color images. After a thorough study, it is concluded that our method intensifies the image contrast very well and also preserves the original characteristics of the input images.
Article
This paper introduces a 340 GHz polarization converter. The polarization converter consists of a bilayer metamaterial, which is processed by a flexible thin-film lithography method. Two-dimensional metamaterials offer advantages over traditional quarter-wave plates, such as compactness, low profile and a simple fabrication process, but the insertion loss is comparatively high. In order to lower the insertion loss, a 10 μm ultrathin polyimide substrate and a bilayer structure separated by a quarter-wavelength air impedance-matching layer are adopted. After the horn antenna's far-field test with a vector network analyzer and the Gaussian beam test based on THz-TDS, the results show that in the band of 325-350 GHz the transmission loss is less than 2 dB and the axial ratio is less than 3 dB; moreover, the difference between simulated and measured insertion loss is less than 1 dB, which indicates that the adopted method is effective. In addition, based on transmission line theory, we analyze its working principle and the influence of fabrication error.
Article
Full-text available
In the Bayesian context, a well-founded choice of probability density function (pdf) for a synthetic aperture radar (SAR) image improves the performance of unsupervised change detection (CD). This work therefore concentrates on the consequences of selecting an empirical pdf and its parameter estimation technique. A Rayleigh pdf is used to model the approximation and detail sub-bands of a dual-tree complex wavelet transform (DTCWT) of the log-ratio difference image. The chosen pdf is potentially effective for impulsive SAR images. The scale parameter of the Rayleigh pdf is estimated using first-order moments by exploiting the Mellin transform. This iteration-free parameter estimation technique makes the method faster. In each sub-band at each level of the DTCWT decomposition, a binary threshold is estimated to represent the change signal at different scales. The performance of the final change map is improved by combining intra-scale along with inter-scale sub-band coefficients. The proposed method is tested on three flooding SAR image data sets acquired from the European Space Agency environmental satellite, and the European remote-sensing satellite (ERS)-2 and ERS-1 SAR instruments. The experimental results show that the proposed method achieves good CD performance in terms of false alarm rate and overall detection accuracy in less computing time.
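An iteration-free, first-order-moment estimate of the Rayleigh scale parameter follows directly from the standard relation E[X] = sigma * sqrt(pi/2); the sketch below uses that relation rather than the paper's exact Mellin-transform derivation:

```python
import numpy as np

def rayleigh_scale_from_moment(coeffs: np.ndarray) -> float:
    """Estimate the Rayleigh scale sigma from the first-order moment of sub-band magnitudes."""
    mean = np.abs(coeffs).mean()
    return float(mean / np.sqrt(np.pi / 2.0))   # E[X] = sigma * sqrt(pi/2)  =>  sigma = mean / sqrt(pi/2)
```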
Article
Semantic segmentation is a fundamental research topic in remote sensing image processing. Because of the complex maritime environment, sea-land segmentation is a challenging task. Although neural networks have achieved excellent performance in semantic segmentation in recent years, there are few works using CNNs for sea-land segmentation, and the results could be further improved. This paper proposes a novel deep convolutional neural network named DeepUNet. Like the U-Net, its structure has a contracting path and an expansive path to obtain high-resolution output. Differently, however, the DeepUNet uses DownBlocks instead of convolution layers in the contracting path and uses UpBlocks in the expansive path. The two novel blocks bring two new connections, the U-connection and the Plus connection, which are introduced to obtain more precise segmentation results. To verify our network architecture, we built a new challenging sea-land dataset and compared the DeepUNet on it with the SegNet and the U-Net. Experimental results show that DeepUNet achieved good performance compared with the other architectures, especially on high-resolution remote sensing imagery.
Conference Paper
Full-text available
This paper presents a superpixel-based classifier for landcover mapping of hyperspectral image data. The approach relies on the sparse representation of each pixel by a weighted linear combination of the training data. Spatial information is incorporated by using a coarse patch-based neighborhood around each pixel as well as data-adapted superpixels. The classification is done via a hierarchical conditional random field, which utilizes the sparse-representation output and models spatial and hierarchical structures in the hyperspectral image. The experiments show that the proposed approach results in superior accuracies in comparison to sparse-representation based classifiers that solely use a patch-based neighborhood.
Article
Full-text available
We propose and analyze a new model for Hyperspectral Images (HSI) based on the assumption that the whole signal is composed of a linear combination of few sources, each of which has a specific spectral signature, and that the spatial abundance maps of these sources are themselves piecewise smooth and therefore efficiently encoded via typical sparse models. We derive new sampling schemes exploiting this assumption and give theoretical lower bounds on the number of measurements required to reconstruct HSI data and recover their source model parameters. This allows us to segment hyperspectral images into their source abundance maps directly from compressed measurements. We also propose efficient optimization algorithms and perform extensive experimentation on synthetic and real datasets, which reveals that our approach can be used to encode HSI with far less measurements and computational effort than traditional CS methods.
Article
Full-text available
This paper presents a structured dictionary-based model for hyperspectral data that incorporates both spectral and contextual characteristics of a spectral sample, with the goal of hyperspectral image classification. The idea is to partition the pixels of a hyperspectral image into a number of spatial neighborhoods called contextual groups and to model each pixel with a linear combination of a few dictionary elements learned from the data. Since pixels inside a contextual group are often made up of the same materials, their linear combinations are constrained to use common elements from the dictionary. To this end, dictionary learning is carried out with a joint sparse regularizer to induce a common sparsity pattern in the sparse coefficients of each contextual group. The sparse coefficients are then used for classification using a linear SVM. Experimental results on a number of real hyperspectral images confirm the effectiveness of the proposed representation for hyperspectral image classification. Moreover, experiments with simulated multispectral data show that the proposed model is capable of finding representations that may effectively be used for classification of multispectral-resolution samples.
Article
Full-text available
Because hyperspectral imagery is generally low resolution, it is possible for one pixel in the image to contain several materials. The process of determining the abundance of representative materials in a single pixel is called spectral unmixing. We discuss the L1 unmixing model and fast computational approaches based on Bregman iteration. We then use the unmixing information and Total Variation (TV) minimization to produce a higher resolution hyperspectral image in which each pixel is driven towards a "pure" material. This method produces images with higher visual quality and can be used to indicate the subpixel location of features.
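A compact stand-in for the L1 unmixing step described above, posed as a sparsity-penalised non-negative regression of each pixel spectrum onto a known endmember dictionary; scikit-learn's Lasso is used here for brevity rather than the Bregman-iteration solver of the paper, and the regularisation weight is an example value:

```python
import numpy as np
from sklearn.linear_model import Lasso

def unmix_pixel(spectrum: np.ndarray, endmembers: np.ndarray, alpha: float = 1e-3) -> np.ndarray:
    """Estimate sparse, non-negative abundances a so that endmembers @ a ~= spectrum."""
    model = Lasso(alpha=alpha, positive=True, fit_intercept=False, max_iter=10000)
    model.fit(endmembers, spectrum)        # endmembers: (n_bands, n_sources) matrix
    a = model.coef_
    return a / max(a.sum(), 1e-12)         # optional sum-to-one normalisation of abundances
```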
Article
Full-text available
A new sparsity-based algorithm for the classification of hyperspectral imagery is proposed in this paper. The proposed algorithm relies on the observation that a hyperspectral pixel can be sparsely represented by a linear combination of a few training samples from a structured dictionary. The sparse representation of an unknown pixel is expressed as a sparse vector whose nonzero entries correspond to the weights of the selected training samples. The sparse vector is recovered by solving a sparsity-constrained optimization problem, and it can directly determine the class label of the test sample. Two different approaches are proposed to incorporate the contextual information into the sparse recovery optimization problem in order to improve the classification performance. In the first approach, an explicit smoothing constraint is imposed on the problem formulation by forcing the vector Laplacian of the reconstructed image to become zero. In this approach, the reconstructed pixel of interest has similar spectral characteristics to its four nearest neighbors. The second approach is via a joint sparsity model where hyperspectral pixels in a small neighborhood around the test pixel are simultaneously represented by linear combinations of a few common training samples, which are weighted with a different set of coefficients for each pixel. The proposed sparsity-based algorithm is applied to several real hyperspectral images for classification. Experimental results show that our algorithm outperforms the classical supervised classifier support vector machines in most cases.
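The core decision rule of such sparsity-based classifiers can be illustrated in a few lines: recover a sparse code for the test pixel over the dictionary of training samples, then assign the class whose atoms give the smallest reconstruction residual. OMP is used here as a generic sparse solver and the sparsity level is an example setting, not the paper's exact recovery algorithm:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_classify(pixel: np.ndarray, dictionary: np.ndarray, labels: np.ndarray, k: int = 10) -> int:
    """dictionary: (n_bands, n_train) matrix of training spectra; labels: class of each column."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
    omp.fit(dictionary, pixel)
    coef = omp.coef_
    residuals = {}
    for c in np.unique(labels):
        coef_c = np.where(labels == c, coef, 0.0)             # keep only class-c coefficients
        residuals[c] = np.linalg.norm(pixel - dictionary @ coef_c)
    return min(residuals, key=residuals.get)                   # class with the smallest residual
```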
Article
Full-text available
The spectral features in hyperspectral imagery (HSI) contain significant structure that, if properly characterized, could enable more efficient data acquisition and improved data analysis. Because most pixels contain reflectances of just a few materials, we propose that a sparse coding model is well-matched to HSI data. Sparsity models consider each pixel as a combination of just a few elements from a larger dictionary, and this approach has proven effective in a wide range of applications. Furthermore, previous work has shown that optimal sparse coding dictionaries can be learned from a dataset with no other a priori information (in contrast to many HSI “endmember” discovery algorithms that assume the presence of pure spectra or side information). We modified an existing unsupervised learning approach and applied it to HSI data (with significant ground truth labeling) to learn an optimal sparse coding dictionary. Using this learned dictionary, we demonstrate three main findings: 1) the sparse coding model learns spectral signatures of materials in the scene and locally approximates nonlinear manifolds for individual materials; 2) this learned dictionary can be used to infer HSI-resolution data with very high accuracy from simulated imagery collected at multispectral-level resolution, and 3) this learned dictionary improves the performance of a supervised classification algorithm, both in terms of the classifier complexity and generalization from very small training sets.
Conference Paper
Full-text available
Transductive SVM is a semi-supervised method which can capture the intrinsic properties of each class's structure in feature space with the help of a large number of unlabeled data. It can optimize the classification result with few and poorly representative labeled samples. A weakness of this method is that one needs to determine the number of unlabeled samples belonging to a specific class before iteration, and the labeling efficiency is very low. We propose an unlabeled-sample labeling method for TSVM on remote sensing images. With this method, we no longer need to know the ratio of the unlabeled samples among classes. The first step of our method is clustering, and the number of clusters must be more than 5 times the number of classes to be classified. After clustering, we obtain the mean value and the standard deviation of every cluster. Then, at each step, we label all the unlabeled samples contained in the hyper-ball with the mean value as center and the standard deviation as radius, instead of labeling one pair of unlabeled samples at a time. The classification experiments prove that the proposed method is not only effective but can also improve the classification accuracy to some extent.
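A sketch of the hyper-ball labeling idea described above: cluster the unlabeled data, then pseudo-label every sample falling within one standard deviation of a cluster center at once, instead of one pair at a time. The cluster count and radius rule follow the abstract; the `assign_label` callback (e.g. the current SVM's prediction at the cluster center) and everything else are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

def hyperball_pseudo_label(unlabeled: np.ndarray, n_classes: int, assign_label) -> list:
    """Cluster unlabeled data into > 5 * n_classes clusters and label whole hyper-balls at once."""
    km = KMeans(n_clusters=5 * n_classes + 1, n_init=10, random_state=0).fit(unlabeled)
    pseudo = []
    for c in range(km.n_clusters):
        members = unlabeled[km.labels_ == c]
        center, radius = members.mean(axis=0), members.std()
        inside = np.linalg.norm(members - center, axis=1) <= radius   # hyper-ball membership
        label = assign_label(center)                                   # e.g. current classifier at the center
        pseudo.extend((x, label) for x in members[inside])
    return pseudo
```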
Article
Full-text available
This paper presents a semi-supervised graph-based method for the classification of hyperspectral images. The method is designed to handle the special characteristics of hyperspectral images, namely, high-input dimension of pixels, low number of labeled samples, and spatial variability of the spectral signature. To alleviate these problems, the method incorporates three ingredients, respectively. First, being a kernel-based method, it combats the curse of dimensionality efficiently. Second, following a semi-supervised approach, it exploits the wealth of unlabeled samples in the image, and naturally gives relative importance to the labeled ones through a graph-based methodology. Finally, it incorporates contextual information through a full family of composite kernels. Noting that the graph method relies on inverting a huge kernel matrix formed by both labeled and unlabeled samples, we originally introduce the Nyström method in the formulation to speed up the classification process. The presented semi-supervised graph-based method is compared to state-of-the-art support vector machines in the classification of hyperspectral data. The proposed method produces better classification maps, which capture the intrinsic structure collectively revealed by labeled and unlabeled points. Good and stable accuracy is produced in ill-posed classification problems (high dimensional spaces and low number of labeled samples). In addition, the introduction of the composite-kernel framework drastically improves results, and the new fast formulation ranks almost linearly in the computational cost, rather than cubic as in the original method, thus allowing the use of this method in remote-sensing applications.
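The graph-based part of such methods boils down to propagating the few available labels over an affinity graph built from both labeled and unlabeled pixels; scikit-learn's LabelSpreading gives a compact stand-in (without the Nyström acceleration or composite kernels of the paper, and with example parameter values):

```python
import numpy as np
from sklearn.semi_supervised import LabelSpreading

def graph_ssl_classify(features: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """features: (n_pixels, n_bands); labels: class index per pixel, with -1 marking unlabeled pixels."""
    model = LabelSpreading(kernel="rbf", gamma=1.0, alpha=0.2, max_iter=100)
    model.fit(features, labels)       # builds the graph over labeled + unlabeled samples
    return model.transduction_        # propagated label for every pixel
```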
Article
Full-text available
This letter presents a framework of composite kernel machines for enhanced classification of hyperspectral images. This novel method exploits the properties of Mercer's kernels to construct a family of composite kernels that easily combine spatial and spectral information. This framework of composite kernels demonstrates: 1) enhanced classification accuracy as compared to traditional approaches that take into account the spectral information only; 2) flexibility to balance between the spatial and spectral information in the classifier; and 3) computational efficiency. In addition, the proposed family of kernel classifiers opens a wide field for future developments in which spatial and spectral information can be easily integrated.
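The weighted-summation composite kernel, the simplest member of this family, can be written in a few lines: build one kernel from the spectral features and one from spatial features (e.g. per-pixel neighbourhood means) and combine them with a balance parameter mu before feeding a precomputed-kernel SVM. The mu and RBF widths below are example values, not the letter's settings:

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def composite_kernel(spec: np.ndarray, spat: np.ndarray, mu: float = 0.5,
                     gamma_spec: float = 0.5, gamma_spat: float = 0.5) -> np.ndarray:
    """Weighted sum of a spatial kernel and a spectral kernel: K = mu*K_spat + (1-mu)*K_spec."""
    return mu * rbf_kernel(spat, gamma=gamma_spat) + (1.0 - mu) * rbf_kernel(spec, gamma=gamma_spec)

# Usage sketch: K_train = composite_kernel(spec_train, spat_train), then
#   svm = SVC(kernel="precomputed").fit(K_train, y_train)
# and prediction uses the cross-kernel between test and training pixels.
```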
Article
Spectral and spatial regularized semi-supervised learning is investigated for the classification of hyperspectral images. In the spectral regularizer, the sum of minimum distance (SMD) is introduced to determine graph adjacency relationships, and local manifold learning (LML) is employed for edge weighting. The resulting SMD_LML regularizer is able to constrain the prediction vectors to preserve the local geometry of each neighborhood. In the spatial regularizer, spatial neighbors with similar spectra are enforced to have similar predictions. The two regularizers capture different relations among data points and play complementary roles. By combining the SMD_LML regularizer and the spatial regularizer in graph-based semi-supervised learning, the local properties of both the spectral neighborhood and the spatial neighborhood can be preserved in the prediction domain. Experiments with AVIRIS and ROSIS hyperspectral images demonstrated that SMD can produce more accurate adjacency relations than two other popular distance measurements. The classification with the SMD_LML and spatial regularizers achieved significant improvements over several spectral- and spatial-based methods.
Article
High-dimensional features and limited labeled training samples often lead to dimensionality disaster for hyperspectral image classification. Semisupervised learning has shown great significance in hyperspectral image processing. A semisupervised classification algorithm based on spatial-spectral clustering (SC-S2C) was proposed. In the proposed framework, spatial information extracted by Gabor filter was first stacked with spectral information. After that, an active learning (AL) algorithm was used to select the most informative unlabeled samples. Then a probability model based support vector machine combined with the SC-S2C technique was used to predict the labels of the selected unlabeled data. The proposed algorithm was experimentally validated on real hyperspectral datasets, indicating that the proposed framework can utilize the unlabeled data effectively and achieve high accuracy compared with state-of-the-art algorithms when small labeled data are available.
Article
The articles in this special issue represent advances in hyperspectral image-processing research. Specifically, the issue focuses on novel algorithmic approaches related to the representation and analysis of hyperspectral images, as well as their application to real-world image-analysis needs. It also delves into theoretical insights, performance limits, and trade-offs of the class of emerging image-processing approaches tailored to hyperspectral images. This issue covers a gamut of broad topics, from representation and sensing (image coding, compressive sensing) and feature extraction and channel selection, to image analysis (spectral unmixing, classification, and detection).
Article
In this paper, we propose a superpixel-level sparse representation classification framework with multitask learning for hyperspectral imagery. The proposed algorithm exploits the class-level sparsity prior for multiple-feature fusion, and the correlation and distinctiveness of pixels in a spatial local region. Compared with some of the state-of-the-art hyperspectral classifiers, the superiority of the multiple-feature combination, the spatial prior utilization, and the computational complexity are maintained at the same time in the proposed method. The proposed classification algorithm was tested on three hyperspectral images. The experimental results suggest that the proposed algorithm performs better than the other sparse (collaborative) representation-based algorithms and some popular hyperspectral multiple-feature classifiers.
Article
Recent advances in spectral-spatial classification of hyperspectral images are presented in this paper. Several techniques are investigated for combining both spatial and spectral information. Spatial information is extracted at the object (set of pixels) level rather than at the conventional pixel level. Mathematical morphology is first used to derive the morphological profile of the image, which includes characteristics about the size, orientation, and contrast of the spatial structures present in the image. Then, the morphological neighborhood is defined and used to derive additional features for classification. Classification is performed with support vector machines (SVMs) using the available spectral information and the extracted spatial information. Spatial postprocessing is next investigated to build more homogeneous and spatially consistent thematic maps. To that end, three presegmentation techniques are applied to define regions that are used to regularize the preliminary pixel-wise thematic map. Finally, a multiple-classifier (MC) system is defined to produce relevant markers that are exploited to segment the hyperspectral image with the minimum spanning forest algorithm. Experimental results conducted on three real hyperspectral images with different spatial and spectral resolutions and corresponding to various contexts are presented. They highlight the importance of spectral-spatial strategies for the accurate classification of hyperspectral images and validate the proposed methods.
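The morphological profile that underlies such spectral-spatial methods is just the stack of openings and closings of a base image (typically a principal component) at increasing structuring-element sizes; a brief sketch using openings and closings by reconstruction with scikit-image, with the scales chosen as examples:

```python
import numpy as np
from skimage.morphology import disk, erosion, dilation, reconstruction

def morphological_profile(base: np.ndarray, radii=(2, 4, 6)) -> np.ndarray:
    """Stack the base image with its openings/closings by reconstruction at increasing scales."""
    layers = [base]
    for r in radii:
        # opening by reconstruction: erode, then reconstruct by dilation under the original image
        layers.append(reconstruction(erosion(base, disk(r)), base, method="dilation"))
        # closing by reconstruction: dilate, then reconstruct by erosion above the original image
        layers.append(reconstruction(dilation(base, disk(r)), base, method="erosion"))
    return np.dstack(layers)
```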
Article
The classification of high-dimensional data with too few labeled samples is a major challenge which is difficult to meet unless some special characteristics of the data can be exploited. In remote sensing, the problem is particularly serious because of the difficulty and cost factors involved in assignment of labels to high-dimensional samples. In this paper, we exploit certain special properties of hyperspectral data and propose an ℓ1-minimization-based sparse representation classification approach to overcome this difficulty in hyperspectral data classification. We assume that the data within each hyperspectral data class lies in a very low-dimensional subspace. Unlike traditional supervised methods, the proposed method does not have separate training and testing phases and, therefore, does not need a training procedure for model creation. Further, to prove the sparsity of hyperspectral data and handle the computational intensiveness and time demand of general-purpose linear programming (LP) solvers, we propose a Homotopy-based sparse classification approach, which works efficiently when data is highly sparse. The approach is not only time efficient, but it also produces results that are comparable to the traditional methods. The proposed approaches are tested for our difficult classification problem of hyperspectral data with few labeled samples. Extensive experiments on four real hyperspectral data sets prove that hyperspectral data is highly sparse in nature, and the proposed approaches are robust across different databases, offer higher classification accuracy, and are more efficient than state-of-the-art methods.
Article
This paper presents a new semisupervised segmentation algorithm, suited to high-dimensional data, of which remotely sensed hyperspectral image data sets are an example. The algorithm implements two main steps: 1) semisupervised learning of the posterior class distributions followed by 2) segmentation, which infers an image of class labels from a posterior distribution built on the learned class distributions and on a Markov random field. The posterior class distributions are modeled using multinomial logistic regression, where the regressors are learned using both labeled and, through a graph-based technique, unlabeled samples. Such unlabeled samples are actively selected based on the entropy of the corresponding class label. The prior on the image of labels is a multilevel logistic model, which enforces segmentation results in which neighboring labels belong to the same class. The maximum a posteriori segmentation is computed by the α-expansion min-cut-based integer optimization algorithm. Our experimental results, conducted using synthetic and real hyperspectral image data sets collected by the Airborne Visible/Infrared Imaging Spectrometer system of the National Aeronautics and Space Administration Jet Propulsion Laboratory over the regions of Indian Pines, IN, and Salinas Valley, CA, reveal that the proposed approach can provide classification accuracies that are similar or higher than those achieved by other supervised methods for the considered scenes. Our results also indicate that the use of a spatial prior can greatly improve the final results with respect to a case in which only the learned class densities are considered, confirming the importance of jointly considering spatial and spectral information in hyperspectral image segmentation.
Article
A framework for semisupervised remote sensing image classification based on neural networks is presented. The methodology consists of adding a flexible embedding regularizer to the loss function used for training neural networks. Training is done using stochastic gradient descent with additional balancing constraints to avoid falling into local minima. The method constitutes a generalization of both supervised and unsupervised methods and can handle millions of unlabeled samples. Therefore, the proposed approach gives rise to an operational classifier, as opposed to previously presented transductive or Laplacian support vector machines (TSVM or LapSVM, respectively). The proposed methodology constitutes a general framework for building computationally efficient semisupervised methods. The method is compared with LapSVM and TSVM in semisupervised scenarios, to SVM in supervised settings, and to online and batch k -means for unsupervised learning. Results demonstrate the improved classification accuracy and scalability of this approach on several hyperspectral image classification problems.
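The flexible embedding regularizer added to the network loss can be illustrated as a graph-based penalty that pulls the predictions of neighbouring (mostly unlabeled) samples together, added to the usual supervised cross-entropy; this is a generic sketch of that kind of loss, not the paper's exact regularizer or balancing constraints:

```python
import numpy as np

def semi_supervised_loss(probs, y_onehot, labeled_mask, W, lam=0.1):
    """probs: (n, c) network outputs; W: (n, n) affinity graph over labeled + unlabeled samples."""
    eps = 1e-12
    # supervised term: cross-entropy on the labeled subset only
    ce = -np.sum(y_onehot[labeled_mask] * np.log(probs[labeled_mask] + eps)) / max(labeled_mask.sum(), 1)
    # embedding/manifold regularizer: sum_ij W_ij * ||f(x_i) - f(x_j)||^2 over all samples
    diff = probs[:, None, :] - probs[None, :, :]
    reg = np.sum(W * np.sum(diff ** 2, axis=2))
    return ce + lam * reg
```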
Book
A comprehensive review of an area of machine learning that deals with the use of unlabeled data in classification problems: state-of-the-art algorithms, a taxonomy of the field, applications, benchmark experiments, and directions for future research. In the field of machine learning, semi-supervised learning (SSL) occupies the middle ground, between supervised learning (in which all training examples are labeled) and unsupervised learning (in which no label data are given). Interest in SSL has increased in recent years, particularly because of application domains in which unlabeled data are plentiful, such as images, text, and bioinformatics. This first comprehensive overview of SSL presents state-of-the-art algorithms, a taxonomy of the field, selected applications, benchmark experiments, and perspectives on ongoing and future research. Semi-Supervised Learning first presents the key assumptions and ideas underlying the field: smoothness, cluster or low-density separation, manifold structure, and transduction. The core of the book is the presentation of SSL methods, organized according to algorithmic strategies. After an examination of generative models, the book describes algorithms that implement the low-density separation assumption, graph-based methods, and algorithms that perform two-step learning. The book then discusses SSL applications and offers guidelines for SSL practitioners by analyzing the results of extensive benchmark experiments. Finally, the book looks at interesting directions for SSL research. The book closes with a discussion of the relationship between semi-supervised learning and transduction.
Article
The fundamental basis for space-based remote sensing is that information is potentially available from the electromagnetic energy field arising from the Earth's surface and, in particular, from the spatial, spectral, and temporal variations in that field. Rather than focusing on the spatial variations, which imagery perhaps best conveys, why not move on to look at how the spectral variations might be used. The idea was to enlarge the size of a pixel until it includes an area that is characteristic from a spectral response standpoint for the surface cover to be discriminated. The article includes an example of an image space representation, using three bands to simulate a color IR photograph of an airborne hyperspectral data set over the Washington, DC, mall
Article
This paper introduces a semisupervised classification method that exploits both labeled and unlabeled samples for addressing ill-posed problems with support vector machines (SVMs). The method is based on recent developments in statistical learning theory concerning transductive inference and in particular transductive SVMs (TSVMs). TSVMs exploit specific iterative algorithms which gradually search a reliable separating hyperplane (in the kernel space) with a transductive process that incorporates both labeled and unlabeled samples in the training phase. Based on an analysis of the properties of the TSVMs presented in the literature, a novel modified TSVM classifier designed for addressing ill-posed remote-sensing problems is proposed. In particular, the proposed technique: 1) is based on a novel transductive procedure that exploits a weighting strategy for unlabeled patterns, based on a time-dependent criterion; 2) is able to mitigate the effects of suboptimal model selection (which is unavoidable in the presence of small-size training sets); and 3) can address multiclass cases. Experimental results confirm the effectiveness of the proposed method on a set of ill-posed remote-sensing classification problems representing different operative conditions