Generalized discriminant analysis: a matrix exponential approach.
ABSTRACT Linear discriminant analysis (LDA) is a well-known and powerful tool for discriminant analysis. When the training set is small, however, it cannot be applied directly to high-dimensional data; this is the so-called small-sample-size, or undersampled, problem. In this paper, we propose an exponential discriminant analysis (EDA) technique to overcome the undersampled problem. EDA has two advantages: compared with principal component analysis (PCA) + LDA, it can extract the discriminant information contained in the null space of the within-class scatter matrix, and compared with another LDA extension, null-space LDA (NLDA), it does not discard the discriminant information contained in the non-null space of the within-class scatter matrix. Furthermore, EDA is equivalent to transforming the original data into a new space by distance diffusion mapping and then applying LDA in that space. As a result of the diffusion mapping, the margin between different classes is enlarged, which helps improve classification accuracy. Experimental comparisons on several data sets against existing LDA extensions, including PCA + LDA, LDA via generalized singular value decomposition, regularized LDA, NLDA, and LDA via QR decomposition, demonstrate the effectiveness of the proposed EDA method.
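The construction the abstract describes can be sketched in a few lines: exponentiate the between- and within-class scatter matrices, then solve the same generalized eigenproblem as ordinary LDA. Since the matrix exponential of a symmetric matrix is always positive definite, the singularity of the within-class scatter in the undersampled case disappears. This is a minimal illustration of the idea, not the paper's exact algorithm; the function and variable names below are ours.

```python
# Minimal sketch of exponential discriminant analysis (EDA):
# replace the scatter matrices S_b, S_w by exp(S_b), exp(S_w)
# and solve the resulting generalized eigenproblem as in LDA.
import numpy as np
from scipy.linalg import expm, eigh

def eda(X, y, n_components):
    """X: (n_samples, n_features) data; y: integer class labels."""
    d = X.shape[1]
    mean = X.mean(axis=0)
    Sb = np.zeros((d, d))  # between-class scatter
    Sw = np.zeros((d, d))  # within-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)
        Sw += (Xc - mc).T @ (Xc - mc)
    # exp(Sw) is symmetric positive definite even when Sw itself is
    # singular (the undersampled case), so no PCA step is needed.
    w, V = eigh(expm(Sb), expm(Sw))
    order = np.argsort(w)[::-1]        # largest eigenvalues first
    return V[:, order[:n_components]]  # projection matrix
```

Projecting with `X @ eda(X, y, k)` and classifying in the reduced space then mirrors the usual LDA pipeline.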
- ABSTRACT: In this paper, we compare the performance of various combinations of edge operators and linear subspace methods to determine the best combination for pose classification. To evaluate performance, we carried out experiments on the CMU-PIE database, which contains images with wide variation in illumination and pose. We found that pose classification performance depends on the choice of edge operator and linear subspace method; the best classification accuracy is obtained with the Prewitt edge operator and the Eigenfeature regularization method. To handle illumination variation, we used adaptive histogram equalization as a preprocessing step, resulting in a significant improvement in performance for all operators except Roberts. International Journal of Computer Applications 01/2012; 37(1):14-19. DOI:10.5120/4571-6565. Available from: research.ijcaonline.org
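The preprocessing pipeline this abstract describes (illumination normalization followed by an edge operator, with the edge map then fed to a linear subspace method) can be sketched as below. As assumptions on our part, we use a simple global histogram equalization as a stand-in for the paper's adaptive variant, and SciPy's Prewitt filter for the edge map; the function names are illustrative, not the authors' code.

```python
# Hedged sketch of the preprocessing pipeline: equalize the image,
# then compute a Prewitt edge magnitude map. The edge map would then
# be vectorized and passed to a subspace method such as PCA.
import numpy as np
from scipy.ndimage import prewitt

def equalize(img):
    """Global histogram equalization of an 8-bit grayscale image
    (a simple stand-in for adaptive histogram equalization)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum() / img.size          # cumulative distribution
    return (cdf[img] * 255).astype(np.uint8)

def prewitt_edges(img):
    """Prewitt gradient magnitude of a grayscale image."""
    f = img.astype(float)
    gx = prewitt(f, axis=0)  # vertical gradient
    gy = prewitt(f, axis=1)  # horizontal gradient
    return np.hypot(gx, gy)
```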
- ABSTRACT: Locality preserving projections (LPP) is a widely used manifold-based dimensionality reduction technique. However, it suffers from two problems: (1) the small-sample-size problem and (2) sensitivity of its performance to the neighborhood size k. To address these problems, we propose exponential locality preserving projections (ELPP) by introducing the matrix exponential. ELPP avoids the singularity of the matrices and obtains more valuable information for LPP. Experiments are conducted on three public face databases: ORL, Yale, and Georgia Tech. The results show that the performance of ELPP is better than that of LPP and of the state-of-the-art improved LPP. Neurocomputing 10/2011; 74(17):3654-3662. DOI:10.1016/j.neucom.2011.07.007
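The ELPP idea parallels the EDA construction: build the usual LPP neighborhood graph, then exponentiate both matrices of the LPP generalized eigenproblem so that they are guaranteed nonsingular. The sketch below is our own hedged illustration, using a symmetric k-NN graph with heat-kernel weights; the parameter choices and names are assumptions, not the authors' implementation.

```python
# Hedged sketch of exponential locality preserving projections (ELPP):
# LPP solves X'LX v = lam X'DX v; ELPP exponentiates both sides.
import numpy as np
from scipy.linalg import expm, eigh
from scipy.spatial.distance import cdist

def elpp(X, n_components, k=5, t=1.0):
    """X: (n_samples, n_features). Returns a projection matrix."""
    D2 = cdist(X, X, "sqeuclidean")
    # symmetric k-NN graph with heat-kernel weights exp(-d^2 / t)
    W = np.zeros_like(D2)
    idx = np.argsort(D2, axis=1)[:, 1:k + 1]  # skip self at column 0
    for i, nbrs in enumerate(idx):
        W[i, nbrs] = np.exp(-D2[i, nbrs] / t)
    W = np.maximum(W, W.T)
    Dg = np.diag(W.sum(axis=1))
    L = Dg - W                      # graph Laplacian
    A = X.T @ L @ X                 # numerator of the LPP objective
    B = X.T @ Dg @ X                # denominator of the LPP objective
    # exp(A), exp(B) are nonsingular, avoiding the small-sample issue
    w, V = eigh(expm(A), expm(B))
    return V[:, :n_components]      # smallest eigenvalues, as in LPP
```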
- ABSTRACT: Linear discriminant analysis (LDA) is a very popular linear feature extraction approach. LDA algorithms usually perform well under two assumptions: first, that the global data structure is consistent with the local data structure, and second, that each input data class is Gaussian distributed. In real-world applications, however, these assumptions are not always satisfied. In this paper, we propose an improved LDA framework, local LDA (LLDA), which performs well without requiring either assumption. The LLDA framework effectively captures the local structure of samples and, according to the type of local data structure, incorporates several different linear feature extraction approaches, such as classical LDA and principal component analysis. The framework includes two algorithms: a vector-based LLDA algorithm and a matrix-based LLDA (MLLDA) algorithm. MLLDA is directly applicable to image recognition tasks such as face recognition. Our algorithms need to train on only a small portion of the whole training set before testing a sample; they are suitable for learning on large-scale databases, especially when the input dimensionality is very high, and can achieve high classification accuracy. Extensive experiments show that the proposed algorithms obtain good classification results. IEEE Transactions on Neural Networks 08/2011; DOI:10.1109/TNN.2011.2152852
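The "train on only a small portion of the training set near the test sample" idea can be illustrated, though not reproduced (the paper's actual framework is more elaborate), by running a classical LDA on just the test point's nearest training neighbors. Everything in this sketch, including the neighborhood size, ridge regularization, and nearest-projected-mean classifier, is our own assumption.

```python
# Hedged illustration of a local-LDA classifier: fit LDA only on the
# k training samples nearest to the test point, then classify by the
# nearest class mean in the projected space.
import numpy as np
from scipy.linalg import eigh

def local_lda_predict(X, y, x, k=20):
    """X: (n, p) training data; y: integer labels; x: (p,) test point."""
    nbr = np.argsort(np.linalg.norm(X - x, axis=1))[:k]
    Xl, yl = X[nbr], y[nbr]
    classes = np.unique(yl)
    if len(classes) == 1:            # neighborhood is pure
        return int(classes[0])
    p = X.shape[1]
    mean = Xl.mean(axis=0)
    Sb = np.zeros((p, p))
    Sw = 1e-6 * np.eye(p)            # small ridge keeps Sw positive definite
    means = {}
    for c in classes:
        Xc = Xl[yl == c]
        means[c] = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(means[c] - mean, means[c] - mean)
        Sw += (Xc - means[c]).T @ (Xc - means[c])
    w, V = eigh(Sb, Sw)              # local generalized eigenproblem
    W = V[:, np.argsort(w)[::-1][:len(classes) - 1]]
    z = (x - mean) @ W
    return int(min(classes,
                   key=lambda c: np.linalg.norm(z - (means[c] - mean) @ W)))
```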