Article

Generalized discriminant analysis: a matrix exponential approach.

Department of Computer Science, Chongqing University, Chongqing 400030, China.
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics. 08/2009; 40(1):186-197. DOI: 10.1109/TSMCB.2009.2024759
Source: PubMed

ABSTRACT: Linear discriminant analysis (LDA) is well known as a powerful tool for discriminant analysis. In the case of a small training set, however, it cannot be applied directly to high-dimensional data; this is the so-called small-sample-size, or undersampled, problem. In this paper, we propose an exponential discriminant analysis (EDA) technique to overcome the undersampled problem. EDA has two advantages: compared with principal component analysis (PCA) + LDA, it can extract the discriminant information contained in the null space of the within-class scatter matrix, and compared with another LDA extension, null-space LDA (NLDA), it does not discard the discriminant information contained in the non-null space of the within-class scatter matrix. Furthermore, EDA is equivalent to transforming the original data into a new space by a distance diffusion mapping and then applying LDA in that space. As a result of the diffusion mapping, the margin between different classes is enlarged, which helps improve classification accuracy. Experimental comparisons on several data sets with existing LDA extensions, including PCA + LDA, LDA via generalized singular value decomposition, regularized LDA, NLDA, and LDA via QR decomposition, demonstrate the effectiveness of the proposed EDA method.
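A minimal sketch of the criterion the abstract describes: replace the between- and within-class scatter matrices in the Fisher quotient with their matrix exponentials, which are always nonsingular, so the undersampled case no longer causes a singular within-class scatter. The function name `eda` and the SciPy-based implementation are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of exponential discriminant analysis (EDA), assuming the
# Fisher criterion with matrix exponentials of the scatter matrices; the
# function name and API are illustrative, not the authors' code.
import numpy as np
from scipy.linalg import expm, eig

def eda(X, y, n_components):
    """X: (n_samples, n_features) data; y: class labels."""
    d = X.shape[1]
    mean_all = X.mean(axis=0)
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all).reshape(-1, 1)
        Sb += len(Xc) * (diff @ diff.T)
    # expm(Sw) is always nonsingular, so no discriminant directions are lost
    M = np.linalg.solve(expm(Sw), expm(Sb))
    vals, vecs = eig(M)
    order = np.argsort(-vals.real)             # largest Fisher quotients first
    return vecs[:, order[:n_components]].real  # project with X @ W
```

Projecting with X @ W then amounts to running ordinary LDA in the exponentially mapped space, in keeping with the diffusion-mapping interpretation above.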

  • ABSTRACT: The traditional vectorized classifier is assumed to incorporate class structural information but to ignore the individual structure of each single pattern. In contrast, the matrixized classifier considers both the class and the individual structures and thus achieves superior performance to the vectorized classifier. In this paper, we explore a middle granularity, named the cluster, between the class and the individual, and introduce the cluster structure, i.e., the structure within each class, into the design of the matrixized classifier. Doing so simultaneously utilizes the class, cluster, and individual structures, proceeding from the global level down to the single point, so the proposed classifier design owns three-fold structural information and can improve classification performance. In practice, we adopt the Modification of Ho–Kashyap algorithm with Squared approximation of the misclassification errors (MHKS) as the learning paradigm and develop a Three-fold Structured MHKS, named TSMHKS. The advantage of this three-fold structural learning framework is that it considers the different degrees of closeness between samples so as to improve performance. The experimental results demonstrate the feasibility and effectiveness of TSMHKS. Furthermore, we discuss the theoretical and experimental generalization bounds of the proposed algorithm.
    Pattern Recognition. 06/2013; 46(6):1532–1555.
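The abstract above does not spell out the MHKS learning details; as a hedged illustration of only the intermediate "cluster" granularity it introduces, the sketch below runs k-means inside each class so that every sample carries class, cluster, and individual identifiers. The helper name `class_cluster_ids` and the choice of k-means are assumptions for illustration, not the paper's procedure.

```python
# Hypothetical illustration of the intermediate "cluster" granularity:
# k-means inside each class gives every sample a class, a cluster, and an
# individual identifier. The MHKS learning itself is not reproduced here.
import numpy as np
from sklearn.cluster import KMeans

def class_cluster_ids(X, y, clusters_per_class=3, seed=0):
    """Return an (n_samples, 3) array of (class, cluster, sample) ids."""
    cluster_id = np.empty(len(y), dtype=int)
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        km = KMeans(n_clusters=min(clusters_per_class, len(idx)),
                    n_init=10, random_state=seed).fit(X[idx])
        cluster_id[idx] = km.labels_
    return np.column_stack([y, cluster_id, np.arange(len(y))])
```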
  • ABSTRACT: Multi-view learning processes data with multiple information sources. Our previous work extended multi-view learning and proposed an effective learning machine named MultiV-MHKS. MultiV-MHKS first changes a base classifier into M different sub-classifiers and then designs one joint learning process for the M generated sub-classifiers, each of which is taken as one view of MultiV-MHKS. However, MultiV-MHKS assumed that each sub-classifier should play an equal role in the ensemble; thus the weight values r_q, q = 1, ..., M, of the sub-classifiers were all set to the same value. In practice, this hypothesis is neither flexible nor appropriate, since the r_q should reflect the different effects of their corresponding views. In order to make the r_q flexible and appropriate, in this paper we propose a regularized multi-view learning machine named RMultiV-MHKS with optimized r_q. We optimize the r_q using the Response Surface Technique (RST) on cross-validation data and thus obtain a regularized multi-view learning machine. Doing so can assign a certain view zero weight in the combination, meaning that this view carries no discriminative information for the problem and can hence be pruned. The experimental results validate the effectiveness of the proposed RMultiV-MHKS and explore the effect of some important parameters. The characteristics of RMultiV-MHKS are: (1) it distributes more weight to the favorable views, which reflects the properties of the problem; (2) it owns a tighter generalization risk bound than its corresponding single-view learning machine in terms of the Rademacher complexity; and (3) it has a statistically superior classification performance to the original MultiV-MHKS.
    Neurocomputing. 11/2012; 97:201–213.
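The RST-based optimization of the view weights is not reproduced here; as a hedged sketch of only the downstream idea the abstract describes, the snippet below combines M sub-classifier outputs with nonnegative weights r_q and prunes any view whose weight is zero. The function name, decision-value shapes, and example weights are assumptions.

```python
# Illustrative weighted combination of M sub-classifier outputs; the weights
# would come from the paper's RST-based cross-validation, not shown here.
import numpy as np

def combine_views(decision_values, weights):
    """decision_values: (M, n_samples); weights: length-M nonnegative r_q."""
    w = np.asarray(weights, dtype=float)
    active = w > 0                      # a zero weight prunes that view
    scores = w[active] @ np.asarray(decision_values)[active]
    return np.sign(scores)              # ensemble decision

# Example: three views; the second carries no discriminative information.
scores = np.array([[0.9, -0.4], [0.1, 0.2], [0.7, -0.6]])
print(combine_views(scores, [0.6, 0.0, 0.4]))  # -> [ 1. -1.]
```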
  • ABSTRACT: Locality preserving projections (LPP) is a widely used manifold-based dimensionality reduction technique. However, it suffers from two problems: (1) the small-sample-size problem and (2) sensitivity of its performance to the neighborhood size k. In order to address these problems, we propose exponential locality preserving projections (ELPP) by introducing the matrix exponential. ELPP avoids the singularity of the matrices and obtains more valuable information for LPP. The experiments are conducted on three public face databases: ORL, Yale, and Georgia Tech. The results show that the performance of ELPP is better than that of LPP and a state-of-the-art improved LPP.
    Neurocomputing. 01/2011; 74(17):3654-3662.
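A minimal sketch of the ELPP idea, assuming the standard LPP generalized eigenproblem X'LX a = lambda X'DX a built from a k-nearest-neighbor graph, with both matrices replaced by their matrix exponentials so that they are nonsingular. The graph-construction details (k, heat-kernel width t) and the function name `elpp` are assumptions, not the paper's exact settings.

```python
# Minimal ELPP sketch, assuming the standard LPP generalized eigenproblem
# with both matrices replaced by their exponentials (assumed formulation).
import numpy as np
from scipy.linalg import expm, eigh
from scipy.spatial.distance import cdist

def elpp(X, n_components, k=5, t=1.0):
    """X: (n_samples, n_features). Returns an (n_features, n_components) W."""
    n = X.shape[0]
    dist = cdist(X, X)
    A = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(dist[i])[1:k + 1]           # k nearest neighbors
        A[i, nbrs] = np.exp(-dist[i, nbrs] ** 2 / t)  # heat-kernel weights
    A = np.maximum(A, A.T)                            # symmetrize the graph
    D = np.diag(A.sum(axis=1))
    L = D - A                                         # graph Laplacian
    # exponentials of the symmetric matrices below are always nonsingular
    vals, vecs = eigh(expm(X.T @ L @ X), expm(X.T @ D @ X))
    return vecs[:, :n_components]                     # smallest, as in LPP
```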