Ensemble-based discriminant learning with boosting for face recognition

The Edward S. Rogers Sr. Department of Electrical and Computer Engineering, University of Toronto, ON M5S 3G4, Canada.
IEEE Transactions on Neural Networks (Impact Factor: 2.95). 02/2006; 17(1):166-78. DOI: 10.1109/TNN.2005.860853
Source: PubMed


In this paper, we propose a novel ensemble-based approach to boost the performance of traditional Linear Discriminant Analysis (LDA)-based methods used in face recognition. The ensemble-based approach builds on the recently emerged technique known as "boosting". However, it is generally believed that boosting-like learning rules are not well suited to a strong and stable learner such as LDA. To overcome this limitation, a novel weakness analysis theory is developed here. The theory attempts to boost a strong learner by increasing the diversity between the classifiers created by the learner, at the expense of decreasing their margins, so as to achieve the tradeoff suggested by recent boosting studies for a low generalization error. In addition, a novel distribution accounting for the pairwise class discriminant information is introduced for effective interaction between the booster and the LDA-based learner. The integration of these methodologies leads to the novel ensemble-based discriminant learning approach, capable of taking advantage of both the boosting and LDA techniques. Promising experimental results obtained on various difficult face recognition scenarios demonstrate the effectiveness of the proposed approach. We believe that this work is especially beneficial in extending the boosting framework to accommodate general (strong/weak) learners.
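A minimal sketch of the general idea only, not of the paper's algorithm: an AdaBoost.M1-style loop that drives scikit-learn's LinearDiscriminantAnalysis as the base learner, emulating the booster's sample distribution by weighted resampling because LDA does not accept sample weights. The weakness analysis theory and the pairwise class discriminant distribution introduced in the paper are not reproduced here.

```python
# Minimal sketch (not the paper's algorithm): an AdaBoost.M1-style loop that
# drives an LDA base learner via weighted resampling.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis


def boost_lda(X, y, n_rounds=10, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    w = np.full(n, 1.0 / n)              # sample distribution kept by the booster
    classifiers, alphas = [], []
    for _ in range(n_rounds):
        # LDA does not accept sample weights, so the distribution is emulated
        # by resampling the training set according to w.
        idx = rng.choice(n, size=n, replace=True, p=w)
        clf = LinearDiscriminantAnalysis().fit(X[idx], y[idx])
        miss = clf.predict(X) != y
        err = np.dot(w, miss)
        if err == 0 or err >= 0.5:       # learner perfect or too weak: stop
            if err == 0:
                classifiers.append(clf)
                alphas.append(1.0)
            break
        alpha = 0.5 * np.log((1.0 - err) / err)
        classifiers.append(clf)
        alphas.append(alpha)
        w = w * np.exp(alpha * miss)     # up-weight misclassified samples
        w = w / w.sum()
    return classifiers, alphas


def predict_ensemble(classifiers, alphas, X, classes):
    # weighted majority vote over the LDA ensemble
    votes = np.zeros((len(X), len(classes)))
    for clf, a in zip(classifiers, alphas):
        pred = clf.predict(X)
        for k, c in enumerate(classes):
            votes[:, k] += a * (pred == c)
    return np.asarray(classes)[votes.argmax(axis=1)]
```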

    • "The feature extraction approaches are mainly divided into two categories: 1) holistic feature based [5]–[7] and 2) local feature based [8]–[10]. Holistic-feature-based approaches, including the principal component analysis (PCA) [5], linear discriminant analysis (LDA) [7], and independent component analysis [6], have been popularly used for face recognition . Nonetheless, more recent works [8]–[12] attempt to develop local-feature-based techniques, since they are more stable to local changes, such as illumination, expression, and occlusion [13]–[15]. "
    ABSTRACT: For a human face, the Gabor transform can extract its multiple-scale and multiple-orientation features, which are very useful for recognition. In this paper, Gabor-feature-based face recognition is formulated as a multitask sparse representation model, in which the sparse coding of each Gabor feature is regarded as a task. To effectively exploit the complementary yet correlated information among different tasks, a flexible representation algorithm termed multitask adaptive sparse representation (MASR) is proposed. The MASR algorithm not only restricts the Gabor features of one test sample to be jointly represented by training atoms from the same class but also encourages the selected atoms for these features to vary within each class, thus allowing better representation. In addition, to exploit local information, the MASR is applied to local regions of the Gabor features. Then, by considering the structural characteristics of the face and the effects of external interferences, a structural-residual weighting strategy is proposed to adaptively fuse the decision of each region. Experiments on various datasets verify the effectiveness of the proposed method in dealing with face occlusion, corruption, a small number of training samples, as well as variations of lighting and expression.
    IEEE Transactions on Instrumentation and Measurement 10/2015; 64(10). DOI:10.1109/TIM.2015.2427893 · 1.79 Impact Factor
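For orientation, the following is a rough sketch of the Gabor front end assumed by the MASR abstract above: a small filter bank at a few scales and orientations whose magnitude responses form the feature vector. The multitask adaptive sparse-coding stage is not shown, and the filter parameters are illustrative choices rather than the authors' settings.

```python
# Minimal sketch of a Gabor feature front end (parameters are illustrative):
# convolve a face image with a small filter bank at several scales and
# orientations and keep the magnitude responses.
import numpy as np
from scipy.signal import fftconvolve


def gabor_kernel(sigma, theta, wavelength, gamma=0.5, size=31):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2.0 * sigma ** 2))
    carrier = np.exp(1j * 2.0 * np.pi * xr / wavelength)   # complex sinusoid
    return envelope * carrier


def gabor_features(image, scales=(4.0, 8.0), n_orientations=4):
    feats = []
    for sigma in scales:
        for k in range(n_orientations):
            kern = gabor_kernel(sigma, theta=k * np.pi / n_orientations,
                                wavelength=2.0 * sigma)
            response = fftconvolve(image, kern, mode="same")
            feats.append(np.abs(response).ravel())          # magnitude response
    return np.concatenate(feats)
```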
    • "To construct the WMCs, we set a PRR for each WMC to be some value, which is slightly greater than a threshold T , say, 0.5. This is inspired by the basic idea of the typical boosting algorithms [50], [51]. If the classification accuracy yielded by a feature is greater than T , then we consider this feature is a WMC. "
    ABSTRACT: We modify the conventional principal component analysis (PCA) and propose a novel subspace learning framework, modified PCA (MPCA), using multiple similarity measurements. MPCA computes three similarity matrices exploiting the similarity measurements: 1) mutual information; 2) angle information; and 3) Gaussian kernel similarity. We employ the eigenvectors of the similarity matrices to produce new subspaces, referred to as similarity subspaces. A new integrated similarity subspace is then generated using a novel feature selection approach. This approach constructs a vector set, termed a weak machine cell (WMC), which contains an appropriate number of the eigenvectors spanning the similarity subspaces. Combining the wrapper method and the forward selection scheme, MPCA selects one WMC at a time that has a powerful discriminative capability to classify samples. MPCA is well suited to application scenarios in which the number of training samples is smaller than the data dimensionality. MPCA outperforms the other state-of-the-art PCA-based methods in terms of both classification accuracy and clustering results. In addition, MPCA can be applied to face image reconstruction and can use other types of similarity measurements. Extensive experiments on many popular real-world data sets, such as face databases, show that MPCA achieves desirable classification results and has a powerful capability to represent data.
    IEEE transactions on neural networks and learning systems 08/2014; 25(8):1538-1552. DOI:10.1109/TNNLS.2013.2294492 · 4.29 Impact Factor
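A loose illustration of the "similarity subspace" idea described in the MPCA abstract above, under the assumption (made for this sketch) that the similarities are computed between training samples: build a similarity matrix and keep its leading eigenvectors as the new basis. Only the angle (cosine) and Gaussian-kernel measures are sketched; the mutual-information measure and the WMC-based feature selection are omitted.

```python
# Loose illustration of building "similarity subspaces" from pairwise
# similarity matrices over training samples (an assumption of this sketch).
import numpy as np


def cosine_similarity_matrix(X):
    # angle information: cosine similarity between every pair of samples
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return Xn @ Xn.T


def gaussian_similarity_matrix(X, gamma=1e-3):
    # Gaussian (RBF) kernel similarity between every pair of samples
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.exp(-gamma * np.maximum(d2, 0.0))


def similarity_subspace(S, n_components=10):
    # leading eigenvectors of the symmetric similarity matrix span the subspace
    vals, vecs = np.linalg.eigh(S)
    order = np.argsort(vals)[::-1][:n_components]
    return vecs[:, order]
```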
    • "The aim of this study is to propose a new learning scheme, inheriting the advantages of the data driven NHL (DDNHL) algorithm and ensemble learning technique, and to compare this new training approach with the most known DDNHL approach for learning FCMs, according to its classification capabilities as stated in the literature [22] [23]. Ensemble learning is one of the most promising areas of soft computing, which is used successfully in many real world applications such as text categorization, optical character recognition, face recognition and computer aided medical diagnosis [24] [25] [26] [27] [28] [29] [30]. Ensemble with several neural networks is widely used to improve the generalization performance over a single network. "
    ABSTRACT: Fuzzy cognitive maps (FCMs) have gained considerable research interest and are widely used to analyze complex systems and make decisions. Recently they have found wide applicability in diverse domains for decision support and classification tasks. A new learning paradigm for FCMs is proposed in this research work, inheriting the main aspects of ensemble-based learning approaches, such as bagging and boosting. FCM ensemble learning is an approach where the model is trained using the nonlinear Hebbian learning (NHL) algorithm, and its performance is further enhanced using ensemble techniques. This work is inspired by neural network ensembles and is used to learn FCM ensembles produced by the known and efficient data-driven NHL algorithm. The newly proposed FCM ensemble approach is applied to the identification of autism, and the results are compared with those produced by the data-driven NHL algorithm alone for FCM training. Experimental results demonstrate that the proposed FCM ensemble algorithm achieves better accuracy for learning FCMs than the NHL-based approach alone.
    Applied Soft Computing 01/2013; 12(12). DOI:10.1016/j.asoc.2012.03.064 · 2.81 Impact Factor
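To make the FCM-plus-NHL combination concrete, here is a compact sketch of one plausible training loop: concept activations propagate through the weighted map with a sigmoid squashing function while a nonlinear Hebbian-style rule adapts the existing causal links. The constants and the exact update form are illustrative, not the authors'; the ensemble layer (bagging or boosting over several trained maps) would sit on top of such a loop.

```python
# Compact sketch of a plausible FCM training loop with a nonlinear
# Hebbian-style weight update (illustrative constants and update form).
import numpy as np


def sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * x))


def fcm_step(A, W):
    # one FCM inference step: A_i <- f(A_i + sum_j A_j * w_ji), zero diagonal assumed
    return sigmoid(A + A @ W)


def nhl_update(A, W, eta=0.01, gamma=0.98):
    # Hebbian-style adaptation: only existing (nonzero) causal links change
    mask = W != 0
    hebb = np.outer(A, A) - (A[:, None] ** 2) * W   # A_j * (A_i - A_j * w_ji)
    return np.where(mask, gamma * W + eta * hebb, W)


def train_fcm_nhl(A0, W0, steps=20, eta=0.01, gamma=0.98):
    A = np.asarray(A0, dtype=float)
    W = np.asarray(W0, dtype=float).copy()
    for _ in range(steps):
        W = nhl_update(A, W, eta, gamma)
        A = fcm_step(A, W)
    return A, W
```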