Ensemble-based discriminant learning with boosting for face recognition.

The Edward S. Rogers Sr. Department of Electrical and Computer Engineering, University of Toronto, ON M5S 3G4, Canada.
IEEE Transactions on Neural Networks (Impact Factor: 2.95). 02/2006; 17(1):166-178.
Source: PubMed

ABSTRACT In this paper, we propose a novel ensemble-based approach to boost the performance of traditional Linear Discriminant Analysis (LDA)-based methods used in face recognition. The ensemble-based approach builds on the recently emerged technique known as "boosting". However, it is generally believed that boosting-like learning rules are not well suited to a strong and stable learner such as LDA. To overcome this limitation, a novel weakness analysis theory is developed here. The theory attempts to boost a strong learner by increasing the diversity between the classifiers created by the learner, at the expense of decreasing their margins, so as to achieve the tradeoff suggested by recent boosting studies for a low generalization error. In addition, a novel distribution accounting for the pairwise class discriminant information is introduced for effective interaction between the booster and the LDA-based learner. The integration of these methodologies leads to a novel ensemble-based discriminant learning approach capable of taking advantage of both the boosting and LDA techniques. Promising experimental results obtained in various difficult face recognition scenarios demonstrate the effectiveness of the proposed approach. We believe this work is especially beneficial in extending the boosting framework to accommodate general (strong/weak) learners.
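The booster/strong-learner interaction described above can be sketched as an AdaBoost-style reweighting loop. The sketch below is only an illustration: it substitutes a weighted nearest-class-mean classifier for the paper's LDA learner, and it does not reproduce the weakness-analysis theory or the pairwise-class distribution, both of which are specific to the paper.

```python
import numpy as np

class NearestClassMean:
    """Weighted nearest-class-mean classifier -- a simple stand-in for the
    LDA-based learner; it responds to the booster's sample weights."""
    def fit(self, X, y, w):
        self.classes_ = np.unique(y)
        self.means_ = np.array([
            np.average(X[y == c], axis=0, weights=w[y == c])
            for c in self.classes_
        ])
        return self

    def predict(self, X):
        d = ((X[:, None, :] - self.means_[None, :, :]) ** 2).sum(-1)
        return self.classes_[d.argmin(axis=1)]

def boost(X, y, rounds=10):
    """AdaBoost-style loop: reweight the samples the current learner
    misclassifies, so successive learners become more diverse."""
    n = len(X)
    w = np.full(n, 1.0 / n)
    learners, alphas = [], []
    for _ in range(rounds):
        h = NearestClassMean().fit(X, y, w)
        miss = h.predict(X) != y
        err = max(w[miss].sum(), 1e-10)
        if err >= 0.5:  # learner no better than chance on this distribution
            break
        alpha = 0.5 * np.log((1.0 - err) / err)
        w *= np.exp(alpha * np.where(miss, 1.0, -1.0))
        w /= w.sum()
        learners.append(h)
        alphas.append(alpha)

    def predict(Xq):
        # weighted vote over the ensemble (labels assumed to be 0..k-1)
        votes = np.zeros((len(Xq), len(np.unique(y))))
        for h, a in zip(learners, alphas):
            votes[np.arange(len(Xq)), h.predict(Xq)] += a
        return votes.argmax(axis=1)
    return predict
```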

ABSTRACT: We modify conventional principal component analysis (PCA) and propose a novel subspace learning framework, modified PCA (MPCA), using multiple similarity measurements. MPCA computes three similarity matrices exploiting the following similarity measurements: 1) mutual information; 2) angle information; and 3) Gaussian kernel similarity. We employ the eigenvectors of the similarity matrices to produce new subspaces, referred to as similarity subspaces. A new integrated similarity subspace is then generated using a novel feature selection approach. This approach constructs a vector set, termed a weak machine cell (WMC), which contains an appropriate number of the eigenvectors spanning the similarity subspaces. Combining the wrapper method and the forward selection scheme, MPCA selects one WMC at a time that has a powerful discriminative capability to classify samples. MPCA is well suited to application scenarios in which the number of training samples is smaller than the data dimensionality, and it outperforms other state-of-the-art PCA-based methods in terms of both classification accuracy and clustering results. In addition, MPCA can be applied to face image reconstruction and can use other types of similarity measurements. Extensive experiments on many popular real-world data sets, such as face databases, show that MPCA achieves desirable classification results as well as a powerful capability to represent data.
    IEEE Transactions on Neural Networks and Learning Systems 08/2014; 25(8):1538-1552. · 4.37 Impact Factor
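The Gaussian-kernel branch of the similarity-subspace construction can be illustrated with a kernel-PCA-style spectral embedding. This is one plausible reading of the abstract, not the authors' exact formulation; the mutual-information and angle-based branches, and the WMC feature selection, are omitted.

```python
import numpy as np

def gaussian_similarity_subspace(X, n_components=2, sigma=1.0):
    """Embed n samples using the leading eigenvectors of an n-by-n
    Gaussian-kernel similarity matrix (one of MPCA's three measurements)."""
    # pairwise squared Euclidean distances between samples
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    S = np.exp(-sq / (2.0 * sigma ** 2))      # Gaussian kernel similarity
    vals, vecs = np.linalg.eigh(S)            # S is symmetric -> eigh
    order = np.argsort(vals)[::-1][:n_components]  # leading eigenpairs
    # scale eigenvectors by sqrt(eigenvalue) to get embedding coordinates
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))
```

The same scheme works when the number of samples is smaller than the data dimensionality, since the similarity matrix is sized by the sample count, not the feature count.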
ABSTRACT: A novel framework for learning-based super-resolution is proposed that learns from the estimation errors. The estimation errors generated by different learning-based super-resolution algorithms are statistically shown to be sparse and uncertain: sparsity means that most of the estimation errors are small, while uncertainty means that the location of a pixel with a larger estimation error is random. Exploiting this prior information about the estimation errors, a nonlinear boosting process of learning from these errors is introduced into the general framework of learning-based super-resolution. Within this framework, a low-rank decomposition technique is used to share information among different super-resolution estimations and to remove the sparse estimation errors produced by different learning algorithms or training samples. The experimental results show the effectiveness and efficiency of the proposed framework in enhancing the performance of different learning-based algorithms.
    IEEE Transactions on Cybernetics 08/2014;
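A "low-rank plus sparse" split of this kind is commonly computed with robust PCA (principal component pursuit). Below is a minimal inexact-ALM sketch, assuming the different super-resolution estimates are stacked into a single matrix D; it illustrates the general decomposition technique, not the authors' exact algorithm.

```python
import numpy as np

def shrink(M, tau):
    """Elementwise soft-thresholding (promotes sparsity)."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def svd_shrink(M, tau):
    """Singular-value soft-thresholding (promotes low rank)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def robust_pca(D, lam=None, iters=100, rho=1.5, tol=1e-7):
    """Split D into low-rank L (structure shared across the stacked
    estimates) plus sparse S (scattered estimation errors), via
    inexact augmented Lagrange multipliers."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    norm2 = np.linalg.norm(D, 2)
    mu = 1.25 / (norm2 + 1e-12)
    Y = D / max(norm2, np.abs(D).max() / lam)  # dual variable init
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(iters):
        L = svd_shrink(D - S + Y / mu, 1.0 / mu)
        S = shrink(D - L + Y / mu, lam / mu)
        Z = D - L - S                # constraint residual
        Y += mu * Z
        mu *= rho                    # tighten the penalty each iteration
        if np.linalg.norm(Z) <= tol * np.linalg.norm(D):
            break
    return L, S
```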
ABSTRACT: In this paper, we propose different ensemble learning algorithms and their application to the face recognition problem. Three types of attributes are used for image representation: statistical, spectral, and segmentation features and regional descriptors. Classification is performed by nearest neighbor using different p-norms defined in the corresponding attribute spaces. In this approach, each attribute, together with its corresponding type of analysis (local or global) and distance criterion (norm or cosine), defines a different classifier. The classification is unsupervised, since no class information is used to improve the design of the different classifiers. Three versions of ensemble classifiers are proposed in this paper: CAV1, CAV2, and CBAG; the main difference among them is how the image candidates that form the consensus are selected. The main results shown in this paper are the following: 1) the statistical attributes (local histogram and percentiles) are the individual classifiers that provide the highest accuracies, followed by the spectral methods (DWT) and the regional features (texture analysis); 2) no single attribute is able to systematically provide 100% accuracy over the ORL database; 3) the accuracy and stability of the classification are increased by consensus classification (ensemble learning techniques); and 4) optimum results are obtained by reducing the number of classifiers while taking their diversity into account, and by optimizing the parameters of these classifiers using a member of the Particle Swarm Optimization (PSO) family. These results accord with conclusions presented in the literature on ensemble learning methodologies: it is possible to build strong classifiers by assembling different weak (or simple) classifiers based on different and diverse image attributes.
Given these encouraging results, future research will be devoted to the use of supervised ensemble techniques in face recognition and in other important biometric problems.
    International Journal of Pattern Recognition and Artificial Intelligence 06/2014; 28(04):32. · 0.56 Impact Factor
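The per-attribute nearest-neighbor classifiers and their consensus vote can be sketched as follows. The attribute extraction itself (histograms, DWT, texture analysis) is omitted, and the feature vectors are assumed to be precomputed; the candidate-selection schemes that distinguish CAV1, CAV2, and CBAG are not reproduced.

```python
import numpy as np

def nn_classifier(train_X, train_y, metric):
    """1-NN classifier under a given distance criterion: a p-norm or cosine."""
    def classify(x):
        if metric == "cos":
            num = train_X @ x
            den = np.linalg.norm(train_X, axis=1) * np.linalg.norm(x) + 1e-12
            d = 1.0 - num / den              # cosine distance
        else:
            # metric is a p-norm order, e.g. 1 or 2
            d = np.linalg.norm(train_X - x, ord=metric, axis=1)
        return train_y[int(np.argmin(d))]
    return classify

def consensus(classifiers, x):
    """Majority vote over the ensemble of per-attribute classifiers."""
    votes = [c(x) for c in classifiers]
    labels, counts = np.unique(votes, return_counts=True)
    return labels[counts.argmax()]
```

In use, one classifier would be built per attribute/analysis/distance combination, and the consensus taken over all of them, mirroring the ensemble design described in the abstract.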
