Conference Paper

Robust sparse coding for face recognition

The Hong Kong Polytechnic University, Hong Kong, China
DOI: 10.1109/CVPR.2011.5995393 Conference: 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Source: IEEE Xplore

ABSTRACT: Recently, sparse representation (or coding) based classification (SRC) has been successfully used in face recognition. In SRC, the testing image is represented as a sparse linear combination of the training samples, and the representation fidelity is measured by the l2-norm or l1-norm of the coding residual. Such a sparse coding model implicitly assumes that the coding residual follows a Gaussian or Laplacian distribution, which may not be accurate enough to describe the coding errors in practice. In this paper, we propose a new scheme, namely robust sparse coding (RSC), which models sparse coding as a sparsity-constrained robust regression problem. RSC seeks the maximum likelihood estimation (MLE) solution of the sparse coding problem, and it is much more robust to outliers (e.g., occlusions and corruptions) than SRC. An efficient iteratively reweighted sparse coding algorithm is proposed to solve the RSC model. Extensive experiments on representative face databases demonstrate that the RSC scheme is much more effective than state-of-the-art methods in dealing with face occlusion, corruption, and lighting and expression changes.
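
As a rough illustration of the iteratively reweighted idea described in the abstract, the sketch below alternates between solving a weighted l1-regularized coding problem and re-estimating per-pixel weights from the coding residuals, so that heavily corrupted or occluded pixels are progressively down-weighted. This is a minimal Python sketch, not the authors' implementation: the logistic weight function, the percentile-based threshold `delta`, the parameter `mu`, and the use of scikit-learn's `Lasso` as the l1 solver are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso  # generic l1-regularized least-squares solver


def robust_sparse_coding(D, y, lam=0.01, mu=8.0, n_iter=10):
    """Minimal sketch of iteratively reweighted sparse coding.

    D   : (d, n) dictionary whose columns are (l2-normalized) training faces
    y   : (d,)   vectorized query face
    lam : sparsity weight; mu controls how sharply outlier pixels are suppressed
    Returns the sparse code alpha and the final per-pixel weights w.
    """
    d, n = D.shape
    w = np.ones(d)                                   # start with uniform pixel weights
    alpha = np.zeros(n)
    for _ in range(n_iter):
        # Weighted sparse coding subproblem:
        #   min_alpha ||W^(1/2) (y - D alpha)||_2^2 + lam * ||alpha||_1
        sw = np.sqrt(w)
        solver = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
        solver.fit(D * sw[:, None], y * sw)
        alpha = solver.coef_
        # Re-estimate weights from the residual: pixels with large coding error
        # (e.g. occluded or corrupted ones) receive small weights.
        e2 = (y - D @ alpha) ** 2
        delta = np.percentile(e2, 80)                # assumed outlier threshold
        w = 1.0 / (1.0 + np.exp(mu * (e2 - delta) / (delta + 1e-8)))
    return alpha, w
```

To classify a query, one would then compare the class-wise weighted reconstruction residuals ||W^(1/2)(y - D_c alpha_c)||_2 and assign the query to the class with the smallest residual, as in SRC.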

Related publications:

  • ABSTRACT: In this article we address the issue of adopting a local sparse coding representation (Histogram of Sparse Codes) in a part-based framework for inferring the locations of facial landmarks. The rationale behind this approach is that unsupervised learning of sparse code dictionaries from face data can be an effective way to cope with such a challenging problem. Results obtained on the CMU Multi-PIE face dataset are presented, providing support for this approach.
    5th European Workshop on Visual Information Processing (EUVIP 2014), Paris; 12/2014
  • ABSTRACT: Recognizing a face in a real-world scenario is still a very challenging task, since the face may be corrupted by many unknown factors. Among them, illumination variation is an important one and is the main focus of this paper. First, the illumination variations caused by shadow or overexposure are modeled as a multiplicative scaling image applied to the original face image. The purpose of introducing the scaling vector (or scaling image) is to enhance the pixels in shadow regions while suppressing the pixels in overexposed regions. Then, based on the scaling vector, we propose a novel tone-aware sparse representation (TASR) model. Finally, an EM-like algorithm is proposed to solve the TASR model. Extensive experiments on benchmark face databases show that our method is more robust against illumination variations.
    2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG); 01/2013
  • ABSTRACT: We propose a very simple, efficient, yet surprisingly effective feature extraction method for face recognition (about 20 lines of Matlab code), mainly inspired by spatial pyramid pooling in generic image classification. We show that features formed by simply pooling local patches over a multi-level pyramid, coupled with a linear classifier, can significantly outperform most recent face recognition methods. The simplicity of our feature extraction procedure is demonstrated by the fact that no learning is involved (except PCA whitening). We show that multi-level spatial pooling and dense extraction of multi-scale patches play critical roles in face image classification. The extracted facial features capture strong structural information of individual faces with no label information being used. We also find that pre-processing of local image patches, such as contrast normalization, can have an important impact on classification accuracy. In particular, on the challenging FERET and LFW-a face recognition datasets, our method improves previous best results by more than 10% and 20%, respectively. (A rough sketch of this pooling pipeline is given after this list.)
    06/2014;
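
The sketch below gives a rough Python analogue of the pooling pipeline described in the last item above, purely as an illustration: densely extracted, contrast-normalized patches are max-pooled over a small spatial pyramid and concatenated into a single feature vector. The patch size, stride, pyramid levels, and the choice of max-pooling are illustrative assumptions, not the authors' exact settings; PCA whitening and a linear classifier would be applied on top of the returned vector.

```python
import numpy as np


def pyramid_pooled_features(img, patch_size=8, stride=4, levels=(1, 2, 4)):
    """Dense patches -> contrast normalization -> max-pooling over a spatial pyramid."""
    H, W = img.shape
    patches, centers = [], []
    for r in range(0, H - patch_size + 1, stride):
        for c in range(0, W - patch_size + 1, stride):
            p = img[r:r + patch_size, c:c + patch_size].astype(float).ravel()
            p = (p - p.mean()) / (p.std() + 1e-8)      # per-patch contrast normalization
            patches.append(p)
            centers.append((r + patch_size / 2.0, c + patch_size / 2.0))
    patches = np.asarray(patches)                      # (n_patches, patch_dim)
    centers = np.asarray(centers)

    pooled = []
    for level in levels:                               # 1x1, 2x2 and 4x4 grids
        row_bin = np.minimum((centers[:, 0] / H * level).astype(int), level - 1)
        col_bin = np.minimum((centers[:, 1] / W * level).astype(int), level - 1)
        for i in range(level):
            for j in range(level):
                mask = (row_bin == i) & (col_bin == j)
                cell = patches[mask].max(axis=0) if mask.any() else np.zeros(patches.shape[1])
                pooled.append(cell)                    # max-pool patches falling in this grid cell
    return np.concatenate(pooled)
```

In use, one would call pyramid_pooled_features on each grayscale face image, stack the resulting vectors, apply PCA whitening, and train a linear classifier on top.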
