Conference Paper

Robust sparse coding for face recognition

The Hong Kong Polytechnic University, Hong Kong, China
DOI: 10.1109/CVPR.2011.5995393 Conference: 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Source: IEEE Xplore

ABSTRACT Recently, sparse representation (or coding) based classification (SRC) has been successfully used in face recognition. In SRC, the test image is represented as a sparse linear combination of the training samples, and the representation fidelity is measured by the l2-norm or l1-norm of the coding residual. Such a sparse coding model implicitly assumes that the coding residual follows a Gaussian or Laplacian distribution, which may not be accurate enough to describe the coding errors in practice. In this paper, we propose a new scheme, namely robust sparse coding (RSC), which models sparse coding as a sparsity-constrained robust regression problem. RSC seeks the maximum likelihood estimation (MLE) solution of the sparse coding problem, and it is much more robust to outliers (e.g., occlusions and corruptions) than SRC. An efficient iteratively reweighted sparse coding algorithm is proposed to solve the RSC model. Extensive experiments on representative face databases demonstrate that the RSC scheme is much more effective than state-of-the-art methods in dealing with face occlusion, corruption, and changes in lighting and expression.
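To make the iteratively reweighted scheme concrete, below is a minimal NumPy sketch. It alternates between solving a weighted l1-regularized coding problem (here via a plain ISTA loop, standing in for whichever l1 solver is preferred) and recomputing per-pixel weights from the coding residual. The logistic weight function, the quantile-based residual scale, and the parameter values (lam, mu, n_outer) are illustrative assumptions, not the paper's exact choices.

    import numpy as np

    def soft_threshold(x, t):
        # Proximal operator of the l1-norm.
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def weighted_l1_coding(D, y, w, lam=0.01, n_iter=200):
        # Solve min_a ||W^(1/2)(y - D a)||_2^2 + lam*||a||_1 with ISTA,
        # where W = diag(w) holds the per-pixel weights.
        Dw = D * np.sqrt(w)[:, None]            # row-weighted dictionary
        yw = y * np.sqrt(w)
        L = 2.0 * np.linalg.norm(Dw, 2) ** 2    # Lipschitz constant of the gradient
        a = np.zeros(D.shape[1])
        for _ in range(n_iter):
            grad = 2.0 * Dw.T @ (Dw @ a - yw)
            a = soft_threshold(a - grad / L, lam / L)
        return a

    def robust_sparse_coding(D, y, lam=0.01, mu=8.0, n_outer=5):
        # Alternate between (1) solving a weighted l1-regularized
        # least-squares problem and (2) down-weighting pixels with large
        # coding residuals (outliers such as occluded or corrupted pixels).
        w = np.ones_like(y)
        for _ in range(n_outer):
            a = weighted_l1_coding(D, y, w, lam)
            e2 = (y - D @ a) ** 2               # squared coding residuals
            delta = np.quantile(e2, 0.8)        # residual scale (heuristic assumption)
            z = np.clip(mu * (e2 - delta), -50.0, 50.0)
            w = 1.0 / (1.0 + np.exp(z))         # logistic weights in (0, 1)
        return a, w

In an SRC-style pipeline, the final code a and weights w would then be used to compute class-wise weighted reconstruction residuals, assigning y to the class whose training samples reconstruct it best under the learned weights.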

Related publications:

  • ABSTRACT: We propose a Bayesian approach to learn discriminative dictionaries for sparse representation of data. The proposed approach infers probability distributions over the atoms of a discriminative dictionary using a Beta Process. It also computes sets of Bernoulli distributions that associate class labels with the learned dictionary atoms; this association signifies the selection probabilities of the dictionary atoms in the expansion of class-specific data. Furthermore, the non-parametric character of the approach allows it to infer the correct size of the dictionary. We exploit the aforementioned Bernoulli distributions in separately learning a linear classifier. The classifier uses the same hierarchical Bayesian model as the dictionary, which we present along with the analytical inference solution for Gibbs sampling. For classification, a test instance is first sparsely encoded over the learned dictionary and the codes are fed to the classifier (a minimal sketch of this test-time step appears after the list). We performed experiments on face and action recognition, and on object and scene-category classification, using five public datasets, and compared the results with state-of-the-art discriminative sparse representation approaches. The experiments show that the proposed Bayesian approach consistently outperforms the existing approaches.
  • ABSTRACT: Linear subspace algorithms such as Principal Component Analysis, Linear Discriminant Analysis and Locality Preserving Projections have recently attracted tremendous attention in many fields of information processing. However, the projections obtained by these algorithms are linear combinations of all the original features, which are difficult to interpret psychologically and physiologically. Motivated by compressive sensing theory, we formulate the generalized eigenvalue problem under the CS framework, which allows us to apply a sparsity penalty and minimization procedure to locality preserving projections. The proposed algorithm, called sparse locality preserving projections, performs locality preserving projections in a lasso regression framework, so that dimensionality reduction, feature selection and classification are merged into one analysis (a minimal sketch of this idea appears after the list). The method is also extended to a regularized form to improve its generalization. The algorithm can be applied to both supervised and unsupervised tasks. Experimental results on toy and real data sets show that the methods are effective and deliver much higher performance.
    Information Sciences 05/2015; 303:1-14. DOI:10.1016/j.ins.2015.01.004
  • Canadian Conference on Electrical and Computer Engineering; 05/2015
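For the first related publication above, the test-time step is simple enough to sketch: encode the test instance over the learned dictionary and feed the code to the linear classifier. In the sketch below, the dictionary D and classifier weights W are assumed to be already learned (the Beta Process inference and Gibbs sampling are not shown), and orthogonal matching pursuit stands in for whichever sparse coder is used.

    import numpy as np
    from sklearn.linear_model import orthogonal_mp

    def encode_and_classify(D, W, y, n_nonzero=10):
        # D: dictionary (n_features x n_atoms), assumed already learned.
        # W: linear classifier weights (n_classes x n_atoms), assumed learned.
        # y: test instance (n_features,).
        code = orthogonal_mp(D, y, n_nonzero_coefs=n_nonzero)  # sparse code of y
        return int(np.argmax(W @ code))                        # predicted class label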
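For the second related publication, one plausible reading of "locality preserving projections in a lasso regression framework" is: compute ordinary LPP directions from the graph Laplacian, then re-express each direction as a sparse combination of the original features via a lasso fit. The sketch below follows that reading; the heat-kernel affinity, neighborhood size, and alpha value are assumptions, and the paper's exact formulation may fold the sparsity penalty into the eigenproblem differently.

    import numpy as np
    from scipy.linalg import eigh
    from sklearn.linear_model import Lasso
    from sklearn.neighbors import kneighbors_graph

    def sparse_lpp(X, n_components=2, n_neighbors=5, t=1.0, alpha=1e-3):
        # Step 1: heat-kernel affinity on a k-NN graph.
        A = kneighbors_graph(X, n_neighbors, mode='distance').toarray()
        W = np.where(A > 0, np.exp(-A**2 / t), 0.0)
        W = np.maximum(W, W.T)                  # symmetrize the affinity
        D = np.diag(W.sum(axis=1))
        L = D - W                               # graph Laplacian
        # Step 2: ordinary LPP via the generalized eigenproblem
        # X^T L X a = lambda X^T D X a (smallest eigenvalues).
        M1 = X.T @ L @ X
        M2 = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])  # regularize for stability
        _, vecs = eigh(M1, M2)
        P = vecs[:, :n_components]              # dense LPP directions
        # Step 3: lasso fit re-expresses each projection sparsely.
        Y = X @ P
        P_sparse = np.column_stack([
            Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
                .fit(X, Y[:, j]).coef_
            for j in range(n_components)
        ])
        return P_sparse

Because the returned columns are sparse, the nonzero entries double as a feature-selection result, matching the merged "dimensionality reduction + feature selection" claim in the abstract.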
