Conference Paper

Sparsely Encoded Local Descriptor for face recognition

Key Lab. of Intell. Inf. Process., Chinese Acad. of Sci. (CAS), Beijing, China
DOI: 10.1109/FG.2011.5771389 Conference: Automatic Face & Gesture Recognition and Workshops (FG 2011), 2011 IEEE International Conference on
Source: IEEE Xplore

ABSTRACT In this paper, a novel Sparsely Encoded Local Descriptor (SELD) is proposed for face recognition. Compared with previous methods based on K-means or random-projection trees, a sparsity constraint is introduced into our dictionary learning and the subsequent image encoding, which yields a more stable and discriminative face representation. Sparse coding also leads to an image descriptor formed by summing the sparse coefficient vectors, which is quite different from existing descriptors based on code-word appearance frequencies (histograms). Extensive experiments on both the FERET and the challenging LFW databases show the effectiveness of the proposed SELD method. On the LFW dataset in particular, recognition accuracy comparable to the best known results is achieved.
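The pipeline the abstract describes — sparse-code local patches over a dictionary, then sum the coefficient vectors instead of histogramming code-word frequencies — can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: the dictionary here is random (the paper learns it), the patch data is synthetic, and a textbook orthogonal matching pursuit stands in for whatever sparse solver the paper uses.

```python
import numpy as np

def omp(x, D, k):
    """Orthogonal matching pursuit: a k-sparse code of x over dictionary D (atoms in rows)."""
    residual, support = x.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D @ residual))))   # most correlated atom
        coef, *_ = np.linalg.lstsq(D[support].T, x, rcond=None)  # refit on the support
        residual = x - D[support].T @ coef
    code = np.zeros(D.shape[0])
    code[support] = coef
    return code

rng = np.random.default_rng(0)
D = rng.standard_normal((32, 64))
D /= np.linalg.norm(D, axis=1, keepdims=True)      # unit-norm atoms (random stand-in)

patches = rng.standard_normal((50, 64))            # stand-in for local face patches
codes = np.array([omp(p, D, k=5) for p in patches])
descriptor = codes.sum(axis=0)                     # SELD-style sum pooling of sparse codes
```

The sum pooling is the point of contrast with histogram descriptors: each patch contributes its (signed) sparse coefficients rather than a single code-word count.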

Available from: Zhen Cui, Aug 27, 2014
  • ABSTRACT: Recent research emphasizes exploring multiple features to improve classification performance. One popular scheme extends the sparse-representation-based classification framework with various regularizations. These methods sparsely encode the query image over the training set under different constraints and achieve very encouraging performance in various applications, especially face recognition (FR). However, they focus only on how to collaboratively encode the query, and ignore the latent relationships among the multiple features that could further improve classification accuracy. It is reasonable to anticipate that low-level features of facial images, such as edges and the smoothed/low-frequency image, can be fused through such relationships into a more compact and more discriminative representation for better FR performance. Focusing on this, we propose a unified FR framework that takes advantage of this latent relationship and fully exploits the fused features. Our method realizes the following tasks: (1) learning a specific dictionary for each individual that captures the most distinctive features; (2) learning a common pattern pool that provides the less-discriminative, shared patterns for all individuals, such as illuminations and poses; (3) simultaneously learning a fusion matrix that merges the features into a more discriminative and more compact representation. We evaluate our method with a series of experiments on publicly available databases, and the results demonstrate the effectiveness of the proposed approach.
    Automatic Face and Gesture Recognition (FG), 2013 10th IEEE International Conference and Workshops on; 01/2013
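The decomposition described above pairs a class-specific dictionary per individual with a common pattern pool. A toy sketch of the reconstruction-residual classification such a decomposition enables (not the paper's algorithm: the dictionaries here are random stand-ins rather than learned, and plain least squares replaces the sparse, fused coding):

```python
import numpy as np

rng = np.random.default_rng(1)
n_classes, n_atoms, dim = 3, 4, 20
# hypothetical stand-ins: one small dictionary per individual, plus a shared pool
class_dicts = [rng.standard_normal((dim, n_atoms)) for _ in range(n_classes)]
common_pool = rng.standard_normal((dim, 2))        # shared patterns (illumination, pose, ...)

def classify(x):
    """Assign x to the class whose dictionary (augmented with the common pool) reconstructs it best."""
    residuals = []
    for Dc in class_dicts:
        A = np.hstack([Dc, common_pool])           # class-specific atoms + shared atoms
        coef, *_ = np.linalg.lstsq(A, x, rcond=None)
        residuals.append(np.linalg.norm(x - A @ coef))
    return int(np.argmin(residuals))

# a query lying in class 1's column space should be assigned to class 1
x = class_dicts[1] @ rng.standard_normal(n_atoms)
```

The common pool absorbs nuisance variation so that the per-class residual reflects identity rather than illumination or pose.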
  • ABSTRACT: In this paper, we propose a simple but effective spatial co-occurrence of local intensity order (CoLIO) feature for face recognition. Local intensity order (LIO) is robust to illumination variation. Spatial co-occurrence of LIO not only preserves strong invariance to illumination, but also greatly enhances the discriminative power of the descriptor, since CoLIO captures the correlation between locally adjacent regions. The proposed feature has been successfully applied to two widely used face databases, AR [1] and LFW [2]. Superior performance on both databases demonstrates its effectiveness; meanwhile, its extremely fast extraction speed makes it practically useful.
    ICME workshop; 07/2013
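As a rough illustration of the idea (not the authors' exact descriptor — the neighborhood sampling, code length, and offset here are made up for brevity), a minimal LIO-plus-co-occurrence sketch: each pixel gets a code from the rank ordering of a few neighbors, and codes at a fixed spatial offset are accumulated into a co-occurrence histogram.

```python
import numpy as np
from itertools import permutations

# index every possible rank ordering of 3 neighbors (3! = 6 LIO codes)
PERMS = {p: i for i, p in enumerate(permutations(range(3)))}

def lio_codes(img):
    """One local-intensity-order code per pixel, from its right/down/down-right neighbors."""
    h, w = img.shape
    codes = np.zeros((h - 1, w - 1), dtype=int)
    for y in range(h - 1):
        for x in range(w - 1):
            vals = (img[y, x + 1], img[y + 1, x], img[y + 1, x + 1])
            codes[y, x] = PERMS[tuple(int(i) for i in np.argsort(vals))]
    return codes

def colio_hist(img, dx=1):
    """Normalized co-occurrence histogram of LIO codes at horizontal offset dx."""
    c = lio_codes(img)
    hist = np.zeros((6, 6))
    for y in range(c.shape[0]):
        for x in range(c.shape[1] - dx):
            hist[c[y, x], c[y, x + dx]] += 1
    return hist / hist.sum()

img = np.random.default_rng(0).random((12, 12))
```

Because the codes depend only on intensity *order*, any monotone illumination change (e.g. `2 * img + 1`) leaves them unchanged, which is the claimed illumination robustness.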
  • ABSTRACT: Many approaches to facial expression recognition utilize only one type of feature at a time. It can be difficult for a single feature type to best characterize the variations and complexity of realistic facial expressions. In this paper, we propose a spectral-embedding-based multi-view dimension reduction method that fuses multiple features for facial expression recognition. Facial expression features extracted from one type of expression can be assumed to form a manifold embedded in a high-dimensional feature space. We construct a neighborhood graph that encodes the local structure of this manifold, and a graph Laplacian matrix whose spectral decomposition reveals the manifold's low-dimensional structure. To obtain discriminative features for classification, we build the neighborhood graph in a supervised manner, utilizing the label information of the training data. As a result, multiple features can be transformed into a unified low-dimensional feature space by combining the Laplacian matrix of each view through the multi-view spectral embedding algorithm. A linearization method maps unseen data to the learned unified subspace. Experiments on a set of established real-world and benchmark datasets provide strong support for the effectiveness of the proposed feature-fusion framework on realistic facial expressions.
    Neurocomputing 04/2014; 129:136–145. DOI:10.1016/j.neucom.2013.09.046 · 2.01 Impact Factor
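The graph-Laplacian embedding step described above can be sketched as follows. This is a generic unsupervised single-view version (the paper builds the graph in a supervised, multi-view fashion), with a hypothetical heat-kernel bandwidth choice:

```python
import numpy as np

def spectral_embedding(X, k=4, dim=2):
    """Embed the rows of X via the spectrum of an (unnormalized) graph Laplacian."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    W = np.zeros((n, n))
    sigma2 = np.median(d2) + 1e-12                        # heat-kernel bandwidth (assumed)
    for i in range(n):                                    # k-nearest-neighbor graph
        for j in np.argsort(d2[i])[1:k + 1]:
            W[i, j] = np.exp(-d2[i, j] / sigma2)
    W = np.maximum(W, W.T)                                # symmetrize the adjacency
    L = np.diag(W.sum(axis=1)) - W                        # unnormalized Laplacian L = D - W
    _, vecs = np.linalg.eigh(L)                           # eigenvectors, ascending eigenvalues
    return vecs[:, 1:dim + 1]                             # drop the trivial constant eigenvector

Y = spectral_embedding(np.random.default_rng(0).standard_normal((20, 6)))
```

A supervised variant would connect only same-label neighbors when building `W`, and the multi-view fusion would combine one such Laplacian per feature type.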