Conference Paper

Sparsely Encoded Local Descriptor for Face Recognition

Key Lab. of Intell. Inf. Process., Chinese Acad. of Sci. (CAS), Beijing, China
DOI: 10.1109/FG.2011.5771389 Conference: 2011 IEEE International Conference on Automatic Face & Gesture Recognition and Workshops (FG 2011)
Source: IEEE Xplore

ABSTRACT In this paper, a novel Sparsely Encoded Local Descriptor (SELD) is proposed for face recognition. Compared with previous methods based on K-means or random-projection trees, a sparsity constraint is introduced into our dictionary learning and subsequent image encoding, which yields a more stable and discriminative face representation. Sparse coding also leads to an image descriptor formed by the summation of sparse coefficient vectors, which is quite different from existing descriptors based on the appearance frequency (histogram) of code words. Extensive experiments on both the FERET and the challenging LFW databases show the effectiveness of the proposed SELD method. On the LFW dataset in particular, recognition accuracy comparable to the best known results is achieved.
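The descriptor construction the abstract describes (sparse-code each local patch over a dictionary, then sum the sparse coefficient vectors) can be sketched as below. This is a minimal illustration, not the paper's method: the random dictionary stands in for the learned one, and the greedy orthogonal-matching-pursuit encoder, patch size, and atom count are all illustrative assumptions.

```python
import numpy as np

def omp(D, x, k):
    """Greedy orthogonal matching pursuit: select up to k atoms of D,
    then least-squares fit their coefficients to the patch x."""
    residual = x.copy()
    idx = []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in idx:
            idx.append(j)
        sub = D[:, idx]
        coef, *_ = np.linalg.lstsq(sub, x, rcond=None)
        residual = x - sub @ coef
    z = np.zeros(D.shape[1])
    z[idx] = coef
    return z

def descriptor(patches, D, k=3):
    """Image descriptor as the summation of per-patch sparse codes,
    rather than a code-word occurrence histogram."""
    return sum(omp(D, p, k) for p in patches)

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))        # dictionary: 8x8 patches, 256 atoms
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
patches = rng.standard_normal((100, 64))  # 100 vectorized local patches
desc = descriptor(patches, D)
print(desc.shape)
```

Each patch contributes a vector with at most k nonzero entries, so the summed descriptor remains one vector of dictionary size regardless of how many patches the image yields.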

  • ABSTRACT: In this letter, a local sparse representation of face components is proposed to describe the local structure and characteristics of the face image for face verification. We first learn a dictionary from local patches collected from face images. A novel local descriptor is then built from the sparse coefficients obtained by applying the learned dictionary to local patches drawn from face components, and these descriptors together represent the entire face. We demonstrate the performance of the proposed local sparse representation method on several publicly available datasets. Extensive experiments on both the CMU PIE dataset and the challenging LFW database show the effectiveness of the proposed method.
    IEEE Signal Processing Letters 02/2013; 20(2):177-180.
  • ABSTRACT: Tracking by individual features, such as color or motion, is the main reason why most tracking algorithms are not as robust as expected; to describe the object better, multi-feature fusion is necessary. In this paper we introduce a graph-grammar-based method to fuse low-level features and apply it to object tracking. Our tracking algorithm consists of two phases: key-point tracking and tracking by graph grammar rules. The key points are computed from salient level-set components. All key points, together with their colors and tangent directions, are fed to a Kalman filter for object tracking. The graph grammar rules are then used to dynamically examine and adjust the tracking procedure to make it robust.
    2012 Fifth International Conference on Intelligent Networks and Intelligent Systems (ICINIS); 01/2012
  • ABSTRACT: Face recognition based on local descriptors has recently been recognized as the state-of-the-art design framework for problems of facial identification and verification. Given the diversity of existing approaches, the main objective of this paper is to present a comprehensive, in-depth comparative analysis of recent face recognition methodologies based on local descriptors. We carefully review and contrast a suite of commonly encountered local descriptors, highlighting their main features in the setting of facial recognition problems. The main advantages and limitations of the discussed methods are identified, and a carefully structured taxonomy of the existing approaches is presented. We show that the presented techniques are particularly suitable for large-scale facial authentication systems in which a training stage using the overall face database might be computationally prohibitive. A variety of approaches to fusing the local descriptions into global ones are discussed along with their pros and cons. Furthermore, different similarity measures and possible extensions and hybridizations with statistical learning techniques are elaborated on as well. Experimental results obtained for the FERET database are carefully assessed and compared.
    Journal of Visual Communication and Image Representation 01/2013; 24(8):1213–1231.
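The key-point tracking phase in the object-tracking paper above relies on a Kalman filter. A minimal sketch of one predict/update cycle, assuming a simple constant-velocity state (x, y, vx, vy) with position-only measurements (the paper's filter also carries colors and tangent directions, omitted here), might look like:

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)   # constant-velocity state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)   # we observe key-point position only
Q = 0.01 * np.eye(4)                  # process noise covariance (assumed)
R = 0.10 * np.eye(2)                  # measurement noise covariance (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle for a position measurement z = (x, y)."""
    x = F @ x                          # predict state
    P = F @ P @ F.T + Q                # predict covariance
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ (z - H @ x)            # correct with measurement
    P = (np.eye(4) - K @ H) @ P        # update covariance
    return x, P

x, P = np.zeros(4), np.eye(4)
for t in range(5):
    z = np.array([t * 1.0, t * 0.5])   # key point drifting right and down
    x, P = kalman_step(x, P, z)
print(np.round(x[:2], 2))
```

After a few cycles the state estimate locks onto the measured trajectory and also recovers the key point's velocity, which is what lets the tracker predict through short occlusions.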

Full-text (2 Sources) available from Aug 27, 2014