Article

Tied factor analysis for face recognition across large pose differences

Department of Computer Science, University College London, London, UK.
IEEE Transactions on Pattern Analysis and Machine Intelligence (Impact Factor: 5.69). 07/2008; 30(6):970-84. DOI: 10.1109/TPAMI.2008.48
Source: PubMed

ABSTRACT: Face recognition algorithms perform very unreliably when the pose of the probe face is different from the gallery face: typical feature vectors vary more with pose than with identity. We propose a generative model that creates a one-to-many mapping from an idealized "identity" space to the observed data space. In identity space, the representation for each individual does not vary with pose. We model the measured feature vector as being generated by a pose-contingent linear transformation of the identity variable in the presence of Gaussian noise. We term this model "tied" factor analysis. The choice of linear transformation (factors) depends on the pose, but the loadings are constant (tied) for a given individual. We use the EM algorithm to estimate the linear transformations and the noise parameters from training data. We propose a probabilistic distance metric which allows a full posterior over possible matches to be established. We introduce a novel feature extraction process and investigate recognition performance using the FERET, XM2VTS and PIE databases. Recognition performance compares favourably to contemporary approaches.
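A minimal numerical sketch of the generative model the abstract describes, with illustrative dimensions and random stand-ins for the pose-contingent transformations (in the paper these, together with the noise parameters, are learned from training data via EM):

```python
import numpy as np

rng = np.random.default_rng(0)

D, K, n_poses = 50, 8, 3           # feature dim, identity dim, pose count (illustrative)

# Pose-contingent factor matrices W_j and mean offsets m_j. Random stand-ins
# here; the paper estimates them with the EM algorithm.
W = [rng.normal(size=(D, K)) for _ in range(n_poses)]
m = [rng.normal(size=D) for _ in range(n_poses)]
sigma = 0.3                        # Gaussian noise scale (assumed isotropic here)

def generate(h, pose):
    """One-to-many mapping: the same identity variable h, viewed at pose j,
    produces x = W_j h + m_j + Gaussian noise."""
    return W[pose] @ h + m[pose] + rng.normal(scale=sigma, size=D)

h = rng.normal(size=K)             # identity variable: fixed across all poses
frontal = generate(h, 0)           # observed feature vector at pose 0
profile = generate(h, 2)           # same identity, different pose
```

The key property is that `frontal` and `profile` look very different in feature space yet share the single latent `h`; matching is then done by comparing posteriors over identity rather than raw features.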

  • ABSTRACT: Developing a reliable and practical face recognition system is a long-standing goal in computer vision research. Existing literature suggests that pixel-wise face alignment is the key to achieving high-accuracy face recognition. By modelling a human face as a set of piece-wise planar surfaces, where each surface corresponds to a facial part, we develop in this paper a Constrained Part-based Alignment (CPA) algorithm for face recognition across pose and/or expression. Our proposed algorithm is based on a trainable CPA model, which learns appearance evidence for individual parts and a tree-structured shape configuration among different parts. Given a probe face, CPA simultaneously aligns all its parts by fitting them to the appearance evidence under the constraint of the tree-structured shape configuration. This objective is formulated as a norm minimization problem regularized by graph likelihoods. CPA can be easily integrated with many existing classifiers to perform part-based face recognition. Extensive experiments on benchmark face datasets show that CPA outperforms or is on par with existing methods for robust face recognition across pose, expression, and/or illumination changes.
  • ABSTRACT: Heterogeneous face recognition is a challenging research problem that involves matching faces captured by different sensors. Very few methods have been designed to solve this problem using intensity features while also handling the small-sample-size issue. In this paper, we consider the worst-case scenario, in which the gallery contains a single image of an individual in the normal modality (visible light) while the probe is captured in an alternate modality, e.g. near infrared. To solve this problem, we propose a technique inspired by tied factor analysis (TFA) and bagging. In the proposed method, the original TFA method is extended to handle the small-training-sample problem in a heterogeneous environment. However, testing on a small subset of images can yield inflated recognition rates, so bagging is introduced to remove the effect of such biased results from the original TFA method. Experiments conducted on the challenging benchmark HFB and Biosecure face databases validate its effectiveness and superiority over other state-of-the-art methods that use intensity features holistically.
    Visual Information Processing (EUVIP), 2014 5th European Workshop, Paris, France; 12/2014
  • ABSTRACT: To perform unconstrained face recognition robust to variations in illumination, pose and expression, this paper presents a new scheme to extract "Multi-Directional Multi-Level Dual-Cross Patterns" (MDML-DCPs) from face images. Specifically, the MDML-DCPs scheme exploits the first derivative of the Gaussian operator to reduce the impact of differences in illumination, and then computes the DCP feature at both the holistic and component levels. DCP is a novel face image descriptor inspired by the unique textural structure of human faces. It is computationally efficient, only doubling the cost of computing local binary patterns, yet is extremely robust to pose and expression variations. MDML-DCPs comprehensively yet efficiently encodes the invariant characteristics of a face image from multiple levels into patterns that are highly discriminative of inter-personal differences but robust to intra-personal variations. Experimental results on the FERET, CAS-PEAL-R1, FRGC 2.0, and LFW databases indicate that DCP outperforms state-of-the-art local descriptors (e.g. LBP, LTP, LPQ, POEM, and LGXP) on both face identification and face verification tasks. More impressively, the best performance on the challenging LFW and FRGC 2.0 databases is achieved by deploying MDML-DCPs in a simple recognition scheme.
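The DCP descriptor itself is defined in that paper; as a point of reference for its "only doubles the cost of local binary patterns" claim, here is a minimal sketch of the standard 8-neighbour LBP baseline it is compared against (this is plain LBP, not DCP):

```python
import numpy as np

def lbp_8(img):
    """Basic 8-neighbour local binary pattern: threshold each pixel's
    neighbours against the centre pixel and pack the 8 results into a byte."""
    h, w = img.shape
    centre = img[1:-1, 1:-1]                       # interior pixels only
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise neighbourhood
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= centre).astype(np.uint8) << bit
    return codes

uniform = lbp_8(np.full((3, 3), 7))   # every neighbour >= centre: code 255
```

Descriptors of this family are typically histogrammed over image cells to form the final feature vector; DCP replaces the single circular sampling with two crossed directional samplings, hence the roughly doubled cost.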