Tied Factor Analysis for Face Recognition across Large Pose Differences

Department of Computer Science, University College London, London, UK.
IEEE Transactions on Pattern Analysis and Machine Intelligence (Impact Factor: 5.78). 07/2008; 30(6):970-84. DOI: 10.1109/TPAMI.2008.48


Face recognition algorithms perform very unreliably when the pose of the probe face is different from the gallery face: typical feature vectors vary more with pose than with identity. We propose a generative model that creates a one-to-many mapping from an idealized "identity" space to the observed data space. In identity space, the representation for each individual does not vary with pose. We model the measured feature vector as being generated by a pose-contingent linear transformation of the identity variable in the presence of Gaussian noise. We term this model "tied" factor analysis. The choice of linear transformation (factors) depends on the pose, but the loadings are constant (tied) for a given individual. We use the EM algorithm to estimate the linear transformations and the noise parameters from training data. We propose a probabilistic distance metric which allows a full posterior over possible matches to be established. We introduce a novel feature extraction process and investigate recognition performance using the FERET, XM2VTS and PIE databases. Recognition performance compares favourably to contemporary approaches.
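The generative model in the abstract can be illustrated with a minimal numpy sketch: each pose has its own loading matrix and offset, while the identity vector is tied across poses for one individual. All dimensions, noise levels, and the least-squares recovery step below are illustrative assumptions, not the paper's EM fitting or full posterior inference.

```python
import numpy as np

rng = np.random.default_rng(0)
d_obs, d_id, n_poses = 20, 4, 2  # observation dim, identity dim, number of poses

# Pose-contingent loading matrices F_p and offsets m_p; the identity
# variable h is tied (shared) across poses for a given individual.
F = [rng.normal(size=(d_obs, d_id)) for _ in range(n_poses)]
m = [rng.normal(size=d_obs) for _ in range(n_poses)]
sigma = 0.05  # isotropic Gaussian noise standard deviation

h = rng.normal(size=d_id)  # one individual's identity vector

# Observations of the same person under each pose
x = [F[p] @ h + m[p] + sigma * rng.normal(size=d_obs) for p in range(n_poses)]

# Recover an identity estimate from each pose by least squares
# (a crude stand-in for the probabilistic inference in the paper)
h_hat = [np.linalg.lstsq(F[p], x[p] - m[p], rcond=None)[0] for p in range(n_poses)]

# Because the identity is tied, estimates from different poses agree closely
print(np.linalg.norm(h_hat[0] - h_hat[1]))
```

Matching two face images then amounts to asking whether their recovered identity estimates are consistent, regardless of the pose under which each was observed.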



  • Source
    • "The recognition is performed by comparing the coefficients of the complete appearance model. The tied factor analysis method proposed by Prince et al. [28] is another typical statistical approach. In this method, tied factors across pose differences are learned using the Expectation-Maximization algorithm."
    ABSTRACT: The pose problem is one of the bottlenecks in automatic face recognition. We argue that one of the difficulties in this problem is the severe misalignment in face images or feature vectors with different poses. In this paper, we propose that this problem can be statistically solved, or at least mitigated, by maximizing the intra-subject across-pose correlations via canonical correlation analysis (CCA). In our method, based on a data set with coupled face images of the same identities across two different poses, CCA simultaneously learns two linear transforms, one for each pose. In the transformed subspace, the intra-subject correlations between the different poses are maximized, which implies that pose-invariance, or at least pose-robustness, is achieved. The experimental results show that our approach considerably improves recognition performance and, when further enhanced with a holistic+local feature representation, is comparable to the state-of-the-art.
    Preview · Article · Jul 2015
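The CCA idea quoted above can be sketched in numpy: given paired features of the same identities under two poses, learn one linear transform per pose so that the projections are maximally correlated. The whitening-plus-SVD construction is the standard closed-form linear CCA solution; the synthetic data, dimensions, and ridge term are illustrative assumptions, not the cited paper's setup.

```python
import numpy as np

def cca(X, Y, k):
    """Linear CCA: return transforms Wx, Wy whose k projections are maximally correlated."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    n = X.shape[0]
    # Covariances with a small ridge for numerical stability
    Cxx = X.T @ X / n + 1e-6 * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + 1e-6 * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # Whiteners: Lx satisfies Lx.T @ Cxx @ Lx = I (Cholesky of the inverse covariance)
    Lx = np.linalg.cholesky(np.linalg.inv(Cxx))
    Ly = np.linalg.cholesky(np.linalg.inv(Cyy))
    # Top singular directions of the whitened cross-covariance give the canonical pairs
    U, s, Vt = np.linalg.svd(Lx.T @ Cxy @ Ly)
    return Lx @ U[:, :k], Ly @ Vt.T[:, :k]

rng = np.random.default_rng(1)
n, d = 200, 6
h = rng.normal(size=(n, 2))                    # shared identity signal
A, B = rng.normal(size=(2, d)), rng.normal(size=(2, d))
X = h @ A + 0.1 * rng.normal(size=(n, d))      # "frontal" features
Y = h @ B + 0.1 * rng.normal(size=(n, d))      # "profile" features

Wx, Wy = cca(X, Y, k=2)
px = (X - X.mean(0)) @ Wx
py = (Y - Y.mean(0)) @ Wy
r = np.corrcoef(px[:, 0], py[:, 0])[0, 1]
print(round(r, 3))  # first canonical correlation; close to 1 for this synthetic pair
```

In the transformed subspace, intra-subject correlation across the two poses is high even though the raw feature spaces are misaligned, which is the pose-robustness the excerpt describes.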
  • Source
    • "Recent approaches consider more practical scenarios where only face images under the normal condition (frontal view and neutral expression) are assumed to be available in the gallery. To recognize a probe face with pose or expression changes, they either identify an implicit identity feature/representation of the probe face [10] [11] [12] that is pose- and expression-invariant, or explicitly estimate global or local mappings of facial appearance so that a virtual face under the normal condition can be synthesized for recognition [13] [14] [15] [16] [17]. However, most of these methods are still far from practical, since they assume that both gallery and probe face images have been manually aligned into some canonical form, and they generally cannot cope with illumination variation either."
    ABSTRACT: Developing a reliable and practical face recognition system is a long-standing goal in computer vision research. Existing literature suggests that pixel-wise face alignment is the key to achieving high-accuracy face recognition. By modeling a human face as piece-wise planar surfaces, where each surface corresponds to a facial part, we develop in this paper a Constrained Part-based Alignment (CPA) algorithm for face recognition across pose and/or expression. Our proposed algorithm is based on a trainable CPA model, which learns appearance evidence for individual parts and a tree-structured shape configuration among the parts. Given a probe face, CPA simultaneously aligns all its parts by fitting them to the appearance evidence under the constraint of the tree-structured shape configuration. This objective is formulated as a norm minimization problem regularized by graph likelihoods. CPA can be easily integrated with many existing classifiers to perform part-based face recognition. Extensive experiments on benchmark face datasets show that CPA outperforms or is on par with existing methods for robust face recognition across pose, expression, and/or illumination changes.
    Preview · Article · Jan 2015
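The trade-off CPA describes, fitting each part to its appearance evidence while respecting a tree-structured shape configuration, reduces for a single parent-child pair with quadratic penalties to a closed-form weighted average. Everything below (locations, ideal offset, weight) is an illustrative assumption, not the cited paper's actual objective or solver.

```python
import numpy as np

# Toy: one parent and one child part connected in the shape tree.
appearance_peak = np.array([5.2, 3.1])  # where the child's part detector fires strongest
parent_loc = np.array([4.0, 2.0])       # already-fitted parent location
ideal_offset = np.array([1.0, 1.0])     # learned parent->child offset in the shape tree
lam = 0.5                               # shape-regularization weight

# argmin_x ||x - appearance_peak||^2 + lam * ||x - (parent_loc + ideal_offset)||^2
# has the closed form below: a weighted average of evidence and shape prediction.
child_loc = (appearance_peak + lam * (parent_loc + ideal_offset)) / (1 + lam)
print(child_loc)
```

The full CPA objective couples many such terms over the whole tree, but each term trades appearance evidence against shape consistency in exactly this way.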
  • Source
    ABSTRACT: Heterogeneous face recognition is a challenging research problem that involves matching faces captured by different sensors. Very few methods have been designed to solve this problem using intensity features while considering the small-sample-size issue. In this paper, we consider the worst-case scenario, in which the gallery contains a single image per individual in the normal modality (visual) while the probe is captured in an alternate modality, e.g. near infrared. To solve this problem, we propose a technique inspired by Tied Factor Analysis (TFA) and bagging. In the proposed method, the original TFA method is extended to handle the small-training-sample problem in a heterogeneous environment. However, artificially high recognition rates can be reported by testing on a small subset of images; bagging is therefore introduced to remove the effects of such biased results from the original TFA method. Experiments conducted on the challenging benchmark HFB and Biosecure face databases validate its effectiveness and superiority over other state-of-the-art methods using holistic intensity features.
    Full-text · Conference Paper · Dec 2014
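The ensemble step described above can be sketched generically: train several matchers on resampled or perturbed views of the training data and aggregate their decisions by voting. A nearest-neighbour scorer stands in for the TFA likelihood, and perturbation replaces bootstrap resampling since this toy gallery has only one image per identity; both are illustrative assumptions, not the cited paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

def nn_matcher(gallery, probe):
    """Stand-in scorer (nearest neighbour); the paper uses a TFA-based match score."""
    return int(np.argmin(np.linalg.norm(gallery - probe, axis=1)))

# Tiny synthetic gallery: one "visual modality" feature vector per identity
n_id, dim = 5, 8
gallery = rng.normal(size=(n_id, dim))
true_id = 3
probe = gallery[true_id] + 0.2 * rng.normal(size=dim)  # "alternate modality" probe

# Bagging-style vote: perturb the training view of the gallery, vote over matchers
votes = np.zeros(n_id, dtype=int)
for _ in range(15):
    noisy_gallery = gallery + 0.1 * rng.normal(size=gallery.shape)
    votes[nn_matcher(noisy_gallery, probe)] += 1
print(int(np.argmax(votes)))  # the true identity should win the vote
```

Aggregating over many perturbed matchers damps the variance of any single small-sample model, which is the bias-reduction role bagging plays in the excerpt.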