Conference Paper

Multi-Eigenspace Learning for Video-Based Face Recognition

DOI: 10.1007/978-3-540-74549-5_20
Conference: Advances in Biometrics, International Conference, ICB 2007, Seoul, Korea, August 27-29, 2007, Proceedings
Source: DBLP

ABSTRACT In this paper, we propose a novel online learning method called Multi-Eigenspace Learning, which learns appearance models incrementally from a given video stream. For each subject, we learn a small number of eigenspace models using IPCA (Incremental Principal Component Analysis). During Multi-Eigenspace Learning, each eigenspace generally accumulates more and more samples, except for one eigenspace, which keeps the smallest number of samples. These learnt eigenspace models are then used for video-based face recognition. Experimental results show that the proposed method achieves a high recognition rate.
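
The paper gives no reference implementation; the following is a minimal sketch of the general idea, assuming scikit-learn's IncrementalPCA as a stand-in for the authors' IPCA routine. The class name, the frame-to-eigenspace assignment rule (minimum reconstruction error, with a data-dependent threshold for opening a new eigenspace), and the batching policy are illustrative assumptions, not details taken from the paper.

    # Illustrative sketch only: sklearn's IncrementalPCA stands in for the
    # paper's IPCA; the assignment and batching rules are assumptions.
    import numpy as np
    from sklearn.decomposition import IncrementalPCA

    class MultiEigenspaceModel:
        """A few eigenspaces per subject, updated incrementally from frames."""

        def __init__(self, n_eigenspaces=3, n_components=10,
                     batch_size=20, err_threshold=1e3):
            # err_threshold is data-dependent (image size and intensity scale).
            self.models = [IncrementalPCA(n_components=n_components)
                           for _ in range(n_eigenspaces)]
            self.buffers = [[] for _ in range(n_eigenspaces)]
            self.batch_size = batch_size
            self.err_threshold = err_threshold

        def _recon_error(self, model, x):
            # How well the eigenspace explains the frame x (flattened image).
            z = model.transform(x.reshape(1, -1))
            x_hat = model.inverse_transform(z).ravel()
            return float(np.linalg.norm(x - x_hat))

        def update(self, frame):
            """Route one frame to an eigenspace and refit that eigenspace
            incrementally once enough frames have been buffered."""
            fitted = [i for i, m in enumerate(self.models)
                      if hasattr(m, "components_")]
            unfitted = [i for i in range(len(self.models)) if i not in fitted]
            if not fitted:
                idx = 0  # bootstrap: first frames go to the first eigenspace
            else:
                errors = {i: self._recon_error(self.models[i], frame)
                          for i in fitted}
                idx = min(errors, key=errors.get)
                # If no existing eigenspace explains the frame, open a new one.
                if errors[idx] > self.err_threshold and unfitted:
                    idx = unfitted[0]
            self.buffers[idx].append(frame)
            if len(self.buffers[idx]) >= self.batch_size:
                self.models[idx].partial_fit(np.asarray(self.buffers[idx]))
                self.buffers[idx] = []

        def distance(self, frame):
            """Distance of a probe frame to this subject's closest eigenspace."""
            return min(self._recon_error(m, frame) for m in self.models
                       if hasattr(m, "components_"))

At recognition time, one would score each probe frame (or accumulate scores over the sequence) against every subject's model and pick the subject with the smallest distance; that matching rule is likewise an assumption here rather than a detail taken from the paper.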

  • ABSTRACT: We present a face recognition method that uses an image sequence. As input we use multiple face images rather than a single shot, so that the input reflects variation in facial expression and face direction. For identification, we form a subspace from the image sequence and apply the Mutual Subspace Method, in which similarity is defined by the angle between the input subspace and the reference subspaces. We demonstrate the effectiveness of the proposed method through several experimental results. (A sketch of this subspace-angle similarity appears after this list.)
    Proceedings of the Third IEEE International Conference on Automatic Face and Gesture Recognition; 02/1998
  • ABSTRACT: In spite of over two decades of intense research, illumination and pose invariance remain prohibitively challenging aspects of face recognition for most practical applications. The objective of this work is to recognize faces using video sequences both for training and recognition input, in a realistic, unconstrained setup in which lighting, pose and user motion pattern have a wide variability and face images are of low resolution. In particular there are three areas of novelty: (i) we show how a photometric model of image formation can be combined with a statistical model of generic face appearance variation, learnt offline, to generalize in the presence of extreme illumination changes; (ii) we use the smoothness of geodesically local appearance manifold structure and a robust same-identity likelihood to achieve invariance to unseen head poses; and (iii) we introduce an accurate video sequence “reillumination” algorithm to achieve robustness to face motion patterns in video. We describe a fully automatic recognition system based on the proposed method and an extensive evaluation on 171 individuals and over 1300 video sequences with extreme illumination, pose and head motion variation. On this challenging data set our system consistently demonstrated a nearly perfect recognition rate (over 99.7%), significantly outperforming state-of-the-art commercial software and methods from the literature.
    Pages 27-40.
  • ABSTRACT: We present an unsupervised technique for visual learning, which is based on density estimation in high-dimensional spaces using an eigenspace decomposition. Two types of density estimates are derived for modeling the training data: a multivariate Gaussian (for unimodal distributions) and a mixture-of-Gaussians model (for multimodal distributions). These probability densities are then used to formulate a maximum-likelihood estimation framework for visual search and target detection for automatic object recognition and coding. Our learning technique is applied to the probabilistic visual modeling, detection, recognition, and coding of human faces and nonrigid objects, such as hands. (A sketch of the unimodal eigenspace density appears after this list.)
    IEEE Transactions on Pattern Analysis and Machine Intelligence; 08/1997
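
For the first related entry above (the Mutual Subspace Method), the similarity between an input image set and a reference set reduces to principal angles between two linear subspaces. Below is a minimal sketch under the assumption that each subspace is obtained from an SVD of the flattened frames; the function names, the subspace dimension, and the choice of using only the smallest principal angle are illustrative, not taken from the cited paper.

    # Hedged sketch of a Mutual-Subspace-Method-style similarity.
    import numpy as np

    def subspace_basis(frames, n_dims=5):
        """Orthonormal basis (columns) spanning the dominant directions of a
        sequence of flattened face images, shape (n_frames, n_pixels).
        Requires n_frames >= n_dims; whether to mean-centre first is a design
        choice not taken from the paper (omitted here)."""
        X = np.asarray(frames, dtype=float)
        _, _, vt = np.linalg.svd(X, full_matrices=False)
        return vt[:n_dims].T                     # shape (n_pixels, n_dims)

    def msm_similarity(basis_a, basis_b):
        """Cosine of the smallest principal angle between two subspaces:
        the largest singular value of A^T B for orthonormal bases A, B."""
        s = np.linalg.svd(basis_a.T @ basis_b, compute_uv=False)
        return float(s.max())

    # Usage: an input sequence is matched to the reference whose subspace
    # gives the highest similarity, e.g.
    # best = max(references, key=lambda r: msm_similarity(input_basis, r.basis))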
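
For the last entry above, the unimodal case amounts to a Gaussian density estimated in a principal subspace, combining an in-subspace Mahalanobis term with an out-of-subspace residual term. The sketch below assumes scikit-learn's PCA and uses its noise_variance_ as the residual variance; the class name, the omission of normalization constants, and skipping the mixture-of-Gaussians case (which would need EM) are simplifications for illustration.

    # Hedged sketch of Gaussian density estimation in an eigenspace
    # (unimodal case only; normalization constants omitted).
    import numpy as np
    from sklearn.decomposition import PCA

    class EigenspaceGaussian:
        def fit(self, X, n_components=20):
            """X: training images of one class, shape (n_samples, n_pixels)."""
            self.pca = PCA(n_components=n_components).fit(X)
            self.var = self.pca.explained_variance_      # retained eigenvalues
            # Average variance of the discarded directions acts as the
            # residual noise level for the out-of-subspace term.
            self.rho = max(float(self.pca.noise_variance_), 1e-12)
            return self

        def log_likelihood(self, x):
            """Unnormalized log-density of a flattened image x: Mahalanobis
            distance inside the eigenspace plus the reconstruction residual
            weighted by the estimated noise variance."""
            z = self.pca.transform(x.reshape(1, -1)).ravel()
            x_hat = self.pca.inverse_transform(z.reshape(1, -1)).ravel()
            in_subspace = -0.5 * float(np.sum(z ** 2 / self.var))
            residual = -0.5 * float(np.sum((x - x_hat) ** 2)) / self.rho
            return in_subspace + residual

    # Usage: fit one model per class, score a probe image with each model's
    # log_likelihood, and take the class with the highest value.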