Conference Paper

Frontal view recognition in multiview video sequences

Dept. of Inf., Aristotle Univ. of Thessaloniki, Thessaloniki, Greece
DOI: 10.1109/ICME.2009.5202593 Conference: Proceedings of the 2009 IEEE International Conference on Multimedia and Expo, ICME 2009, June 28 - July 2, 2009, New York City, NY, USA
Source: IEEE Xplore

ABSTRACT In this paper, a novel method is proposed for frontal view recognition in multiview image sequences. The aim is to identify the view corresponding to the camera placed in front of a person, or the camera whose view is closest to a frontal one. In this way, frontal face images of the person can be acquired for use in face or facial expression recognition techniques that require frontal views to achieve satisfactory results. The proposed method first applies the Discriminant Non-negative Matrix Factorization (DNMF) algorithm to the input images acquired from each camera. The output of the algorithm is then fed to a support vector machine (SVM) classifier that assigns the head pose observed by each camera to one of two classes, frontal or non-frontal. Experiments conducted on the IDIAP database demonstrate that the proposed method achieves an accuracy of 98.6% in frontal view recognition.
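
The pipeline described in the abstract can be outlined in a few lines of Python. The sketch below is illustrative only: scikit-learn's standard NMF is used as a stand-in for DNMF (which additionally imposes discriminant constraints on the factorization), the SVM hyperparameters are arbitrary, and the data arrays (train_images, train_labels, camera_face_images) are hypothetical placeholders.

    # Minimal sketch: non-negative factorization of face images followed by a
    # two-class SVM (frontal vs. non-frontal). Standard NMF stands in for DNMF.
    import numpy as np
    from sklearn.decomposition import NMF
    from sklearn.svm import SVC

    def train_frontal_view_classifier(train_images, train_labels, n_components=50):
        """train_images: (n_samples, height*width) non-negative pixel intensities;
        train_labels: 1 for frontal pose, 0 for non-frontal pose."""
        nmf = NMF(n_components=n_components, init="nndsvda", max_iter=500)
        coeffs = nmf.fit_transform(train_images)   # per-image basis coefficients
        svm = SVC(kernel="rbf", C=10.0, gamma="scale")
        svm.fit(coeffs, train_labels)              # frontal / non-frontal decision
        return nmf, svm

    def select_frontal_camera(nmf, svm, camera_face_images):
        """camera_face_images: one flattened, cropped face image per camera.
        Returns the index of the camera whose view appears most frontal."""
        coeffs = nmf.transform(np.asarray(camera_face_images))
        scores = svm.decision_function(coeffs)     # larger = more frontal
        return int(np.argmax(scores))

Using the SVM decision value rather than the hard class label makes it possible to rank the cameras and pick the one whose view is closest to frontal, which matches the stated aim of the method.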

  • ABSTRACT: Frontal facial pose recognition deals with classifying facial images into two classes: frontal and non-frontal. Recognition of frontal poses is required as a preprocessing step for face analysis algorithms (e.g. face or facial expression recognition) that can operate only on frontal views. This paper presents a novel frontal facial pose recognition technique based on discriminant image splitting for feature extraction. Spatially homogeneous and discriminant regions are produced for each facial class using the classical image splitting technique, so that each facial class is characterized by a unique region pattern consisting of homogeneous and discriminant 2-D regions. The mean intensities of these regions are used as features for the classification task. The proposed method has been tested on data from the XM2VTS facial database with very satisfactory results. (A minimal code sketch of such splitting-based feature extraction is given after this list.)
    12/2011; 19(4). DOI:10.2498/cit.1002024
  • ABSTRACT: Multiview face recognition has become an active research area in the last few years. In this paper, we present an approach for video-based face recognition in camera networks. Our goal is to handle pose variations by exploiting the redundancy in the multiview video data. However, unlike traditional approaches that explicitly estimate the pose of the face, we propose a novel feature for robust face recognition in the presence of diffuse lighting and pose variations. The proposed feature is developed using the spherical harmonic representation of the face texture-mapped onto a sphere; the texture map itself is generated by back-projecting the multiview video data. Video plays an important role in this scenario. First, it provides an automatic and efficient way of extracting features. Second, the data redundancy renders the recognition algorithm more robust. We measure the similarity between feature sets from different videos using a reproducing kernel Hilbert space. We demonstrate that the proposed approach outperforms traditional algorithms on a multiview video database. (A sketch of spherical-harmonic feature extraction from such a texture map also follows this list.)
    IEEE Transactions on Image Processing 03/2014; 23(3):1105-1117. DOI:10.1109/TIP.2014.2300812
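
As a companion to the first abstract above, the following sketch illustrates splitting-based feature extraction: the image is recursively divided by classical quadtree splitting and the mean intensity of each resulting region becomes a feature. In the cited work the split is driven by a discriminant criterion learned per facial class; here a plain intensity-variance homogeneity test, with arbitrary thresholds, stands in for that criterion.

    # Illustrative quadtree splitting followed by region-mean feature extraction.
    import numpy as np

    def split_regions(image, var_threshold=100.0, min_size=8):
        """Return (top, left, height, width) regions from recursive quadtree splitting."""
        regions = []

        def recurse(top, left, h, w):
            block = image[top:top + h, left:left + w]
            if h <= min_size or w <= min_size or block.var() <= var_threshold:
                regions.append((top, left, h, w))   # homogeneous enough: stop splitting
                return
            h2, w2 = h // 2, w // 2                 # otherwise split into four quadrants
            recurse(top, left, h2, w2)
            recurse(top, left + w2, h2, w - w2)
            recurse(top + h2, left, h - h2, w2)
            recurse(top + h2, left + w2, h - h2, w - w2)

        recurse(0, 0, image.shape[0], image.shape[1])
        return regions

    def region_mean_features(image, regions):
        """Mean intensity of each region, used as the feature vector for classification."""
        return np.array([image[t:t + h, l:l + w].mean() for (t, l, h, w) in regions])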
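
For the second abstract above, the sketch below shows one way to turn a face texture map defined on a sphere into a compact descriptor using spherical harmonics. The regular (polar, azimuth) sampling grid, the maximum degree, and the per-degree energy descriptor are assumptions made for illustration; the back-projection that builds the texture map from multiview video and the RKHS-based comparison of feature sets used in the cited paper are not reproduced here.

    # Illustrative spherical-harmonic descriptor of a texture map on the unit sphere.
    import numpy as np
    from scipy.special import sph_harm

    def sh_energy_feature(texture_map, max_degree=8):
        """texture_map: (n_polar, n_azimuth) intensities sampled on the unit sphere.
        Returns the energy of the spherical-harmonic coefficients at each degree."""
        n_polar, n_azimuth = texture_map.shape
        polar = (np.arange(n_polar) + 0.5) * np.pi / n_polar        # 0..pi
        azimuth = np.arange(n_azimuth) * 2.0 * np.pi / n_azimuth    # 0..2pi
        phi, theta = np.meshgrid(polar, azimuth, indexing="ij")
        # Quadrature weights: sin(polar angle) times the angular cell area.
        weights = np.sin(phi) * (np.pi / n_polar) * (2.0 * np.pi / n_azimuth)

        energies = []
        for degree in range(max_degree + 1):
            energy = 0.0
            for order in range(-degree, degree + 1):
                basis = sph_harm(order, degree, theta, phi)             # Y_degree^order on the grid
                coeff = np.sum(texture_map * np.conj(basis) * weights)  # projection onto the basis
                energy += np.abs(coeff) ** 2
            energies.append(energy)
        return np.array(energies)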
