Article

Face Recognition by Exploring Information Jointly in Space, Scale and Orientation

Center for Biometrics and Security Research and National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China.
IEEE Transactions on Image Processing 01/2011; 20(1):247-256. DOI: 10.1109/TIP.2010.2060207
Source: PubMed

ABSTRACT Information jointly contained in the image space, scale, and orientation domains can provide rich clues that are not available in any single one of these domains. Position, spatial frequency, and orientation selectivity are believed to play an important role in visual perception. This paper proposes a novel face representation and recognition approach that explores information jointly in the image space, scale, and orientation domains. Specifically, the face image is first decomposed into different scale and orientation responses by convolving it with multiscale, multiorientation Gabor filters. Second, local binary pattern analysis is used to describe the neighboring relationships not only in image space but also across the different scale and orientation responses. In this way, information from the different domains is combined into an effective face representation for recognition. Discriminant classification is then performed based on weighted histogram intersection or conditional mutual information combined with linear discriminant analysis. Extensive experimental results on the FERET, AR, and FRGC version 2.0 databases show significant advantages of the proposed method over existing ones.
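
As a rough illustration of the pipeline described above (not the authors' code), the following Python sketch builds a Gabor-magnitude plus LBP descriptor using scikit-image and SciPy. The 5-scale, 8-orientation filter bank, the filter frequencies, the uniform LBP variant, and the 4x4 region grid are illustrative assumptions; the paper's weighted histogram intersection, conditional mutual information, and LDA stages are represented here only by a plain histogram-intersection similarity.

    # Sketch of a Gabor + LBP face descriptor (illustrative parameters, not the paper's exact setup).
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.filters import gabor_kernel
    from skimage.feature import local_binary_pattern

    def gabor_lbp_descriptor(face, scales=5, orientations=8, grid=(4, 4)):
        """Convolve the face with a multiscale, multiorientation Gabor bank,
        apply LBP to each magnitude response, and concatenate regional
        LBP histograms into one descriptor."""
        face = np.asarray(face, dtype=float)
        feats = []
        for s in range(scales):
            for o in range(orientations):
                kern = gabor_kernel(frequency=0.4 / (2 ** s),
                                    theta=o * np.pi / orientations)
                real = ndi.convolve(face, np.real(kern), mode='wrap')
                imag = ndi.convolve(face, np.imag(kern), mode='wrap')
                mag = np.hypot(real, imag)          # Gabor magnitude response
                # Uniform LBP codes over the response (59 possible codes for P=8)
                lbp = local_binary_pattern(mag, P=8, R=1, method='nri_uniform')
                # Spatial histograms over a grid of non-overlapping regions
                for rows in np.array_split(lbp, grid[0], axis=0):
                    for cell in np.array_split(rows, grid[1], axis=1):
                        hist, _ = np.histogram(cell, bins=59, range=(0, 59))
                        feats.append(hist / max(hist.sum(), 1))
        return np.concatenate(feats)

    def histogram_intersection(h1, h2):
        """Similarity between two descriptors: sum of bin-wise minima."""
        return np.minimum(h1, h2).sum()

Matching two faces then reduces to comparing histogram_intersection scores between their descriptors; the paper additionally weights the regional histograms and applies discriminant analysis, which this sketch omits.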

  • ABSTRACT: We describe a novel face recognition method based on the Extended Curvature Gabor (ECG) Classifier Bunch. First, we extend Gabor kernels into ECG kernels by adding a spatial curvature term and adjusting the width of the Gaussian envelope, which allows numerous feature candidates to be extracted from a single image. To handle this large set of candidates efficiently, we divide them into multiple ECG coefficients according to the kernel parameters and then independently select the salient features from each coefficient using boosting. A single ECG classifier is implemented by applying Linear Discriminant Analysis (LDA) to the selected feature vector. To overcome the accuracy limitation of a single classifier, we propose an ECG classifier bunch that combines multiple ECG classifiers with a fusion scheme. We confirm the generality of the proposed method's performance on the FRGC version 2.0, XM2VTS, BANCA, and PIE databases.
    Pattern Recognition 04/2015; 48(4). DOI:10.1016/j.patcog.2014.09.029
  • ABSTRACT: This paper presents a Multi-feature Multi-Manifold Learning (M3L) method for single-sample face recognition (SSFR). Although numerous face recognition methods have been proposed over the past two decades, most of them suffer a severe performance drop, or fail entirely, on the SSFR problem because there are not enough training samples for discriminative feature extraction. We propose an M3L method that extracts multiple discriminative features from face image patches. First, each registered face image is partitioned into several non-overlapping patches, and multiple local features are extracted within each patch. Then, SSFR is formulated as a multi-feature multi-manifold matching problem, and multiple discriminative feature subspaces are jointly learned to maximize the manifold margins of different persons, so that person-specific discriminative information is exploited for recognition. Lastly, we present a multi-feature manifold-to-manifold distance measure to recognize the probe subjects. Experimental results on the widely used AR, FERET, and LFW datasets demonstrate the efficacy of the proposed approach.
    Neurocomputing 11/2014; 143:134-143. DOI:10.1016/j.neucom.2014.06.012
  • ABSTRACT: We propose a new local descriptor for fingerprint liveness detection. The input image is analyzed in both the spatial and the frequency domain in order to extract information on the local amplitude contrast and on the local behavior of the image, the latter synthesized by the phase of some selected transform coefficients. These two pieces of information are used to generate a bi-dimensional contrast-phase histogram, which serves as the feature vector associated with the image. After an appropriate feature selection, a trained linear-kernel SVM classifier makes the final live/fake decision. Experiments on the publicly available LivDet 2011 database, comprising datasets collected from various sensors, show that the proposed method outperforms state-of-the-art liveness detection techniques. (A rough sketch of this contrast-phase-histogram idea appears after this list.)
    Pattern Recognition 04/2015; DOI:10.1016/j.patcog.2014.05.021
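
For the fingerprint liveness entry above, the sketch below shows one plausible reading of the contrast-phase histogram idea using NumPy and scikit-learn; the 8x8 block size, the choice of FFT coefficient, and the 16x16 binning are my own assumptions, and the paper's feature selection step is omitted.

    # Sketch of a contrast/phase histogram + linear SVM liveness classifier (assumed parameters).
    import numpy as np
    from sklearn.svm import LinearSVC

    def contrast_phase_histogram(img, block=8, bins=16):
        """Joint 2-D histogram of local amplitude contrast and the phase of one
        low-frequency FFT coefficient, computed over non-overlapping blocks."""
        img = np.asarray(img, dtype=float)
        contrasts, phases = [], []
        for y in range(0, img.shape[0] - block + 1, block):
            for x in range(0, img.shape[1] - block + 1, block):
                patch = img[y:y + block, x:x + block]
                contrasts.append(patch.max() - patch.min())        # local amplitude contrast
                phases.append(np.angle(np.fft.fft2(patch)[1, 1]))  # phase of one selected coefficient
        hist, _, _ = np.histogram2d(contrasts, phases, bins=bins,
                                    range=[[0, 255], [-np.pi, np.pi]])
        return (hist / max(hist.sum(), 1)).ravel()                 # flattened feature vector

    def train_liveness_svm(live_imgs, fake_imgs):
        """Fit a linear-kernel SVM on contrast-phase histograms (1 = live, 0 = fake)."""
        X = [contrast_phase_histogram(im) for im in live_imgs + fake_imgs]
        y = [1] * len(live_imgs) + [0] * len(fake_imgs)
        return LinearSVC().fit(X, y)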
