Article

Face Recognition by Exploring Information Jointly in Space, Scale and Orientation

Center for Biometrics and Security Research and National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China.
IEEE Transactions on Image Processing (Impact Factor: 3.63). 01/2011; 20(1):247-56. DOI: 10.1109/TIP.2010.2060207
Source: PubMed

ABSTRACT

Information jointly contained in the image space, scale, and orientation domains can provide rich clues not available in any of these domains individually. The position, spatial frequency, and orientation selectivity properties are believed to play an important role in visual perception. This paper proposes a novel face representation and recognition approach that explores information jointly in the image space, scale, and orientation domains. Specifically, the face image is first decomposed into different scale and orientation responses by convolving it with multiscale, multiorientation Gabor filters. Second, local binary pattern analysis is used to describe the neighboring relationship not only in image space, but also across the different scale and orientation responses. In this way, information from the different domains is combined into a rich face representation for recognition. Discriminant classification is then performed based upon weighted histogram intersection or conditional mutual information with linear discriminant analysis techniques. Extensive experimental results on the FERET, AR, and FRGC ver 2.0 databases show the significant advantages of the proposed method over existing ones.
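The building blocks named in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the kernel parameterization, the basic 3 × 3 LBP operator, and the (unweighted) histogram intersection are the textbook forms of each ingredient, and all function names are our own.

```python
import math

def gabor_kernel(size, sigma, theta, lam):
    """Real part of a Gabor kernel: a Gaussian envelope times a cosine
    carrier at orientation theta and wavelength lam."""
    half = size // 2
    k = [[0.0] * size for _ in range(size)]
    for y in range(-half, half + 1):
        for x in range(-half, half + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)
            env = math.exp(-(x * x + y * y) / (2.0 * sigma * sigma))
            k[y + half][x + half] = env * math.cos(2.0 * math.pi * xr / lam)
    return k

def lbp_code(patch):
    """Basic 8-neighbour LBP code of the centre pixel of a 3x3 patch:
    each neighbour >= centre contributes one bit, clockwise from top-left."""
    c = patch[1][1]
    nbrs = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
            patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    return sum(1 << i for i, v in enumerate(nbrs) if v >= c)

def hist_intersection(h1, h2):
    """Histogram intersection similarity: sum of bin-wise minima."""
    return sum(min(a, b) for a, b in zip(h1, h2))
```

In the pipeline the abstract describes, the LBP operator is applied not only within each Gabor response plane (image space) but also to neighbourhoods spanning adjacent scale and orientation planes; the resulting histograms are then compared by weighted histogram intersection.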

Available from: Stan Z Li, Mar 25, 2014
  • Source
    • "Holistic-feature-based approaches, including the principal component analysis (PCA) [5], linear discriminant analysis (LDA) [7], and independent component analysis [6], have been popularly used for face recognition. Nonetheless, more recent works [8]–[12] attempt to develop local-feature-based techniques, since they are more stable to local changes, such as illumination, expression, and occlusion [13]–[15]. One of the most successful local descriptors is the Gabor wavelet [16], [17], due to its biological relevance and computational properties [18]. "
    ABSTRACT: For a human face, the Gabor transform can extract its multiple scale and orientation features that are very useful for the recognition. In this paper, the Gabor-feature-based face recognition is formulated as a multitask sparse representation model, in which the sparse coding of each Gabor feature is regarded as a task. To effectively exploit the complementary yet correlated information among different tasks, a flexible representation algorithm termed multitask adaptive sparse representation (MASR) is proposed. The MASR algorithm not only restricts Gabor features of one test sample to be jointly represented by training atoms from the same class but also promotes the selected atoms for these features to be varied within each class, thus allowing better representation. In addition, to use the local information, we operate the MASR on local regions of Gabor features. Then, by considering the structural characteristics of the face and the effects of the external interferences, a structural-residual weighting strategy is proposed to adaptively fuse the decision of each region. Experiments on various datasets verify the effectiveness of the proposed method in dealing with face occlusion, corruption, small number of training samples, as well as variations of lighting and expression.
    Full-text · Article · Oct 2015 · IEEE Transactions on Instrumentation and Measurement
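The residual-based decision rule underlying sparse-representation classifiers such as the MASR method above can be caricatured with one dictionary atom per class. This is a drastic simplification for illustration only (MASR jointly codes many Gabor features over multi-atom class dictionaries with sparsity and diversity constraints); the helper names are our own.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def class_residual(x, atom):
    """Residual norm of representing x by its least-squares projection
    onto a single (non-zero) dictionary atom."""
    a = dot(x, atom) / dot(atom, atom)
    return math.sqrt(sum((xi - a * di) ** 2 for xi, di in zip(x, atom)))

def classify(x, class_atoms):
    """Assign x to the class whose atom leaves the smallest residual."""
    return min(class_atoms, key=lambda c: class_residual(x, class_atoms[c]))
```

For example, a test vector close to class A's atom is assigned to A because projecting onto A's atom leaves a much smaller residual than projecting onto B's.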
  • Source
    • "The faces are cropped to 128 × 128 pixels. For a fair comparison, we compare RVLBP with LBP [23], HOG [4], Local Phase Quantization (LPQ) [24], Completed LBP (CLBP) [13], Monogenic LBP (MonoLBP) [33], and GV-LBP-TOP [20] on three databases. For all methods except HOG, we divide a facial image into 8 × 8 blocks. "
    ABSTRACT: Automatic emotion analysis and understanding has received much attention over the years in affective computing. Recently, there are increasing interests in inferring the emotional intensity of a group of people. For group emotional intensity analysis, feature extraction and group expression model are two critical issues. In this paper, we propose a new method to estimate the happiness intensity of a group of people in an image. Firstly, we combine the Riesz transform and the local binary pattern descriptor, named Riesz-based volume local binary pattern, which considers neighbouring changes not only in the spatial domain of a face but also along the different Riesz faces. Secondly, we exploit the continuous conditional random fields for constructing a new group expression model, which considers global and local attributes. Intensive experiments are performed on three challenging facial expression databases to evaluate the novel feature. Furthermore, experiments are conducted on the HAPPEI database to evaluate the new group expression model with the new feature. Our experimental results demonstrate the promising performance for group happiness intensity analysis.
    Full-text · Conference Paper · Sep 2015
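The 8 × 8 block division mentioned in the excerpt above, with one histogram per block concatenated into the final feature vector, is common to most of the compared LBP-style descriptors and can be sketched as follows (an illustrative sketch; for volume descriptors such as RVLBP the same concatenation is additionally repeated over each response plane):

```python
def block_histograms(img, grid, levels):
    """Divide img (a list of rows of integer codes in [0, levels)) into a
    grid x grid layout and concatenate one code histogram per block."""
    h, w = len(img), len(img[0])
    feat = []
    for by in range(grid):
        for bx in range(grid):
            hist = [0] * levels
            for y in range(by * h // grid, (by + 1) * h // grid):
                for x in range(bx * w // grid, (bx + 1) * w // grid):
                    hist[img[y][x]] += 1
            feat.extend(hist)
    return feat
```

The resulting vector has grid × grid × levels bins, and its entries sum to the number of pixels, since every pixel falls in exactly one block and one bin.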
  • Source
    • "Despite the excellent results obtained, these proposals show little or no effort to adapt the descriptors to the specific fingerprint problem, and the experimental results do not point out any killer features, related to some physical or statistical trait of the data. A common effort, however, is to combine several descriptors [21] [24], as also done in other fields, using information coming from the space, frequency, and orientation domains [48], or concatenating the LBP and LPQ features [49]. In particular, in [24] we tested several popular descriptors, i.e., LBP, WLD and LPQ, for the liveness detection problem. "
    ABSTRACT: We propose a new local descriptor for fingerprint liveness detection. The input image is analyzed both in the spatial and in the frequency domain, in order to extract information on the local amplitude contrast, and on the local behavior of the image, synthesized by considering the phase of some selected transform coefficients. These two pieces of information are used to generate a bi-dimensional contrast-phase histogram, used as feature vector associated with the image. After an appropriate feature selection, a trained linear-kernel SVM classifier makes the final live/fake decision. Experiments on the publicly available LivDet 2011 database, comprising datasets collected from various sensors, prove the proposed method to outperform the state-of-the-art liveness detection techniques.
    Full-text · Article · Apr 2015 · Pattern Recognition
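A bi-dimensional contrast-phase histogram of the kind the excerpt above describes can be sketched as a joint 2-D histogram over per-patch (contrast, phase) pairs, flattened into a feature vector. The binning below is an illustrative assumption, not the paper's actual quantization scheme, and the function name is our own.

```python
import math

def contrast_phase_histogram(contrasts, phases, cbins=4, pbins=4):
    """Joint 2-D histogram of (contrast, phase) pairs, flattened to a
    cbins * pbins vector. Contrast is assumed normalised to [0, 1];
    phase lies in [-pi, pi]. Out-of-range top values clamp to the
    last bin."""
    hist = [0] * (cbins * pbins)
    for c, p in zip(contrasts, phases):
        ci = min(int(c * cbins), cbins - 1)
        pi = min(int((p + math.pi) / (2.0 * math.pi) * pbins), pbins - 1)
        hist[ci * pbins + pi] += 1
    return hist
```

Each patch contributes one count, so the histogram entries sum to the number of patches; in the cited work this vector (after feature selection) feeds a linear-kernel SVM for the live/fake decision.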