Face Recognition by Exploring Information Jointly in Space, Scale and Orientation

Center for Biometrics and Security Research and National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China.
IEEE Transactions on Image Processing (Impact Factor: 3.63). 01/2011; 20(1):247-56. DOI: 10.1109/TIP.2010.2060207
Source: PubMed


Information jointly contained in the image space, scale, and orientation domains can provide rich clues that are not available in any one of these domains alone. Position, spatial frequency, and orientation selectivity are believed to play important roles in visual perception. This paper proposes a novel face representation and recognition approach that explores information jointly in the image space, scale, and orientation domains. Specifically, the face image is first decomposed into responses at different scales and orientations by convolving it with multiscale and multiorientation Gabor filters. Second, local binary pattern analysis is used to describe neighboring relationships not only in image space but also across the different scale and orientation responses. In this way, information from the different domains is combined into a rich face representation for recognition. Discriminant classification is then performed using weighted histogram intersection, or conditional mutual information combined with linear discriminant analysis. Extensive experiments on the FERET, AR, and FRGC ver 2.0 databases show significant advantages of the proposed method over existing ones.
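To make the pipeline concrete, the following minimal Python sketch follows the same Gabor-then-LBP recipe using scikit-image. The filter-bank size (5 scales, 8 orientations), the dyadic frequency spacing, the block grid, and the histogram sizes are illustrative assumptions rather than the paper's exact settings, and plain histogram intersection stands in for the weighted and LDA-based variants.

    # A minimal sketch of the Gabor + LBP pipeline described in the abstract.
    # Filter counts, frequency spacing, block grid, and bin sizes are assumptions.
    import numpy as np
    from scipy.signal import fftconvolve
    from skimage.feature import local_binary_pattern
    from skimage.filters import gabor_kernel

    def gabor_lbp_feature(image, n_scales=5, n_orients=8, grid=(8, 8)):
        """Concatenate block-wise LBP histograms over all Gabor magnitude responses."""
        feats = []
        for s in range(n_scales):
            freq = 0.25 / (np.sqrt(2) ** s)  # assumed dyadic frequency spacing
            for o in range(n_orients):
                kern = gabor_kernel(freq, theta=np.pi * o / n_orients)
                resp = np.abs(fftconvolve(image, kern, mode='same'))  # magnitude response
                codes = local_binary_pattern(resp, P=8, R=1, method='uniform')
                # block-wise histograms preserve the spatial layout
                for rows in np.array_split(codes, grid[0], axis=0):
                    for cell in np.array_split(rows, grid[1], axis=1):
                        h, _ = np.histogram(cell, bins=10, range=(0, 10), density=True)
                        feats.append(h)
        return np.concatenate(feats)

    def histogram_intersection(h1, h2):
        """Similarity for nearest-neighbour matching of two feature histograms."""
        return np.minimum(h1, h2).sum()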

    • "The faces are cropped to 128 × 128 pixel size. For a fair comparison, we compare RVLBP with LBP [23], HOG [4], Local Phase Quantization (LPQ) [24], Completed LBP (CLBP) [13], Monogenic LBP (MonoLBP) [33], and GV-LBP-TOP [20] on three databases. For all methods except HOG, we divide a facial image into 8 × 8 blocks. "
    ABSTRACT: Automatic emotion analysis and understanding has received much attention over the years in affective computing. Recently, there has been increasing interest in inferring the emotional intensity of a group of people. For group emotional intensity analysis, feature extraction and the group expression model are two critical issues. In this paper, we propose a new method to estimate the happiness intensity of a group of people in an image. Firstly, we combine the Riesz transform and the local binary pattern descriptor into the Riesz-based volume local binary pattern, which considers neighbouring changes not only in the spatial domain of a face but also along the different Riesz faces. Secondly, we exploit continuous conditional random fields to construct a new group expression model, which considers global and local attributes. Extensive experiments are performed on three challenging facial expression databases to evaluate the novel feature. Furthermore, experiments are conducted on the HAPPEI database to evaluate the new group expression model with the new feature. Our experimental results demonstrate promising performance for group happiness intensity analysis.
    Proceedings of the British Machine Vision Conference, Swansea, UK; 09/2015
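    The "Riesz faces" mentioned in this abstract can be illustrated in a few lines: the first-order Riesz transform is a frequency-domain filter, and LBP is then applied to each resulting component. This is a hedged sketch of the per-face part of the idea only; it omits the volume (cross-face) encoding and the authors' actual parameters.

        # Illustrative sketch: first-order Riesz components of a face, each
        # described by an LBP histogram. Not the authors' implementation.
        import numpy as np
        from skimage.feature import local_binary_pattern

        def riesz_faces(image):
            """Two first-order Riesz-transform components, computed via the FFT."""
            rows, cols = image.shape
            u = np.fft.fftfreq(rows)[:, None]
            v = np.fft.fftfreq(cols)[None, :]
            norm = np.sqrt(u ** 2 + v ** 2)
            norm[0, 0] = 1.0  # avoid division by zero at the DC term
            spectrum = np.fft.fft2(image)
            rx = np.real(np.fft.ifft2(-1j * u / norm * spectrum))
            ry = np.real(np.fft.ifft2(-1j * v / norm * spectrum))
            return rx, ry

        def riesz_lbp_feature(image):
            """LBP histograms over the original face and its two Riesz faces."""
            feats = []
            for plane in (image, *riesz_faces(image)):
                codes = local_binary_pattern(plane, P=8, R=1, method='uniform')
                h, _ = np.histogram(codes, bins=10, range=(0, 10), density=True)
                feats.append(h)
            return np.concatenate(feats)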
    • "Despite the excellent results obtained, these proposals show little or no effort to adapt the descriptors to the specific fingerprint problem, and the experimental results do not point out any killer features, related to some physical or statistical trait of the data. A common effort, however, is to combine several descriptors [21] [24], as also done in other fields, using information coming from the space, frequency, and orientation domains [48], or concatenating the LBP and LPQ features [49]. In particular, in [24] we tested several popular descriptors, i.e., LBP, WLD and LPQ, for the liveness detection problem. "
    ABSTRACT: We propose a new local descriptor for fingerprint liveness detection. The input image is analyzed both in the spatial and in the frequency domain, in order to extract information on the local amplitude contrast and on the local behavior of the image, synthesized by considering the phase of some selected transform coefficients. These two pieces of information are used to generate a bi-dimensional contrast-phase histogram, used as the feature vector associated with the image. After an appropriate feature selection, a trained linear-kernel SVM classifier makes the final live/fake decision. Experiments on the publicly available LivDet 2011 database, comprising datasets collected from various sensors, prove the proposed method to outperform state-of-the-art liveness detection techniques.
    Pattern Recognition 04/2015; DOI:10.1016/j.patcog.2014.05.021 · 3.10 Impact Factor
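    The contrast-phase idea lends itself to a short sketch: measure a local amplitude contrast and the phase of one selected transform coefficient per window, pool the pairs into a bi-dimensional histogram, and classify with a linear SVM. The window size, bin counts, and chosen coefficient below are assumptions, not the paper's settings.

        # Hedged sketch of a contrast-phase histogram for liveness detection.
        import numpy as np
        from sklearn.svm import LinearSVC

        def contrast_phase_histogram(image, win=8, bins=(8, 8)):
            """2-D histogram pairing local contrast with a local phase measurement."""
            contrasts, phases = [], []
            for i in range(0, image.shape[0] - win + 1, win):
                for j in range(0, image.shape[1] - win + 1, win):
                    patch = image[i:i + win, j:j + win]
                    contrasts.append(patch.std())  # local amplitude contrast
                    spec = np.fft.fft2(patch)
                    phases.append(np.angle(spec[0, 1]))  # phase of one low-frequency coefficient
            hist, _, _ = np.histogram2d(
                contrasts, phases, bins=bins,
                range=[[0.0, max(contrasts) + 1e-9], [-np.pi, np.pi]])
            return (hist / hist.sum()).ravel()

        # Usage (hypothetical variable names): fit a linear SVM on the histograms.
        # X = [contrast_phase_histogram(img) for img in train_images]
        # clf = LinearSVC().fit(X, labels)  # labels: 1 = live, 0 = fake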
    • "In [8], Marsico et al. provide FAce Recognition against Occlusions and Expression Variations (FARO) as a new method based on partitioned iterated function systems (PIFSs), which is quite robust with respect to expression changes and partial occlusions. Recently, the fusion method combined Gabor filter has been key focus to many researchers [9] [10] [11]. Gabor wavelets capture the local structure corresponding to specific spatial frequency, spatial locality, and selective orientation which are demonstrated to be discriminative and robust to illumination and expression changes. "
    ABSTRACT: Realistic face recognition is sometimes confronted with blurred or low-resolution face images, for which existing classification methods are not sufficiently powerful and robust. This paper proposes a novel face representation approach (GLL) which fuses Gabor filters, Local Binary Patterns (LBP), and Local Phase Quantization (LPQ). In the Gabor filtering step, Gabor wavelet functions with two scales and eight orientations are used to capture the salient visual properties of the face image. On the basis of the Gabor features, we then extract LBP and LPQ features, respectively, so as to fully exploit the blur-invariance property and the information in the spatial domain and across different scales and orientations. Experiments on both CMU-PIE and Yale B demonstrate the effectiveness of GLL when dealing with face data sets under different conditions.
    Neurocomputing 09/2013; 116:260–264. DOI:10.1016/j.neucom.2012.05.036 · 2.08 Impact Factor
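    A hedged sketch of this GLL-style fusion follows: Gabor magnitude responses at two scales and eight orientations (as stated in the abstract) are each described by LBP and by a bare-bones LPQ variant (uniform window, no decorrelation), with all histograms concatenated. The helper names and remaining parameters are illustrative assumptions.

        # Illustrative sketch of fusing Gabor, LBP, and LPQ features.
        import numpy as np
        from scipy.signal import fftconvolve
        from skimage.feature import local_binary_pattern
        from skimage.filters import gabor_kernel

        def lpq_codes(image, win=7):
            """8-bit LPQ codes from the signs of four low-frequency STFT coefficients."""
            r = win // 2
            x = np.arange(-r, r + 1)
            w0 = np.ones(win)                   # DC window
            w1 = np.exp(-2j * np.pi * x / win)  # lowest non-zero frequency
            kernels = [np.outer(w0, w1), np.outer(w1, w0),
                       np.outer(w1, w1), np.outer(w1, w1.conj())]
            bits = []
            for k in kernels:
                resp = fftconvolve(image, k, mode='same')
                bits += [resp.real > 0, resp.imag > 0]  # quantize each coefficient's sign
            return sum(b.astype(np.uint8) << i for i, b in enumerate(bits))

        def gll_feature(image, n_scales=2, n_orients=8):
            """Concatenated LBP and LPQ histograms over all Gabor magnitude responses."""
            feats = []
            for s in range(n_scales):
                for o in range(n_orients):
                    kern = gabor_kernel(0.25 / 2 ** s, theta=np.pi * o / n_orients)
                    resp = np.abs(fftconvolve(image, kern, mode='same'))
                    lbp = local_binary_pattern(resp, P=8, R=1, method='uniform')
                    feats.append(np.histogram(lbp, bins=10, range=(0, 10), density=True)[0])
                    feats.append(np.histogram(lpq_codes(resp), bins=256, range=(0, 256), density=True)[0])
            return np.concatenate(feats)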