Article

Extracting Multiple Features in the CID Color Space for Face Recognition

Dept. of Comput. Sci., New Jersey Inst. of Technol., Newark, NJ, USA
IEEE Transactions on Image Processing (Impact Factor: 3.11). 10/2010; DOI: 10.1109/TIP.2010.2048963
Source: DBLP

ABSTRACT: This correspondence presents a novel face recognition method that extracts multiple features in the color image discriminant (CID) color space, where three new color component images, D1, D2, and D3, are derived using an iterative algorithm. Because the color component images in the CID color space display different characteristics, three different image encoding methods are presented to effectively extract features from the component images and enhance pattern recognition performance. To further improve classification performance, the similarity scores produced by the three color component images are fused for the final decision. Experimental results on two large-scale face databases, the face recognition grand challenge (FRGC) version 2 database and the FERET database, show the effectiveness of the proposed method.
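
The abstract does not spell out the fusion rule, so the following is only a minimal sketch of score-level fusion across three colour component images, assuming min-max score normalization, a cosine-similarity matcher, and sum-rule combination; the component names D1/D2/D3 and the equal weights are illustrative, not the authors' exact configuration.

import numpy as np

def min_max_normalize(scores):
    """Rescale similarity scores to [0, 1] (a common choice before fusion)."""
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo + 1e-12)

def cosine_similarity(probe_feat, gallery_feats):
    """Similarity of one probe feature vector against every gallery feature vector."""
    p = probe_feat / (np.linalg.norm(probe_feat) + 1e-12)
    g = gallery_feats / (np.linalg.norm(gallery_feats, axis=1, keepdims=True) + 1e-12)
    return g @ p

def fuse_component_scores(probe_feats, gallery_feats, weights=(1.0, 1.0, 1.0)):
    """Sum-rule fusion of the per-component similarity scores.

    probe_feats   : dict mapping 'D1', 'D2', 'D3' to the probe's feature vector
                    extracted from that colour component image.
    gallery_feats : dict mapping the same keys to (num_gallery x dim) matrices.
    Returns the index of the best-matching gallery identity.
    """
    fused = None
    for w, key in zip(weights, ('D1', 'D2', 'D3')):
        s = min_max_normalize(cosine_similarity(probe_feats[key], gallery_feats[key]))
        fused = w * s if fused is None else fused + w * s
    return int(np.argmax(fused))

In the paper, the three component images are first encoded with different feature extractors; in this sketch any fixed-length feature vector per component would do.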

Related publications

  • ABSTRACT: This paper proposes a novel method for facial expression recognition using a new image representation and multiple feature fusion. First, the new image representation is derived from the normalized hybrid color space by principal component analysis (PCA) followed by Fisher linear discriminant analysis (FLDA). Second, multi-scale local phase quantization (LPQ) features and patch-based Gabor features are extracted from the new image representation and the gray-level image, respectively, to obtain multiple feature sets. Finally, because the new image representation and the gray-level image are complementary, combining the classification results of the multiple feature sets at the score level further improves recognition performance. Experiments on Multi-PIE show that the proposed method achieves state-of-the-art performance for facial expression recognition. (A sketch of the PCA-plus-FLDA projection step appears after this list.)
    IScIDE 2012; 10/2012
  • ABSTRACT: The two-phase test sample sparse representation method (TPTSR) has attracted wide attention for its good performance, but it was applied only to gray-level images rather than to color face recognition. In this paper, we extend TPTSR to color face recognition. We first exploit the first, second, and third channels of each color image to produce recognition scores, then use a fusion scheme to obtain the final fused score, and finally classify test samples with that score. Our experiments compare the method with the original TPTSR, PCA, LDA, LPCA, and KLDA on the AR color face database, and the results demonstrate that it performs well on color face recognition. (A rough sketch of the two-phase representation appears after this list.)
    Instrumentation, Measurement, Computer, Communication and Control (IMCCC), 2012 Second International Conference on; 01/2012
  • ABSTRACT: Illumination invariance remains one of the most researched, yet most challenging, aspects of automatic face recognition. In this paper, the discriminative power of colour-based invariants is investigated in the presence of large illumination changes between training and query data, when appearance changes due to cast shadows and non-Lambertian effects are significant. There are three main contributions: (i) a general photometric model of the camera is described, and it is shown how its parameters can be estimated from realistic video input of pseudo-random head motion; (ii) several novel colour-based face invariants are derived for different special instances of the camera model; and (iii) the performance of the largest number of colour-based representations in the literature is evaluated and analysed on a database of 700 video sequences. The reported results suggest that (i) colour invariants have substantial discriminative power, which may increase the robustness and accuracy of recognition from low-resolution images under extreme illumination, and (ii) the non-linearities of the general photometric camera model have a significant effect on recognition performance. This highlights the limitations of previous work and emphasizes the need to assess face recognition performance using training and query data captured by different acquisition equipment. (A generic colour-invariant sketch appears after this list.)
    Pattern Recognition 07/2012; 45(7):2499–2509. · 2.58 Impact Factor
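
For the first entry above (the hybrid-colour-space image representation), a minimal sketch of the PCA-followed-by-FLDA projection, assuming scikit-learn and treating each pixel's colour vector as one training sample; the colour space, normalization, and component counts are placeholder assumptions, not the paper's settings.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def learn_colour_projection(colour_vectors, labels, n_pca=None, n_lda=1):
    """Learn a PCA -> FLDA projection from labelled colour vectors.

    colour_vectors : (num_samples, num_channels) colour values in some hybrid colour space.
    labels         : (num_samples,) class labels driving the discriminant step.
    """
    pca = PCA(n_components=n_pca).fit(colour_vectors)              # decorrelate first
    lda = LinearDiscriminantAnalysis(n_components=n_lda).fit(
        pca.transform(colour_vectors), labels)                     # then maximise class separation
    return pca, lda

def project_image(image, pca, lda):
    """Map an H x W x C colour image to an H x W x n_lda 'new' image representation."""
    h, w, c = image.shape
    flat = image.reshape(-1, c).astype(np.float64)
    return lda.transform(pca.transform(flat)).reshape(h, w, -1)

The multi-scale LPQ and patch-based Gabor descriptors would then be computed on this projected representation and on the gray-level image, and the resulting classifier scores fused at the score level as in the earlier fusion sketch.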
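For the second entry, the abstract only names TPTSR; the sketch below reflects one reading of the published two-phase scheme (phase one keeps the M training samples that best represent the test sample, phase two classifies by per-class reconstruction residual over those M) and should be taken as an approximation. The regularization constant mu and the value of M are illustrative; running the classifier on each colour channel and fusing the three scores gives the structure described in the abstract.

import numpy as np

def tptsr_classify(train, labels, test, m=30, mu=0.01):
    """Rough sketch of a two-phase sparse-representation classifier (TPTSR-like).

    train  : (dim, num_train) matrix whose columns are training samples (one colour channel).
    labels : (num_train,) NumPy array of class labels for the columns of `train`.
    test   : (dim,) test sample from the same channel.
    """
    # Phase 1: regularized least-squares representation over all training samples.
    a = np.linalg.solve(train.T @ train + mu * np.eye(train.shape[1]), train.T @ test)
    # Keep the M training samples whose individual contributions deviate least from the test sample.
    deviations = np.linalg.norm(test[:, None] - train * a[None, :], axis=0)
    keep = np.argsort(deviations)[:m]
    # Phase 2: re-represent the test sample using only the retained samples.
    sub = train[:, keep]
    b = np.linalg.solve(sub.T @ sub + mu * np.eye(sub.shape[1]), sub.T @ test)
    # Classify by the class with the smallest reconstruction residual.
    residuals = {}
    for cls in np.unique(labels[keep]):
        mask = labels[keep] == cls
        residuals[cls] = np.linalg.norm(test - sub[:, mask] @ b[mask])
    return min(residuals, key=residuals.get)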
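For the third entry, the paper's invariants are tied to its photometric camera model, which the abstract does not reproduce; as a generic baseline only, here is rg-chromaticity, a standard colour representation that cancels a per-pixel intensity scaling.

import numpy as np

def rg_chromaticity(image, eps=1e-6):
    """Map an H x W x 3 RGB image to rg-chromaticity, a simple colour invariant.

    Dividing each channel by the channel sum removes a common per-pixel intensity
    scaling (e.g., Lambertian shading); it is NOT one of the paper's
    camera-model-specific invariants, just a widely used baseline.
    """
    img = image.astype(np.float64)
    s = img.sum(axis=2, keepdims=True) + eps
    chrom = img / s
    return chrom[..., :2]  # keep r and g; b is redundant since r + g + b = 1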
