Conference Paper

Fusion of Visual and Thermal Signatures with Eyeglass Removal for Robust Face Recognition

The University of Tennessee, Knoxville;
DOI: 10.1109/CVPR.2004.77 Conference: 2004 Conference on Computer Vision and Pattern Recognition Workshop
Source: IEEE Xplore

ABSTRACT This paper describes a fusion of visual and thermal infrared (IR) images for robust face recognition. Two types of fusion methods are discussed: data fusion and decision fusion. Data fusion produces an illumination-invariant face image by adaptively integrating registered visual and thermal face images. Decision fusion combines matching scores of individual face recognition modules. In the data fusion process, eyeglasses, which block thermal energy, are detected in the thermal images and replaced with an eye template. Three fusion-based face recognition techniques are implemented and tested: data fusion of visual and thermal images (Df), decision fusion with the highest matching score (Fh), and decision fusion with the average matching score (Fa). The commercial face recognition software FaceIt® is used as the individual recognition module. Comparison results show that the fusion-based face recognition techniques outperform the individual visual and thermal face recognizers under illumination variations and facial expressions.
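The two decision-fusion rules named above (Fh and Fa) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the score dictionaries, identity names, and function names are invented for the example, standing in for the matching scores that FaceIt® would produce per modality.

```python
# Hypothetical sketch of the two decision-fusion rules (Fh and Fa).
# Scores map gallery identity -> matching score in one modality.

def fuse_highest(visual_scores, thermal_scores):
    """Fh: for each gallery identity, keep the higher of the two scores."""
    return {gid: max(visual_scores[gid], thermal_scores[gid])
            for gid in visual_scores}

def fuse_average(visual_scores, thermal_scores):
    """Fa: for each gallery identity, average the two scores."""
    return {gid: 0.5 * (visual_scores[gid] + thermal_scores[gid])
            for gid in visual_scores}

def identify(fused_scores):
    """Return the identity with the best fused score."""
    return max(fused_scores, key=fused_scores.get)

visual = {"alice": 0.91, "bob": 0.62}
thermal = {"alice": 0.55, "bob": 0.78}
print(identify(fuse_highest(visual, thermal)))  # alice (0.91 wins)
print(identify(fuse_average(visual, thermal)))  # alice (0.73 vs 0.70)
```

Fh rewards the modality that is most confident for each identity, while Fa smooths out a single modality's failure (e.g. thermal degradation around eyeglasses).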

  •
    ABSTRACT: The requirement for reliable personal identification in computerized access control, security applications, human-machine interaction, etc. has led to unprecedented interest in biometrics. The usefulness of the face as a primary modality for biometric authentication has risen in recent years because of its non-intrusiveness and uniqueness. Visual face recognition succeeds only in controlled environments and fails for disguised faces and under varying lighting conditions. As an alternative to visual recognition, this paper presents long-wave infrared (LWIR) face recognition. We make use of facial thermograms, images formed by capturing the heat radiated by the face. It is observed that their performance falls drastically under varying temperature conditions. To overcome this drawback, a simplified blood perfusion model is proposed to convert thermograms into blood perfusion data. If a person wears spectacles, the glasses obstruct the radiated heat and hence the thermograms lose information. An efficient algorithm is developed to detect the eyeglasses and to remove their effect.
    Applied Imagery Pattern Recognition Workshop (AIPR), 2012 IEEE; 01/2012
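The eyeglass problem described in the abstract above rests on one physical fact: glass blocks long-wave IR, so lenses appear as abnormally cold regions in a thermogram. A minimal sketch of that idea, not the authors' algorithm: the toy 6x6 "thermogram", the temperature thresholds, and the template value are all invented for illustration.

```python
# Toy sketch: eyeglass lenses show up as cold pixels in a facial thermogram.
# Detect cold regions by thresholding, then replace them with a template value.

def detect_eyeglasses(thermogram, cold_thresh=28.0, min_area=3):
    """Return (row, col) coordinates of cold pixels, or [] if too few."""
    cold = [(r, c) for r, row in enumerate(thermogram)
                   for c, t in enumerate(row) if t < cold_thresh]
    return cold if len(cold) >= min_area else []

def inpaint_with_template(thermogram, mask, template_value=34.0):
    """Replace masked (eyeglass) pixels with a template temperature."""
    repaired = [row[:] for row in thermogram]
    for r, c in mask:
        repaired[r][c] = template_value
    return repaired

face = [[34.0] * 6 for _ in range(6)]   # skin around 34 deg C
for r in (1, 2):                        # cold band where the glasses sit
    for c in range(1, 5):
        face[r][c] = 25.0

mask = detect_eyeglasses(face)
print(len(mask))  # 8 cold pixels flagged
```

A real system would additionally constrain the search to the eye region and use connected components rather than a global threshold, but the cold-region cue is the same.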
  •
    ABSTRACT: This paper presents a method for optimally combining pixel information from thermal imaging and visible spectrum colour cameras, for tracking an arbitrarily shaped deformable moving target. The tracking algorithm rapidly re-learns its background models for each camera modality from scratch at every frame. This enables, firstly, automatic adjustment of the relative importance of thermal and visible information in decision making, and, secondly, a degree of “camouflage target” tracking by continuously re-weighting the importance of those parts of the target model that are most distinct from the present background at each frame. Furthermore, this very rapid background adaptation ensures robustness to rapid camera motion. The combination of thermal and visible information is applicable to any target, but particularly useful for people tracking. The method is also important in that it can be readily extended for fusion of data from arbitrarily many imaging modalities.
    Sensors, 2012 IEEE; 01/2012
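The re-weighting idea in the tracking abstract above can be sketched very simply: each modality's vote is scaled by how distinct the target appears from the current background in that modality. This is a toy illustration under invented assumptions; the distinctiveness measure (half the L1 distance between normalized histograms), the histograms, and the scores are all made up for the example.

```python
# Toy sketch: weight each modality's score by how distinct the target's
# appearance histogram is from the background histogram in that modality.

def distinctiveness(target_hist, background_hist):
    """Half the L1 distance between two normalized histograms (0..1)."""
    return 0.5 * sum(abs(t - b) for t, b in zip(target_hist, background_hist))

def fused_score(scores, target_hists, background_hists):
    """Weight per-modality scores by distinctiveness, then normalize."""
    weights = [distinctiveness(t, b)
               for t, b in zip(target_hists, background_hists)]
    total = sum(weights) or 1.0
    return sum(w * s for w, s in zip(weights, scores)) / total

# A person thermally distinct from a cold scene but visually camouflaged:
thermal_t, thermal_bg = [0.9, 0.1], [0.1, 0.9]   # very distinct
visual_t, visual_bg = [0.5, 0.5], [0.5, 0.5]     # camouflaged
score = fused_score([0.8, 0.3],
                    [thermal_t, visual_t],
                    [thermal_bg, visual_bg])
print(round(score, 2))  # 0.8: the thermal modality dominates
```

When the visible channel carries no discriminative information, its weight collapses to zero and the fused decision follows the thermal channel, which is the "camouflage target" behaviour the abstract describes.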
  •
    ABSTRACT: This paper demonstrates two fusion techniques at two different levels of a human face recognition process: data fusion at the lower level and decision fusion towards the end of the recognition process. First, data fusion is applied to visual and corresponding thermal images to generate a fused image. Data fusion is implemented in the wavelet domain after decomposing the images with Daubechies wavelet coefficients (db2); during fusion, the maxima of the approximation and of the three detail coefficients are merged together. Principal Component Analysis (PCA) is then applied to the fused coefficients, and two artificial neural networks, a Multilayer Perceptron (MLP) and a Radial Basis Function (RBF) network, are used separately to classify the images. Finally, for decision fusion, the decisions from both classifiers are combined using a Bayesian formulation. For the experiments, the IRIS thermal/visible face database has been used. Experimental results show that the multiple-classifier system with decision fusion outperforms the single-classifier system.
    06/2011;
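The wavelet-domain data fusion described in the last abstract can be sketched in a few lines. This is a simplified illustration, not the paper's method: it uses a one-level Haar transform on 1-D signals in place of the db2 decomposition of 2-D images, with the coefficient-wise maximum (maximum magnitude for detail coefficients, a common convention) merging the two modalities.

```python
# Toy sketch of wavelet-domain fusion: transform both registered signals,
# take the max coefficient per position, then invert the transform.

def haar_1d(signal):
    """One-level Haar transform: (approximation, detail) coefficients."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def inverse_haar_1d(approx, detail):
    """Reconstruct the signal from one-level Haar coefficients."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

def fuse_max(visual, thermal):
    """Fuse two registered 1-D 'images' coefficient-wise."""
    va, vd = haar_1d(visual)
    ta, td = haar_1d(thermal)
    fa = [max(x, y) for x, y in zip(va, ta)]            # max approximation
    fd = [max(x, y, key=abs) for x, y in zip(vd, td)]   # max-magnitude detail
    return inverse_haar_1d(fa, fd)

fused = fuse_max([2.0, 2.0, 8.0, 0.0], [6.0, 6.0, 1.0, 1.0])
print(fused)  # [6.0, 6.0, 8.0, 0.0]
```

Taking the maximum approximation keeps the brighter (better-illuminated or warmer) base image, while the maximum-magnitude detail keeps the sharper edges from whichever modality has them.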
