Conference Paper

Face Detection in Low-Resolution Color Images

DOI: 10.1007/978-3-642-13772-3_46 Conference: Image Analysis and Recognition, 7th International Conference, ICIAR 2010, Póvoa de Varzim, Portugal, June 21-23, 2010. Proceedings, Part I
Source: DBLP

ABSTRACT

Most face detection methods require high or medium resolution face images to attain satisfactory results. However, in many
surveillance applications, where there is a need to image wide fields of view, faces cover just a few pixels, which makes
their detection difficult. Despite its importance, little work has been aimed at providing reliable detection at these low
resolutions. In this work, we study the relationship between resolution and the automatic face detection rate with the Modified
Census Transform, one of the most successful algorithms for face detection presented to date, and propose a new Color Census
Transform that provides significantly better results than the original when applied to low-resolution color images.
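The Modified Census Transform underlying this study compares each pixel of a 3×3 neighborhood against the neighborhood mean (rather than the center pixel, as in the original Census Transform), yielding a 9-bit structure index per location. A minimal NumPy sketch of that operator follows; this is an illustration of the standard MCT, not the paper's implementation, and the proposed Color Census Transform (which the abstract only names) would build on this operator for color images:

```python
import numpy as np

def modified_census_transform(img):
    """Modified Census Transform: compare each pixel of every 3x3
    neighborhood to the neighborhood mean and pack the 9 comparison
    bits into a single structure-kernel index (0..510)."""
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint16)
    for y in range(h - 2):
        for x in range(w - 2):
            patch = img[y:y + 3, x:x + 3]
            bits = (patch.ravel() > patch.mean()).astype(np.uint16)
            # Row-major bit order: top-left pixel is the most significant bit.
            out[y, x] = int("".join(map(str, bits)), 2)
    return out

# A single bright center pixel sets only the middle bit: 000010000b = 16.
print(modified_census_transform([[0, 0, 0], [0, 9, 0], [0, 0, 0]]))  # [[16]]
```

Because the comparison is against the local mean, the index is invariant to monotonic illumination shifts within the window, which is what makes MCT-style features attractive for boosting-based face detectors.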

Available from: Olac Luis Fuentes
  • Source
    • "However their higher resolutions can still be considered as very low. For instance, [32] showed improved detection rate from 6 × 6 to 24 × 24 resolution using Modified Census Transform (MCT) and Boosting on colour […] When considering the results of Figure 11, the Sobel expression classifiers show best performance." (Fig. 10: Expression classifiers over Sobel-detected faces using the 25% bound.)
    ABSTRACT: The aim of this study is to simulate the pareidolia capability of humans to produce an emotional response to a scene using analysis of facial expressions associated with abstract face-like patterns. We developed a system that uses a holistic face detector and a facial expression classifier. The ν and SVDD One-Class Support Vector Machines (SVM) were evaluated for creating a holistic face detector, which looks for faces that can vary from natural faces to minimal face-like patterns. A Pairwise Adaptive C- and ν-SVM (pa-SVM) were evaluated for creating the facial expression classifier. In both scenarios, a dataset of human faces and facial expressions was used to produce a number of preprocessed images (grayscale; histogram-equalised grayscale; and their respective Sobel and Canny edges) at a number of resolutions for analysis. A Gaussian and a degree-two polynomial kernel were used with the SVM methods, and the results were obtained using a 10-fold cross-validation technique. A concern with the face detectors is verifying empirically that they can look for minimal face-like patterns. To address this concern, we created cartoon faces of the human face dataset and degraded these cartoon faces to produce an array of minimal face-like patterns. We then evaluated the face detectors and facial expression classifiers with the best model parameters on these cartoon faces. The outcome is a holistic system with the potential to describe a scene by producing an array of emotion scores corresponding to Ekman's seven Universal Facial Expressions of Emotion.
    Conference Paper · Apr 2013
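    The one-class detection setting described in the abstract above can be sketched with scikit-learn's OneClassSVM: the detector is trained only on positive (face) examples and rejects everything outside the learned class. The data here are hypothetical random stand-ins, not the study's face dataset, and the hyperparameters are illustrative:

    ```python
    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(0)
    # Hypothetical stand-ins for flattened grayscale patches.
    faces = rng.normal(loc=0.5, scale=0.1, size=(200, 64))   # "face" class only
    clutter = rng.normal(loc=0.0, scale=0.3, size=(50, 64))  # unseen non-faces

    # nu upper-bounds the fraction of training faces treated as outliers.
    detector = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(faces)

    pred_faces = detector.predict(faces)      # +1 = accepted as face-like
    pred_clutter = detector.predict(clutter)  # -1 = rejected as non-face
    ```

    Training on faces alone is what lets such a detector fire on anything "face-like", from natural faces down to degraded cartoon patterns, without ever enumerating the negative class.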
  • ABSTRACT: This work presents a novel approach for pedestrian head localization and head pose estimation in single images. The presented method addresses an environment of low-resolution gray-value images taken from a moving camera with large variations in illumination and object appearance. The proposed algorithms are based on normalized detection confidence values of separate, pose-associated classifiers. Those classifiers are trained using a modified one-vs.-all framework that tolerates outliers appearing in continuous head-pose classes. Experiments on a large set of real-world data show very good head localization and head pose estimation results even on the smallest considered head size of 7×7 pixels. These results can be obtained in a probabilistic form, which makes them of great value for pedestrian path prediction and risk assessment systems within video-based driver assistance systems and many other applications.
    Conference Paper · Aug 2011
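    The probabilistic head-pose output described above can be illustrated by normalizing the confidence values of separate pose-associated classifiers into a distribution over pose bins. The scores, bin angles, and the softmax normalization here are illustrative assumptions; the paper itself only states that normalized detection confidences are used:

    ```python
    import numpy as np

    # Hypothetical raw confidences from one classifier per head-pose bin.
    pose_bins = [-90, -45, 0, 45, 90]  # discretised yaw angles in degrees
    raw_scores = np.array([0.2, 1.1, 3.0, 1.4, 0.1])

    # Softmax-normalise into a probability distribution over poses,
    # suitable for downstream path prediction or risk assessment.
    probs = np.exp(raw_scores) / np.exp(raw_scores).sum()
    estimate = pose_bins[int(np.argmax(probs))]
    ```

    Keeping the full distribution, rather than just the arg-max pose, is what makes the output useful in a probabilistic driver-assistance pipeline.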