Conference Paper

Using 3D computer graphics for perception: the role of local and global information in face processing.

DOI: 10.1145/1272582.1272586 Conference: Proceedings of the 4th Symposium on Applied Perception in Graphics and Visualization (APGV 2007), Tübingen, Germany
Source: DBLP

ABSTRACT Everyday life requires us to recognize faces under transient changes in pose, expression, and lighting conditions. Despite this variability, humans are adept at recognizing familiar faces. In this study, we focused on determining the types of information human observers use to recognize faces across variations in viewpoint. Of specific interest was whether holistic information is used exclusively, or whether the local information contained in facial parts (featural or component information), as well as their spatial relationships (configural information), is also encoded. A rigorous study investigating this question has not previously been possible, as the generation of a suitable set of stimuli using standard image manipulation techniques was not feasible. A 3D database of faces that have been processed to extract morphable models (Blanz & Vetter, 1999) allows us to generate such stimuli efficiently and with a high degree of control over display parameters. Three experiments were conducted, modeled after the inter-extra-ortho experiments of Bülthoff & Edelman (1992). The first experiment served as a baseline for the subsequent two. Ten face stimuli were presented from a frontal view and from a 45-degree side view; at test, they had to be recognized among ten distractor faces shown from different viewpoints. We found systematic effects of viewpoint: recognition performance increased as the angle between the learned view and the tested view decreased. This finding is consistent with face processing models based on 2D-view interpolation. Experiments 2 and 3 were the same as Experiment 1, except that in the testing phase the faces were presented scrambled or blurred. Scrambling was used to isolate featural from configural information; blurring was used to provide stimuli in which local featural information was reduced. The results demonstrated that human observers are capable of recognizing faces across different viewpoints on the sole basis of isolated featural information and of isolated configural information.
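The two test-phase manipulations are straightforward to reproduce in principle. Below is a minimal sketch, not the authors' actual stimulus pipeline: scrambling shuffles image tiles, preserving featural content while destroying configural relations, and Gaussian blurring attenuates featural detail while leaving the configuration intact. The tile count, blur radius, and file names are assumed values for illustration only.

    # Sketch only: tile count and blur radius are assumed, not the paper's values.
    import random
    from PIL import Image, ImageFilter

    def scramble(img, tiles=4):
        """Cut the image into a tiles x tiles grid and shuffle the pieces,
        keeping featural (part) information but destroying configural
        (spatial-relation) information."""
        w, h = img.size
        tw, th = w // tiles, h // tiles
        boxes = [(x * tw, y * th, (x + 1) * tw, (y + 1) * th)
                 for y in range(tiles) for x in range(tiles)]
        pieces = [img.crop(b) for b in boxes]
        random.shuffle(pieces)
        out = Image.new(img.mode, (tw * tiles, th * tiles))
        for box, piece in zip(boxes, pieces):
            out.paste(piece, box[:2])   # all pieces share one tile size
        return out

    def blur(img, radius=8.0):
        """Low-pass filter the image, reducing local featural detail while
        the configuration of the parts remains visible."""
        return img.filter(ImageFilter.GaussianBlur(radius))

    face = Image.open("face.png")              # hypothetical input image
    scramble(face).save("face_scrambled.png")
    blur(face).save("face_blurred.png")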

  • "More recently, Schwaninger, et al. (2007) concluded that humans exhibit the capability to recognize faces based on information drawn from isolated features (i.e., components) [3]. In fact, Gold, et al. [21] present evidence supporting the idea that face processing is the result of the integration of individual component processing."
    ABSTRACT: This paper presents a framework for component-based face alignment and representation that demonstrates improvements in matching performance over the more common holistic approach to face alignment and representation. This work is motivated by recent evidence from the cognitive science community demonstrating the efficacy of component-based facial representations. The component-based framework presented in this paper consists of the following major steps: 1) landmark extraction using Active Shape Models (ASM), 2) alignment and cropping of components using Procrustes Analysis, 3) representation of components with Multiscale Local Binary Patterns (MLBP), 4) per-component measurement of facial similarity, and 5) fusion of per-component similarities. We demonstrate, on three public datasets and an operational dataset consisting of face images of 8,000 subjects, that the proposed component-based representation provides higher recognition accuracies than holistic representations. Additionally, we show that the proposed component-based representations 1) are more robust to changes in facial pose, and 2) improve recognition accuracy on occluded face images in forensic scenarios.
    IEEE Transactions on Information Forensics and Security, 8(1):239–253, Jan 2013. DOI: 10.1109/TIFS.2012.2226580
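    The five numbered steps of this pipeline map onto standard library routines. A minimal sketch of steps 2–5 follows, assuming the step-1 ASM landmarks and cropped component patches are already available; chi-square similarity and sum-rule fusion are illustrative choices for steps 4 and 5, not necessarily the paper's exact measures.

    import numpy as np
    from scipy.spatial import procrustes
    from skimage.feature import local_binary_pattern

    def align(landmarks, reference):
        """Step 2: Procrustes-align one face's landmark shape to a
        reference shape (both are N x 2 arrays)."""
        _, aligned, _ = procrustes(reference, landmarks)
        return aligned

    def mlbp_histogram(patch, radii=(1, 2, 3)):
        """Step 3: multiscale LBP -- concatenate uniform-LBP histograms
        computed at several radii over one grayscale component patch."""
        hists = []
        for r in radii:
            codes = local_binary_pattern(patch, P=8 * r, R=r, method="uniform")
            n_bins = 8 * r + 2   # uniform LBP with P points yields P + 2 codes
            h, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
            hists.append(h)
        return np.concatenate(hists)

    def component_similarity(h1, h2, eps=1e-10):
        """Step 4: negated chi-square distance, so larger means more similar."""
        return -0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

    def fused_score(patches_a, patches_b):
        """Step 5: sum-rule fusion of the per-component similarities."""
        return sum(component_similarity(mlbp_histogram(a), mlbp_histogram(b))
                   for a, b in zip(patches_a, patches_b))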
  • ABSTRACT: The hypothesis of the present study is that features of abstract face-like patterns can be perceived in the architectural design of selected house façades and trigger emotional responses of observers. In order to simulate this phenomenon, which is a form of pareidolia, a software system for pattern recognition based on statistical learning was applied. One-class classification was used for face detection and an eight-class classifier was employed for facial expression analysis. The system was trained by means of a database consisting of 280 frontal images of human faces that were normalised to the inner eye corners. A separate set of test images contained human facial expressions and selected house façades. The experiments demonstrated how facial expression patterns associated with emotional states such as surprise, fear, happiness, sadness, anger, disgust, contempt or neutrality could be identified in both types of test images, and how the results depended on preprocessing and parameter selection for the classifiers.
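    An illustrative sketch of this two-stage scheme, using scikit-learn SVMs: a one-class model flags face-like patterns, then an eight-class model labels the expression. The feature vectors, file names, and SVM parameters are assumptions, not the paper's actual configuration.

    import numpy as np
    from sklearn.svm import OneClassSVM, SVC

    EXPRESSIONS = ["surprise", "fear", "happiness", "sadness",
                   "anger", "disgust", "contempt", "neutral"]

    # Hypothetical training data: feature vectors from the 280 normalised
    # frontal face images, plus their expression labels (indices above).
    X_faces = np.load("face_features.npy")
    y_expr = np.load("expression_labels.npy")

    detector = OneClassSVM(gamma="scale", nu=0.1).fit(X_faces)
    classifier = SVC(kernel="rbf", gamma="scale").fit(X_faces, y_expr)

    def analyze(x):
        """Return the detected expression for a pattern, or None when the
        one-class detector does not consider the pattern face-like."""
        if detector.predict(x.reshape(1, -1))[0] != 1:
            return None                  # not face-like
        return EXPRESSIONS[int(classifier.predict(x.reshape(1, -1))[0])]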
  • ABSTRACT: Most current psychological theories of face recognition suggest that faces are stored as multiple 2D views. This research aims to explore the application of 3D face representations by means of a new paradigm. Participants were required to match frontal views of faces to silhouettes of the same faces. The formats of the face stimuli were modified in different experiments to make 3D representations accessible (Experiments 1 and 2) or inaccessible (Experiment 3). Algorithms based on multiple 2D views were not applicable because only a single frontal view was provided. The results revealed the use and adaptability of 3D face representations: participants could readily solve the task when the face images retained the information essential for forming a 3D face representation, but performance declined substantially when the 3D information in the faces was eliminated (Experiment 3). Performance also varied between face orientations and between participant groups.
    Vision Research, 51(9):969–977, Feb 2011. DOI: 10.1016/j.visres.2011.02.006