Conference Paper

Using 3D computer graphics for perception: the role of local and global information in face processing.

DOI: 10.1145/1272582.1272586 · Conference: Proceedings of the 4th Symposium on Applied Perception in Graphics and Visualization (APGV 2007), Tübingen, Germany
Source: DBLP

ABSTRACT Everyday life requires us to recognize faces under transient changes in pose, expression, and lighting. Despite these variations, humans are adept at recognizing familiar faces. In this study, we focused on determining the types of information human observers use to recognize faces across variations in viewpoint. Of specific interest was whether holistic information is used exclusively, or whether the local information contained in facial parts (featural or component information), as well as their spatial relationships (configural information), is also encoded. A rigorous study investigating this question has not previously been possible, as generating a suitable set of stimuli using standard image manipulation techniques was not feasible. A 3D database of faces that have been processed to extract morphable models (Blanz & Vetter, 1999) allows us to generate such stimuli efficiently and with a high degree of control over display parameters. Three experiments were conducted, modeled after the inter-extra-ortho experiments of Bülthoff & Edelman (1992). The first experiment served as a baseline for the subsequent two. Ten face stimuli were presented from a frontal view and from a 45-degree side view. At test, these faces had to be recognized among ten distractor faces shown from different viewpoints. We found systematic effects of viewpoint: recognition performance increased as the angle between the learned and tested views decreased. This finding is consistent with face-processing models based on 2D-view interpolation. Experiments 2 and 3 were identical to Experiment 1, except that in the testing phase the faces were presented scrambled or blurred. Scrambling was used to isolate featural from configural information; blurring was used to provide stimuli in which local featural information was reduced. The results demonstrate that human observers can recognize faces across different viewpoints on the sole basis of isolated featural information, and likewise on the basis of isolated configural information.
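The scrambling and blurring manipulations used in Experiments 2 and 3 are simple image operations. Below is a minimal sketch of both, assuming square grayscale face images stored as 2D NumPy arrays with dimensions divisible by the grid size; the grid size and blur width are illustrative choices, not the parameters used in the study.

```python
# Minimal sketch of the two test-phase manipulations (assumptions: square
# grayscale images as 2D NumPy arrays, dimensions divisible by the grid
# size; grid size and blur width are illustrative, not the study's values).
import numpy as np
from scipy.ndimage import gaussian_filter

def scramble(img, grid=3, rng=None):
    """Cut the image into grid x grid tiles and shuffle their positions:
    local (featural) content survives, but the spatial (configural)
    layout is destroyed."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = img.shape
    th, tw = h // grid, w // grid
    tiles = [img[r*th:(r+1)*th, c*tw:(c+1)*tw].copy()
             for r in range(grid) for c in range(grid)]
    out = np.zeros_like(img)
    for i, k in enumerate(rng.permutation(len(tiles))):
        r, c = divmod(i, grid)
        out[r*th:(r+1)*th, c*tw:(c+1)*tw] = tiles[k]
    return out

def blur(img, sigma=8.0):
    """Low-pass filter the image: fine featural detail is removed while
    the configural layout of the parts is preserved."""
    return gaussian_filter(img.astype(float), sigma=sigma)
```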

Related publications:

  • ABSTRACT: In recent years, data-driven speech animation approaches have achieved significant success in animation quality. However, automatically evaluating the realism of newly synthesized speech animations remains an important, unsolved research problem. In this paper we propose a novel statistical model (called SAQP) to automatically predict the quality of on-the-fly synthesized speech animations generated by various data-driven techniques. Its essential idea is to construct a phoneme-based Speech Animation Trajectory Fitting (SATF) metric to describe speech animation synthesis errors, and then to build a statistical regression model to learn the association between the obtained SATF metric and the objective speech animation synthesis quality (a schematic sketch of this two-stage idea appears after this list). Through carefully designed user studies, we evaluate the effectiveness and robustness of the proposed SAQP model. To the best of our knowledge, this is the first quantitative quality model for data-driven speech animation, and an important first step toward removing a critical technical barrier to applying data-driven speech animation in online and interactive talking-avatar applications.
    IEEE Transactions on Visualization and Computer Graphics, 02/2012.
  • ABSTRACT: The hypothesis of the present study is that features of abstract face-like patterns can be perceived in the architectural design of selected house façades and can trigger emotional responses in observers. To simulate this phenomenon, which is a form of pareidolia, a software system for pattern recognition based on statistical learning was applied. One-class classification was used for face detection, and an eight-class classifier was employed for facial expression analysis (see the classification sketch after this list). The system was trained on a database of 280 frontal images of human faces normalised to the inner eye corners. A separate set of test images contained human facial expressions and selected house façades. The experiments demonstrated how facial expression patterns associated with emotional states such as surprise, fear, happiness, sadness, anger, disgust, contempt, or neutrality could be identified in both types of test images, and how the results depended on preprocessing and on parameter selection for the classifiers.
    01/2009.
  • ABSTRACT: Most current psychological theories of face recognition suggest that faces are stored as multiple 2D views. This research explores the use of 3D face representations by means of a new paradigm. Participants were required to match frontal views of faces to silhouettes of the same faces. The format of the face stimuli was modified across experiments to make 3D representations accessible (Experiments 1 and 2) or inaccessible (Experiment 3). Strategies based on multiple 2D views were not applicable because only a single frontal view of each face was provided. The results revealed the use and adaptability of 3D face representations: participants could readily solve the task when the face images retained the information essential for forming a 3D face representation, but performance declined substantially when the 3D information was eliminated (Experiment 3). Performance also varied between face orientations and between participant groups.
    Vision Research, 02/2011; 51(9):969-77.
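As a rough illustration of the two-stage idea in the speech-animation abstract above (a trajectory-fitting error metric feeding a quality regressor), the following sketch uses scikit-learn's support vector regression. The error function, feature layout, and data are placeholders invented for this example; the paper's actual SATF metric and model are not reproduced here.

```python
# Illustrative two-stage pipeline: per-phoneme trajectory-fitting error,
# then regression from those errors to a perceived-quality score. All
# data below are random stand-ins, not results from the paper.
import numpy as np
from sklearn.svm import SVR

def fitting_error(synthesized, fitted):
    """Placeholder error term: RMS deviation between a synthesized
    animation trajectory and its fitted reference trajectory."""
    return float(np.sqrt(np.mean((synthesized - fitted) ** 2)))

rng = np.random.default_rng(0)
X_train = rng.random((100, 8))    # stand-in per-phoneme error features
y_train = rng.random(100) * 5.0   # stand-in quality ratings (0-5)

model = SVR(kernel="rbf").fit(X_train, y_train)
quality = model.predict(rng.random((1, 8)))  # predicted score, new clip
```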
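The façade-pareidolia abstract likewise describes a two-model split: a one-class detector decides whether a patch is face-like at all, and an eight-class model labels the expression. Below is a minimal sketch of that split, with random stand-ins for the 280 normalised training faces; the features and parameters are assumptions, not the study's configuration.

```python
# Detection/classification split: OneClassSVM for face-vs-non-face,
# SVC for the eight expression classes. Features are random stand-ins.
import numpy as np
from sklearn.svm import OneClassSVM, SVC

EXPRESSIONS = ["surprise", "fear", "happiness", "sadness",
               "anger", "disgust", "contempt", "neutral"]

rng = np.random.default_rng(1)
faces = rng.random((280, 64))              # stand-in feature vectors
labels = rng.integers(0, 8, size=280)      # stand-in expression labels

detector = OneClassSVM(nu=0.1).fit(faces)  # is the patch face-like?
classifier = SVC().fit(faces, labels)      # which expression?

def analyse(patch):
    """Label a feature vector: expression name if face-like, else None."""
    if detector.predict(patch.reshape(1, -1))[0] == 1:
        return EXPRESSIONS[int(classifier.predict(patch.reshape(1, -1))[0])]
    return None
```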
