Conference Paper

Cloning of Facial Expressions Using Spatial Information

Multimedia & Intelligent Software Technology Beijing Municipal Key Laboratory, Beijing University of Technology, Beijing
DOI: 10.1109/CGIV.2007.25
Conference: Computer Graphics, Imaging and Visualisation, 2007 (CGIV '07)
Source: IEEE Xplore


In virtual worlds, it remains a challenging research issue to make an animated human face model appear natural. In this paper, we present a novel facial animation method for expression cloning that directly maps an expression of a source model onto the surface of a target model. Requiring little time, our method reuses the 3D motion vectors of the source model's vertices to create similar animations on a new target model that has the same mesh structure.
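The transfer step can be made concrete with a short sketch. The minimal NumPy example below assumes, as the abstract states, that source and target share the same mesh structure (a one-to-one vertex correspondence); the function name clone_expression and the bounding-box rescaling of the motion vectors are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def clone_expression(src_neutral, src_expression, tgt_neutral):
    # 3D motion vectors of the source model's vertices.
    motion = src_expression - src_neutral

    # Illustrative assumption: rescale each motion vector by the ratio
    # of the models' bounding-box sizes so the displacement magnitude
    # matches the target's proportions.
    src_size = src_neutral.max(axis=0) - src_neutral.min(axis=0)
    tgt_size = tgt_neutral.max(axis=0) - tgt_neutral.min(axis=0)
    motion = motion * (tgt_size / src_size)

    # Same mesh structure: vertex i of the source corresponds to
    # vertex i of the target, so the vectors apply directly.
    return tgt_neutral + motion

# Usage: all arguments are (N, 3) vertex arrays of the two models.
# target_expression = clone_expression(S_neutral, S_smile, T_neutral)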

Related publications:
  • ABSTRACT: This paper introduces two frameworks for head and facial animation tracking. The first framework introduces a particle-filter tracker capable of tracking the 3D head pose using a statistical facial texture model. The second framework introduces an appearance-adaptive tracker capable of tracking the 3D head pose and the facial animations in real time. This framework has the merits of both deterministic and stochastic approaches. It consists of an online adaptive observation model of the face texture together with an adaptive transition motion model. The latter is based on a registration technique between the appearance model and the incoming observation. The second framework extends the concept of Online Appearance Models to the case of tracking 3D non-rigid face motion (3D head pose and facial animations). Tracking of long video sequences demonstrated the effectiveness of the developed methods. Accurate tracking was obtained even in the presence of perturbing factors such as illumination changes, significant head pose and facial expression variations, and occlusions.
    Computer Vision and Pattern Recognition Workshop, 2004 Conference on; 07/2004
    (A minimal particle-filter sketch follows this list.)
  • ABSTRACT: This paper presents an automatic FDP (Facial Definition Parameters) and FAP (Facial Animation Parameters) generation method from an image sequence that captures a frontal face. The proposed method is based on facial feature tracking without markers on the face. We present an efficient method to extract 2D facial features and to generate the FDP by applying the 2D features to a generic face model. We also propose a template-matching-based FAP generation method. The advantage of this approach is that it can be easily applied to single-camera MPEG-4 SNHC encoding systems.
    Circuits and Systems, 2000. Proceedings. ISCAS 2000 Geneva. The 2000 IEEE International Symposium on; 02/2000
    (A template-matching sketch follows this list.)
  • ABSTRACT: This paper describes a prototype implementation of a speech-driven facial animation system for embedded devices. The system comprises speech recognition and talking-head synthesis. A context-based visubsyllable database is set up to map Chinese initials and finals to their corresponding mouth shapes. With the database, 3D facial animation can be synthesized from the speech signal input. Experimental results show that the system works well in simulating real mouth shapes and provides a friendly interface on communication terminals.
    Embedded Software and Systems, 2005. Second International Conference on; 01/2006
    (A viseme-lookup sketch follows this list.)
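
The first related entry tracks 3D head pose with a particle filter evaluated against a facial texture model. The sketch below shows the generic predict/update/resample loop that such a tracker runs each frame; the 6-DOF state layout, the Gaussian transition noise, and the exp(-distance) likelihood are illustrative assumptions, not the paper's exact observation model.

import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, observe, noise_std):
    # particles: (P, 6) array, each row a head-pose hypothesis
    # (3 rotation angles + 3 translation components).
    # observe: maps a pose to an appearance distance against the
    # texture model (smaller = better match).

    # Predict: diffuse each hypothesis with Gaussian transition noise.
    particles = particles + rng.normal(0.0, noise_std, particles.shape)

    # Update: turn appearance distances into normalized weights.
    dists = np.array([observe(p) for p in particles])
    weights = np.exp(-dists)
    weights /= weights.sum()

    # Estimate: weighted mean over the particle set.
    estimate = weights @ particles

    # Resample: draw a new particle set proportionally to the weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], estimate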
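The second entry generates MPEG-4 FAPs by markerless template matching on facial features. The fragment below shows the basic mechanics with OpenCV's normalized cross-correlation matcher; the FAPU-normalization helper and the assumption of a single tracked feature point are illustrative, not the paper's full pipeline.

import cv2
import numpy as np

def track_feature(frame_gray, template):
    # Slide the feature template over the frame and return the
    # top-left corner of the best-scoring match.
    scores = cv2.matchTemplate(frame_gray, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, best = cv2.minMaxLoc(scores)  # best = (x, y) of max score
    return np.array(best, dtype=float)

def displacement_to_fap(pos, neutral_pos, fapu):
    # MPEG-4 FAPs are expressed relative to facial animation parameter
    # units (FAPU) measured on the neutral face, so a feature's motion
    # is normalized by the appropriate FAPU before encoding.
    return (pos - neutral_pos) / fapu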
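The third entry drives mouth shapes from recognized Chinese initials and finals via a database lookup. Below is a toy version of that idea; the table entries, the three-parameter mouth representation, and the linear interpolation scheme are invented for illustration and do not reproduce the paper's visubsyllable database.

# Hypothetical mouth-shape table: each initial/final maps to a small
# vector of mouth parameters (width, opening, protrusion).
VISEME_TABLE = {
    "b":  (0.1, 0.0, 0.2),   # bilabial closure
    "a":  (0.6, 0.9, 0.1),   # wide open vowel
    "o":  (0.3, 0.5, 0.8),   # rounded vowel
    "sh": (0.3, 0.2, 0.6),
}

def synthesize_mouth_track(subsyllables, frames_per_unit=5):
    # Expand a recognized initial/final sequence into a per-frame
    # mouth-shape track by interpolating between table entries.
    keys = [VISEME_TABLE[s] for s in subsyllables]
    track = []
    for a, b in zip(keys, keys[1:]):
        for t in range(frames_per_unit):
            alpha = t / frames_per_unit
            track.append(tuple((1 - alpha) * x + alpha * y
                               for x, y in zip(a, b)))
    track.append(keys[-1])
    return track

# Example: mouth shapes for "b" -> "a" -> "o".
print(synthesize_mouth_track(["b", "a", "o"]))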