Cloning of Facial Expressions Using Spatial Information
ABSTRACT In the virtual world, making an animated human face model appear natural remains a challenging research problem. In this paper, we present a novel facial animation method based on expression cloning, which directly maps an expression of a source model onto the surface of a target model. With little computation time, our method reuses the 3D motion vectors of the source model's vertices to create similar animations on a new target model that shares the same mesh structure.
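Since source and target share the same mesh structure, the core transfer step can be sketched as copying per-vertex motion vectors. A minimal illustration, assuming both meshes are vertex-aligned NumPy arrays (the function name and toy data are hypothetical, not from the paper):

```python
import numpy as np

def clone_expression(src_neutral, src_expr, tgt_neutral):
    """Transfer per-vertex 3D motion vectors from a source mesh onto a
    target mesh with the same vertex count and topology (the paper's
    same-mesh-structure assumption)."""
    # Motion vector of each source vertex between neutral and expression poses.
    motion = src_expr - src_neutral
    # Apply the same displacements to the target's neutral pose.
    return tgt_neutral + motion

# Toy 3-vertex meshes (each row is an xyz coordinate).
src_neutral = np.zeros((3, 3))
src_expr = np.array([[0.1, 0.0, 0.0],
                     [0.0, 0.2, 0.0],
                     [0.0, 0.0, 0.0]])
tgt_neutral = np.ones((3, 3))
tgt_expr = clone_expression(src_neutral, src_expr, tgt_neutral)
```

A full system would additionally rescale or reorient the motion vectors to account for differences in face proportions; this sketch shows only the direct vector reuse.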
Conference Paper: Facial animation based on muscular contraction parameters
ABSTRACT: We present a new approach to estimating muscular contraction parameters for the purpose of animating facial expressions. In this paper, first, the facial surface feature points of a face image are detected using image processing methods. Next, the muscular contraction parameters are estimated from the displacement between the neutral expression and an arbitrary expression of the facial model wireframe fitting, based on the detected facial surface feature points. Finally, the facial expression is generated using the vertex displacements of an individual facial model driven by the estimated muscular contraction parameters. Facial animation can then be generated from image sequences. Experimental results reveal that our approach can generate facial animation of the individual facial model that corresponds to the facial expressions in the actual face image sequences.
Systems, Man and Cybernetics, 2005 IEEE International Conference on; 11/2005
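Under a linear muscle model, estimating contraction parameters from feature-point displacements can be framed as a least-squares problem. A sketch of that idea, assuming each column of a (hypothetical) basis matrix holds the displacement field produced by unit contraction of one muscle; the names and data here are illustrative, not the paper's actual formulation:

```python
import numpy as np

def estimate_contractions(basis, displacement):
    """Least-squares estimate of contraction parameters c minimizing
    |basis @ c - displacement|, given a linear muscle model."""
    c, *_ = np.linalg.lstsq(basis, displacement, rcond=None)
    return c

# Hypothetical basis: 6 feature-point coordinates, 2 muscles.
basis = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0],
                  [2.0, 0.0],
                  [0.0, 3.0],
                  [1.0, 2.0]])
c_true = np.array([0.5, 1.2])
displacement = basis @ c_true          # observed feature displacements
c_est = estimate_contractions(basis, displacement)
```

With noise-free synthetic displacements the fit recovers the true parameters exactly; real feature tracks would make this an overdetermined noisy fit, which `lstsq` handles in the same call.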
Conference Paper: Facial animation system for embedded application
ABSTRACT: This paper describes a prototype implementation of a speech-driven facial animation system for embedded devices. The system comprises speech recognition and talking-head synthesis. A context-based visubsyllable database is set up to map Chinese initials and finals to their corresponding mouth shapes. With this database, 3D facial animation can be synthesized from a speech signal input. Experimental results show that the system works well in simulating real mouth shapes and providing a friendly interface in communication terminals.
Embedded Software and Systems, 2005. Second International Conference on; 01/2006
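The sub-syllable-to-mouth-shape mapping at the heart of such a system is essentially a lookup table keyed by recognized initials and finals. A minimal sketch, with a hypothetical database and shape names (the paper's actual context-based visubsyllable database is not reproduced here):

```python
# Hypothetical mapping from Chinese initials/finals to mouth-shape IDs.
VISEME_DB = {
    "b": "closed_lips",   # bilabial initial
    "zh": "rounded",      # retroflex initial
    "a": "open_wide",     # open final
    "u": "pursed",        # rounded final
}

def syllable_to_visemes(initial, final):
    """Map a recognized (initial, final) pair to a mouth-shape sequence,
    falling back to a neutral shape for unknown sub-syllables."""
    return [VISEME_DB.get(initial, "neutral"),
            VISEME_DB.get(final, "neutral")]

shapes = syllable_to_visemes("b", "a")
```

A context-based version would key the lookup on neighboring sub-syllables as well, to capture coarticulation; the flat dictionary above shows only the basic mapping step.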
Conference Paper: Automatic FDP/FAP generation from an image sequence
ABSTRACT: This paper presents an automatic FDP (Facial Definition Parameters) and FAP (Facial Animation Parameters) generation method from an image sequence that captures a frontal face. The proposed method is based on facial feature tracking without markers on the face. We present an efficient method to extract 2D facial features and to generate the FDP by applying the 2D features to a generic face model. We also propose a template-matching-based FAP generation method. The advantage of this approach is that it can be easily applied to single-camera MPEG-4 SNHC encoding systems.
Circuits and Systems, 2000. Proceedings. ISCAS 2000 Geneva. The 2000 IEEE International Symposium on; 02/2000
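Template-matching-based feature tracking of the kind described above can be sketched with brute-force normalized cross-correlation: a patch around each feature in one frame is searched for in the next. This is a generic illustration, not the paper's specific matcher:

```python
import numpy as np

def match_template(image, template):
    """Brute-force normalized cross-correlation search; returns the
    top-left (row, col) of the best-matching window."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()          # zero-mean template
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()        # zero-mean window
            denom = np.sqrt((p * p).sum() * (t * t).sum())
            score = (p * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

# Toy example: embed a 2x2 template in a blank 10x10 frame.
template = np.array([[1.0, 2.0],
                     [3.0, 4.0]])
image = np.zeros((10, 10))
image[4:6, 5:7] = template
pos = match_template(image, template)
```

For tracking, the search is usually restricted to a small window around the feature's previous position, which keeps this O(image x template) cost manageable on each frame.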