Conference Paper

Sketching-out virtual humans: from 2D storyboarding to immediate 3D character animation.

DOI: 10.1145/1178823.1178913 Conference: Proceedings of the International Conference on Advances in Computer Entertainment Technology, ACE 2006, Hollywood, California, USA, June 14-16, 2006
Source: DBLP

ABSTRACT Virtual beings play a remarkable role in today's public entertainment, yet ordinary users remain mere spectators for lack of appropriate expertise, equipment, and computer skills. In this paper, we present a fast and intuitive storyboarding interface that enables users to sketch out 3D virtual humans, 2D/3D animations, and character intercommunication. We devised an intuitive "stick figure→fleshing-out→skin mapping" graphical animation pipeline, which realises the whole process of key framing, 3D pose reconstruction, virtual human modelling, motion path/timing control, and final animation synthesis through almost pure 2D sketching. A "creative model-based method" is developed, which emulates the human perception process, to generate 3D human bodies of varying sizes, shapes, and fat distributions. Our current system also supports sketch-based crowd animation and the storyboarding of 3D multi-character intercommunication. The system has been formally tested by various users on Tablet PCs. After minimal training, even a beginner can create vivid virtual humans and animate them within minutes.
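The "3D pose reconstruction" stage of the pipeline above can be illustrated with a minimal sketch: lifting a 2D stick-figure chain to a 3D pose under known bone lengths, using the classic foreshortening constraint dz = √(L² − dx² − dy²). The joint chain, bone lengths, and the always-towards-camera sign convention below are illustrative assumptions, not the paper's actual skeleton or reconstruction method.

```python
import math

def lift_chain(points_2d, bones):
    """Lift a 2D stick-figure chain to 3D under bone-length constraints.

    points_2d: list of (x, y) sketch joints, root first.
    bones: list of (parent_index, length) for joints 1..n.
    The depth offset per bone is dz = sqrt(L^2 - dx^2 - dy^2); the
    depth-sign ambiguity is resolved by always choosing +dz here.
    """
    pts3d = [(points_2d[0][0], points_2d[0][1], 0.0)]  # root at depth 0
    for i, (parent, length) in enumerate(bones, start=1):
        px, py, pz = pts3d[parent]
        cx, cy = points_2d[i]
        planar2 = (cx - px) ** 2 + (cy - py) ** 2
        # Clamp to 0 so small drawing inaccuracies don't break the sqrt.
        dz = math.sqrt(max(length ** 2 - planar2, 0.0))
        pts3d.append((cx, cy, pz + dz))
    return pts3d

# A two-bone "arm" of unit-length bones, drawn foreshortened: each bone
# spans only 0.6 units on paper, so each recedes 0.8 units in depth.
arm = lift_chain([(0.0, 0.0), (0.6, 0.0), (1.2, 0.0)], [(0, 1.0), (1, 1.0)])
```

A real system would pick each bone's depth sign from anatomical joint limits or user hints rather than defaulting to one solution.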

  • ABSTRACT: Quick creation of 3D character animations is an important task in game design, simulations, forensic animation, education, training, and more. We present a framework for creating 3D animated characters using a simple sketching interface coupled with a large, unannotated motion database that is used to find the motion sequences corresponding to the input sketches. In contrast to previous work that deals with static sketches, our input sketches can be enhanced by motion and rotation curves that improve matching in the context of the existing animation sequences. Our framework uses animated sequences as the basic building blocks of the final animated scenes, and allows various operations on them, such as trimming, resampling, or connecting by means of blending and interpolation. A database of significant and unique poses, together with a two-pass search running on the GPU, allows interactive matching even for large numbers of poses in a template database. The system provides intuitive interfaces and immediate feedback, and places very small demands on the user. A user study showed that the system can be used by novice users with no animation experience or artistic talent, as well as by users with an animation background. Both groups were able to create animated scenes consisting of complex and varied actions in less than 20 minutes.
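The coarse-then-fine idea behind a two-pass pose search can be sketched in a few lines: pass 1 shortlists candidates using only a few significant dimensions, pass 2 ranks the shortlist by the full distance. The pose vectors, coarse dimensions, and shortlist size are illustrative assumptions; the actual system runs its search on the GPU over a large motion database.

```python
def nearest_pose(query, poses, coarse_dims, shortlist=4):
    """Two-pass nearest-pose search over a list of pose vectors.

    Pass 1 ranks all poses by squared distance on a handful of
    'coarse' dimensions; pass 2 evaluates the full squared distance
    only on the resulting shortlist. Returns the best pose's index.
    """
    def coarse(i):
        return sum((poses[i][d] - query[d]) ** 2 for d in coarse_dims)

    candidates = sorted(range(len(poses)), key=coarse)[:shortlist]

    def full(i):
        return sum((a - b) ** 2 for a, b in zip(poses[i], query))

    return min(candidates, key=full)

# Tiny example: three 3-D "poses", with dimension 0 used as the coarse key.
poses = [[0.0, 0.0, 0.0], [1.0, 1.0, 1.0], [0.2, 0.1, 0.0]]
best = nearest_pose([0.2, 0.1, 0.05], poses, coarse_dims=[0], shortlist=2)
```

The pay-off is that the expensive full-distance pass touches only a fixed-size shortlist, so the per-query cost is dominated by the cheap first pass, which parallelises well.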
    The Visual Computer, 05/2013.
  • ABSTRACT: Temporal modeling of facial expression has been of interest to various fields of study, such as expression recognition, realism in computer animation, and behavioural studies in psychology. While much research is actively being conducted to capture the temporal movement of facial features, work on head movement during the facial expression process is lacking. Without a description of head movement, the description of an expression is incomplete, especially for expressions that involve head movement, such as disgust. Therefore, this paper proposes a method to track the movement of the head using a dual pivot head tracking system (DPHT). To demonstrate its usefulness, the tracking system is then applied to track the movement of subjects depicting disgust. A simple two-tailed statistical analysis and a visual rendering comparison are made against a system that uses only a single pivot, to illustrate the practicality of the DPHT. Results show that expressions can be depicted better if head movement is incorporated in the facial expression study. Keywords: face expression modeling, computer graphics, face tracking, face animation
    07/2010: pages 121-135;
  • ABSTRACT: The design and animation of digital 3D models is an essential task for many applications in science, engineering, education, medicine and the arts. In many instances only an approximate representation is required, and a simple and intuitive modelling and animation process, suitable for untrained users, is more important than realism and extensive features. Sketch-based modelling has been shown to be a suitable interface because the underlying pen-and-paper metaphor is intuitive and effective. In this paper we present LifeSketch, a framework for sketch-based modelling and animation. Three-dimensional models are created with a variation of the popular "Teddy" algorithm. The models are analysed, and skeletons with joints are extracted fully automatically. The surface mesh is bound to the curved skeletons using skinning techniques, and the resulting model can be animated using skeletal animation methods. The results of our evaluation and user study suggest that modelling and animation tasks are considerably more efficient than with traditional tools. The learning curve is very flat, and a half-page document was sufficient to familiarise users with the tool's functionality. Users were satisfied with the automatically extracted joints, but some users struggled to select appropriate rotation axes and angles for animating the resulting 3D objects. A more intuitive, preferably automatic or sketch-based approach for animations is needed. Overall, users were satisfied with the modelling capabilities of the tool, found most of its functionality natural and intuitive, and enjoyed using it.
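The binding step described above, attaching the surface mesh to the extracted skeleton, can be sketched with one common smooth-skinning heuristic: normalised inverse-distance weights from each vertex to the skeleton's joints. The joint positions and falloff power here are illustrative assumptions, not LifeSketch's actual binding scheme.

```python
import math

def skin_weights(vertex, joints, power=2.0):
    """Normalised inverse-distance skinning weights for one vertex.

    vertex: (x, y[, z]) surface point; joints: list of joint positions.
    Nearer joints receive larger weights; the weights sum to 1 so the
    skinned vertex is a convex blend of per-joint transforms.
    """
    dists = [math.dist(vertex, j) for j in joints]
    if 0.0 in dists:
        # Vertex lies exactly on a joint: bind rigidly to that joint.
        return [1.0 if d == 0.0 else 0.0 for d in dists]
    raw = [1.0 / d ** power for d in dists]
    total = sum(raw)
    return [w / total for w in raw]

# A vertex 1 unit from one joint and 3 units from another: with the
# default quadratic falloff, the near joint dominates the blend.
w = skin_weights((0.0, 0.0), [(0.0, 1.0), (0.0, 3.0)])
```

Production skinning usually measures distance to bone segments rather than joint points and smooths or caps the weights, but the normalise-a-falloff structure is the same.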

