Conference Paper

Real-Time Control of a Remote Virtual Tutor Using Minimal Pen-Gestures

DOI: 10.1007/978-3-642-13437-1_62 Conference: Intelligent Tutoring Systems, 10th International Conference, ITS 2010, Pittsburgh, PA, USA, June 14-18, 2010, Proceedings, Part II
Source: DBLP

ABSTRACT We present a distance tutoring system that allows a tutor to provide instruction via an animated avatar. The system captures the pen-gestures of a real tutor, generates 3D behaviors automatically, and animates a virtual tutor on the remote side in near real-time. The uniqueness of the system comes from the pen-gesture interface. We conducted a study to evaluate this interface. The system can effectively recognize and animate different types of gestures. Gesturing on the tablet was then compared with gesturing on a board. The results show that users adapt to the tablet easily and pen-gesture on it naturally. They were able to use the pen-tablet interface effectively after a short instructional period.
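To make the capture-recognize-animate pipeline concrete, here is a minimal sketch of the flow the abstract describes: pen samples are grouped into a stroke, the stroke is classified into a gesture type, and the gesture is mapped to a behavior command that the remote avatar side would render. All names here (PenSample, classify_gesture, the command format) are illustrative assumptions; the paper does not specify its actual recognizer or animation protocol.

```python
# Hypothetical sketch of the pen-gesture -> avatar-behavior pipeline.
# None of these names come from the paper; they are assumptions for illustration.
import json
import math
from dataclasses import dataclass

@dataclass
class PenSample:
    x: float
    y: float
    t: float  # timestamp in seconds

def classify_gesture(stroke: list) -> str:
    """Crude shape-based classifier: compares total path length to the
    start-to-end displacement to separate a few gesture types."""
    if len(stroke) < 2:
        return "tap"      # e.g., pointing at an item
    path = sum(math.hypot(b.x - a.x, b.y - a.y)
               for a, b in zip(stroke, stroke[1:]))
    span = math.hypot(stroke[-1].x - stroke[0].x,
                      stroke[-1].y - stroke[0].y)
    if span < 0.1 * path:
        return "circle"   # nearly closed loop, e.g., circling a region
    return "stroke"       # e.g., underlining or tracing along a line

def to_avatar_command(gesture: str, stroke: list) -> dict:
    """Map a recognized pen-gesture to a behavior message for the remote
    avatar renderer (hypothetical message format)."""
    end = stroke[-1]
    behavior = {"tap": "point_at",
                "circle": "circle_region",
                "stroke": "trace_path"}[gesture]
    return {"behavior": behavior,
            "target": {"x": end.x, "y": end.y},
            "timestamp": end.t}

# Usage: serialize the command; in practice it would be streamed to the
# remote side, where the animation engine drives the 3D avatar.
stroke = [PenSample(0.1 * i, 0.05 * i, 0.02 * i) for i in range(20)]
command = to_avatar_command(classify_gesture(stroke), stroke)
print(json.dumps(command))
```

In a real system the classifier would be trained on recorded tutor gestures and the command stream sent continuously to keep the avatar in near real-time sync; this sketch only fixes the shape of the interface between the two sides.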
