Conference Paper

A system for reconstructing multiparty conversation field based on augmented head motion by dynamic projection

DOI: 10.1145/2072298.2072445 Conference: Proceedings of the 19th International Conference on Multimedia 2011, Scottsdale, AZ, USA, November 28 - December 1, 2011
Source: DBLP


A novel system is presented for reconstructing, in the real world, multiparty face-to-face conversation scenes; it uses dynamic projection to augment human head motion. The system aims to display and play back pre-recorded conversations to viewers as if the remote people were talking in front of them. It consists of multiple projectors and transparent screens. Each screen separately displays the life-size face of one meeting participant, and the screens are spatially arranged to recreate the actual scene. The main feature of this system is dynamic projection: each screen's pose is dynamically controlled to emulate the head motions of the participants, especially rotation around the vertical axis, which typically accompanies shifts in visual attention, i.e. turning one's gaze from one person to another. This recreation of head motion by physical screen motion, in addition to image motion, aims to express more clearly the interactions involving visual attention among the participants. The minimal design, a frameless projector screen with augmented head motion, is expected to create the feeling that the remote participants are actually present in the same room. This demo presents our initial system and discusses its potential impact on future visual communications.
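The core mechanism described above, driving each screen's rotation about the vertical axis from a participant's recorded head motion, can be sketched as follows. This is a minimal illustration only: the function names, the unit gain, and the mechanical limit are assumptions, not the authors' implementation.

```python
# Hypothetical sketch: mapping a recorded head-yaw trajectory to pan angles
# for that participant's screen, echoing the paper's dynamic projection idea.
# The gain and the +/-45 degree actuator limit are illustrative assumptions.

def yaw_to_screen_angle(head_yaw_deg, gain=1.0, limit_deg=45.0):
    """Scale a participant's head yaw (rotation about the vertical axis)
    and clamp it to the screen actuator's assumed mechanical range."""
    angle = gain * head_yaw_deg
    return max(-limit_deg, min(limit_deg, angle))

def playback(yaw_trajectory_deg):
    """Convert a pre-recorded yaw trajectory (one sample per frame) into
    the per-frame pan angles sent to the screen actuator."""
    return [yaw_to_screen_angle(y) for y in yaw_trajectory_deg]

if __name__ == "__main__":
    # e.g. the participant turns their gaze from one person to another;
    # the final sample exceeds the limit and is clamped.
    print(playback([0.0, 15.0, 30.0, 60.0]))  # → [0.0, 15.0, 30.0, 45.0]
```

Clamping matters in any such design: screen image motion can show arbitrary rotation, but the physical actuator has a finite range.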



Available from: Junji Yamato, Mar 13, 2014
    ABSTRACT: A novel system, called MM+Space, is presented for recreating multiparty face-to-face conversation scenes in the real world. It aims to display and play back pre-recorded conversations as if the people were talking in front of the viewer(s). The system consists of multiple projectors and transparent screens, which display the life-size faces of people. The key idea is the physical augmentation of human head motions, i.e. the screen pose is dynamically controlled to emulate the head motions, boosting the viewers' perception of nonverbal behaviors and interactions. In particular, MM+Space newly introduces 2-Degree-of-Freedom (DoF) translations, in the forward-backward and right-left directions, in addition to the 2-DoF head rotations (nodding and shaking) proposed in our former MM-Space system. The full 4-DoF kinetic display is expected to enhance the expressibility of head and body motions and to create a more realistic representation of interacting people. Experiments showed that the proposed system with 4-DoF motions outperformed the rotation-only system in increasing the perceived presence of people and in expressing their postures. In addition, viewers reported that the proposed system gave them rich emotional expressibility, immersion in the conversations, and potential behavioral/emotional contagion.
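    The 4-DoF command described in this abstract, two rotations (nodding and shaking) plus two translations (forward-backward and right-left), can be sketched as a single pose structure with per-axis limits. The dataclass, field names, and numeric limits below are assumptions for illustration, not MM+Space's actual control interface.

    ```python
    # Hypothetical sketch of a 4-DoF kinetic-display command, as described
    # for MM+Space: yaw (shaking), pitch (nodding), and right-left /
    # forward-backward translations. All limits are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class ScreenPose:
        yaw_deg: float    # shaking: rotation about the vertical axis
        pitch_deg: float  # nodding
        x_mm: float       # right-left translation
        z_mm: float       # forward-backward translation

    def clamp(v, lo, hi):
        return max(lo, min(hi, v))

    def head_to_screen_pose(yaw_deg, pitch_deg, x_mm, z_mm):
        """Map a tracked 4-DoF head pose to an actuated screen pose,
        clamping each axis to an assumed mechanical range."""
        return ScreenPose(
            yaw_deg=clamp(yaw_deg, -45.0, 45.0),
            pitch_deg=clamp(pitch_deg, -20.0, 20.0),
            x_mm=clamp(x_mm, -100.0, 100.0),
            z_mm=clamp(z_mm, -150.0, 150.0),
        )
    ```

    A rotation-only system, like the earlier MM-Space described above, would keep only the first two fields; the two translation axes are what the abstract reports as improving perceived presence and posture expression.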
    Conference Paper · Dec 2013