Conference Paper

Poster: A virtual body for augmented virtuality by chroma-keying of egocentric videos

DOI: 10.1109/3DUI.2009.4811218 Conference: IEEE Symposium on 3D User Interfaces, 3DUI 2009, Lafayette, LA, 14-15 March, 2009
Source: DBLP


A fully articulated visual representation of oneself in an immersive virtual environment has considerable impact on the subjective sense of presence in the virtual world. Many approaches therefore address this challenge and incorporate a virtual model of the user's body into the VE. Such a "virtual body" (VB) is manipulated according to user motions, which are derived from feature points detected by a tracking system. The required tracking devices are unsuitable in scenarios that involve multiple persons simultaneously or in which participants change frequently. Furthermore, individual characteristics such as skin pigmentation, hair, or clothing are not captured by this procedure. In this paper we present a software-based approach that makes it possible to incorporate a realistic visual representation of oneself into the VE. The idea is to use images captured by cameras attached to video see-through head-mounted displays. These egocentric frames can be segmented into a foreground showing parts of the human body and a background. The extremities can then be overlaid on the user's current view of the virtual world, and thus a high-fidelity virtual body can be visualized.
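The segment-and-overlay pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a uniformly colored (green) physical background so that chroma keying reduces to a per-pixel green-dominance test, and the threshold value is an arbitrary assumption.

```python
import numpy as np

def chroma_key_mask(frame, green_dominance=40):
    """Foreground mask: pixels whose green channel does NOT dominate.

    `frame` is an H x W x 3 uint8 RGB camera image. The threshold of 40
    is an illustrative assumption, not a value from the paper.
    """
    r = frame[..., 0].astype(np.int16)
    g = frame[..., 1].astype(np.int16)
    b = frame[..., 2].astype(np.int16)
    background = (g - np.maximum(r, b)) > green_dominance
    return ~background  # True where the user's body is visible

def composite(camera_frame, virtual_view):
    """Overlay the segmented body pixels on the rendered virtual view."""
    mask = chroma_key_mask(camera_frame)
    out = virtual_view.copy()
    out[mask] = camera_frame[mask]
    return out
```

A production system would replace the simple threshold with a calibrated key color, soft matting at the silhouette, and per-eye processing for the stereo HMD views, but the compositing step itself remains this boolean-masked copy.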


Available from: Gerd Bruder, Dec 08, 2014
    ABSTRACT: During teleoperation manipulation, visual consistency and the quality of the visual observation of what the operator sees is important to deliver the sensation of existing at remote place. Current telexistence technologies allow full upper body posture synchronization through multi DOF humanoid robot structures and allow the operator to feel the remote body that he/she sees as his own. However, it does not preserve the visual consistency feedback, such as the human like skin tones, operator's hand shape and the current outfit which he is wearing during the operation. Thus in this paper we propose a new method that provides operator's own body complexion, shape and light correction using real-time visuals taken from a see-through camera placed in the HMD and superimposing over robot vision. User hand and arm trajectory generated through kinematics were used to generate a masking image to isolate his local body appearance, which is captured via a see-through head mounted display, and impose the resulted masked body over remote environment vision. This paper describes the design and implementation of the above technique and effectiveness has been verified with several lab experiments. 1 INTRODUCTION During first point of view teleoperations immersive environment, operator often checks for his hands and body to confirm his existence in another environment. This implies the importance of perceiving one own body at the expected location to maintain the immersive experience, and as a way of verification
    Full-text · Conference Paper · Dec 2013
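The kinematics-driven masking step in the related paper above can be sketched in the same spirit. This is a hypothetical simplification under stated assumptions: the arm kinematics are assumed to yield projected 2D joint positions in image coordinates, and the mask is approximated as discs around those joints (the real system would rasterize the full limb geometry); the disc radius is an illustrative value.

```python
import numpy as np

def limb_mask(shape, joints_2d, radius=12):
    """Binary mask covering discs around projected joint positions.

    `joints_2d` is a list of (row, col) pixel coordinates obtained by
    projecting the arm's kinematic chain into the camera image; the
    disc-per-joint approximation and radius are illustrative assumptions.
    """
    h, w = shape
    rows, cols = np.ogrid[:h, :w]
    mask = np.zeros(shape, dtype=bool)
    for jr, jc in joints_2d:
        mask |= (rows - jr) ** 2 + (cols - jc) ** 2 <= radius ** 2
    return mask

def superimpose(robot_view, operator_frame, joints_2d):
    """Copy the operator's masked body pixels onto the robot's camera view."""
    mask = limb_mask(robot_view.shape[:2], joints_2d)
    out = robot_view.copy()
    out[mask] = operator_frame[mask]
    return out
```

The contrast with the chroma-keying approach is the mask's provenance: here it comes from the robot's joint state rather than from color segmentation, so no controlled background is needed.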