Conference Paper

VAMBAM: View and Motion-based Aspect Models for Distributed Omnidirectional Vision Systems.

Conference: Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence, IJCAI 2001, Seattle, Washington, USA, August 4-10, 2001
Source: DBLP

ABSTRACT This paper proposes a new model for gesture recognition. The model, called view and motion-based aspect models (VAMBAM), is an omnidirectional view-based aspect model built on motion-based segmentation. The model realizes location-free and rotation-free gesture recognition with a distributed omnidirectional vision system (DOVS). The distributed vision system, consisting of multiple omnidirectional cameras, is a prototype of a perceptual information infrastructure for monitoring and recognizing the real world. In addition to the concept of VAMBAM, this paper shows how the model enables robust, real-time visual recognition by the DOVS.

  • Source
    ABSTRACT: If we could build an android as a very humanlike robot, how would we humans distinguish a real human from an android? The answer to this question is not so easy. In human-android interaction, we cannot see the internal mechanism of the android, and thus we may simply believe that it is a human. This means that a human can be defined from two perspectives: one by organic mechanism and the other by appearance. Further, the current rapid progress in artificial organs makes this distinction confusing. The approach discussed in this article is to create artificial humans with humanlike appearances. The developed artificial humans, an android and a geminoid, can be used to improve understanding of humans through psychological and cognitive tests conducted using the artificial humans. We call this new approach to understanding humans android science.
    Journal of Artificial Organs 02/2007; 10(3):133-42. · 1.41 Impact Factor
  • Source
    ABSTRACT: Two key problems for camera networks that observe wide areas with many distributed cameras are self-localization and camera identification. Although there are many methods for localizing the cameras, one of the easiest and most desirable is to estimate camera positions by having the cameras observe each other; hence the term self-localization. If the cameras have a wide viewing field, e.g. an omnidirectional camera, and can observe each other, baseline distances between pairs of cameras and relative locations can be determined. However, if the projection of a camera is relatively small in the images of other cameras and is not readily visible, the baselines cannot be detected. In this paper, a method is proposed to determine the baselines and relative locations of these “invisible” cameras. The method consists of two processes executed simultaneously: (a) statistically detecting the baselines among the cameras, and (b) localizing the cameras by using information from (a) and propagating triangle constraints. Process (b) handles localization in the case where the cameras observe each other, and it does not require complete observation among the cameras. However, if many cameras cannot observe each other because of poor image resolution, it does not work. The baseline detection by process (a) solves this problem. This methodology is described in detail and results are provided for several scenarios.
    International Journal of Computer Vision 01/2004; 58(3). · 3.62 Impact Factor
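The self-localization idea above rests on a basic geometric step: once two cameras have a known baseline and each measures a bearing toward a third camera, the third camera's position follows by triangulation. The sketch below illustrates this step only; it is a minimal 2-D example with hypothetical names (`locate_third_camera`, absolute bearing angles in radians), not the paper's full algorithm, which additionally detects baselines statistically and propagates triangle constraints across the network.

```python
import math

def locate_third_camera(p_a, p_b, bearing_a, bearing_b):
    """Triangulate camera C from cameras A and B with known 2-D positions.

    bearing_a / bearing_b are absolute bearing angles (radians) measured
    from A and from B toward C. Returns C's (x, y) position found by
    intersecting the two bearing rays.
    """
    ax, ay = p_a
    bx, by = p_b
    # Unit direction vectors of the two bearing rays.
    dax, day = math.cos(bearing_a), math.sin(bearing_a)
    dbx, dby = math.cos(bearing_b), math.sin(bearing_b)
    # Solve  t*(dax, day) - s*(dbx, dby) = (bx - ax, by - ay)
    # for the ray parameters t, s via Cramer's rule.
    det = dax * (-dby) - day * (-dbx)
    if abs(det) < 1e-12:
        raise ValueError("bearing rays are parallel; no unique intersection")
    rx, ry = bx - ax, by - ay
    t = (rx * (-dby) - ry * (-dbx)) / det
    return (ax + t * dax, ay + t * day)

# Example: A at the origin, B four units away on the x-axis; both see C
# at the apex of a triangle. Each bearing comes from the camera's image.
c = locate_third_camera((0.0, 0.0), (4.0, 0.0),
                        math.atan2(2.0, 2.0),    # A looks up-right
                        math.atan2(2.0, -2.0))   # B looks up-left
```

With noisy bearings from many mutually observing cameras, each recovered triangle constrains its neighbors, which is why propagating these triangle constraints (process (b) above) can localize cameras that were not observed directly.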