Conference Paper

VAMBAM: View and Motion-based Aspect Models for Distributed Omnidirectional Vision Systems.

Conference: Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence, IJCAI 2001, Seattle, Washington, USA, August 4-10, 2001
Source: DBLP

ABSTRACT This paper proposes a new model for gesture recognition. The model, called view- and motion-based aspect models (VAMBAM), is an omnidirectional view-based aspect model built on motion-based segmentation. It realizes location-free and rotation-free gesture recognition with a distributed omnidirectional vision system (DOVS). The distributed vision system, consisting of multiple omnidirectional cameras, is a prototype of a perceptual information infrastructure for monitoring and recognizing the real world. In addition to the concept of VAMBAM, this paper shows how the model enables robust, real-time visual recognition with the DOVS.
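As a rough illustration of the location-free and rotation-free idea (a toy sketch, not the algorithm from the paper): if each gesture is stored as a set of view-based aspect feature vectors, one per viewing direction, then each camera in the surrounding network can be matched against every aspect, and the best joint match identifies the gesture no matter where the subject stands or which way they face. The feature representation and matching score here are illustrative assumptions.

import numpy as np

def recognize_gesture(observations, aspect_models):
    # observations: one feature vector per omnidirectional camera.
    # aspect_models: dict mapping gesture name -> list of per-aspect
    # feature vectors (one per viewing direction of that gesture).
    best_gesture, best_score = None, float("inf")
    for gesture, aspects in aspect_models.items():
        # Each camera independently picks the closest aspect of this
        # gesture; summing the distances scores the whole camera set,
        # so the subject's position and facing direction drop out.
        score = sum(
            min(np.linalg.norm(obs - a) for a in aspects)
            for obs in observations
        )
        if score < best_score:
            best_gesture, best_score = gesture, score
    return best_gesture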

  • ABSTRACT: There is an ever increasing demand for security monitoring systems in the modern world. Visual surveillance is one of the most promising areas in security monitoring for several reasons: it is easy to install, easy to repair, and its initial setup cost is low compared with other sensor-based monitoring systems such as audio sensors, motion detection systems, and thermal sensors.
    12/2009: pages 149-169;
  • ABSTRACT: Two key problems for camera networks that observe wide areas with many distributed cameras are self-localization and camera identification. Although there are many methods for localizing the cameras, one of the easiest and most desirable is to estimate camera positions by having the cameras observe each other; hence the term self-localization. If the cameras have a wide viewing field, e.g., an omnidirectional camera, and can observe each other, the baseline distances and relative locations between pairs of cameras can be determined. However, if the projection of a camera is relatively small in the images of the other cameras and is not readily visible, the baselines cannot be detected. In this paper, a method is proposed to determine the baselines and relative locations of these “invisible” cameras. The method consists of two processes executed simultaneously: (a) statistically detecting the baselines among the cameras, and (b) localizing the cameras by using the information from (a) and propagating triangle constraints. Process (b) performs the localization when the cameras can observe each other, and it does not require complete mutual observation among the cameras. However, it fails when many camera pairs cannot observe each other because of poor image resolution; the baseline detection of process (a) solves this problem. The methodology is described in detail and results are provided for several scenarios.
    International Journal of Computer Vision 01/2004; 58(3).
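The self-localization abstract above lends itself to a small illustration of the triangulation step underlying process (b). The following toy sketch makes assumptions not taken from the paper: two anchor cameras with known positions and headings, and azimuth bearings to the unlocalized camera measured in each anchor's local frame. Intersecting the two bearing rays fixes the new camera's position; repeating this propagates localization through the network.

import numpy as np

def bearing_dir(heading, azimuth):
    # Convert an azimuth measured in a camera's local frame into a
    # world-frame unit direction, given the camera's known heading.
    theta = heading + azimuth
    return np.array([np.cos(theta), np.sin(theta)])

def intersect_rays(p1, d1, p2, d2):
    # Solve p1 + t1*d1 = p2 + t2*d2 for the crossing point of two rays.
    t = np.linalg.solve(np.column_stack([d1, -d2]), p2 - p1)
    return p1 + t[0] * d1

def localize_new_camera(cam1, cam2, az1, az2):
    # cam = (position, heading); az = azimuth at which that camera
    # sees the yet-unlocalized camera. Two bearings fix its position.
    (p1, h1), (p2, h2) = cam1, cam2
    return intersect_rays(p1, bearing_dir(h1, az1),
                          p2, bearing_dir(h2, az2))

# Example: anchors at the origin and at (4, 0), both with heading 0,
# seeing the new camera at azimuths of 45 and 135 degrees respectively.
pos = localize_new_camera((np.array([0.0, 0.0]), 0.0),
                          (np.array([4.0, 0.0]), 0.0),
                          np.pi / 4, 3 * np.pi / 4)
print(pos)  # -> approximately [2. 2.]

Once the new camera is placed, it becomes an anchor itself, which is the propagation of triangle constraints the abstract describes; process (a)'s statistical baseline detection supplies pairs that cannot see each other directly.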