Conference Paper

VAMBAM: View and Motion-based Aspect Models for Distributed Omnidirectional Vision Systems.

Conference: Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence, IJCAI 2001, Seattle, Washington, USA, August 4-10, 2001
Source: DBLP

ABSTRACT This paper proposes a new model for gesture recognition. The model, called view and motion-based aspect models (VAMBAM), is an omnidirectional view-based aspect model built on motion-based segmentation. It realizes location-free and rotation-free gesture recognition with a distributed omnidirectional vision system (DOVS). The distributed vision system, consisting of multiple omnidirectional cameras, is a prototype of a perceptual information infrastructure for monitoring and recognizing the real world. In addition to the concept of VAMBAM, this paper shows how the model realizes robust, real-time visual recognition with the DOVS.

    • "Complex sensors, such as cameras, have already been used for recognizing user state. Computer vision tracking [7], [8] and behavior recognition [9], [10], [11] often work in the laboratory but sometimes fail in real environments due to the lighting variations and occlusions that are frequent in natural environments. Given their relative immaturity, we restrict ourselves to simple sensors such as RFID tags and weight sensors to achieve robust user state detection. "
    ABSTRACT: It is very important to develop context-aware systems that can simultaneously handle multiple heterogeneous applications requiring different contexts at different levels of abstraction. This paper proposes a framework for such systems. To handle the heterogeneity of the contexts required by the applications, we introduce a user activity context detection method based on the combination of a multi spatio-temporal description of measured sensor data, a description of detected contexts with multiple levels of abstraction, and an order-sensitive description of the context model required by an application. We also introduce an algorithm that implements the context detection method while reflecting the context detection capabilities of any given environment. We build a prototype system by embedding sensors into an experimental house; evaluations show its promise. Index Terms — context recognition, decision tree, spatio-temporal representation, ubiquitous computing.
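    The abstract above describes detecting user activity contexts at multiple levels of abstraction from embedded sensors. A minimal sketch of that layered idea, not the paper's actual algorithm: all sensor names, threshold values, and context labels below are invented for illustration. Raw readings are first mapped to low-level contexts, which are then combined into a higher-abstraction activity.

    ```python
    # Hypothetical sketch of multi-level context detection.
    # Sensor names, thresholds, and labels are assumptions, not from the paper.

    def detect_low_level(readings):
        """Map raw sensor values to low-level context labels."""
        ctx = []
        if readings.get("kitchen_weight_mat", 0) > 30:   # weight sensor in floor mat (kg)
            ctx.append("person_in_kitchen")
        if readings.get("stove_power", 0) > 100:         # power meter on stove (W)
            ctx.append("stove_on")
        if readings.get("fridge_rfid_seen", False):      # RFID tag read at fridge door
            ctx.append("fridge_opened")
        return ctx

    def detect_high_level(low_level):
        """Combine low-level contexts into a higher-abstraction activity."""
        if "person_in_kitchen" in low_level and "stove_on" in low_level:
            return "cooking"
        if "person_in_kitchen" in low_level:
            return "in_kitchen"
        return "unknown"

    readings = {"kitchen_weight_mat": 55, "stove_power": 800, "fridge_rfid_seen": True}
    activity = detect_high_level(detect_low_level(readings))  # "cooking"
    ```

    An application needing only coarse context would query the high-level layer, while one needing fine-grained context would read the low-level labels directly, which is the heterogeneity the abstract refers to.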
    ABSTRACT: This paper presents a card-type battery-less information terminal, CardBIT, and a method of situated interaction that uses single-lens cameras. The CardBIT system provides situated information support at locations such as exhibition halls, train stations and streets. CardBIT can operate without a battery because it utilizes energy from the information carrier and the user. It realizes location-and-direction-based interaction: a user can get information appropriate to his/her position and direction. The most significant feature of CardBIT is its high portability, as it is easily implanted into widely used IC cards. Based on information accumulated in the CardBIT, personalized information is provided in the form of sound, such as speech or music. Furthermore, a user can signal to the interface system and obtain information dynamically. This paper introduces the CardBIT system concept and its implementation using cameras, position estimation, and sign recognition methods. Our experiments show the feasibility of the proposed system.
    Proceedings of the First IEEE International Conference on Pervasive Computing and Communications (PerCom'03), March 23-26, 2003, Fort Worth, Texas, USA; 01/2003