Conference Paper

LISTEN: Non-interactive Localization in Wireless Camera Sensor Networks

DOI: 10.1109/RTSS.2010.15 Conference: Proceedings of the 31st IEEE Real-Time Systems Symposium, RTSS 2010, San Diego, California, USA, November 30 - December 3, 2010
Source: DBLP

ABSTRACT Recent advances in application domains increasingly demand the use of wireless camera sensor networks (WCSNs), for which localization is a crucial task that enables various location-based services. Most existing localization approaches for WCSNs are essentially interactive, i.e., they require interaction among the nodes throughout the localization process. As a result, they are costly to realize in practice, vulnerable to sniffer attacks, and inefficient in both energy consumption and computation. In this paper we propose LISTEN, a non-interactive localization approach. With LISTEN, every camera sensor node only needs to silently listen to the beacon signals from a mobile beacon node and capture a few images until it determines its own location. We design the movement trajectory of the mobile beacon node, which guarantees that all nodes are localized successfully. We have implemented LISTEN and evaluated it through extensive experiments. The experimental results demonstrate that it is accurate, efficient, and suitable for WCSNs consisting of low-end camera sensors.
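The core idea of the abstract — a node that passively observes a mobile beacon at several known waypoints and infers its own location — can be illustrated as bearing-only self-localization. The sketch below is illustrative only and assumes the node can extract absolute bearing angles toward the beacon (e.g., from its captured images plus a known orientation); the paper's actual algorithm may differ.

```python
import math

def locate_from_bearings(beacons, bearings):
    # A node at unknown (x, y) that measures bearing b_i toward a beacon at
    # known (px, py) lies on the line  sin(b_i)*(px - x) - cos(b_i)*(py - y) = 0,
    # i.e.  s_i*x - c_i*y = s_i*px - c_i*py.  Stack one such row per beacon
    # waypoint and solve the 2-unknown least-squares system (normal equations).
    a11 = a12 = a22 = r1 = r2 = 0.0
    for (px, py), b in zip(beacons, bearings):
        s, c = math.sin(b), math.cos(b)
        rhs = s * px - c * py
        a11 += s * s
        a12 += -s * c
        a22 += c * c
        r1 += s * rhs
        r2 += -c * rhs
    det = a11 * a22 - a12 * a12  # near-zero det means degenerate geometry (bearings parallel)
    x = (a22 * r1 - a12 * r2) / det
    y = (a11 * r2 - a12 * r1) / det
    return x, y

# Demo: a node at (3, 4) silently observes the mobile beacon at three waypoints.
true_pos = (3.0, 4.0)
waypoints = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
bearings = [math.atan2(py - true_pos[1], px - true_pos[0]) for px, py in waypoints]
est = locate_from_bearings(waypoints, bearings)
```

With noise-free bearings the estimate recovers the true position exactly; with noisy bearings, more beacon waypoints simply add rows to the same least-squares system.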

  • ABSTRACT: A collaborative vision-based technique is proposed for localizing the nodes of a surveillance network based on observations of a non-cooperative moving target. The proposed method employs lightweight in-node image processing and limited data exchange between the nodes to determine the positions and orientations of the nodes participating in synchronized observations of the target. A node with an opportunistic observation of a passing target broadcasts a synchronizing packet and triggers image capture by its neighbors. In the cluster of participating nodes, the triggering node and a helper node define a relative coordinate system. Once a small number of joint observations of the target are made by the nodes, the model allows for a decentralized or a cluster-based solution of the localization problem. No images are transferred between the network nodes for the localization task, making the proposed method efficient and scalable. Simulation and experimental results are provided to verify the performance of the proposed technique.
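The relative coordinate system described above can be made concrete with a toy sketch (an assumed convention for illustration, not the paper's exact formulation): place the triggering node at the origin, put the helper node on the +x axis at a known baseline distance, and locate the jointly observed target by intersecting the two nodes' bearing rays.

```python
import math

def target_in_relative_frame(baseline, bearing_trigger, bearing_helper):
    # Relative frame (assumed convention): triggering node at the origin,
    # helper node at (baseline, 0). Both nodes measure the bearing to the
    # jointly observed target in this shared frame; the target is the
    # intersection of the two bearing rays:  t1*d1 - t2*d2 = p2 - p1.
    d1 = (math.cos(bearing_trigger), math.sin(bearing_trigger))
    d2 = (math.cos(bearing_helper), math.sin(bearing_helper))
    det = d1[0] * (-d2[1]) + d2[0] * d1[1]
    if abs(det) < 1e-9:
        raise ValueError("bearings nearly parallel; use another joint observation")
    # Cramer's rule with p2 - p1 = (baseline, 0)
    t1 = (baseline * (-d2[1])) / det
    return t1 * d1[0], t1 * d1[1]

# Demo: helper node 4 m away on the +x axis, target actually at (2, 3).
b_trig = math.atan2(3.0, 2.0)        # trigger's bearing to the target
b_help = math.atan2(3.0, 2.0 - 4.0)  # helper's bearing to the target
est = target_in_relative_frame(4.0, b_trig, b_help)
```

Repeated joint observations of the moving target over-determine the node poses, which is what enables the decentralized solution mentioned in the abstract.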
  • ABSTRACT: This work proposes the novel use of spinning beacons for precise indoor localization. The proposed "SpinLoc" (Spinning Indoor Localization) system uses "spinning" (i.e., rotating) beacons to create and detect predictable and highly distinguishable Doppler signals for sub-meter localization accuracy. The system analyzes Doppler frequency shifts of signals from spinning beacons, which are then used to calculate orientation angles to a target. By obtaining orientation angles from two or more beacons, SpinLoc can precisely locate stationary or slow-moving targets. After designing and implementing the system using MICA2 motes, its performance was tested in an indoor garage environment. The experimental results revealed a median error of 40-50 centimeters and a 90th-percentile error of 70-90 centimeters.
    Proceedings of the 6th International Conference on Embedded Networked Sensor Systems, SenSys 2008, Raleigh, NC, USA, November 5-7, 2008
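The Doppler-to-bearing step can be illustrated with a toy simulation. Under a far-field assumption, an antenna spinning on an arm of radius r at angular rate ω produces a Doppler shift of (f0/c)·r·ω·sin(φ − ωt) at a receiver at bearing φ, so the bearing is the rotation phase at the descending zero crossing of the shift. The constants below (carrier, arm radius, spin rate) are illustrative assumptions, not SpinLoc's actual parameters.

```python
import math

C = 3e8                  # speed of light (m/s)
F0 = 916e6               # carrier frequency (Hz), MICA2-band-like (assumed)
R = 0.5                  # spin arm radius (m), assumed
OMEGA = 2 * math.pi * 5  # 5 rotations per second, assumed

def doppler_shift(t, bearing):
    # Far-field Doppler shift of the spinning antenna seen from `bearing`.
    return (F0 / C) * R * OMEGA * math.sin(bearing - OMEGA * t)

def estimate_bearing(ts, shifts):
    # The bearing equals the rotation phase at the descending (+ to -)
    # zero crossing of the Doppler trace.
    for i in range(1, len(ts)):
        if shifts[i - 1] > 0 >= shifts[i]:
            # Linear interpolation for the sub-sample crossing time.
            frac = shifts[i - 1] / (shifts[i - 1] - shifts[i])
            t0 = ts[i - 1] + frac * (ts[i] - ts[i - 1])
            return (OMEGA * t0) % (2 * math.pi)
    raise ValueError("no descending zero crossing in trace")

# Demo: simulate a 0.4 s Doppler trace at 1 kHz for a target at bearing 130°.
true_bearing = math.radians(130.0)
ts = [k / 1000.0 for k in range(400)]
shifts = [doppler_shift(t, true_bearing) for t in ts]
est = estimate_bearing(ts, shifts)
```

Two such bearing estimates from beacons at known positions then fix the target's location by triangulation, as the abstract describes.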
  • ABSTRACT: Hardware advances have made it possible to make compact, cheap and low-power image sensors capable of on-board image processing. These embedded vision sensors provide a rich new sensing modality enabling new classes of wireless sensor networking applications. In order to build these applications, system designers need to overcome challenges associated with limited bandwidth, limited power, group coordination and fusion of multiple camera views with various other sensory inputs. Real-time properties must be upheld if multiple vision sensors are to process data, communicate with each other and make a group decision before the measured environmental feature changes. In this paper, we present FireFly Mosaic, a wireless sensor network image processing framework with operating system, networking and image processing primitives that assist in the development of distributed vision-sensing tasks. Each FireFly Mosaic wireless camera consists of a FireFly [1] node coupled with a CMUcam3 [2] embedded vision processor. The FireFly nodes run the Nano-RK [3] real-time operating system and communicate using the RT-Link [4] collision-free TDMA link protocol. Using FireFly Mosaic, we demonstrate an assisted-living application capable of fusing multiple cameras with overlapping views to discover and monitor daily activities in a home. Using this application, we show how an integrated platform with support for time synchronization, a collision-free TDMA link layer, an underlying RTOS and an interface to an embedded vision sensor provides a stable framework for distributed real-time vision processing. To the best of our knowledge, this is the first wireless sensor networking system to integrate multiple coordinating cameras performing local processing.
    28th IEEE International Real-Time Systems Symposium, RTSS 2007
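The collision-free TDMA coordination that RT-Link provides can be illustrated with a deliberately simplified slot scheduler (an assumed static scheme for illustration only; RT-Link's actual protocol is more sophisticated): each camera node owns a fixed slot in a repeating frame and transmits only inside it, so transmissions from different nodes never overlap.

```python
FRAME_SLOTS = 8   # slots per TDMA frame (assumed)
SLOT_MS = 10      # slot length in milliseconds (assumed)

def my_slot(node_id):
    # Static assignment: node i owns slot (i mod FRAME_SLOTS).
    return node_id % FRAME_SLOTS

def can_transmit(node_id, time_ms):
    # A node may transmit only during its own slot of the current frame,
    # which presupposes network-wide time synchronization.
    slot = (time_ms // SLOT_MS) % FRAME_SLOTS
    return slot == my_slot(node_id)
```

The scheme trades latency (a node waits for its slot) for determinism, which is what lets the cameras exchange detections and reach a group decision within a bounded time.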