Conference Paper

Object Tracking in the Presence of Occlusions via a Camera Network

Stanford University, Stanford, CA
DOI: 10.1109/IPSN.2007.4379711. Conference: 6th International Symposium on Information Processing in Sensor Networks (IPSN 2007).
Source: DBLP


This paper describes a sensor network approach to tracking a single object in the presence of static and moving occluders using a network of cameras. To conserve communication bandwidth and energy, each camera first performs simple local processing to reduce each frame to a scan line. This information is then sent to a cluster head to track a point object. We assume the locations of the static occluders to be known, but only prior statistics on the positions of the moving occluders are available. A noisy perspective camera measurement model is presented, in which occlusions are captured through an occlusion indicator function. An auxiliary particle filter that incorporates the occluder information is used to track the object. Using simulations, we investigate (i) the dependence of tracker performance on the accuracy of the moving-occluder priors, (ii) the tradeoff between the number of cameras and the occluder prior accuracy required to achieve a prescribed tracker performance, and (iii) the importance of the occluder priors to tracker performance as the number of occluders increases. We generally find that computing moving-occluder priors may not be worthwhile unless they can be obtained cheaply and to reasonable accuracy. Preliminary experimental results are provided.
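
The measurement and filtering setup above lends itself to a compact illustration. Below is a minimal Python sketch of one auxiliary-particle-filter update in which a binary occlusion indicator is marginalized into the per-camera measurement likelihood. The pinhole scan-line projection, the random-walk motion model, and every name (project, likelihood, apf_step, p_occ) are illustrative assumptions rather than the paper's implementation; in the paper, the occlusion probability would come from the known static occluders and the moving-occluder priors.

```python
import numpy as np

rng = np.random.default_rng(0)

def project(cam_pos, cam_dir, x):
    """Noise-free 1D perspective projection of 2D point x onto the camera's
    scan line (pinhole model with unit focal length; an assumption here)."""
    d = x - cam_pos
    depth = d @ cam_dir                                # along the optical axis
    lateral = d @ np.array([-cam_dir[1], cam_dir[0]])  # perpendicular component
    return lateral / depth

def likelihood(z, x, cam, p_occ, sigma=0.02):
    """p(z | x), marginalized over the binary occlusion indicator.
    z is None when the camera reports no detection (object occluded)."""
    if z is None:
        return p_occ
    z_hat = project(cam["pos"], cam["dir"], x)
    gauss = np.exp(-0.5 * ((z - z_hat) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return (1.0 - p_occ) * gauss

def apf_step(particles, weights, z, cam, p_occ, q=0.05):
    """One auxiliary-particle-filter update. With a random-walk motion model,
    each particle's predicted mean is simply its current state."""
    n = len(particles)
    # Stage 1: score particles by the likelihood of their predicted means.
    first = weights * np.array([likelihood(z, m, cam, p_occ) for m in particles])
    first = (first + 1e-12) / (first + 1e-12).sum()
    idx = rng.choice(n, size=n, p=first)               # auxiliary resampling
    # Stage 2: propagate the chosen particles and correct the weights.
    new = particles[idx] + q * rng.standard_normal((n, 2))
    num = np.array([likelihood(z, p, cam, p_occ) for p in new])
    den = np.array([likelihood(z, particles[i], cam, p_occ) for i in idx])
    w = (num + 1e-12) / (den + 1e-12)
    return new, w / w.sum()
```

In a multi-camera step, a cluster head would combine the scan-line measurements by multiplying the per-camera likelihoods before the weight update.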

  • Source
    • "Partial occlusion hides some parts of the target while complete occlusion hides the entire target for some time. Many techniques exist to handle the occlusion problem with particle filter probabilistic models, such as in [1]. But the proposed tracking method uses network of many cameras to handle this problem. "
    ABSTRACT: Visual tracking is the problem of using visual sensor measurements to determine the location and path of a target object. One of the big challenges for visual tracking is full occlusion: when full occlusions are present, image data alone can be unreliable and insufficient to detect the target object. The developed tracking algorithm is based on a bootstrap particle filter using a color feature of the target. The algorithm is further modified using the nonretinotopic concept, inspired by the way the human visual cortex handles occlusion by constructing nonretinotopic layers. We interpret the concept as relying on past tracking memory about motion dynamics, rather than the current measurement, whenever the tracking reliability falls below a quality threshold (a minimal sketch of this fallback appears after this list). Using experiments, we found that (i) the performance of the object tracking algorithm in handling occlusion can be improved using the nonretinotopic concept, (ii) the dynamic model is crucial for object tracking, especially when the target object experiences occlusion and maneuvering motion, and (iii) the tracker performance depends on the accuracy of the tracking-quality threshold when facing illumination challenges. Preliminary experimental results are provided.
  • Source
    • "Our work focus on camera placement to guarantee full MPC of a region as opposed to assessing the improvement on the quality of object tracking when multiple perspective views are available. Nonetheless, our work can complement the work in [10] by providing different resolution levels and checking how these different levels are improving the quality of object tracking. "
    ABSTRACT: In this paper, we tackle the problem of providing coverage for video panorama generation in wireless heterogeneous Visual Sensor Networks (VSNs), where cameras may differ in price, resolution, field of view (FoV), and depth of field (DoF). We utilize multi-perspective coverage (MPC), which refers to the coverage of a point from given disparate perspectives simultaneously. For a given minimum average resolution, area boundaries, and variety of camera sensors, we propose a deployment algorithm that minimizes the total cost while guaranteeing full MPC of the area (i.e., the coverage needed for video panorama generation) and the minimum required resolution. Specifically, the approach is based on a bi-level mixed integer program (MIP) that runs two models, a master problem and a sub-problem, iteratively (the iteration is sketched after this list). The master problem provides minimum-cost coverage, meeting the resolution requirement, for an initial set of identified points. The sub-problem then finds an uncovered point, extends the set of points to be covered, and sends this set back to the master problem. The two problems run iteratively until the sub-problem becomes infeasible, which means full MPC has been achieved along with the resolution requirements. The numerical results show the superiority of our approach with respect to existing approaches.
    2011 IEEE International Symposium on Multimedia (ISM 2011), Dana Point, CA, USA, December 5-7, 2011.
  • Source
    • "Previous studies in barrier coverage mainly focused on traditional scalar sensor networks, in which the sensing range of a sensor is often modeled as a disk and an object is said to be covered or detected by a sensor if it is within the sensing range of the sensor [14]. Recently, there has been an increasing interest in camera sensor networks [18] [10] [8] [2] [1] [22]. Compared with traditional scalar sensors, camera sensors can provide much richer information of the environment in the forms of images or videos and hence promise a huge potential in applications. "
    ABSTRACT: Barrier coverage has attracted much attention in the past few years. However, most previous works focused on traditional scalar sensors. We propose to study barrier coverage in camera sensor networks. One fundamental difference between camera and scalar sensors is that cameras at different positions can form quite different views of the object. As a result, simply combining the sensing ranges of the cameras across the field does not necessarily form an effective camera barrier, since the face image (or the aspect of interest) of the object may be missed. To address this problem, we use the angle between the object's facing direction and the camera's viewing direction to measure the quality of sensing. An object is full-view covered if, no matter which direction it faces, there is always a camera covering it whose viewing direction is sufficiently close to the object's facing direction (a point test for this condition is sketched after this list). We study the problem of constructing a camera barrier, which is essentially a connected zone across the monitored field such that every point within this zone is full-view covered. We propose a novel method to select camera sensors from an arbitrary deployment to form a camera barrier, and present redundancy-reduction techniques to effectively reduce the number of cameras used. We also present techniques to deploy cameras for barrier coverage in a deterministic environment, and analyze and optimize the number of cameras required for this specific deployment under various parameters.
    Proceedings of the 12th ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc 2011), Paris, France, May 16-20, 2011.
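
The "nonretinotopic" fallback quoted in the first excerpt reduces to a simple rule: when a tracking-quality score drops below a threshold, skip the measurement update and coast on remembered motion dynamics. A hedged Python sketch follows, in which the externally supplied quality score, the constant-velocity memory, and the plain Gaussian position likelihood are all assumptions made for illustration, not the cited paper's color-feature implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_step(particles, velocities, z, quality, tau=0.3, sigma=5.0):
    """One bootstrap-filter step with a nonretinotopic-style fallback.
    particles, velocities: (n, 2) arrays; z: measured 2D position;
    quality: externally computed tracking-reliability score in [0, 1]."""
    n = len(particles)
    pred = particles + velocities + rng.standard_normal((n, 2))
    if quality < tau:
        # Occlusion suspected: ignore z and coast on the motion memory.
        return pred, velocities, np.full(n, 1.0 / n)
    w = np.exp(-0.5 * np.sum((pred - z) ** 2, axis=1) / sigma ** 2) + 1e-12
    w /= w.sum()
    idx = rng.choice(n, size=n, p=w)                 # resample
    # Refresh the motion memory from the accepted displacements.
    vel = 0.9 * velocities[idx] + 0.1 * (pred[idx] - particles[idx])
    return pred[idx], vel, np.full(n, 1.0 / n)
```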
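
The master/sub-problem iteration in the second excerpt has a compact control flow, shown below. Here solve_master and find_uncovered_point stand in for the two MIPs of that paper and are assumptions, so only the loop structure is depicted.

```python
def deploy_cameras(initial_points, solve_master, find_uncovered_point):
    """Iterate the bi-level scheme: the master problem places cameras at
    minimum cost to cover the current point set; the sub-problem searches
    for a point that still lacks full MPC and grows the set."""
    points = set(initial_points)
    while True:
        placement = solve_master(points)        # min-cost cover of `points`
        gap = find_uncovered_point(placement)   # None once MPC is complete
        if gap is None:                         # sub-problem infeasible
            return placement
        points.add(gap)                         # extend and re-solve
```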
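
Finally, the full-view condition in the third excerpt can be stated as a point test: a point is covered only if, whichever way the object faces, some in-range camera sees it from within an angle theta of head-on. The sketch below discretizes the facing directions and uses a plain disk sensing model, ignoring each camera's own FoV cone; both simplifications are assumptions.

```python
import numpy as np

def full_view_covered(p, cameras, sensing_range, theta, n_dirs=360):
    """p: 2D point; cameras: (m, 2) positions; theta: max allowed angle
    (radians) between the facing direction and the point-to-camera bearing."""
    d = cameras - p
    dist = np.linalg.norm(d, axis=1)
    near = dist <= sensing_range                   # cameras that can sense p
    if not near.any():
        return False
    bearing = np.arctan2(d[near, 1], d[near, 0])
    for k in range(n_dirs):                        # candidate facing directions
        f = 2 * np.pi * k / n_dirs
        diff = np.abs((bearing - f + np.pi) % (2 * np.pi) - np.pi)
        if not (diff <= theta).any():              # no near-head-on camera
            return False
    return True
```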