Conference Paper

A Model-Free Voting Approach for Integrating Multiple Cues.

Conference: Computer Vision - ECCV'98, 5th European Conference on Computer Vision, Freiburg, Germany, June 2-6, 1998, Proceedings, Volume I
Source: DBLP
    ABSTRACT: This paper describes techniques for fusing the output of multiple cues to robustly and accurately segment foreground objects from the background in image sequences. Two different methods for cue integration are presented and tested. The first is a probabilistic approach which at each pixel computes the likelihood of observations over all cues before assigning pixels to foreground or background layers using Bayes' rule. The second method allows each cue to make a decision independently of the other cues before fusing their outputs with a weighted sum. A further important contribution of our work concerns demonstrating how models for some cues can be learnt and subsequently adapted online. In particular, regions of coherent motion are used to train distributions for colour and for a simple texture descriptor. An additional aspect of our framework is that it provides mechanisms for suppressing cues when they are believed to be unreliable, for instance during training or when they disagree with the general consensus. Results on extended video sequences are presented.
    07/2002;
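The two fusion schemes described in the abstract — per-pixel Bayesian combination of cue likelihoods, and independent per-cue decisions merged by a weighted sum — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, array shapes, and the 0.5 vote threshold are assumptions, and the Bayesian variant assumes the cues are conditionally independent given the layer.

```python
import numpy as np

def bayes_fusion(likelihood_fg, likelihood_bg, prior_fg=0.5):
    """Per-pixel Bayes'-rule fusion, assuming cue conditional independence.

    likelihood_fg / likelihood_bg: (num_cues, H, W) arrays holding each
    cue's observation likelihood under the foreground / background layer.
    Returns a boolean foreground mask.
    """
    p_fg = prior_fg * np.prod(likelihood_fg, axis=0)
    p_bg = (1.0 - prior_fg) * np.prod(likelihood_bg, axis=0)
    return p_fg > p_bg

def weighted_vote_fusion(cue_decisions, weights):
    """Each cue decides independently; decisions are fused by a weighted sum.

    cue_decisions: (num_cues, H, W) boolean per-cue foreground decisions.
    weights: per-cue reliability weights (an unreliable cue can be
    suppressed by driving its weight toward zero).
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    score = np.tensordot(w, cue_decisions.astype(float), axes=1)
    return score > 0.5
```

Setting a cue's weight to zero in `weighted_vote_fusion` corresponds to the suppression mechanism the abstract mentions for cues that disagree with the consensus.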
    ABSTRACT: We present a stochastic tracking algorithm for surveillance videos in which targets are dim and of low resolution. Our tracker uses the particle filter as its basic framework and has two important novel features: a dynamic motion model consisting of both background and foreground motion parameters, and adaptive integration of appearance and motion cues in the system observation model when estimating the likelihood functions. Based on these features, the accuracy and robustness of the tracker, two important metrics in surveillance applications, have been improved. We present results of applying the proposed algorithm to many sequences with different visual conditions; the algorithm gives satisfactory results even on some challenging sequences.
    IEEE Transactions on Image Processing, 7.
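The tracker in this abstract — a particle filter whose observation model adaptively blends appearance and motion cue likelihoods — can be sketched in one predict/update/resample step. This is an illustrative bootstrap filter under stated assumptions, not the paper's method: the random-walk dynamics stand in for the background+foreground motion model, the fixed `cue_weights` blend stands in for the adaptive integration rule, and all function and parameter names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, observe_appearance, observe_motion,
                         cue_weights=(0.5, 0.5), process_noise=1.0):
    """One step of a bootstrap particle filter with blended cue likelihoods.

    observe_appearance / observe_motion: callables mapping particle states
    to per-particle cue likelihoods (hypothetical stand-ins for the paper's
    appearance and motion observation models).
    """
    # Predict: propagate particles with simple random-walk dynamics.
    particles = particles + rng.normal(0.0, process_noise, size=particles.shape)

    # Update: blend the two cue likelihoods per particle, then renormalise.
    w_app, w_mot = cue_weights
    lik = w_app * observe_appearance(particles) + w_mot * observe_motion(particles)
    weights = weights * lik
    weights = weights / weights.sum()

    # Resample (systematic) when the effective sample size drops too low.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        positions = (rng.random() + np.arange(len(weights))) / len(weights)
        idx = np.searchsorted(np.cumsum(weights), positions)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights
```

In an adaptive variant, `cue_weights` would be re-estimated online from each cue's recent agreement with the tracker's output rather than held fixed as here.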