Conference Paper

A Model-Free Voting Approach for Integrating Multiple Cues.

Conference: Computer Vision - ECCV'98, 5th European Conference on Computer Vision, Freiburg, Germany, June 2-6, 1998, Proceedings, Volume I
Source: DBLP
    • "According to [24], technologies that involve multiple cues, which are limited only with regards to available resources, have advantages in computer vision. Methods have been proposed that integrate multiple cues [6] [13] [24]. Their reason for this is that multiple features can overcome limitations that each single feature has. "
    ABSTRACT: The key issue addressed by this paper is the need for performance evaluation measures for systems that integrate multiple cues for tracking in video sequences. We propose a generic evaluation approach that can be implemented in systems that perform higher-level people tracking by integrating multiple low-level features extracted from the video data. Two new measures, video sequence accuracy (VSA) and voting average measure (VAM), are introduced and explained using the two fundamental image processing techniques of edge and optical flow detection. The effectiveness of the approach is demonstrated using a set of real video sequences with ground truth.
    Proceedings of the International Conference on Digital Image Computing: Techniques and Applications, DICTA 2008, Canberra, ACT, Australia, 1-3 December 2008; 01/2008
    • "Another strategy is to use integration schemes [18] [26] [30]. Here, the pattern recognition literature offers a vast choice, but one of the most popular methods in object recognition is the voting scheme [15] [6] [12]. There are many possible variants of the voting scheme, but we can say that voting is, in general, dealing with a set of equivalent input cues and producing the output which is approved by most of them. "
    ABSTRACT: Object recognition systems aiming to work in real-world settings should use multiple cues in order to achieve robustness. We present a new cue integration scheme, which extends the idea of cue accumulation to discriminative classifiers. We derive and test the scheme for support vector machines (SVMs), but we also show that it is easily extendible to any large-margin classifier. In the case of one-class SVMs the scheme can be interpreted as a new class of Mercer kernels for multiple cues. Experimental comparison with a probabilistic accumulation scheme is favorable to our method. Comparison with the voting scheme shows that our method may suffer as the number of object classes increases. Based on these results, we propose a recognition algorithm consisting of a decision tree where decisions at each node are taken using our accumulation scheme. Results obtained using this new algorithm compare very favorably to accumulation (both probabilistic and discriminative) and to the voting scheme.
    Computer Vision and Pattern Recognition, 2004. CVPR 2004. Proceedings of the 2004 IEEE Computer Society Conference on; 01/2004
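The excerpt above describes the generic voting idea only in words. The following is a minimal illustrative sketch of that idea, assuming each cue independently proposes a label for the same object and the label backed by the most cues wins; the cue names and the tie-breaking behaviour are assumptions made for the example, not taken from any of the cited papers.

```python
# Minimal sketch of majority voting over multiple cues (illustrative only).
from collections import Counter

def vote(cue_outputs):
    """Return the hypothesis approved by the largest number of cues,
    together with its support count."""
    counts = Counter(cue_outputs)
    winner, support = counts.most_common(1)[0]  # ties broken by insertion order
    return winner, support

# Example: three cues independently classify the same image region.
cue_outputs = ["cup",   # colour cue (hypothetical)
               "cup",   # shape cue (hypothetical)
               "bowl"]  # texture cue (hypothetical)
label, support = vote(cue_outputs)
print(label, support)   # -> cup 2
```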
    • "are the appearance and motion cues, respectively, observed at t. [5] reports an alternative integration technique, known as the " weighted voting " scheme that integrates the likelihoods derived for different cues as a weighted sum. The key difference of this scheme from the former one is that each cue makes an independent decision before "
    ABSTRACT: We present a stochastic tracking algorithm for surveillance videos in which targets are dim and of low resolution. Our tracker uses the particle filter as its basic framework. It has two important novel features: a dynamic motion model consisting of both background and foreground motion parameters, and the adaptive integration of appearance and motion cues in the system observation model when estimating the likelihood functions. Based on these features, the accuracy and robustness of the tracker, two important metrics in surveillance applications, have been improved. We present the results of applying the proposed algorithm to many sequences with different visual conditions; the algorithm always gives satisfactory results, even in some challenging sequences.
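As a companion to the "weighted voting" description quoted above, here is a hedged sketch of integrating per-cue likelihoods as a weighted sum. The cue names (appearance, motion), the weights, and the candidate scores are invented illustration values, not the cited tracker's implementation.

```python
# Illustrative sketch: combine per-cue likelihoods for each candidate state
# as a weighted sum, then pick the candidate with the highest combined score.
import numpy as np

def weighted_voting(likelihoods, weights):
    """likelihoods: dict cue name -> per-candidate likelihoods (same length),
    weights:     dict cue name -> scalar weight.
    Returns the combined score per candidate."""
    n_candidates = len(next(iter(likelihoods.values())))
    combined = np.zeros(n_candidates)
    for cue, values in likelihoods.items():
        combined += weights[cue] * np.asarray(values, dtype=float)
    return combined

# Example: two cues score three candidate target positions (made-up numbers).
likelihoods = {"appearance": [0.7, 0.2, 0.1],
               "motion":     [0.3, 0.6, 0.1]}
weights = {"appearance": 0.6, "motion": 0.4}
scores = weighted_voting(likelihoods, weights)
print(scores)            # combined score per candidate
print(scores.argmax())   # index of the winning candidate
```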