Conference Paper

Active Vision from Multiple Cues.

DOI: 10.1007/3-540-45482-9_20 Conference: Biologically Motivated Computer Vision, First IEEE International Workshop, BMCV 2000, Seoul, Korea, May 15-17, 2000, Proceedings
Source: DBLP


Active vision involves processes for stabilisation and fixation on objects of interest. To achieve robust performance, cue integration and visual processing must be treated as closely coupled. In this paper we discuss methods for the integration of cues and present a unified architecture for active vision. The performance of the approach is illustrated by a few examples.
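
The abstract does not spell out how the individual cues are combined. As a rough illustration of the general idea, the sketch below fuses several normalised cue response maps by a weighted sum and picks the strongest location as the fixation target; the cue names, weights, and winner-take-all selection are assumptions for illustration, not the architecture described in the paper.

```python
# Hypothetical sketch: fusing several cue response maps into a single
# fixation target by weighted voting. Cue names, weights, and the
# winner-take-all selection are illustrative assumptions.
import numpy as np

def fuse_cues(cue_maps, weights):
    """Combine per-pixel cue responses (each normalised to [0, 1])
    into one map by a weighted sum."""
    fused = np.zeros_like(next(iter(cue_maps.values())), dtype=float)
    for name, response in cue_maps.items():
        fused += weights.get(name, 0.0) * response
    return fused

def select_fixation(fused):
    """Pick the image location with the strongest combined response."""
    return np.unravel_index(np.argmax(fused), fused.shape)

if __name__ == "__main__":
    h, w = 120, 160
    rng = np.random.default_rng(0)
    cues = {
        "motion":    rng.random((h, w)),   # e.g. image-differencing response
        "disparity": rng.random((h, w)),   # e.g. near-target disparity evidence
        "colour":    rng.random((h, w)),   # e.g. colour-blob response
    }
    weights = {"motion": 0.5, "disparity": 0.3, "colour": 0.2}
    row, col = select_fixation(fuse_cues(cues, weights))
    print(f"fixate at pixel ({row}, {col})")
```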

Citations
  • "The former typically employs a pan-tilt-zoom (PTZ) camera such that the point of interest is aligned with the optical axis and projected at the image center (fovea) [3], [4]. The latter usually considers a stereo head [5], with the point of interest being foveated by intersecting the optical axes of both cameras at the exact target location (the vergence/fixation point) [5]–[7]. Since the fixation point lies in the horopter [8], many binocular systems use target disparity between retinas as feedback-control signal [9]."
    ABSTRACT: This paper presents a general approach for the simultaneous tracking of multiple moving targets using a generic active stereo setup. The problem is formulated on the plane, where cameras are modeled as “line scan cameras,” and targets are described as points with unconstrained motion. We propose to control the active system parameters in such a manner that the images of the targets in the two views are related by a homography. This homography is specified during the design stage and, thus, can be used to implicitly encode the desired tracking behavior. Such formulation leads to an elegant geometric framework that enables a systematic and thorough analysis of the problem at hand. The benefits of the approach are illustrated by applying the framework to two distinct stereo configurations. In the first case, we assume two pan-tilt-zoom cameras, with rotation and zoom control, which are arbitrarily placed in the working environment. It is proved that such a stereo setup can track up to N = 3 free-moving targets, while assuring that the image location of each target is the same for both views. The second example considers a robot head with neck pan motion and independent eye rotation. For this case, it is shown that it is not possible to track more than N = 2 targets because of the lack of zoom. The theoretical framework is used to derive the control equations, and the implementation of the tracking behavior is described in detail. The correctness of the results is confirmed through simulations and real tracking experiments.
    IEEE Transactions on Robotics 07/2010; 26(3):442-457. DOI: 10.1109/TRO.2010.2047300
  • ABSTRACT: To efficiently solve challenges related to motion-planning problems with dynamics, this paper proposes treating motion planning not just as a search problem in a continuous space but as a search problem in a hybrid space consisting of discrete and continuous components. A multilayered framework is presented which combines discrete search and sampling-based motion planning. This framework is called synergistic combination of layers of planning (SyCLoP) hereafter. Discrete search uses a workspace decomposition to compute leads, i.e., sequences of regions in the neighborhood that guide sampling-based motion planning during the state-space exploration. In return, information gathered by motion planning, such as progress made, is fed back to the discrete search. This combination allows SyCLoP to identify new directions to lead the exploration toward the goal, making it possible to efficiently find solutions, even when other planners get stuck. Simulation experiments with dynamical models of ground and flying vehicles demonstrate that the combination of discrete search and motion planning in SyCLoP offers significant advantages. In fact, speedups of up to two orders of magnitude were obtained for all the sampling-based motion planners used as the continuous layer of SyCLoP.
    IEEE Transactions on Robotics 07/2010; 26(3):469-482. DOI: 10.1109/TRO.2010.2047820
  • ABSTRACT: Segmenting semantically meaningful whole objects from images is a challenging problem, and it becomes especially so without higher level common sense reasoning. In this paper, we present an interactive segmentation framework that integrates image appearance and boundary constraints in a principled way to address this problem. In particular, we assume that small sets of pixels, which are referred to as seed pixels, are labeled as the object and background. The seed pixels are used to estimate the labels of the unlabeled pixels using Dirichlet process multiple-view learning, which leverages 1) multiple-view learning that integrates appearance and boundary constraints and 2) Dirichlet process mixture-based nonlinear classification that simultaneously models image features and discriminates between the object and background classes. With the proposed learning and inference algorithms, our segmentation framework is experimentally shown to produce both quantitatively and qualitatively promising results on a standard dataset of images. In particular, our proposed framework is able to segment whole objects from images given insufficient seeds.
    IEEE Transactions on Image Processing 12/2011; 21(4):2119-2129. DOI: 10.1109/TIP.2011.2181398
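
The citation context quoted in the first entry above notes that many binocular systems use the target's retinal disparity as the feedback signal for vergence control. A minimal proportional-control sketch of that idea follows; the gain, angle limits, sign convention, and the measure_target_disparity helper are illustrative assumptions, not taken from any of the papers listed here.

```python
# Hypothetical sketch: proportional vergence control driven by retinal
# disparity of the tracked target. Gain, limits, and the disparity
# measurement are assumptions for illustration only.

def measure_target_disparity(left_u, right_u):
    """Horizontal disparity of the target between the two retinas
    (pixel column in the left image minus column in the right image)."""
    return left_u - right_u

def update_vergence(vergence_angle, disparity, gain=0.002,
                    min_angle=0.0, max_angle=0.6):
    """One control step: a non-zero disparity means the fixation point
    lies in front of or behind the target, so adjust the vergence angle
    (radians) proportionally to drive the disparity towards zero."""
    vergence_angle += gain * disparity
    return min(max(vergence_angle, min_angle), max_angle)

if __name__ == "__main__":
    vergence = 0.10
    # Fake measurements of the target's column in the left/right images.
    for left_u, right_u in [(90, 70), (86, 72), (82, 76), (80, 79)]:
        d = measure_target_disparity(left_u, right_u)
        vergence = update_vergence(vergence, d)
        print(f"disparity={d:+d} px -> vergence={vergence:.3f} rad")
```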
