Daisy Chaining Based Visual Servo Control Part II: Extensions, Applications and Open Problems
ABSTRACT In this paper, the open problems and applications of a daisy chaining visual servo control strategy are presented. This paper is Part II of Hu et al. (2007), in which a tracking problem is addressed using the daisy chaining strategy. The main idea of the daisy chaining strategy is to use multi-view geometry to relate coordinate frames attached to the moving camera, the moving planar patch, and the desired planar patch specified by an a priori image. Geometric constructs developed for traditional camera-in-hand problems are fused with fixed-camera geometry to develop a set of Euclidean homographies. Based on the homographies, the corresponding rotation and translation components can be extracted for use in the control development. Different from the traditional camera-to-hand and camera-in-hand visual servo control configurations, two cameras are used to construct the homography relationships and estimate the pose of an object modeled as a planar patch (e.g., an unmanned ground vehicle (UGV) or an unmanned air vehicle (UAV)) when the object is out of the field of view (FOV), or when the current and desired poses of the object are not simultaneously within the FOV of a single camera.
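To illustrate the homography decomposition step described in the abstract, the following Python sketch assumes OpenCV, a known camera intrinsic matrix K, and hypothetical matched coplanar feature points pts_current and pts_desired; it shows one common way to recover rotation and scaled translation components from an estimated Euclidean homography. This is a generic illustration of the technique under those assumptions, not the authors' implementation.

```python
import numpy as np
import cv2

# Hypothetical matched image points of coplanar features (N x 2 arrays):
# pts_current: features of the planar patch in the current camera image
# pts_desired: corresponding features in the a priori (desired) image
pts_current = np.array([[100., 120.], [200., 118.], [210., 230.], [105., 235.]])
pts_desired = np.array([[110., 125.], [205., 122.], [215., 228.], [112., 238.]])

# Assumed camera intrinsic matrix (focal lengths and principal point)
K = np.array([[800.,   0., 320.],
              [  0., 800., 240.],
              [  0.,   0.,   1.]])

# Estimate the homography between the two views of the planar patch
H, _ = cv2.findHomography(pts_current, pts_desired, method=cv2.RANSAC)

# Decompose into candidate rotations, translations (up to scale), and plane normals
num_solutions, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)

# In practice the physically consistent solution is selected using the
# positive-depth constraint and any prior knowledge of the plane normal
for R, t, n in zip(rotations, translations, normals):
    print("R =\n", R, "\nt (up to scale) =", t.ravel(), "\nn =", n.ravel())
```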
ABSTRACT: While a Global Positioning System (GPS) is the most widely used sensor modality for aircraft navigation, researchers have been motivated to investigate other navigational sensor modalities because of the desire to operate in GPS-denied environments. Due to advances in computer vision and control theory, monocular camera systems have received growing interest as an alternative/collaborative sensor to GPS systems. Cameras can act as navigational sensors by detecting and tracking feature points in an image. Current methods have a limited ability to relate feature points as they enter and leave the camera field of view (FOV). A vision-based position and orientation estimation method for aircraft navigation and control is described. This estimation method accounts for a limited camera FOV by releasing tracked features that are about to leave the FOV and tracking new features. At each time instant that new features are selected for tracking, the previous pose estimate is updated. The vision-based estimation scheme can provide input directly to the vehicle guidance system and autopilot. Simulations are performed wherein the vision-based pose estimation is integrated with a nonlinear flight model of an aircraft. Experimental verification of the pose estimation is performed using the modeled aircraft.
IEEE Transactions on Aerospace and Electronic Systems, 08/2010.
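As a rough illustration of the feature-handling scheme described above (tracking features, releasing those about to leave the FOV, and updating the pose estimate when new features are selected), the Python sketch below assumes OpenCV's KLT tracker and essential-matrix pose recovery. The function name update_pose, the border margin, and the intrinsic matrix K are illustrative assumptions, not the paper's method.

```python
import numpy as np
import cv2

K = np.array([[800.,   0., 320.],     # assumed camera intrinsics
              [  0., 800., 240.],
              [  0.,   0.,   1.]])
BORDER = 20                           # pixels: features closer than this are "about to leave"

def update_pose(prev_gray, curr_gray, prev_pts, R_total, t_total):
    """Track features into the new frame, estimate the incremental pose,
    chain it onto the running estimate, and replenish features that are
    about to leave the field of view (illustrative sketch only)."""
    # KLT optical flow tracking of the existing feature set
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, prev_pts, None)
    good = status.ravel() == 1
    p0, p1 = prev_pts[good], curr_pts[good]

    # Incremental rotation/translation (translation known only up to scale)
    E, inliers = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=inliers)

    # Chain the increment onto the accumulated pose estimate
    R_total = R @ R_total
    t_total = R @ t_total + t

    # Release features near the image border and track new ones in their place
    h, w = curr_gray.shape
    keep = ((p1[:, 0, 0] > BORDER) & (p1[:, 0, 0] < w - BORDER) &
            (p1[:, 0, 1] > BORDER) & (p1[:, 0, 1] < h - BORDER))
    pts = p1[keep]
    if len(pts) < 50:
        new_pts = cv2.goodFeaturesToTrack(curr_gray, maxCorners=100,
                                          qualityLevel=0.01, minDistance=10)
        if new_pts is not None:
            pts = np.vstack([pts, new_pts.astype(np.float32)])
    return pts, R_total, t_total
```

In such a scheme, the pose increment estimated at each feature handoff is composed with the running estimate, so the navigation output remains continuous even though no single feature is tracked for the entire flight.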