Conference Paper

Plane-to-plane positioning from image-based visual servoing and structured light

Institut d'Informàtica i Aplicacions, University of Girona, Spain
DOI: 10.1109/IROS.2004.1389484. Conference: Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2004), Volume 1
Source: OAI

ABSTRACT In this paper we address the problem of positioning a camera attached to the end-effector of a robotic manipulator so that it becomes parallel to a planar object. This problem has long been studied in visual servoing. Our approach is based on rigidly attaching several laser pointers to the camera, in a configuration designed to produce a suitable set of visual features. The aim of using structured light is not only to simplify the image processing and to allow low-textured objects to be handled, but also to obtain a control scheme with desirable properties such as decoupling, stability, good conditioning and a satisfactory camera trajectory.
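
To make the control principle concrete, here is a minimal sketch of a generic image-based visual servoing step of the kind the abstract refers to: the camera velocity is computed from the error between the current and desired visual features through the pseudo-inverse of an interaction-matrix estimate. This is an illustration only, not the paper's method: the feature values, the interaction-matrix estimate L_hat and the gain below are invented placeholders, whereas the paper derives its features from the images of the laser spots projected onto the plane.

    # Generic image-based visual servoing (IBVS) step -- illustrative only.
    # The feature values and interaction-matrix estimate are placeholders,
    # not the laser-spot features designed in the paper.
    import numpy as np

    LAMBDA = 0.5  # proportional control gain

    def ibvs_velocity(s, s_star, L_hat):
        """Classical IBVS law: v = -lambda * pinv(L_hat) @ (s - s_star).

        s      -- current visual features, shape (k,)
        s_star -- desired visual features, shape (k,)
        L_hat  -- estimate of the interaction matrix, shape (k, 6)
        Returns the camera velocity screw (vx, vy, vz, wx, wy, wz).
        """
        error = s - s_star
        return -LAMBDA * np.linalg.pinv(L_hat) @ error

    # Hypothetical call with three features and a random interaction-matrix
    # estimate, just to show usage; real features would come from the image.
    s      = np.array([0.12, -0.03, 0.40])
    s_star = np.array([0.00,  0.00, 0.35])
    L_hat  = np.random.default_rng(0).normal(size=(3, 6))
    print(ibvs_velocity(s, s_star, L_hat))

In the paper's setting, the point of the structured light is precisely to choose laser-spot features whose interaction matrix is well conditioned and, as far as possible, decoupled, so that a control law of this form behaves well over the whole workspace.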

Cited by:
    • "However, they have limitations in measuring a long-distance motion, relative to the chosen reference frame. As another alternative for remote motion detection, a structured light system illuminates patterns of light instead of placing LED markers on the target [18]–[21]. In the literature, four laser pointers have been arranged in a cross shape facing the target. "
    ABSTRACT: Strong demand for accurate long-range 6-DOF motion sensors has arisen for navigation and object tracking in maritime transportation. Up to now, high-end sensors such as differential global positioning systems, laser trackers, and motion-capture cameras have been used for long-distance sensing despite their high cost. In this paper, we develop a long-range motion-sensing system combining low-cost sensors, including infrared markers, three 1-D laser sensors, and vision cameras. The proposed sensing system, referred to as the three-beam detector, can measure the 6-DOF motion of a dynamic target, even in an outdoor environment, without any prior knowledge of the target's initial condition. Through various static and dynamic experiments, the three-beam detector is validated to have an accuracy of 3 mm at a 30-m distance.
    IEEE Transactions on Industrial Electronics, vol. 60, no. 8, pp. 3386–3395, Aug. 2013. DOI: 10.1109/TIE.2012.2200225
    • "The system is validated in a plane-to-plane task where only three degrees of freedom are considered. Depending of the visual features vector chosen, they demonstrate the decoupling of the interaction matrix only in local conditions (i.e., positions near the desired location) in [44], or the global convergence in [45]. This last paper details the convergence study of the proposed visual servoing scheme. "
    ABSTRACT: Sensors provide robotic systems with the information required to perceive the changes that happen in unstructured environments and to modify their actions accordingly. The robotic controllers which process and analyze this sensory information are usually based on three types of sensors (visual, force/torque and tactile), which correspond to the most widespread robotic control strategies: visual servoing control, force control and tactile control. This paper presents a detailed review of the sensor architectures, algorithmic techniques and applications developed by Spanish researchers to implement these mono-sensor and multi-sensor controllers.
    Sensors, vol. 9, no. 12, Dec. 2009. DOI: 10.3390/s91209689
    • "t) converges to 0 for any initial state since lim t→∞ u(t) = b lim t→∞ a(t) = ∞    ⇒ lim t→∞ e 1 (t) = 0 (29) In [22] "
    ABSTRACT: This paper considers the problem of positioning an eye-in-hand system so that it becomes parallel to a planar object. Our approach to this problem is based on attaching to the camera a structured light emitter designed to produce a suitable set of visual features. The aim of using structured light is not only to simplify the image processing and to allow low-textured objects to be considered, but also to produce a control scheme with desirable properties such as decoupling, convergence, and an adequate camera trajectory. This paper focuses on an image-based approach that achieves decoupling throughout the workspace and for which global convergence is ensured under perfect conditions. The behavior of the image-based approach is shown to be partially equivalent to a 3-D visual servoing scheme, but with better robustness with respect to image noise. The robustness of the approach against calibration errors is demonstrated both analytically and experimentally.
    IEEE Transactions on Robotics, vol. 22, no. 5, pp. 1000–1010, 2006. DOI: 10.1109/TRO.2006.878785
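
For context on the convergence statement quoted above, the standard image-based visual servoing stability argument (a textbook result, not a derivation taken from this paper) reads as follows, where e = s - s* is the feature error, L_s the interaction matrix, \hat{L}_s its estimate and \lambda > 0 the control gain:

    \dot{e} = L_s v, \qquad v = -\lambda \hat{L}_s^{+} e
    \;\Longrightarrow\; \dot{e} = -\lambda L_s \hat{L}_s^{+} e

Taking V = \tfrac{1}{2}\|e\|^2 as a Lyapunov candidate, the error norm decreases whenever L_s \hat{L}_s^{+} > 0. Roughly speaking, the decoupling and global convergence discussed in the citations above amount to designing the laser-spot features so that this condition holds, and the interaction matrix stays well conditioned, over the whole workspace rather than only near the desired pose.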