Article

Active Distance Measurement based on Robust Artificial Markers as a Building Block for a Service Robot Architecture


Abstract

Software architectures for service robots have to support flexible combinations of basic algorithms to form complex goal-directed behaviors. This paper presents a building block for active distance measurement: the Glare Laser Range Scanner Robotics Experiment (GLaRE). The setup and the intended functionality of GLaRE are described, and the more complex algorithms that form modules within GLaRE are presented in detail. An artificial visual marker is chosen as a point of attention for which the distance has to be determined. The detection of the marker is distance- and orientation-invariant and thus suitable for both experimental and practical applications. The overall architectural design of GLaRE is also described.
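The distance measurement described above can be illustrated with the standard pinhole relation Z = f·D/d. The following is a minimal sketch, assuming a fronto-parallel circular marker of known physical diameter; the function name and numeric values are illustrative and not taken from the paper:

```python
# Sketch: estimating distance to a circular marker of known physical
# diameter from its apparent diameter in the image (pinhole model).
# All names and values are illustrative, not from the GLaRE paper.

def marker_distance(focal_length_px: float,
                    marker_diameter_m: float,
                    apparent_diameter_px: float) -> float:
    """Distance Z = f * D / d for a fronto-parallel circular marker."""
    if apparent_diameter_px <= 0:
        raise ValueError("marker not detected")
    return focal_length_px * marker_diameter_m / apparent_diameter_px

# Example: 800 px focal length, a 0.10 m marker seen as 40 px wide
# -> 2.0 m away.
print(marker_distance(800.0, 0.10, 40.0))
```

In practice the apparent diameter would come from the marker-detection stage; the relation degrades for strongly oblique views, which is one reason orientation-invariant detection matters.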


... A visual marker is an artificial object consistent with a known model that is placed into a scene to supply a reference frame. Currently, such artefacts are unavoidable whenever a high level of precision and repeatability in image-based measurement is required, as in the case of vision-driven dimensional assessment tasks such as robot navigation and SLAM [5,8,36], motion capture [2,38], pose estimation [37,39], camera calibration [7,14] and of course in the field of augmented reality [6,40]. ...
Article
Visual marker systems have become a ubiquitous tool to supply a reference frame onto otherwise uncontrolled scenes. Throughout the last decades, a wide range of different approaches have emerged, each with different strengths and limitations. Some tags are optimized to reach a high accuracy in the recovered camera pose; others are based on designs that aim to maximize the detection speed or minimize the effect of occlusion on the detection process. Most of them, however, employ a two-step procedure where an initial homography estimation is used to translate the marker from the image plane to an orthonormal world, where it is validated and recognized. In this paper, we present a general purpose fiducial marker system that performs both steps directly in image-space. Specifically, by exploiting projective invariants such as collinearity and cross-ratios, we introduce a detection and recognition algorithm that is fast, accurate and moderately robust to occlusion. The overall performance of the system is evaluated in an extensive experimental section, where a comparison with a well-known baseline technique is presented. Additionally, several real-world applications are proposed, ranging from camera calibration to projector-based augmented reality.
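The cross-ratio invariant exploited by this abstract's approach can be demonstrated directly: the cross-ratio of four collinear points is preserved by any projective map. A small sketch (the point positions and homography coefficients are arbitrary illustrative choices):

```python
# Sketch: the cross-ratio of four collinear points is invariant under
# a projective transformation -- the property used for recognition
# directly in image space. Values below are illustrative.

def cross_ratio(a, b, c, d):
    """Cross-ratio (A,B;C,D) = (AC/BC) / (AD/BD) for scalar positions."""
    return ((c - a) / (c - b)) / ((d - a) / (d - b))

def homography_1d(x, h=(2.0, 1.0, 0.5, 3.0)):
    """A 1-D projective map x -> (a*x + b) / (c*x + d)."""
    a, b, c, d = h
    return (a * x + b) / (c * x + d)

pts = [0.0, 1.0, 2.0, 4.0]
before = cross_ratio(*pts)
after = cross_ratio(*[homography_1d(x) for x in pts])
print(abs(before - after) < 1e-9)  # True: the projective map preserves it
```

Because the invariant survives the perspective projection of the camera, marker points can be recognized without first rectifying the image via a homography.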
... Malassis and Okutomi [5] use a three-color fiducial to provide pose information. Walthelm and Kluthe [10] measure marker distance based on concentric black and white circular fiducials. Our previous work in [6] utilized another design of a color marker, which was relatively more sensitive to current ...
Conference Paper
This paper presents the design and results of autonomous behaviors for tightly-coupled cooperation in heterogeneous robot teams, specifically for the task of navigation assistance. These cooperative behaviors enable capable, sensor rich ("leader") robots to assist in the navigation of sensor-limited ("simple") robots that have no onboard capabilities for obstacle avoidance or localization, and only minimal capabilities for kin recognition. The simple robots must be dispersed throughout a known, indoor environment to serve as a sensor network. However, because of their navigation limitations, they are unable to autonomously disperse themselves or move to planned sensor deployment positions independently. To address this challenge, we present cooperative behaviors for heterogeneous robots that enable the successful deployment of sensor-limited robots by assistance from more capable leader robots. These heterogeneous cooperative behaviors are quite complex, and involve the combination of several behavior components, including vision-based marker detection, autonomous teleoperation, color marker following in robot chains, laser-based localization, map-based path planning, and ad hoc mobile networking. We present the results of the implementation and extensive testing of these behaviors for deployment in a rigorous test environment. To our knowledge, this is the most complex heterogeneous robot team cooperative task ever attempted on physical robots. We consider it a significant success to have achieved such a high degree of system effectiveness, given the complexity of the overall heterogeneous system.
... Additionally, it cannot provide orientation information of the fiducial. The approach in [12] uses concentric black and white circular fiducials to measure distance. Similarly, Cho and Neumann [13] use concentric multi-ring, multi-size color circular fiducials. ...
Conference Paper
This paper presents an approach for deploying a team of mobile sensor nodes to form a sensor network in indoor environments. The challenge in this work is that the mobile sensor nodes have no ability for localization or obstacle avoidance. Thus, our approach entails the use of more capable "helper" robots that "herd" the mobile sensor nodes into their deployment positions. To extensively explore the issues of heterogeneity in multi-robot teams, we employ the use of two types of helper robots-one that acts as a leader and a second that: 1) acts as a follower and 2) autonomously teleoperates the mobile sensor nodes. Due to limited sensing capabilities, neither of these helper robots can herd the mobile sensor nodes alone; instead, our approach enables the team as a whole to successfully accomplish the sensor deployment task. Our approach involves the use of line-of-sight formation keeping, which enables the follower robot to use visual markers to move the group along the path executed by the leader robot. We present results of the implementation of this approach in simulation, as well as results to date in the implementation on physical robot systems. To our knowledge, this is the first implementation of robot herding using such highly heterogeneous robots, in which no single type of robot could accomplish the sensor network deployment task, even if multiple copies of that robot type were available.
Article
The control of complex sensorimotor systems in unstructured environments presents formidable control challenges. A distributed control approach is presented which constructs behavior on-line by activating combinations of reusable feedback control laws with formal stability and convergence properties drawn from a control basis. This control basis representation solves critical problems related to the size of the search space used to represent sensory and motor policies. Moreover, the predictability of the elemental controllers further reduces the complexity of the composition problem and simplifies the planning of composition policies. A generic control basis is constructed and applied to a 20 degree-of-freedom hand/arm system engaged in autonomous manipulation tasks.
Article
Journal of Experimental and Theoretical Artificial Intelligence (JETAI) 9, 1997, 215-235. Special issue on Architectures for Physical Agents. Mobile robots, if they are to perform useful tasks and become accepted in open environments, must be fully autonomous. Autonomy has many different aspects; here we concentrate on three central ones: the ability to attend to another agent, to take advice about the environment, and to carry out assigned tasks. All three involve complex sensing and planning operations on the part of the robot, including the use of visual tracking of humans, coordination of motor controls, and planning. We show how these capabilities are integrated in the Saphira architecture, using the concepts of coordination of behavior, coherence of modeling, and communication with other agents. This paper reports work done while this author was at SRI International.
Conference Paper
The application area of the presented work is support for elderly or disabled people, as well as rehabilitation. The goal is to increase the autonomy of the user in need of assistance. The semi-autonomous manipulator is used for interactions with the environment, e.g. for picking up and manipulating objects. The following sections present the method of virtual points, with which the image-based control of a semi-autonomous service robot for manipulation tasks was realized. The realization is presented for an industrial articulated-arm robot that, controlled by an external stereo camera system, is used for manipulation tasks with three degrees of freedom.
Conference Paper
By using a closed visual control loop to steer a mobile robot or a manipulator, a calibration-robust system can be realized. If, furthermore, a zoom camera is used instead of a fixed-focal-length camera, additional advantages arise, but new problems appear as well. This contribution discusses these advantages and problems, derives an image-based controller, and presents results from an application with a real system. A new control concept is introduced that keeps the features of an object in the image nearly constant during the control process, making object identification more robust. To this end, the classical image-based control loop is extended by a compensation element, and an additional control loop for regulating the object's size in the image is introduced.
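The abstract above describes regulating an object's apparent size in the image. A minimal sketch of such a loop, assuming a simple pinhole model and a proportional update of the focal length; the gain, model, and values are illustrative assumptions, not the paper's controller:

```python
# Sketch: a proportional controller adjusts a (simulated) focal length
# so that an object's apparent size in the image converges to a
# set-point. Pinhole model, gain, and values are illustrative.

def regulate_size(f: float, target_px: float, object_m: float,
                  distance_m: float, gain: float = 0.5,
                  steps: int = 50) -> float:
    """Drive apparent size s = f * object_m / distance_m toward
    target_px by iteratively adjusting the focal length f."""
    for _ in range(steps):
        s = f * object_m / distance_m                        # current size
        f += gain * (target_px - s) * distance_m / object_m  # P-update
    return f * object_m / distance_m

# Starting far from the set-point, the loop converges to ~100 px.
print(round(regulate_size(f=400.0, target_px=100.0,
                          object_m=0.2, distance_m=2.0), 3))
```

Keeping the apparent size constant in this way is what makes feature-based object identification robust against scale changes during the servoing motion.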
Conference Paper
This paper describes how tracking and target selection are used in two behavior systems of the XT-1 vision architecture for mobile robots. The first system is concerned with active tracking of moving targets and the second is used for visually controlled spatial navigation. We overview the XT-1 architecture and describe the role of expectation-based template matching for both target tracking and navigation. The subsystems for low-level processing, attentional processing, single feature processing, spatial relations, and place/object recognition are described, and we present a number of behaviors that can make use of the different visual processing stages. The architecture, which is inspired by biology, has been successfully implemented in a number of robots, which are also briefly described.
Conference Paper
Describes the developed architecture for sensor information integration in a mobile robot. Hardware and software modules work locally with sensor signals, distributing the fitness functions over a parallel processing scheme. This organization tries to overcome real-time constraints when dealing with dynamic system-environments. A distributed network of Dallas DS5000 microprocessors and T800 transputers, assisted by several DSPs, is used for the first stages of sensor-signal accommodation and the generation of low-level survival behaviors. Human knowledge is transferred to the perception modules as fuzzy logic inferences located on each processor. A set of decision rules accounts for efficient decision making using a small number of linguistic terms that condense rough sensor data by means of membership function representation. System experience can be modulated with existing algorithms according to some pre-defined safety conditions.
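The condensation of rough sensor data into a few linguistic terms via membership functions, as described above, can be sketched with triangular memberships; the term names and break-points below are illustrative assumptions:

```python
# Sketch: condensing a raw range reading into fuzzy linguistic terms
# via triangular membership functions. Terms and break-points are
# illustrative, not taken from the cited architecture.

def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership: 0 at a, peaking at 1 at b, back to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_range(d: float) -> dict:
    """Map a range reading in metres onto three linguistic terms."""
    return {
        "near":   tri(d, -0.5, 0.0, 1.0),
        "medium": tri(d,  0.5, 1.5, 2.5),
        "far":    tri(d,  2.0, 3.0, 5.5),
    }

print(fuzzify_range(1.0))  # -> {'near': 0.0, 'medium': 0.5, 'far': 0.0}
```

Decision rules then operate on these few graded terms instead of raw sensor values, which is what keeps the per-processor inference cheap.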
Article
Is there a robust basis for dexterous manipulation tasks? This approach relies on reusable control laws to put together manipulation strategies online. A demonstration is presented that suggests that the approach scales to the complexity of manipulation tasks. The compact control basis representation and the predictable behavior of the constituent controllers greatly enhance the construction of correct composition policies. This predictability allows reasoning about end-to-end problem solving behavior, which is not supported by methods employing less formal behavioral specifications. In those methods the designer must determine the composition policy, or the system must find it through random exploration. Our approach opens the composition problem to a large variety of control, planning, and machine learning methods. We are investigating formal methods that automatically generate composition policies from abstract task descriptions provided by the user. The generic character of the control basis not only improves generalization across task domains, but also appears to improve generalization across a variety of hardware platforms.
Article
Research on Autonomous Mobile Systems includes disciplines spanning almost every field in engineering, most of which are not specific to the field of autonomous mobile systems. On the contrary, we find that most research relevant to Autonomous Mobile Systems is not directed towards this field in particular; rather, it is directed towards some aspect of autonomy, mobility, or systems design. It is therefore hard to find places where researchers cover every aspect of the field. This paper aims at giving an overview of the current issues in the area of Mobile Autonomous Systems. It is based on literature studies and site visits to leading research laboratories around the world, including Europe, the US, and Japan. Since research related to this field is vast, we have concentrated on issues that we believe form the fundamental base, some of which can be said to be well studied and far advanced, while others are such that researchers are still struggling to grasp...
A. Longacre. Emerging Barcode Symbologies. In AIM International, Technical Review, pages 61-67, 1996.
B. Mertsching, M. Bollmann. Visual Attention and Gaze Control for an Active Vision System. In N. Kasabov, R. Kozma, et al., editors, Progress in Connectionist-Based Information Systems, pages 76-79. Springer, 1997.
G. Richter, F. Smieja, U. Beyer. Integrative architecture of the autonomous hand-eye robot JANUS. In International Symposium on Computational Intelligence in Robotics and Automation (CIRA '97), pages 382-389, Los Alamitos, 1997.
G. Wasson, R. P. Bonasso, D. Kortenkamp. Integrating Active Perception with an Autonomous Robot Architecture. In Proceedings of the 2nd International Conference on Autonomous Agents, pages 325-331, 1998.
B. H. Yoshimi, P. Allen. Closed-Loop Visual Grasping and Manipulation. http://www.cs.columbia.edu/robotics/publications/, 1996.