Robotics and Autonomous Systems

Published by Elsevier BV

Print ISSN: 0921-8890

Articles


The Initial Development of Object Knowledge by a Learning Robot
November 2008 · 89 Reads

We describe how a robot can develop knowledge of the objects in its environment directly from unsupervised sensorimotor experience. The object knowledge consists of multiple integrated representations: trackers that form spatio-temporal clusters of sensory experience, percepts that represent properties for the tracked objects, classes that support efficient generalization from past experience, and actions that reliably change object percepts. We evaluate how well this intrinsically acquired object knowledge can be used to solve externally specified tasks including object recognition and achieving goals that require both planning and continuous control.

Comparison of contact sensor localization abilities during manipulation

September 1995 · 41 Reads

This paper presents an experimental comparison of tactile array versus force-torque sensing for localizing contact during manipulation. The manipulation tasks involved rotating and translating objects using a planar two fingered manipulator. A pin and a box were selected as limiting cases of point and line contact against a cylindrical robot finger tip. Force-torque contact sensing results suffered from difficulties in calibration, transient forces, and low grasp force. Tactile array sensing was immune to these problems, and the effect of shear loading was only noticeable for a simple centroid algorithm. The results show that with care, both of these sensing schemes can determine the contact location within a millimeter during real manipulation tasks

Path planning and guidance techniques for an autonomous mobile cleaning robot

October 1994 · 128 Reads

In the past, mobile robot research was often focused on various kinds of point-to-point transportation tasks. Mobile robot application in service tasks, however, requires quite different path planning and guidance approaches. This paper introduces and discusses in detail specific planning and guidance techniques for a mobile floor-cleaning robot. A kinematic and geometric model of the robot and the cleaning units, as well as a 2D map of the indoor environment, are used for planning an appropriate cleaning path. The path is represented by a concatenation of two kinds of typical motion patterns. Each pattern is defined by a sequence of discrete Cartesian intermediate goal frames. These frames represent the position and orientation of the vehicle and must be translated into motion commands for the robot. The steps of this semi-automatic path planning system are illustrated by a typical cleaning environment. Vehicle guidance includes execution of the planned motion commands, estimation of the robot location, path tracking, as well as detection of and reaction to (isolated) obstacles. For location estimation, a least-squares fitting of corresponding geometric contours from the 2D environment map and geometric 2D sensor data is used. Obstacle detection is accomplished by testing whether geometric 2D sensor data belong to the preplanned cleaning path. Path planning and parts of the developed vehicle guidance system have been tested with the experimental mobile robot MACROBE. Results reported in this paper demonstrate the efficiency of the described planning, location estimation and path tracking procedures in basic floor-cleaning tasks.

Q-learning of complex behaviors on a six-legged walking machine

November 1997 · 66 Reads

We present work on a six-legged walking machine that uses a hierarchical version of Q-learning (HQL) to learn both the elementary swing and stance movements of individual legs as well as the overall coordination scheme to perform forward movements. The architecture consists of a hierarchy of local controllers implemented in layers. The lowest layer consists of control modules performing elementary actions, like moving a leg up, down, left or right, to achieve the elementary swing and stance motions for individual legs. The next level consists of controllers that learn to perform more complex tasks, like forward movement, by using the previously learned, lower level modules. On the third and highest layer of the architecture presented here, the previously learned complex movements are themselves reused to achieve goals in the environment using external sensory input. The work is related to similar, although simulation-based, work by Lin (1993) on hierarchical reinforcement learning and Singh (1994) on compositional Q-learning. We report on the HQL architecture as well as on its implementation on the walking machine SIR ARTHUR. Results from experiments carried out on the real robot are reported to show the applicability of the HQL approach to real world robot problems.
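
As a concrete illustration of the layered idea, the toy sketch below lets a tabular Q-learner at the higher level choose among two stand-in lower-level movement modules in a one-dimensional corridor. The environment, module names and learning parameters are invented for the example and are not taken from the SIR ARTHUR implementation.

```python
import random

N_STATES, GOAL = 6, 5                 # toy 1D corridor; the goal is the rightmost cell
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1     # learning rate, discount, exploration (made-up values)

# Lower-level modules (assumed to be already learned): each maps a state to a new state.
def swing_forward(s):
    return min(s + 1, N_STATES - 1)

def stance_backward(s):
    return max(s - 1, 0)

modules = [swing_forward, stance_backward]
Q = [[0.0] * len(modules) for _ in range(N_STATES)]

for episode in range(200):
    s = 0
    while s != GOAL:
        # epsilon-greedy choice among the lower-level modules
        if random.random() < EPS:
            a = random.randrange(len(modules))
        else:
            a = max(range(len(modules)), key=lambda i: Q[s][i])
        s_next = modules[a](s)                      # execute the chosen module
        r = 1.0 if s_next == GOAL else -0.01        # sparse goal reward, small step cost
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s_next]) - Q[s][a])
        s = s_next

print("greedy module per state:", [max(range(len(modules)), key=lambda i: Q[s][i]) for s in range(N_STATES)])
```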

Visual behaviours for binocular tracking

November 1997 · 46 Reads

This paper presents a binocular tracking system based on the integration of visual behaviours. Biologically motivated behaviours, vergence and pursuit, cooperate as parallel, complementary and highly coupled processes in the tracking system, simplifying the acquisition of perceptual information and system modeling and control. The use of a space-variant image representation and low-level visual cues as feedback signals in a closed-loop control architecture allows real-time and reliable performance for each behaviour, despite the low precision of the algorithms and modeling errors. The behaviours are integrated and the overall system is implemented in a stereo head running in real time (12.5 Hz), without any specific processing hardware. Results are presented for objects of different shapes and motions, illustrating that tracking can be robustly achieved by the cooperation of purposively designed behaviours, tuned to specific subgoals.

The ties that bind: motion planning for multiple tethered robots

June 1994 · 48 Reads

An algorithm for motion planning for multiple mobile tethered robots in a common planar environment is presented. The tethers of the robots are flexible cables that can be pushed and bent by other robots during their motion. Given the start and target positions of all robots and their cables, the objective is to design a sequential motion strategy for the robots that will not entangle the robot tethers. An algorithm is described that achieves this objective while generating an ordering of the robots that produces reasonably short paths. The algorithm's complexity is O(n⁴), where n is the number of robots.

A self-calibration approach to extrinsic parameter estimation of stereo cameras

June 1994 · 32 Reads

A self-calibration technique is proposed in this paper to estimate extrinsic parameters of a stereo camera system. This technique does not require external 3D measurements of precision calibration points. Furthermore, it is conceptually simple and easy to implement. It has applications in such areas as autonomous vehicle navigation, robotics and computer vision. The proposed approach relies solely on distance measurements of a fixed-length object, say a stick. While the object is moved in 3D space, the image coordinates of the object end points are extracted from the image sequence. A cost function that relates unknown parameters to measurement residuals is formulated. A nonlinear least squares algorithm is then applied to compute the parameters by minimizing the cost function, using the measured image coordinates and the known length of the object. Simulation studies in this paper answer questions such as the number of iterations needed for the algorithm to converge, the number of measurements needed for a robust estimation, singularity cases, and noise sensitivities of the algorithm.

Localization and classification of target surfaces using two pairs of ultrasonic sensors

February 1999 · 48 Reads

Ultrasonic sensors have been widely used in recognizing the working environment for a mobile robot. However, their intrinsic problems, such as the specular reflection, the wide beam angle, and the slow propagation velocity, require an excessive number of sensors to be integrated to achieve the various sensing goals. This paper proposes a new measurement scheme which uses only two sets of ultrasonic sensors to determine the location and the type of target surface. By measuring the time difference between the returned signals from the target surface, which are generated by two transmitters fired 1 ms apart, the scheme classifies the type and determines the pose of the target surface. Since the proposed sensor system uses only the two sets of ultrasonic sensors to recognize and localize the target surface, it significantly simplifies the sensing system and reduces the signal processing time so that the working environment can be recognized in real time.
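
The following sketch illustrates only the underlying geometry of localizing a point-like reflector (such as an edge) from two ultrasonic range readings taken over a known baseline, by intersecting the two range circles. The baseline and readings are invented values, and the paper's two-transmitter timing scheme and its plane/corner/edge classification are not reproduced.

```python
import math

def localize_edge(r1, r2, baseline):
    """Sensors at (0, 0) and (baseline, 0); returns reflector (x, y) with y >= 0."""
    x = (r1**2 - r2**2 + baseline**2) / (2.0 * baseline)
    y2 = r1**2 - x**2
    if y2 < 0:
        raise ValueError("range readings are inconsistent with the baseline")
    return x, math.sqrt(y2)

# Hypothetical readings: 1.02 m and 0.98 m from two sensors 0.20 m apart.
x, y = localize_edge(r1=1.02, r2=0.98, baseline=0.20)
print(f"reflector at x={x:.3f} m, y={y:.3f} m (frame of the first sensor)")
```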

Maintaining a common coordinate system for a group of robots based on vision

November 2003 · 35 Reads

This work presents a novel approach to the problem of establishing and maintaining a common coordinate system for a group of robots. A camera system mounted on top of a robot and vision algorithms are used to calculate the relative position of each surrounding robot. The observed movement of each robot is compared to the reported movement, which is sent over a communication link. From this comparison a coordinate transformation is calculated. The algorithm was tested in simulation and is currently being implemented on a real robot system. Preliminary results of real-world experiments are presented.
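
A minimal sketch of the core computation implied above: given a few positions of a neighbouring robot as observed in our own frame and the same positions as reported by that robot in its odometry frame, a 2D rigid transform between the two coordinate systems can be recovered by least squares. The point values are synthetic and the vision pipeline itself is not reproduced.

```python
import numpy as np

def fit_rigid_2d(reported, observed):
    """Least-squares rotation R and translation t with observed ≈ R @ reported + t (Kabsch)."""
    P, Q = np.asarray(reported, float), np.asarray(observed, float)
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:            # guard against a reflection solution
        Vt[-1] *= -1
        R = (U @ Vt).T
    t = Q.mean(0) - R @ P.mean(0)
    return R, t

# Synthetic example: the true frame offset is a 30-degree rotation plus a translation.
reported = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (2.0, 1.0)]
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
observed = [tuple(R_true @ np.array(p) + np.array([0.5, -0.2])) for p in reported]

R, t = fit_rigid_2d(reported, observed)
print("recovered rotation (deg):", np.degrees(np.arctan2(R[1, 0], R[0, 0])), "translation:", t)
```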

A rapidly deployable manipulator system

May 1996 · 36 Reads

A rapidly deployable manipulator system combines the flexibility of reconfigurable modular hardware with modular programming tools, allowing the user to rapidly create a manipulator which is custom-tailored for a given task. This article describes two main aspects of such a system, namely, the reconfigurable modular manipulator system (RMMS) hardware and the corresponding control software

Embodied evolution: Embodying an evolutionary algorithm in a population of robots

February 1999 · 49 Reads

We introduce Embodied Evolution (EE) as a methodology for the automatic design of robotic controllers. EE is an evolutionary robotics (ER) technique that avoids the pitfalls of the simulate-and-transfer method, allows the speed-up of evaluation time by utilizing parallelism, and is particularly suited to future work on multi-agent behaviors. In EE, an evolutionary algorithm is distributed amongst and embodied within a population of physical robots that reproduce with one another while situated in the task environment. We have built a population of eight robots and successfully implemented our first experiments. The controllers evolved by EE compare favorably to hand-designed solutions for a simple task. We detail our methodology, report our initial results, and discuss the application of EE to more advanced and distributed robotics tasks

A multisine approach for trajectory optimization based on information gain

February 2002 · 60 Reads

This paper presents a multisine approach for trajectory optimization based on information gain, with distance and orientation sensing to known beacons. It addresses the problem of active sensing, i.e. the selection of a robot motion or sequence of motions, which make the robot arrive in its desired goal configuration (position and orientation) with maximum accuracy, given the available sensor information. The optimal trajectory is parameterized as a linear combination of sine functions. An appropriate optimality criterion is selected which takes into account various requirements (e.g. maximum accuracy and minimum time). Several constraints can be formulated, e.g. with respect to collision avoidance. The optimal trajectory is then determined by numerical optimization techniques. The approach is applicable to both nonholonomic and holonomic robots. Its effectiveness is illustrated here for a nonholonomic wheeled mobile robot (WMR) in an environment with and without obstacles.
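
A small sketch of the parameterization idea: the trajectory is written as a straight-line segment between start and goal plus a linear combination of sine terms that vanish at both endpoints, and a scalar criterion can then be evaluated over the resulting path. The coefficients, and the use of path length as a stand-in for the paper's information-gain criterion, are assumptions made for the example.

```python
import numpy as np

def multisine_path(start, goal, coeffs, n=200):
    """coeffs: (2, K) amplitudes; the k-th term is coeffs[:, k] * sin((k+1) * pi * s)."""
    s = np.linspace(0.0, 1.0, n)[:, None]                        # normalized time in [0, 1]
    base = start + (goal - start) * s                            # straight-line term
    K = coeffs.shape[1]
    ripple = np.sin(np.pi * s * np.arange(1, K + 1)) @ coeffs.T  # vanishes at s = 0 and s = 1
    return base + ripple

def path_length(path):
    return np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1))

start, goal = np.array([0.0, 0.0]), np.array([4.0, 2.0])
coeffs = np.array([[0.3, 0.0], [0.0, -0.2]])                     # made-up x/y amplitudes of 2 terms
path = multisine_path(start, goal, coeffs)
print("endpoints preserved:", path[0], path[-1], "length:", round(float(path_length(path)), 3))
```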

Spatial learning with perceptually grounded representations

November 1997 · 28 Reads

The goal of this paper is to develop the foundation for spatial navigation without objective representations. Rather than building the spatial representations on a Euclidean space, a weaker conception of space is used which has a closer connection to perception. A type of spatial representation is described that uses perceptual information directly to define regions in space. By combining such regions, it is possible to derive a number of useful spatial representations such as place-fields, paths and topological maps. Compared to other methods, the representations of the presented approach have the advantage that they are always grounded in the perceptual abilities of the robot, and are thus more likely to function correctly.

Real-time visual system for interaction with a humanoid robot

February 2001 · 44 Reads

We describe a real-time visual system that enables a humanoid robot to learn from and interact with humans. The core of the visual system is a probabilistic tracker that uses shape and color information to find relevant objects in the scene. Multiscale representations, windowing and masking are employed to accelerate the data processing. The perception system is directly coupled with the motor control system of our humanoid robot DB. We present an example of on-line interaction with a humanoid robot: mimicking of human hand motion. The generation of humanoid robot motion based on the human motion is accomplished in real-time. The study is supported by experimental results on DB

A vision system for object verification and localization based on local features

February 1999 · 30 Reads

An object verification and localization system should answer the question of whether an expected object is present in an image or not, i.e. verification, and if present where it is located. Such a system would be very useful for mobile robots for example for landmark recognition or for the fulfilment of certain tasks. In this paper we present an object verification and localization system specially adapted to the needs of mobile robots. The object model is based on a collection of local features derived from a small neighbourhood around automatically detected interest points. The learned representation of the object is then matched with the image under consideration. The tests, based on 90 images, showed a very satisfying tolerance to scale changes of up to 25%, to viewpoint variations of 20 degrees, to occlusion of up to 80% and to major background changes as well as to local and global illumination changes. The tests also showed that the verification capabilities are very good and that similar objects did not trigger any false verification

Recognising plants with ultrasonic sensing for mobile robot navigation

February 1999 · 46 Reads

Mobile robots navigate through many environments that include plants. A sensor that can recognise plants would be useful for navigation in these environments. Two problems make plant sensing difficult: plant similarity and plant asymmetry with rotation. A CTFM (continuously transmitted frequency modulated) ultrasonic sensor produces a signal that contains information about the geometric structure of plants. Correlation of echoes from many orientations shows that plants can be recognised with sufficient accuracy for navigation.

Design philosophy for service robots

November 1995 · 43 Reads

The purpose of this paper is to present our design philosophy for service robotics research and development and to describe our current efforts along this line. Our approach begins with a discussion of the role of service robotics and some features that are unique to service robotics. We then describe our design philosophy, which emphasizes compromise and practicality in design. We will use this philosophy in the design and integration of a new service robot system based on ISAC and HERO. ISAC is a stationary, voice-operated service robot designed to feed physically challenged individuals. HERO is a small mobile robot integrated into the system to provide new functionality for the user. We will make use of our design philosophy to solve some of the robot navigation problems and describe how our approach will help us solve these problems in an efficient manner. Some problems will be approached by a technical solution, and other problems will be solved through an expanded user interface and an appeal to the intelligence of the user of the system. Performance of a useful service with limited intervention from a user at a reasonable cost is our goal.

Learning Compact 3D Models of Indoor and Outdoor Environments with a Mobile Robot

July 2003 · 346 Reads

This paper presents an algorithm for full 3D shape reconstruction of indoor and outdoor environments with mobile robots. Data is acquired with laser range finders installed on a mobile robot. Our approach combines efficient scan matching routines for robot pose estimation with an algorithm for approximating environments using flat surfaces. On top of that, our approach includes a mesh simplification technique to reduce the complexity of the resulting models. In extensive experiments, our method is shown to produce accurate models of indoor and outdoor environments that compare favorably to other methods.

Robox at Expo.02: A Large Scale Installation of Personal Robots

March 2003 · 202 Reads
In this paper we present Robox, a mobile robot designed for operation in a mass exhibition, and our experience with its installation at the Swiss National Exhibition Expo.02. Robox is a fully autonomous mobile platform with unique multi-modal interaction capabilities, a novel approach to global localization using multiple Gaussian hypotheses, and powerful obstacle avoidance. Eleven Robox robots ran for 12 hours daily from May 15 to October 20, 2002, traveling more than 3315 km and interacting with 686,000 visitors.

Modeling of Multi-Agent Market Systems in the Presence of Uncertainty: The Case of Information Economy. Robotics and Autonomous Systems 24, 93-113

September 1998 · 42 Reads

We discuss some issues involved in modeling of complex systems composed of dynamically interacting agents. We describe a prototype of a simulation environment created for modeling of such systems with the aim of evaluating strategies of enterprises in the information economy, but applicable to general multi-agent systems. A case study is presented along with the mathematical description of the multi-agent systems.

Fig. 1. The Nomad 200 mobile robot. A flux gate compass was used to keep the turret, and therefore the sensors, at a constant orientation. 
Fig. 2. (a) Raw odometry. (b) Compass-based odometry. The accumulated rotational drift in the robot’s raw odometry was removed on-line using the compass sense. 
Fig. 3. Example occupancy grid and histograms. Occupied cells are shown in black, empty cells in white and unknown cells in grey. 
Fig. 4. Matching the x and y histograms. The new histograms are convolved with the stored histograms for each place in the robot’s map to find the best match. 
Fig. 5. Left: environment C in Table 1. Right: retrospectively corrected odometer data used for performance evaluation (see also Fig. 2). 

Nehmzow, U.: Mobile Robot Self-Localisation using Occupancy Histograms and a Mixture of Gaussian Location Hypotheses. Robotics and Autonomous Systems 34(2-3), 117-129

February 2001 · 442 Reads

The topic of mobile robot self-localisation is often divided into the sub-problems of global localisation and position tracking. Both are now well understood individually, but few mobile robots can deal simultaneously with the two problems in large, complex environments. In this paper, we present a unified approach to global localisation and position tracking which is based on a topological map augmented with metric information. This method combines a new scan matching technique, using histograms extracted from local occupancy grids, with an efficient algorithm for tracking multiple location hypotheses over time. The method was validated with experiments in a series of real world environments, including its integration into a complete navigating robot. The results show that the robot can localise itself reliably in large, indoor environments using minimal computational resources.
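
A rough sketch of the histogram-matching ingredient described above: occupied cells of a local occupancy grid are projected onto x and y histograms, and a newly acquired histogram is cross-correlated against a stored one to estimate the relative shift that matches best (compare Fig. 4 above). The grids are tiny synthetic examples, not the paper's data.

```python
import numpy as np

def xy_histograms(grid):
    """grid: 2D occupancy array (1 = occupied); returns per-column and per-row occupancy counts."""
    return grid.sum(axis=0), grid.sum(axis=1)

def best_shift(stored, new):
    """Offset of `new` relative to `stored` that maximizes their cross-correlation."""
    corr = np.correlate(new, stored, mode="full")
    return int(np.argmax(corr)) - (len(stored) - 1)

stored_grid = np.zeros((20, 20))
stored_grid[5:15, 8] = 1           # a wall segment along y
stored_grid[5, 8:16] = 1           # a wall segment along x
new_grid = np.roll(stored_grid, shift=3, axis=1)   # the same place seen after the robot drifted in x

hx_stored, _ = xy_histograms(stored_grid)
hx_new, _ = xy_histograms(new_grid)
print("estimated x shift (cells):", best_shift(hx_stored, hx_new))
```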

Barret, C.: Sensor-based Navigation of a Mobile Robot in an Indoor Environment. Robotics and Autonomous Systems 38, 1-18

January 2002 · 115 Reads

The work presented in this paper deals with the problem of the navigation of a mobile robot either in an unknown indoor environment or in a partially known one. A navigation method in an unknown environment based on the combination of elementary behaviors has been developed. Most of these behaviors are achieved by means of fuzzy inference systems. The proposed navigator combines two types of obstacle avoidance behaviors, one for the convex obstacles and one for the concave ones. The use of zero-order Takagi–Sugeno fuzzy inference systems to generate the elementary behaviors such as “reaching the middle of the collision-free space” and “wall-following” is quite simple and natural. However, one can always fear that the rules deduced from a simple human expertise are more or less sub-optimal. This is why we have tried to obtain these rules automatically. A technique based on a back-propagation-like algorithm is used which permits the on-line optimization of the parameters of a fuzzy inference system, through the minimization of a cost function. This last point is particularly important in order to extract a set of rules from the experimental data without having recourse to any empirical approach. In the case of a partially known environment, a hybrid method is used in order to exploit the advantages of global and local navigation strategies. The coordination of these strategies is based on a fuzzy inference system by an on-line comparison between the real scene and a memorized one. The planning of the itinerary is done by visibility graph and A* algorithm. Fuzzy controllers are achieved, on the one hand, for the following of the planned path by the virtual robot in the theoretical environment and, on the other hand, for the navigation of the real robot when the real environment is locally identical to the memorized one. Both methods have been implemented on the miniature mobile robot Khepera®, which is equipped with rough sensors. The good results obtained illustrate the robustness of a fuzzy logic approach with regard to sensor imperfections.

Keypoint design and evaluation for place recognition in 2D lidar maps

December 2009 · 392 Reads

We address the place recognition problem, which we define as the problem of establishing whether an observed location has been previously seen, and if so, determining the transformation aligning the current observations to an existing map. In the contexts of robot navigation and mapping, place recognition amounts to globally localizing a robot or map segment without being given any prior estimate. An efficient method of solving this problem involves first selecting a set of keypoints in the scene which store an encoding of their local region, and then utilizing a sublinear-time search into a database of keypoints previously generated from the global map to identify places with common features. We present an algorithm to embed arbitrary keypoint descriptors in a reduced-dimension metric space, in order to frame the problem as an efficient nearest neighbor search. Given that there are a multitude of possibilities for keypoint design, we propose a general methodology for comparing keypoint location selection heuristics and descriptor models that describe the region around the keypoint. With respect to selecting keypoint locations, we introduce a metric that encodes how likely it is that the keypoint will be found in the presence of noise and occlusions during mapping passes. Metrics for keypoint descriptors are used to assess the distinguishability between the distributions of matches and non-matches and the probability the correct match will be found in an approximate k-nearest neighbors search. Verification of the test outcomes is done by comparing the various keypoint designs on a kilometers-scale place recognition problem. We apply our design evaluation methodology to three keypoint selection heuristics and six keypoint descriptor models. A full place recognition system is presented, including a series of match verification algorithms which effectively filter out false positives. Results from city-scale and long-term mapping problems illustrate our approach for both offline and online SLAM, map merging, and global localization and demonstrate that our algorithm is able to produce accurate maps over trajectories of hundreds of kilometers.
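
The retrieval step can be pictured as a nearest-neighbour query against a database of stored keypoint descriptors, here with a simple distance-ratio check standing in for the paper's match-verification stage. The descriptors below are random vectors and the sublinear-time index is replaced by brute force, so this is only an illustration of the matching idea.

```python
import numpy as np

rng = np.random.default_rng(0)
database = rng.normal(size=(500, 16))                 # stored keypoint descriptors (16-D, synthetic)
query = database[42] + 0.05 * rng.normal(size=16)     # a noisy re-observation of keypoint 42

def match(query, database, ratio=0.8):
    """Return the index of the nearest descriptor, or None if the match is ambiguous."""
    d = np.linalg.norm(database - query, axis=1)
    best, second = np.argsort(d)[:2]
    return int(best) if d[best] < ratio * d[second] else None

print("matched database keypoint:", match(query, database))
```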

Foveated active tracking with redundant 2D motion parameters

June 2002 · 38 Reads

This work presents a real-time active vision tracking system based on log-polar image motion estimation with 2D geometric deformation models. We present a very efficient parametric motion estimation method, where most computation can be done offline. We propose a redundant parameterization for the geometric deformations, which improves the convergence range of the algorithm. A foveated image representation provides extra computational savings and attenuation of background effects. A proper choice of motion models and a hierarchical organization of the iterations provide additional robustness. We present a fully integrated system with real-time performance and robustness to moderate deviations from the assumed deformation models.

Fusion of 2D and 3D sensor data for articulated body tracking

March 2009 · 106 Reads

In this article, we present an approach for the fusion of 2D and 3D measurements for model-based person tracking, also known as Human Motion Capture. The applied body model is defined geometrically with generalized cylinders, and is set up hierarchically with connecting joints of different types. The joint model can be parameterized to control the degrees of freedom, adhesion and stiffness. This results in an articulated body model with constrained kinematic degrees of freedom. The fusion approach incorporates this model knowledge together with the measurements, and tracks the target body iteratively with an extended Iterative Closest Point (ICP) approach. Generally, the ICP is based on the concept of correspondences between measurements and model, which is normally exploited to incorporate 3D point cloud measurements. The concept has been generalized to also represent and incorporate 2D image-space features. Together with the 3D point cloud from a 3D time-of-flight (ToF) camera, arbitrary features derived from 2D camera images are used in the fusion algorithm for tracking of the body. This gives complementary information about the tracked body, enabling not only tracking of depth motions but also turning movements of the human body, which is normally a hard problem for markerless human motion capture systems. The resulting tracking system, named VooDoo, is used to track humans in a Human–Robot Interaction (HRI) context. We rely only on sensors on board the robot, i.e. the color camera, the ToF camera and a laser range finder. The system runs in real time (~20 Hz) and is able to robustly track a human in the vicinity of the robot.
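
The correspondence-then-align loop that ICP builds on can be sketched in a few lines for the plain 3D point-to-point case; the articulated body model, the joint constraints and the 2D feature correspondences of the paper are not reproduced, and the point clouds below are synthetic.

```python
import numpy as np

def best_rigid(src, dst):
    """Least-squares rotation R and translation t with dst ≈ src @ R.T + t (Kabsch)."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = (U @ Vt).T
    return R, dc - R @ sc

def icp(src, model, iters=30):
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # correspondences: closest model point for every current source point (brute force)
        idx = np.argmin(np.linalg.norm(cur[:, None, :] - model[None, :, :], axis=2), axis=1)
        R, t = best_rigid(cur, model[idx])
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

model = np.random.default_rng(1).normal(size=(200, 3))
ang = np.deg2rad(10.0)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0,          0.0,         1.0]])
src = (model - np.array([0.10, -0.05, 0.05])) @ R_true   # the model cloud seen from a shifted pose

R, t = icp(src, model)
print("max alignment error after ICP:", np.abs(src @ R.T + t - model).max())
```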

Kumar Pratihar, D.: Dynamically balanced optimal gaits of a ditch-crossing biped robot. Robot. Auton. Syst. 58(4), 349-361

April 2010 · 209 Reads

This paper deals with the generation of dynamically balanced gaits of a ditch-crossing biped robot having seven degrees of freedom (DOFs). Three different approaches, namely analytical, neural network (NN)-based and fuzzy logic (FL)-based, have been developed to solve the said problem. The former deals with the analytical modeling of the ditch-crossing gait of a biped robot, whereas the latter two approaches aim to maximize the dynamic balance margin of the robot and minimize the power consumption during locomotion, after satisfying a constraint stating that the changes of joint torques should lie within a pre-specified value to ensure its smooth walking. It is to be noted that the power consumption and dynamic balance of the robot are also dependent on the position of the masses on various links and the trajectory followed by the hip joint. A genetic algorithm (GA) is used to provide training off-line, to the NN-based and FL-based gait planners developed. Once optimized, the planners will be able to generate the optimal gaits on-line. Both the NN-based and FL-based gait planners are able to generate more balanced gaits and that, too, at the cost of lower power consumption compared to those yielded by the analytical approach. The NN-based and FL-based approaches are found to be more adaptive compared to the other approach in generating the gaits of the biped robot.

Continuous localization of a mobile robot based on 3D-laser-range-data, predicted sensor images, and dead-reckoning

May 1995 · 47 Reads

This article describes the localization system of a free-navigating mobile robot. Absolute position and orientation of the vehicle are determined by matching vertical planar surfaces extracted from a 3D-laser-range-image with corresponding surfaces predicted from a 3D-environmental model. Continuous localization is achieved by fusing single-image localization and dead-reckoning data by means of a statistical uncertainty evolution technique. Extensive closed-loop experiments with the full-scale mobile robot MACROBE proved robustness, accuracy and real-time capability of this localization scheme.

On optimal constrained trajectory planning in 3D environments

December 2000 · 78 Reads

A novel approach to generating acceleration-based optimal smooth piecewise trajectories is proposed. Given two configurations (position and orientation) in 3D, we search for the minimal energy trajectory that minimizes the integral of the squared acceleration, as opposed to curvature, which is widely investigated. The variation in both components of acceleration, tangential (forces on the gas pedal or brakes) and normal (forces that tend to drive a car on the road while making a turn), controls the smoothness of generated trajectories. In the optimization process, our objective is to search for the trajectory along which a free moving robot is able to accelerate (decelerate) to a safe speed in an optimal way. A numerical iterative procedure is devised for computing the optimal piecewise trajectory as a solution of a constrained boundary value problem. The resulting trajectories are not only smooth but also safe, with optimal velocity (acceleration) profiles, and therefore suitable for robot motion planning applications. Experimental results demonstrate this fact.
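
The cost at the heart of the formulation, the integral of the squared acceleration, can be approximated numerically from uniformly time-sampled waypoints with finite differences, as in the small example below. The two paths compared are made up; the constrained boundary-value optimization itself is not reproduced.

```python
import numpy as np

def squared_accel_cost(points, dt):
    """points: (N, 2) positions sampled every dt seconds; returns ≈ integral of ||a(t)||^2 dt."""
    acc = np.diff(points, n=2, axis=0) / dt**2     # second finite difference ≈ acceleration
    return float(np.sum(acc**2) * dt)

t = np.linspace(0.0, 1.0, 51)
dt = t[1] - t[0]
smooth = np.stack([t, 3 * t**2 - 2 * t**3], axis=1)   # smooth ease-in/ease-out in y
jerky = np.stack([t, np.round(4 * t) / 4.0], axis=1)  # same endpoints, stepped in y
print("smooth cost:", round(squared_accel_cost(smooth, dt), 2),
      "jerky cost:", round(squared_accel_cost(jerky, dt), 2))
```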

Optic Flow-Based Vision System for Autonomous 3D Localization and Control of Small Aerial Vehicles

June 2009 · 435 Reads

The problem considered in this paper involves the design of a vision-based autopilot for small and micro Unmanned Aerial Vehicles (UAVs). The proposed autopilot is based on an optic flow-based vision system for autonomous localization and scene mapping, and a nonlinear control system for flight control and guidance. This paper focusses on the development of a real-time 3D vision algorithm for estimating optic flow, aircraft self-motion and depth map, using a low-resolution onboard camera and a low-cost Inertial Measurement Unit (IMU). Our implementation is based on 3 Nested Kalman Filters (3NKF) and results in an efficient and robust estimation process. The vision and control algorithms have been implemented on a quadrotor UAV, and demonstrated in real-time flight tests. Experimental results show that the proposed vision-based autopilot enabled a small rotorcraft to achieve fully-autonomous flight using information extracted from optic flow.

A genetic algorithm-based approach to calculate the optimal configuration of ultrasonic sensors in a 3D position estimation system

December 2002 · 63 Reads

This paper provides a genetic algorithm-based approach to calculate the optimal placement of receivers in a novel 3D position estimation system that uses a single transmitter and multiple receivers. The novelty in the system is the use of the difference in the times of arrival (TOAs) of an ultrasonic wave from the transmitter to the different receivers fixed in 3D space. This differs from traditional systems that use the actual times of flight (TOFs) from the transmitter to the different receivers and triangulate the position of the transmitter. The new approach makes the system more accurate, makes the transmitter independent of the receivers and eliminates the need to calculate the time delay term that is inherent in traditional systems due to delays caused by the electronic circuitry. This paper presents a thorough analysis of receiver configurations in the 2D and 3D systems that lead to singularities, i.e. locations of receivers that lead to formulations that cannot be solved due to a shortage of information. It provides guidelines on where not to place receivers so as to obtain a robust system and, further, presents a detailed analysis of locations that are optimal, i.e. locations that lead to the most accurate estimation of the transmitter positions. The results presented in this paper are not only applicable to ultrasonic systems but to all systems that use wave theory, e.g. infrared, laser, etc. This work finds applications in virtual reality cells, robotics, guidance of indoor autonomous vehicles and vibration analysis.

3D environment modeling using laser range sensing

November 1995 · 74 Reads

This paper describes a technique for constructing a geometric model of an unknown environment based on data acquired by a Laser Range Finder on board of a mobile robot. The geometric model would be most useful both for navigation and verification purposes. The paper presents all the steps needed for the description of the environment, including the range image acquisition and processing, 3D surface reconstruction and the problem of merging multiple images in order to obtain a complete model.

An autonomous mobile robot with a 3D laser range finder for 3D exploration and digitalization of indoor environments

December 2003 · 653 Reads

Digital 3D models of the environment are needed in rescue and inspection robotics, facility management and architecture. This paper presents an automatic system for gauging and digitalization of 3D indoor environments. It consists of an autonomous mobile robot, a reliable 3D laser range finder and three elaborated software modules. The first module, a fast variant of the Iterative Closest Points algorithm, registers the 3D scans in a common coordinate system and relocalizes the robot. The second module, a next best view planner, computes the next nominal pose based on the acquired 3D data while avoiding complicated obstacles. The third module, a closed-loop and globally stable motor controller, navigates the mobile robot to a nominal pose on the basis of odometry and avoids collisions with dynamic obstacles. The 3D laser range finder acquires a 3D scan at this pose. The proposed method allows one to digitalize large indoor environments fast and reliably without any intervention and solves the SLAM problem. The results of two 3D digitalization experiments are presented using a fast octree-based visualization method.

Relative 3D-State Estimation for Autonomous Visual Guidance of Road Vehicles

August 1991 · 41 Reads

The integrated spatio-temporal approach to real-time machine vision, which has allowed outstanding performance with moderate computing power, is extended to obstacle recognition and relative spatial state estimation using monocular vision. A modular vision system architecture is discussed centering around features and objects. Experimental results with VaMoRs, a 5-ton test vehicle, are given. Stopping in front of obstacles of at least 0.5 m² cross section has been demonstrated on unmarked two-lane roads at velocities up to 40 km/h.

An automatic self-installation and calibration method for a 3D position sensing system using ultrasonics

September 1999 · 61 Reads

This work addresses 3D position sensing systems that estimate the location of a wave source by triangulating its position based on the time-of-flights (TOFs) to various receivers fixed to an inertial frame of reference. Typical applications of such systems are finding the location of the transmitter that may be fixed to an autonomously guided vehicle (AGV) operating in an enclosed work environment, a robot end-effector, or virtual reality environments. These environments constitute a large working volume, and the receivers have to be fixed in this environment and their locations known exactly. This is a major source of problems in the installation/calibration stage, since the receivers are usually distributed in space and finding their exact location entails using a separate 3D calibrating device which may or may not be as accurate as the location system itself. This paper presents a method to use the system itself to set up an inertial frame of reference and find the locations of the receivers within this frame by simply using an accurate 1D positioning system, e.g. an accurate ruler or a simple distance measuring system that uses ultrasonic or infrared sensors. The method entails moving the transmitter to known locations on a single plane, and using the TOFs to estimate the location of the receivers. A typical application would be that an AGV carries a set of receivers to a hazardous environment such as a nuclear power plant, places the receivers arbitrarily, carries out the self-installation/calibration procedure, maps out the environment, and begins to function autonomously, the whole procedure being done without human intervention or supervision.
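
A hedged sketch of the calibration geometry: if the transmitter is placed at known positions on a plane (z = 0) and the transmitter-receiver distances are obtained from the TOFs, subtracting pairs of range equations yields a linear system in the receiver's x and y, and |z| follows from any single range, with the sign fixed by knowing which side of the plane the receiver is on. The numbers are synthetic and the paper's full installation procedure is not reproduced.

```python
import numpy as np

def locate_receiver(tx_xy, dists, above_plane=True):
    """tx_xy: (N, 2) known transmitter positions on z = 0; dists: (N,) measured ranges."""
    p0, d0 = tx_xy[0], dists[0]
    A = 2.0 * (tx_xy[1:] - p0)                                   # linearized range differences
    b = d0**2 - dists[1:]**2 + np.sum(tx_xy[1:]**2, axis=1) - np.sum(p0**2)
    xy, *_ = np.linalg.lstsq(A, b, rcond=None)
    z = np.sqrt(max(d0**2 - np.sum((xy - p0)**2), 0.0))          # |z| from one range equation
    return np.array([xy[0], xy[1], z if above_plane else -z])

receiver_true = np.array([1.2, 0.7, 2.5])                        # e.g. a receiver mounted near the ceiling
tx_xy = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0], [1.0, 3.0]])
dists = np.linalg.norm(np.c_[tx_xy, np.zeros(len(tx_xy))] - receiver_true, axis=1)
print("estimated receiver position:", locate_receiver(tx_xy, dists).round(3))
```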

Hardware design and gait generation of humanoid soccer robot Stepper-3D

July 2009 · 146 Reads

This paper presents the hardware design and gait generation of the humanoid soccer robot Stepper-3D. Virtual Slope Walking, inspired by Passive Dynamic Walking, is introduced for gait generation. In Virtual Slope Walking, by actively extending the stance leg and shortening the swing leg, the robot walks on level ground as if walking down a virtual slope. In practice, Virtual Slope Walking is generated by connecting three key frames in the sagittal plane with sinusoids. To improve walking stability, a parallel double crank mechanism is adopted in the leg structure. Experimental results show that Stepper-3D achieves a fast forward walking speed of 0.5 m/s and accomplishes omnidirectional walking. Stepper-3D performed fast and stable walking in the RoboCup 2008 Humanoid competitions.

Experimenting with 3D vision on a robotic head

January 1994 · 31 Reads

We intend to build a vision system that will allow dynamic 3D perception of objects of interest. More specifically, we discuss the idea of using 3D visual cues when tracking a visual target, in order to recover some of its 3D characteristics (depth, size, kinematic information). The basic requirements for such a 3D vision module to be embedded on a robotic head are discussed. The experimentation reported here corresponds to an implementation of these general ideas, considering a calibrated robotic head. We analyse how to make use of such a system for (1) detecting 3D objects of interest, (2) recovering the average depth and size of the tracked objects, (3) fixating and tracking such objects, to facilitate their observation.

Towards 3D Point cloud based object maps for household environments

November 2008 · 1,310 Reads

This article investigates the problem of acquiring 3D object maps of indoor household environments, in particular kitchens. The objects modeled in these maps include cupboards, tables, drawers and shelves, which are of particular importance for a household robotic assistant. Our mapping approach is based on PCD (point cloud data) representations. Sophisticated interpretation methods operating on these representations eliminate noise and resample the data without deleting the important details, and interpret the improved point clouds in terms of rectangular planes and 3D geometric shapes. We detail the steps of our mapping approach and explain the key techniques that make it work. The novel techniques include statistical analysis, persistent histogram features estimation that allows for a consistent registration, resampling with additional robust fitting techniques, and segmentation of the environment into meaningful regions.
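
One building block mentioned above, fitting a plane to a patch of points, can be done by least squares: the plane normal is the direction of least variance of the centered points, and point-to-plane distances can then feed a robust or RANSAC-style threshold. The synthetic "table top" below is an assumption for the example; the full segmentation and feature pipeline is not reproduced.

```python
import numpy as np

def fit_plane(points):
    """Returns (centroid, unit normal) of the least-squares plane through points (N, 3)."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    return centroid, Vt[-1]            # right singular vector of the smallest singular value

rng = np.random.default_rng(3)
xy = rng.uniform(-1.0, 1.0, size=(300, 2))
table_top = np.c_[xy, 0.74 + 0.002 * rng.normal(size=300)]   # nearly horizontal surface at z ≈ 0.74 m

centroid, normal = fit_plane(table_top)
dist = np.abs((table_top - centroid) @ normal)               # point-to-plane distances
print("normal:", normal.round(3), "max point-to-plane distance (m):", float(dist.max()))
```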

A dynamic 3D environmental model with real-time access functions for use in autonomous mobile robots

May 1995 · 23 Reads

A model of the environment is a mandatory requirement for the autonomy of a mobile robot. In this paper, we present a framework for the prediction of expected sensor images on the feature level, which is based on a-priori knowledge about the geometry of the environment. Our algorithm is capable of serving all distance and image measuring sensors typically found on mobile robots in real time. Fast prediction is achieved by using concepts from database technology and computer graphics. By incorporating differences between the expected and sensed image, the model can be adapted to changes and uncertainties in the environment. The predicted features can thus be attributed with uncertainties to support and enhance the matching process.

Globally consistent 3D mapping with scan matching

February 2008 · 460 Reads

A globally consistent solution to the simultaneous localization and mapping (SLAM) problem in 2D with three degrees of freedom (DoF) poses was presented by Lu and Milios [F. Lu, E. Milios, Globally consistent range scan alignment for environment mapping, Autonomous Robots 4 (April) (1997) 333–349]. To create maps suitable for natural environments it is however necessary to consider the 6DoF pose case, namely the three Cartesian coordinates and the roll, pitch and yaw angles. This article describes the extension of the proposed algorithm to deal with these additional DoFs and the resulting non-linearities. Simplifications using Taylor expansion and Cholesky decomposition yield a fast application that handles the massive amount of 3D data and the computational requirements due to the 6DoF. Our experiments demonstrate the functionality of estimating the exact poses and their covariances in all 6DoF, leading to a globally consistent map. The correspondences between scans are found automatically by use of a simple distance heuristic.

Self-calibrated visual servoing with respect to axial-symmetric 3D objects

April 2009 · 91 Reads

A self-calibrated approach to visual servoing with respect to non-planar targets modeled through a pair of coaxial circles plus one point is discussed. Full calibration data (fixed internal parameters) are obtained from two views, and used to recover the Euclidean structure of an auxiliary virtual plane associated to the target, together with the relative pose of the camera. Pose disambiguation is achieved without requiring any real third view of the target. The approach benefits from an off-line planning strategy by which the camera follows a 3D helicoidal path around an arbitrarily chosen axis. A convenient choice for the helicoidal axis is found to be that of the target axis itself. Simulation results demonstrate that the approach is robust with respect to noise in both the off-line and on-line control phases.

3D scene interpretation for a mobile robot

October 1997 · 32 Reads

This paper presents MESSIE, a multi-specialist architecture for scene interpretation in a robotic application. MESSIE is a centralized hierarchical blackboard architecture. The generic model of objects and the explicit description of sensors and materials allow the use of an application-independent interpretation strategy. Two remote sensing applications on 2D scene interpretation and a third one, presented in this paper, on 3D scene interpretation allow the validation of the proposed architecture as well as the main features of MESSIE. After a brief overview of the state of the art in 3D object modeling and scene interpretation, we discuss the scene interpretation problem from the knowledge representation viewpoint. Then the architecture of MESSIE, the object modeling and the processing strategies (object detection and scene interpretation) are described. Further, an application of 3D indoor scene interpretation in a mobile robot context is given. We also present an interpretation running example using a constrained low-level feature extraction mechanism to improve the image segmentation results.

Development of a 3DOF mobile exoskeleton robot for human upper-limb motion assist

August 2008 · 257 Reads

In order to assist physically disabled, injured, and/or elderly persons, we have been developing exoskeleton robots for assisting upper-limb motion, since upper-limb motion is involved in a lot of activities of everyday life. This paper proposes a mechanism and control method of a mobile exoskeleton robot for 3DOF upper-limb motion assist (shoulder vertical and horizontal flexion/extension, and elbow flexion/extension motion assist). The exoskeleton robot is mainly controlled by the skin surface electromyogram (EMG) signals, since EMG signals of muscles directly reflect how the user intends to move. The force vector at the end-effector is taken into account to generate the natural and smooth hand trajectory of the user in the proposed control method. An obstacle avoidance algorithm is applied to prevent accidental collision between the user’s upper-limb and the robot frame. The experiment was performed to evaluate the effectiveness of the proposed exoskeleton robot.

Kinematic analysis of two novel 3UPU I and 3UPU II PKMs

April 2008 · 53 Reads

Two novel 3UPU I and 3UPU II PKMs (parallel kinematic machines) with two rotations and one translation are proposed, and their kinematics are studied systematically. First, the kinematic characteristics of the 3UPU I and 3UPU II PKMs are analyzed and the geometric constrained equations are derived. Second, some analytic formulae are derived for solving inverse displacement, inverse/forward velocity and acceleration of the two PKMs. Third, the reachable workspaces of the two PKMs are solved and analyzed. The analytic results are verified by their simulation mechanism.

Learning from demonstration and adaptation of biped locomotion. Robotics and Autonomous Systems 47:79-91

June 2004 · 194 Reads

In this paper, we introduce a framework for learning biped locomotion using dynamical movement primitives based on non-linear oscillators. Our ultimate goal is to establish a design principle of a controller in order to achieve natural human-like locomotion. We suggest dynamical movement primitives as a central pattern generator (CPG) of a biped robot, an approach we have previously proposed for learning and encoding complex human movements. Demonstrated trajectories are learned through movement primitives by locally weighted regression, and the frequency of the learned trajectories is adjusted automatically by a novel frequency adaptation algorithm based on phase resetting and entrainment of coupled oscillators. Numerical simulations and experimental implementation on a physical robot demonstrate the effectiveness of the proposed locomotion controller.
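
The frequency-adaptation idea can be illustrated with a toy adaptive-frequency phase oscillator that entrains to a periodic teaching signal, in the spirit of the CPG literature referenced above. Gains, frequencies and the simple Euler integration are invented for the sketch; this is not the paper's controller.

```python
import math

omega_input = 2.0 * math.pi * 1.5       # demonstration signal at 1.5 Hz (made-up)
omega, phi = 2.0 * math.pi * 1.0, 0.0   # oscillator starts at 1.0 Hz
K, dt = 20.0, 0.001                     # coupling gain and Euler step (made-up)

for step in range(int(30.0 / dt)):      # 30 s of simulated entrainment
    t = step * dt
    F = math.cos(omega_input * t)       # periodic teaching signal
    coupling = K * F * math.sin(phi)
    phi += (omega - coupling) * dt      # phase dynamics perturbed by the input
    omega += (-coupling) * dt           # intrinsic frequency slowly pulled toward the input

print("adapted frequency: %.2f Hz (target 1.50 Hz)" % (omega / (2.0 * math.pi)))
```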

Optimum design of the 5R symmetrical parallel manipulator with a surrounded and good-condition workspace

March 2006 · 385 Reads

This paper concerns the optimum design issue of the 5R symmetrical parallel manipulator with a surrounded workspace. Generally, such a manipulator has a very large workspace. With different working modes, a manipulator will have different singular loci and workspaces. In this paper, the singularity and the usable workspace without singularity inside will be determined for the manipulator with a specified mode. The usable workspace can be used to define the global conditioning index (GCI). In order to obtain the optimum design of the manipulator, a non-dimensional design space is established. Because each of the non-dimensional manipulators in the established design space can represent the performances of all of its possible similarity manipulators, the design space is a very useful tool for guaranteeing a global comparative result. Within the design space, the singularity, usable workspace and control accuracy (evaluated using the GCI) are studied and the corresponding atlases are constructed. Based on the atlases, one can synthesize link lengths of the manipulator studied with respect to specified criteria. One example will be given to show how to use the atlases. In particular, an example will be presented of reaching the optimum dimensional result with respect to a desired practical workspace based on the optimum non-dimensional result identified from the atlases. For the reason that using the atlases presented in this paper a designer can obtain the optimum result with respect to any specification, the optimum design method proposed in this paper may be accepted by others.

The CMUnited-97 robotic soccer team: Perception and multi-agent control

November 1999 · 19 Reads

Robotic soccer is a challenging research domain which involves multiple agents that need to collaborate in an adversarial environment to achieve specific objectives. In this paper, we describe CMUnited-97, the team of small robotic agents that we developed to enter the RoboCup-97 competition. We designed and built the robotic agents, devised the appropriate vision algorithm, and developed and implemented algorithms for strategic collaboration between the robots in an uncertain and dynamic environment. The robots can organize themselves in formations, hold specific roles, and pursue their goals. In game situations, they have demonstrated their collaborative behaviors on multiple occasions. The robots can also switch roles to maximize the overall performance of the team. We present an overview of the vision processing algorithm which successfully tracks multiple moving objects and predicts trajectories. The paper then focuses on the agent behaviors ranging from low-level individual behaviors to coordinated, strategic team behaviors. CMUnited-97 won the RoboCup-97 small-robot competition at IJCAI-97 in Nagoya, Japan.

Absolute localization for a mobile robot using place cells

December 1997 · 28 Reads

This paper describes a method for absolute localization and environment recognition for an autonomous, sonar-equipped robot. The addition of an auto-associative memory to previously developed non-neural map making software results in a system that is capable of recognizing its environment and its position within the environment using remembered features and room geometry. In the prior system the robot used sonar to construct a metric map of an environment, but the map information had to be reconstructed each time the robot returned to an environment. We evaluated the system with a task that requires memory of the position of a goal that is not directly detectable by sonar.

Simultaneous localization and map building using natural features and absolute information

August 2002 · 228 Reads

This work presents real-time implementation algorithms for Simultaneous Localization and Map Building (SLAM), with emphasis on outdoor land vehicle applications in large environments. It addresses the problems of outdoor navigation in areas combining feature-rich and featureless regions. The aspect of feature detection and validation is investigated to reliably detect the predominant features in the environment. Aided SLAM algorithms are presented that incorporate absolute information in a consistent manner. The SLAM implementation uses the compressed filter algorithm to maintain the map with a cost proportional to the number of landmarks in the local area. The information gathered in the local area requires a full SLAM update when the vehicle leaves the local area. Algorithms to reduce the full update computational cost are also presented. Finally, experimental results obtained with a standard vehicle running in an unstructured outdoor environment are presented.

Automatic abstraction in reinforcement learning using data mining techniques

November 2009 · 129 Reads

In this paper, we use data mining techniques for the automatic discovery of useful temporal abstractions in reinforcement learning. This idea is motivated by the ability of data mining algorithms to automatically discover structures and patterns when applied to large data sets. The state transitions and action trajectories of the learning agent are stored as the data sets for the data mining techniques. The proposed state clustering algorithms partition the state space into different regions. Policies for reaching different parts of the space are separately learned and added to the model in the form of options (macro-actions). The main idea of the proposed action sequence mining is to search for patterns that occur frequently within an agent’s accumulated experience. The mined action sequences are also added to the model in the form of options. Our experiments with different data sets indicate a significant speedup of the Q-learning algorithm using the options discovered by the state clustering and action sequence mining algorithms.
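
The action-sequence-mining step can be caricatured as counting contiguous subsequences across stored trajectories and keeping the frequent ones as candidate options. The trajectories and threshold below are hypothetical; the paper's state clustering and full mining procedure are not reproduced.

```python
from collections import Counter

trajectories = [                      # logged primitive actions per episode (hypothetical)
    ["up", "up", "right", "grasp", "down"],
    ["left", "up", "up", "right", "grasp"],
    ["up", "up", "right", "grasp", "release"],
]

def frequent_subsequences(trajs, length, min_count):
    """Return contiguous action subsequences of the given length seen at least min_count times."""
    counts = Counter(tuple(t[i:i + length]) for t in trajs for i in range(len(t) - length + 1))
    return [seq for seq, c in counts.items() if c >= min_count]

options = frequent_subsequences(trajectories, length=3, min_count=3)
print("candidate options:", options)
```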

Resource Sharing in Distributed Robotic Systems Based on A Wireless Medium Access Protocol (CSMA/CD-W)

January 1994 · 13 Reads

Resource sharing is a crucial issue in any multi-agent system, and a distributed robotic system (DRS) is no exception. A general strategy for sharing multiple types of discrete resources with finite capacity under the DRS model is proposed. It is based upon a medium access protocol, CSMA/CD-W (Carrier Sense Multiple Access with Collision Detection for Wireless), which supports wireless inter-robot communication among multiple autonomous mobile robots without using any centralized mechanism. This resource sharing strategy is derived from the fact that, with the single, time-multiplexed communication channel, asynchronous events for requesting and releasing resources are effectively serialized. It is shown that the control protocol is effective, efficient, reliable and robust.
