A human's ability to perform physical tasks is limited by physical strength, not by intelligence. We define "extenders" as a class of robot manipulators worn by humans to augment human mechanical strength, while the wearer's intellect remains the central control system for manipulating the extender. Our research objective is to determine the ground rules for the design and control of robotic systems worn by humans, through the design, construction, and control of several prototype experimental direct-drive/non-direct-drive, multi-degree-of-freedom, hydraulic/electric extenders. The design of extenders differs from that of conventional robots because the extender interfaces with the human on a physical level. Two sets of force sensors measure the forces imposed on the extender by the human and by the environment (i.e., the load). The extender's compliances in response to such contact forces were designed by selecting appropriate force compensators. This paper summarizes selected research efforts related to extender technology carried out during the 1980s. The references at the end of this article give detailed descriptions of these research efforts.
We describe how a robot can develop knowledge of the objects in its environment directly from unsupervised sensorimotor experience. The object knowledge consists of multiple integrated representations: trackers that form spatio-temporal clusters of sensory experience, percepts that represent properties for the tracked objects, classes that support efficient generalization from past experience, and actions that reliably change object percepts. We evaluate how well this intrinsically acquired object knowledge can be used to solve externally specified tasks including object recognition and achieving goals that require both planning and continuous control.
This paper presents an experimental comparison of tactile array versus force-torque sensing for localizing contact during manipulation. The manipulation tasks involved rotating and translating objects using a planar two-fingered manipulator. A pin and a box were selected as limiting cases of point and line contact against a cylindrical robot fingertip. Force-torque contact sensing results suffered from difficulties in calibration, transient forces, and low grasp force. Tactile array sensing was immune to these problems, and the effect of shear loading was only noticeable for a simple centroid algorithm. The results show that, with care, both of these sensing schemes can determine the contact location to within a millimeter during real manipulation tasks.
In the past, mobile robot research often focused on various kinds of point-to-point transportation tasks. Mobile robot application in service tasks, however, requires quite different path planning and guidance approaches. This paper introduces and discusses in detail specific planning and guidance techniques for a mobile floor-cleaning robot. A kinematic and geometric model of the robot and the cleaning units, as well as a 2D-map of the indoor environment, are used for planning an appropriate cleaning path. The path is represented by a concatenation of two kinds of typical motion patterns. Each pattern is defined by a sequence of discrete Cartesian intermediate goal frames. These frames represent position and orientation of the vehicle and must be translated into motion commands for the robot. The steps of this semi-automatic path planning system are illustrated for a typical cleaning environment. Vehicle guidance includes execution of the planned motion commands, estimation of the robot location, path tracking, as well as detection of and reaction to (isolated) obstacles. For location estimation, a least-squares fitting of corresponding geometric contours from the 2D-environment map and geometric 2D-sensor data is used. Obstacle detection is accomplished by testing whether geometric 2D-sensor data belong to the preplanned cleaning path. Path planning and parts of the developed vehicle guidance system have been tested with the experimental mobile robot MACROBE. Results reported in this paper demonstrate the efficiency of the described planning, location estimation and path tracking procedures in basic floor-cleaning tasks.
We present work on a six-legged walking machine that uses a hierarchical version of Q-learning (HQL) to learn both the elementary swing and stance movements of individual legs and the overall coordination scheme to perform forward movements. The architecture consists of a hierarchy of local controllers implemented in layers. The lowest layer consists of control modules performing elementary actions, like moving a leg up, down, left or right, to achieve the elementary swing and stance motions for individual legs. The next level consists of controllers that learn to perform more complex tasks, like forward movement, by using the previously learned, lower-level modules. On the third and highest layer in the architecture presented here, the previously learned complex movements are themselves reused to achieve goals in the environment using external sensory input. The work is related to similar, although simulation-based, work by Lin (1993) on hierarchical reinforcement learning and Singh (1994) on compositional Q-learning. We report on the HQL architecture as well as on its implementation on the walking machine SIR ARTHUR. Results from experiments carried out on the real robot show the applicability of the HQL approach to real-world robot problems.
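The elementary-action layer described above rests on the standard tabular Q-learning update; a minimal sketch in Python (the state and action names are hypothetical, not those used on SIR ARTHUR):

```python
def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
    return Q[(s, a)]

# hypothetical leg controller: reward 1.0 for completing a swing motion
Q = {}
q_update(Q, "leg_up", "move_forward", 1.0, "leg_forward",
         ["move_forward", "move_down"])
```

In the hierarchical scheme, higher layers treat a previously learned lower-level module as a single action in the same kind of update.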
This paper presents a binocular tracking system based on the integration of visual behaviours. Biologically motivated behaviours, vergence and pursuit, cooperate as parallel, complementary and highly coupled processes in the tracking system, simplifying the acquisition of perceptual information as well as system modeling and control. The use of a space-variant image representation and of low-level visual cues as feedback signals in a closed-loop control architecture allows real-time and reliable performance for each behaviour, despite the low precision of the algorithms and modeling errors. The behaviours are integrated, and the overall system is implemented on a stereo head running in real time (12.5 Hz) without any specific processing hardware. Results are presented for objects of different shapes and motions, illustrating that tracking can be robustly achieved by the cooperation of purposively designed behaviours tuned to specific subgoals.
An algorithm for motion planning for multiple mobile tethered robots in a common planar environment is presented. The tethers of the robots are flexible cables that can be pushed and bent by other robots during their motion. Given the start and target positions of all robots and their cables, the objective is to design a sequential motion strategy for the robots that will not entangle the robot tethers. An algorithm is described that achieves this objective while generating an ordering of the robots that produces reasonably short paths. The algorithm's complexity is O(n^4), where n is the number of robots.
A self-calibration technique is proposed in this paper to estimate the extrinsic parameters of a stereo camera system. This technique does not require external 3D measurements of precision calibration points. Furthermore, it is conceptually simple and easy to implement. It has applications in areas such as autonomous vehicle navigation, robotics and computer vision. The proposed approach relies solely on distance measurements of a fixed-length object, say a stick. While the object is moved in 3D space, the image coordinates of the object's end points are extracted from the image sequence. A cost function that relates the unknown parameters to measurement residuals is formulated. A nonlinear least-squares algorithm is then applied to compute the parameters by minimizing the cost function, using the measured image coordinates and the known length of the object. Simulation studies in this paper answer questions such as the number of iterations needed for the algorithm to converge, the number of measurements needed for a robust estimation, singularity cases, and the noise sensitivity of the algorithm.
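The least-squares machinery can be illustrated with a deliberately reduced version of the problem: estimating a single unknown scale factor s so that each reconstructed stick length s·d_i matches the known length L. This is a one-parameter toy, not the paper's full extrinsic parameterization:

```python
def gauss_newton_scale(dists, length, s0=1.0, iters=10):
    """Minimize sum_i (s * d_i - L)^2 over s by Gauss-Newton.
    The residual r_i = s*d_i - L has Jacobian J_i = d_i, so each
    step is s <- s - (J^T r) / (J^T J)."""
    s = s0
    for _ in range(iters):
        jt_r = sum(d * (s * d - length) for d in dists)
        jt_j = sum(d * d for d in dists)
        s -= jt_r / jt_j
    return s

# noisy reconstructed lengths of a 1.0 m stick
s_hat = gauss_newton_scale([2.0, 2.1, 1.9], 1.0)
```

Because this toy residual is linear in s, Gauss-Newton converges in a single step; the paper's cost is nonlinear in the extrinsic parameters, hence the iteration-count study.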
Ultrasonic sensors have been widely used in recognizing the working environment of a mobile robot. However, their intrinsic problems, such as specular reflection, a wide beam angle, and the slow propagation velocity of sound, require an excessive number of sensors to be integrated to achieve the various sensing goals. This paper proposes a new measurement scheme which uses only two sets of ultrasonic sensors to determine the location and the type of a target surface. By measuring the time difference between the signals returned from the target surface, which are generated by two transmitters fired 1 ms apart, it classifies the type and determines the pose of the target surface. Since the proposed sensor system uses only two sets of ultrasonic sensors to recognize and localize the target surface, it significantly simplifies the sensing system and reduces the signal processing time, so that the working environment can be recognized in real time.
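The basic quantities involved can be sketched as follows; the constants and the plane-versus-edge threshold rule are illustrative assumptions, not the paper's actual classifier:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def tof_to_range(tof_s):
    """Round-trip time of flight (seconds) to target range (meters)."""
    return SPEED_OF_SOUND * tof_s / 2.0

def classify_surface(delta_t_s, threshold_m=0.01):
    """Hypothetical rule: a small path-length difference between the two
    transmitters' echoes suggests a plane facing the sensor; a larger
    difference suggests an edge or corner. The 1 ms firing offset is
    assumed to have been subtracted from delta_t_s already."""
    delta_d = SPEED_OF_SOUND * abs(delta_t_s)
    return "plane" if delta_d < threshold_m else "edge/corner"
```

The real scheme additionally recovers the surface pose from the same time differences.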
This work presents a novel approach to the problem of establishing and maintaining a common coordinate system for a group of robots. A camera system mounted on top of a robot and vision algorithms are used to calculate the relative position of each surrounding robot. The observed movement of each robot is compared to the reported movement, which is sent over a communication link. From this comparison a coordinate transformation is calculated. The algorithm was tested in simulation and is currently being implemented on a real robot system. Preliminary results of real-world experiments are presented.
A rapidly deployable manipulator system combines the flexibility of reconfigurable modular hardware with modular programming tools, allowing the user to rapidly create a manipulator that is custom-tailored for a given task. This article describes two main aspects of such a system, namely the reconfigurable modular manipulator system (RMMS) hardware and the corresponding control software.
We introduce Embodied Evolution (EE) as a methodology for the automatic design of robotic controllers. EE is an evolutionary robotics (ER) technique that avoids the pitfalls of the simulate-and-transfer method, allows evaluation time to be sped up by exploiting parallelism, and is particularly suited to future work on multi-agent behaviors. In EE, an evolutionary algorithm is distributed amongst and embodied within a population of physical robots that reproduce with one another while situated in the task environment. We have built a population of eight robots and successfully implemented our first experiments. The controllers evolved by EE compare favorably to hand-designed solutions for a simple task. We detail our methodology, report our initial results, and discuss the application of EE to more advanced and distributed robotics tasks.
This paper presents a multisine approach for trajectory optimization based on information gain, with distance and orientation sensing to known beacons. It addresses the problem of active sensing, i.e., the selection of a robot motion or sequence of motions which makes the robot arrive at its desired goal configuration (position and orientation) with maximum accuracy, given the available sensor information. The optimal trajectory is parameterized as a linear combination of sine functions. An appropriate optimality criterion is selected which takes into account various requirements (e.g. maximum accuracy and minimum time). Several constraints can be formulated, e.g. with respect to collision avoidance. The optimal trajectory is then determined by numerical optimization techniques. The approach is applicable to both nonholonomic and holonomic robots. Its effectiveness is illustrated here for a nonholonomic wheeled mobile robot (WMR) in an environment with and without obstacles.
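The multisine parameterization can be sketched directly; the coefficient values below are arbitrary, and the real optimizer would tune them against the information-gain criterion:

```python
import math

def multisine(coeffs, t, T):
    """Trajectory coordinate x(t) = sum_k a_k * sin(k*pi*t/T), k = 1..K.
    Every basis term vanishes at t = 0 and t = T, so boundary positions
    can be imposed separately while the coefficients shape the interior."""
    return sum(a * math.sin((k + 1) * math.pi * t / T)
               for k, a in enumerate(coeffs))

x_mid = multisine([1.0, 0.5], 0.5, 1.0)  # sin(pi/2) + 0.5*sin(pi) = 1.0
```

One such sum per trajectory coordinate yields a finite-dimensional parameter vector suitable for numerical optimization.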
The goal of this paper is to develop the foundation for spatial navigation without objective representations. Rather than building spatial representations on a Euclidean space, a weaker conception of space is used which has a closer connection to perception. A type of spatial representation is described that uses perceptual information directly to define regions in space. By combining such regions, it is possible to derive a number of useful spatial representations such as place fields, paths and topological maps. Compared to other methods, the representations of the presented approach have the advantage that they are always grounded in the perceptual abilities of the robot, and are thus more likely to function correctly.
We describe a real-time visual system that enables a humanoid robot to learn from and interact with humans. The core of the visual system is a probabilistic tracker that uses shape and color information to find relevant objects in the scene. Multiscale representations, windowing and masking are employed to accelerate the data processing. The perception system is directly coupled with the motor control system of our humanoid robot DB. We present an example of on-line interaction with a humanoid robot: mimicking of human hand motion. The generation of humanoid robot motion based on the observed human motion is accomplished in real time. The study is supported by experimental results on DB.
It should be understood that a robot is not a simple automatic machine but instead has a certain level of intelligence. Many kinds of intelligent robots have been developed in the author's laboratory during the past 15 years. These robots perform many kinds of games, like the cup-and-ball game, top-spinning, and walking on stilts. These robots apparently look intelligent, but are they really and truly intelligent? One opinion holds that these robots are no more than simple automatic machines controlled by a computer with sophisticated programs. If so, then what is actual robot intelligence? The author is trying to construct a new robotics, insect-model-based microrobotics, in order to arrive at a new concept of robot intelligence.
This paper describes a reactive approach to the integration and control of continuously operating visual processes. Visual processes are expressed as transformations which map signals from virtual sensors into commands for devices. These transformations define reactive processes which tightly couple perception and action. Such transformations may be used to control robotic devices, including fixation with an active binocular head, as well as to select and control the processes which interpret visual data. This method takes inspiration from so-called "behavioural" approaches to mobility and manipulation. However, unlike most previous work, we define reactive transformations at the level of virtual sensors and device controllers. This permits a system to integrate a large number of perceptual processes and to dynamically compose sequences of such processes to perform visual tasks. The transition between visual processes is mediated by signals from a supervisory controller as well as signals obtained from perception. This method offers the possibility of constructing vision systems with large numbers of visual abilities in a manner which is both scalable and learnable.
An object verification and localization system should answer the question of whether an expected object is present in an image or not, i.e. verification, and, if present, where it is located. Such a system would be very useful for mobile robots, for example for landmark recognition or for the fulfilment of certain tasks. In this paper we present an object verification and localization system specially adapted to the needs of mobile robots. The object model is based on a collection of local features derived from a small neighbourhood around automatically detected interest points. The learned representation of the object is then matched with the image under consideration. The tests, based on 90 images, showed a very satisfying tolerance to scale changes of up to 25%, to viewpoint variations of 20 degrees, to occlusion of up to 80%, and to major background changes as well as to local and global illumination changes. The tests also showed that the verification capabilities are very good and that similar objects did not trigger any false verifications.
Mobile robots navigate through many environments that include plants. A sensor that can recognise plants would be useful for navigation in these environments. Two problems make plant sensing difficult: plant similarity and plant asymmetry under rotation. A CTFM (continuously transmitted frequency modulated) ultrasonic sensor produces a signal that contains information about the geometric structure of plants. Correlation of echoes from many orientations shows that plants can be recognised with sufficient accuracy for navigation.
The purpose of this paper is to present our design philosophy for service robotics research and development and to describe our current efforts along this line. Our approach begins with a discussion of the role of service robotics and some features that are unique to service robotics. We then describe our design philosophy, which emphasizes compromise and practicality in design. We will use this philosophy in the design and integration of a new service robot system based on ISAC and HERO. ISAC is a stationary service robot, operated by voice command, designed to feed physically challenged individuals. HERO is a small mobile robot integrated into the system to provide new functionality for the user. We will make use of our design philosophy to solve some of the robot navigation problems and describe how our approach will help us solve these problems in an efficient manner. Some problems will be approached by a technical solution, and other problems will be solved through an expanded user interface and an appeal to the intelligence of the user of the system. Performance of a useful service with limited intervention from the user, at a reasonable cost, is our goal.
This paper presents an algorithm for full 3D shape reconstruction of indoor and outdoor environments with mobile robots. Data is acquired with laser range finders installed on a mobile robot. Our approach combines efficient scan matching routines for robot pose estimation with an algorithm for approximating environments using flat surfaces. On top of that, our approach includes a mesh simplification technique to reduce the complexity of the resulting models. In extensive experiments, our method is shown to produce accurate models of indoor and outdoor environments that compare favorably to other methods.
In this paper we present Robox, a mobile robot designed for operation in a mass exhibition, and the experience we gained from its installation at the Swiss National Exhibition Expo.02. Robox is a fully autonomous mobile platform with unique multi-modal interaction capabilities, a novel approach to global localization using multiple Gaussian hypotheses, and powerful obstacle avoidance. Eleven Robox robots ran for 12 hours daily from May 15 to October 20, 2002, traveling more than 3315 km and interacting with 686,000 visitors.
We discuss some issues involved in the modeling of complex systems composed of dynamically interacting agents. We describe a prototype simulation environment created for modeling such systems, with the aim of evaluating the strategies of enterprises in the information economy, but applicable to general multi-agent systems. A case study is presented along with a mathematical description of the multi-agent systems.
The topic of mobile robot self-localisation is often divided into the sub-problems of global localisation and position tracking. Both are now well understood individually, but few mobile robots can deal simultaneously with the two problems in large, complex environments. In this paper, we present a unified approach to global localisation and position tracking which is based on a topological map augmented with metric information. This method combines a new scan matching technique, using histograms extracted from local occupancy grids, with an efficient algorithm for tracking multiple location hypotheses over time. The method was validated with experiments in a series of real world environments, including its integration into a complete navigating robot. The results show that the robot can localise itself reliably in large, indoor environments using minimal computational resources.
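The histogram-based matching idea in the preceding abstract can be sketched as a circular cross-correlation that recovers the rotational offset between two angle histograms; this is a simplified stand-in for the full scan matcher, with bin counts chosen for illustration:

```python
def best_rotation_bin(h_map, h_scan):
    """Return the circular shift of h_scan that maximizes its correlation
    with h_map; with b bins spanning 360 degrees, the best shift times
    360/b approximates the rotation between scan and map."""
    n = len(h_map)
    def score(s):
        return sum(h_map[i] * h_scan[(i - s) % n] for i in range(n))
    return max(range(n), key=score)
```

A translation estimate can then be found analogously on x- and y-histograms once the rotation has been removed.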
The main idea in object tracking is to support the processing of incoming images by predicting the object's future poses and features using knowledge about the object's previous motion. In this paper we present a new method for the prediction and adjustment of motion parameters to the current measurements, using the quaternion representation for the orientation and Gauss-Newton iteration on the unit sphere S^3. Unlike other trackers, our tracker searches for estimates directly in the space of rigid body motions (represented by R^3 × S^3) and takes into account the differential and geometric properties of this space. We present results showing successful tracking in synthetic and real image sequences.
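A minimal sketch of motion prediction on the unit sphere S^3, using a constant-angular-velocity assumption; the paper's full method additionally adjusts the prediction against the measurements by Gauss-Newton iteration:

```python
import math

def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qconj(q):
    """Conjugate; equals the inverse for unit quaternions."""
    return (q[0], -q[1], -q[2], -q[3])

def qnormalize(q):
    """Project back onto S^3 to counter floating-point drift."""
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

def predict(q_prev, q_curr):
    """q_next = (q_curr * q_prev^-1) * q_curr: replay the last relative
    rotation, then renormalize to stay on the unit sphere."""
    dq = qmul(q_curr, qconj(q_prev))
    return qnormalize(qmul(dq, q_curr))
```

For example, if the object rotated 90 degrees about z between the last two frames, the prediction is a 180-degree rotation about z.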
The paper describes a system for open-ended communication by autonomous robots about event descriptions anchored in reality through the robot’s sensori-motor apparatus. The events are dynamic, and agents must continually track changing situations at multiple levels of detail through their vision system. We are specifically concerned with the question of how grounding can become shared through the use of external (symbolic) representations, such as natural language expressions.
This paper deals with the anchoring of one of the most influential symbolic formalisms used in cognitive robotics, namely the situation calculus, to a conceptual representation of dynamic scenarios. Our proposal is developed with reference to a cognitive architecture for robot vision. An experimental setup is presented, aimed at obtaining intelligent monitoring operations of a robotic finger starting from visual data.
The work presented in this paper deals with the problem of the navigation of a mobile robot either in an unknown indoor environment or in a partially known one. A navigation method in an unknown environment based on the combination of elementary behaviors has been developed. Most of these behaviors are achieved by means of fuzzy inference systems. The proposed navigator combines two types of obstacle avoidance behaviors, one for convex obstacles and one for concave ones. The use of zero-order Takagi–Sugeno fuzzy inference systems to generate the elementary behaviors, such as “reaching the middle of the collision-free space” and “wall-following”, is quite simple and natural. However, one can always fear that rules deduced from simple human expertise are more or less sub-optimal. This is why we have tried to obtain these rules automatically. A technique based on a back-propagation-like algorithm is used which permits the on-line optimization of the parameters of a fuzzy inference system through the minimization of a cost function. This last point is particularly important in order to extract a set of rules from the experimental data without recourse to any empirical approach. In the case of a partially known environment, a hybrid method is used in order to exploit the advantages of global and local navigation strategies. The coordination of these strategies is based on a fuzzy inference system performing an on-line comparison between the real scene and a memorized one. The planning of the itinerary is done by a visibility graph and the A∗ algorithm. Fuzzy controllers are designed, on the one hand, for the following of the planned path by the virtual robot in the theoretical environment and, on the other hand, for the navigation of the real robot when the real environment is locally identical to the memorized one. Both methods have been implemented on the miniature mobile robot Khepera®, which is equipped with rough sensors. The good results obtained illustrate the robustness of a fuzzy logic approach with regard to sensor imperfections.
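A zero-order Takagi-Sugeno inference step reduces to a weighted average of constant rule consequents; the membership functions and consequent values below are illustrative, not the navigator's actual rule base:

```python
def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def ts_zero_order(x, rules):
    """Zero-order Takagi-Sugeno: output = sum_i w_i*c_i / sum_i w_i,
    with firing strength w_i and constant consequent c_i per rule."""
    num = den = 0.0
    for mf, c in rules:
        w = mf(x)
        num += w * c
        den += w
    return num / den if den else 0.0

# e.g. a steering value from a normalized sensor input (hypothetical rules)
rules = [(lambda x: triangular(x, 0.0, 0.5, 1.0), 10.0),
         (lambda x: triangular(x, 0.5, 1.0, 1.5), 20.0)]
cmd = ts_zero_order(0.75, rules)
```

Because the consequents c_i enter linearly, a gradient (back-propagation-like) scheme can tune them on-line against a cost function, as the abstract describes.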
In this paper, a mobile robot control law for corridor navigation and wall-following, based on sonar and odometric sensory information, is proposed. The control law allows for stable navigation while avoiding actuator saturation. The posture of the robot travelling through the corridor is estimated using odometric and sonar sensing. The control system is theoretically proved to be asymptotically stable. Obstacle avoidance capability is added to the control system as a perturbation signal. A state-variable estimation structure is proposed that fuses the sonar and odometric information. Experimental results are presented to show the performance of the proposed control system.
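The flavor of such a control law can be sketched with a proportional steering rule saturated to respect actuator limits; the gains and the rule itself are hypothetical, not the paper's proven-stable law:

```python
def wall_follow_cmd(d_meas, d_ref, heading_err, k_d=1.0, k_h=2.0, w_max=1.0):
    """Angular-velocity command from the wall-distance error and the
    heading error, clamped to [-w_max, w_max] so the commanded turn
    rate never exceeds what the actuators can deliver."""
    w = -k_d * (d_meas - d_ref) - k_h * heading_err
    return max(-w_max, min(w_max, w))
```

An obstacle-avoidance term could be added to w as a bounded perturbation, in the spirit of the abstract.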
We address the place recognition problem, which we define as the problem of establishing whether an observed location has been previously seen, and if so, determining the transformation aligning the current observations to an existing map. In the contexts of robot navigation and mapping, place recognition amounts to globally localizing a robot or map segment without being given any prior estimate. An efficient method of solving this problem involves first selecting a set of keypoints in the scene which store an encoding of their local region, and then utilizing a sublinear-time search into a database of keypoints previously generated from the global map to identify places with common features. We present an algorithm to embed arbitrary keypoint descriptors in a reduced-dimension metric space, in order to frame the problem as an efficient nearest neighbor search. Given that there are a multitude of possibilities for keypoint design, we propose a general methodology for comparing keypoint location selection heuristics and descriptor models that describe the region around the keypoint. With respect to selecting keypoint locations, we introduce a metric that encodes how likely it is that the keypoint will be found in the presence of noise and occlusions during mapping passes. Metrics for keypoint descriptors are used to assess the distinguishability between the distributions of matches and non-matches and the probability the correct match will be found in an approximate k-nearest neighbors search. Verification of the test outcomes is done by comparing the various keypoint designs on a kilometers-scale place recognition problem. We apply our design evaluation methodology to three keypoint selection heuristics and six keypoint descriptor models. A full place recognition system is presented, including a series of match verification algorithms which effectively filter out false positives. 
Results from city-scale and long-term mapping problems illustrate our approach for both offline and online SLAM, map merging, and global localization and demonstrate that our algorithm is able to produce accurate maps over trajectories of hundreds of kilometers.
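The keypoint-matching step described above reduces to a nearest-neighbor query with a distinctiveness check; a brute-force sketch (the system itself uses a sublinear-time index, and the ratio threshold here is illustrative):

```python
def match_keypoint(query, database, ratio=0.8):
    """Return the place id of the nearest descriptor if it is clearly
    closer than the second-nearest (Lowe-style ratio test on squared
    Euclidean distances); otherwise None, filtering ambiguous matches."""
    scored = sorted(
        (sum((q - d) ** 2 for q, d in zip(query, desc)), place_id)
        for place_id, desc in database)
    (d1, best), (d2, _) = scored[0], scored[1]
    return best if d1 < (ratio ** 2) * d2 else None

# hypothetical 2-D descriptors tagged with the place they came from
db = [("place_A", (0.0, 0.0)), ("place_B", (1.0, 0.0)),
      ("place_C", (5.0, 5.0))]
```

Rejecting matches whose two nearest neighbors are nearly equidistant is one simple way to keep false positives out of the verification stage.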
This work presents a real-time active vision tracking system based on log-polar image motion estimation with 2D geometric deformation models. We present a very efficient parametric motion estimation method, where most of the computation can be done offline. We propose a redundant parameterization for the geometric deformations, which improves the convergence range of the algorithm. A foveated image representation provides extra computational savings and attenuation of background effects. A proper choice of motion models and a hierarchical organization of the iterations provide additional robustness. We present a fully integrated system with real-time performance and robustness to moderate deviations from the assumed deformation models.
In this article, we present an approach for the fusion of 2D and 3D measurements for model-based person tracking, also known as human motion capture. The applied body model is defined geometrically with generalized cylinders and is set up hierarchically with connecting joints of different types. The joint model can be parameterized to control the degrees of freedom, adhesion and stiffness. This results in an articulated body model with constrained kinematic degrees of freedom. The fusion approach incorporates this model knowledge together with the measurements and tracks the target body iteratively with an extended Iterative Closest Point (ICP) approach. Generally, the ICP is based on the concept of correspondences between measurements and model, which is normally exploited to incorporate 3D point cloud measurements. The concept has been generalized to represent and incorporate 2D image-space features as well. Together with the 3D point cloud from a 3D time-of-flight (ToF) camera, arbitrary features derived from 2D camera images are used in the fusion algorithm for tracking of the body. This gives complementary information about the tracked body, enabling not only tracking of depth motions but also of turning movements of the human body, which is normally a hard problem for markerless human motion capture systems. The resulting tracking system, named VooDoo, is used to track humans in a Human-Robot Interaction (HRI) context. We rely only on sensors on board the robot, i.e. the color camera, the ToF camera and a laser range finder. The system runs in real time (~20 Hz) and is able to robustly track a human in the vicinity of the robot.
We introduce a concept of a real-world-oriented humanoid robot that can support humans’ activities in daily life. In such environments, robots have to watch humans, understand their behavior, and support their daily life tasks. In particular, these robots must be capable of such real-world behavior as handling tableware and delivering daily commodities by hand. We developed a humanoid robot, HRP-2W, which has an upper body of HRP-2 [K. Kaneko, F. Kanehiro, S. Kajita, H. Hirukawa, T. Kawasaki, M. Hirata, K. Akachi, T. Isozumi, Humanoid Robot HRP-2, in: Proceedings of the 2004 IEEE International Conference on Robotics & Automation, 2004, pp. 1083–1090] and a wheel module instead of legs, as a research platform to fulfill this aim. We also developed basic software configuration in order to integrate our platform with other research groups. Through experiments, we demonstrated the feasibility of the humanoid robot platform and the potential of the software architecture.
This paper deals with the generation of dynamically balanced gaits for a ditch-crossing biped robot having seven degrees of freedom (DOFs). Three different approaches, namely analytical, neural network (NN)-based and fuzzy logic (FL)-based, have been developed to solve the said problem. The former deals with the analytical modeling of the ditch-crossing gait of a biped robot, whereas the latter two approaches aim to maximize the dynamic balance margin of the robot and minimize the power consumption during locomotion, after satisfying a constraint stating that the changes of joint torques should lie within a pre-specified value to ensure smooth walking. It is to be noted that the power consumption and dynamic balance of the robot also depend on the position of the masses on the various links and the trajectory followed by the hip joint. A genetic algorithm (GA) is used to train the NN-based and FL-based gait planners off-line. Once optimized, the planners are able to generate the optimal gaits on-line. Both the NN-based and FL-based gait planners are able to generate more balanced gaits, and at a lower power consumption, than those yielded by the analytical approach. The NN-based and FL-based approaches are also found to be more adaptive than the analytical approach in generating the gaits of the biped robot.
This paper proposes a novel strategy for grasping 3D unknown objects in accordance with their corresponding task. We define the handle, or natural grasping component, of an object as the part chosen by humans to pick up this object. When humans reach out to grasp an object, it is generally with the aim of accomplishing a task; thus, the chosen grasp is closely related to the object's task. Our approach learns to identify object handles by imitating humans. In this paper, a new sufficient condition for computing force-closure grasps on the obtained handle is also proposed. Several experiments were conducted to test the ability of the algorithm to generalize to new objects. They also show the adaptability of our strategy to the hand kinematics.
This article describes the localization system of a free-navigating mobile robot. The absolute position and orientation of the vehicle are determined by matching vertical planar surfaces extracted from a 3D laser range image with corresponding surfaces predicted from a 3D environment model. Continuous localization is achieved by fusing single-image localization and dead-reckoning data by means of a statistical uncertainty evolution technique. Extensive closed-loop experiments with the full-scale mobile robot MACROBE demonstrated the robustness, accuracy and real-time capability of this localization scheme.
A novel approach to generating acceleration-based optimal smooth piecewise trajectories is proposed. Given two configurations (position and orientation) in 3D, we search for the minimal-energy trajectory that minimizes the integral of the squared acceleration, as opposed to curvature, which is the more widely investigated criterion. Varying both components of acceleration, tangential (forces on the gas pedal or brakes) and normal (forces that keep a car on the road while making a turn), controls the smoothness of the generated trajectories. In the optimization process, our objective is to find the trajectory along which a free-moving robot can accelerate (decelerate) to a safe speed in an optimal way. A numerical iterative procedure is devised for computing the optimal piecewise trajectory as the solution of a constrained boundary value problem. The resulting trajectories are not only smooth but also safe, with optimal velocity (acceleration) profiles, and are therefore suitable for robot motion planning applications. Experimental results demonstrate this.
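A one-dimensional illustration of such a numerical iterative procedure (a sketch, not the paper's algorithm) is to relax a discretized path so that the sum of squared discrete accelerations shrinks while positions and velocities stay clamped at both ends; the sample count, step size and iteration budget below are arbitrary assumptions.

```python
def min_accel_trajectory(x0, x1, v0, v1, n=20, iters=3000, step=0.1):
    """Gradient-descent relaxation of a discretized 1D path: reduce the
    sum of squared second differences (discrete accelerations) with
    position and velocity clamped at both ends."""
    dt = 1.0 / n
    x = [x0 + (x1 - x0) * i / n for i in range(n + 1)]  # straight-line start
    x[1] = x[0] + v0 * dt       # clamp initial velocity
    x[n - 1] = x[n] - v1 * dt   # clamp final velocity
    for _ in range(iters):
        for i in range(2, n - 1):  # relax interior samples only
            # (half) gradient of the cost with respect to x[i]
            g = x[i - 2] - 4 * x[i - 1] + 6 * x[i] - 4 * x[i + 1] + x[i + 2]
            x[i] -= step * g
    return x

def accel_cost(x):
    """Sum of squared discrete accelerations along the path."""
    return sum((x[i - 1] - 2 * x[i] + x[i + 1]) ** 2
               for i in range(1, len(x) - 1))

traj = min_accel_trajectory(0.0, 1.0, 0.0, 0.0)
```

In the continuous limit the minimizer with clamped position and velocity is a cubic polynomial; the discrete relaxation above converges toward it.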
This paper analyzes the problem of motion estimation from a sequence of stereo images. Two methods are considered and their differential and discrete approaches are compared. The differential approaches use differential optical flow whereas the discrete approaches use feature correspondences. Both methods are used to compute, first, the 3D velocity in the direction of the optical axis and, next, the complete rigid motion parameters. The uncertainty propagation models for both methods and approaches are derived. These models are analyzed in order to point out the critical variables for the methods. The methods were extensively tested using synthetic images as well as real images and conclusions are drawn from the results. Real images are used without any illumination control of the scene in order to study the behavior of the methods in strongly noisy environments with low resolution depth maps.
Digital 3D models of the environment are needed in rescue and inspection robotics, facility management and architecture. This paper presents an automatic system for gauging and digitizing 3D indoor environments. It consists of an autonomous mobile robot, a reliable 3D laser range finder and three software modules. The first module, a fast variant of the Iterative Closest Points (ICP) algorithm, registers the 3D scans in a common coordinate system and relocalizes the robot. The second module, a next-best-view planner, computes the next nominal pose based on the acquired 3D data while avoiding complicated obstacles. The third module, a closed-loop and globally stable motor controller, navigates the mobile robot to a nominal pose on the basis of odometry while avoiding collisions with dynamic obstacles. The 3D laser range finder acquires a 3D scan at this pose. The proposed method allows one to digitize large indoor environments quickly and reliably without any intervention, and solves the SLAM problem. The results of two 3D digitization experiments are presented using a fast octree-based visualization method.
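The core of ICP scan registration can be sketched in two dimensions. The following is an illustrative point-to-point variant (brute-force nearest neighbors, closed-form 2D rigid alignment), not the fast variant used by the system; the toy scan and iteration count are assumptions.

```python
import math

def icp_2d(source, target, iterations=10):
    """Point-to-point ICP sketch: match each source point to its nearest
    target point (brute force), then apply the closed-form optimal 2D
    rigid transform; repeat until the clouds align."""
    src = [list(p) for p in source]
    for _ in range(iterations):
        pairs = [(p, min(target, key=lambda q: (q[0] - p[0]) ** 2
                                             + (q[1] - p[1]) ** 2))
                 for p in src]
        n = len(pairs)
        cx = sum(p[0] for p, _ in pairs) / n   # source centroid
        cy = sum(p[1] for p, _ in pairs) / n
        qx = sum(q[0] for _, q in pairs) / n   # matched-target centroid
        qy = sum(q[1] for _, q in pairs) / n
        # optimal rotation angle from centered cross/dot products
        s = sum((p[0] - cx) * (q[1] - qy) - (p[1] - cy) * (q[0] - qx)
                for p, q in pairs)
        c = sum((p[0] - cx) * (q[0] - qx) + (p[1] - cy) * (q[1] - qy)
                for p, q in pairs)
        th = math.atan2(s, c)
        for p in src:  # apply the incremental transform in place
            x, y = p[0] - cx, p[1] - cy
            p[0] = math.cos(th) * x - math.sin(th) * y + qx
            p[1] = math.sin(th) * x + math.cos(th) * y + qy
    return src

# Assumed toy data: a small 2D "scan" and a rigidly displaced copy of it.
target = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.2), (0.5, 1.0),
          (1.5, 1.2), (0.2, 0.6), (1.8, 0.4)]
th0, tx, ty = 0.2, 0.3, -0.1
source = [(math.cos(th0) * x - math.sin(th0) * y + tx,
           math.sin(th0) * x + math.cos(th0) * y + ty) for x, y in target]
aligned = icp_2d(source, target)
```

When the initial displacement is small enough that the nearest-neighbor matches are correct, a single iteration already recovers the exact rigid transform.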
This work addresses 3D position sensing systems that estimate the location of a wave source by triangulating its position from the times of flight (TOFs) to various receivers fixed in an inertial frame of reference. Typical applications of such systems are locating a transmitter fixed to an autonomously guided vehicle (AGV) operating in an enclosed work environment, a robot end-effector, or virtual-reality environments. These environments constitute a large working volume, and the receivers have to be fixed in this environment with their locations known exactly. This is a major source of problems in the installation/calibration stage, since the receivers are usually distributed in space and finding their exact locations entails using a separate 3D calibrating device, which may or may not be as accurate as the location system itself. This paper presents a method to use the system itself to set up an inertial frame of reference and determine the locations of the receivers within this frame, using only an accurate 1D positioning system, e.g. an accurate ruler or a simple distance-measuring system based on ultrasonic or infrared sensors. The method entails moving the transmitter to known locations on a single plane and using the TOFs to estimate the locations of the receivers. A typical application would be that an AGV carries a set of receivers to a hazardous environment such as a nuclear power plant, places the receivers arbitrarily, carries out the self-installation/calibration procedure, maps out the environment, and begins to function autonomously, the whole procedure being done without human intervention or supervision.
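The geometry behind locating a receiver from ranges measured at known transmitter stops can be sketched in 2D (the paper works in 3D with planar transmitter motion; this planar version, with assumed coordinates, only illustrates the linearization):

```python
import math

def locate_receiver_2d(tx_points, ranges):
    """Estimate a fixed receiver's position from ranges measured at three
    known transmitter stops. Differencing the squared-range equations
    removes the quadratic term, leaving a 2x2 linear system (Cramer)."""
    (x1, y1), (x2, y2), (x3, y3) = tx_points
    d1, d2, d3 = ranges
    # (p_k - p_1) . r = (|p_k|^2 - |p_1|^2 - d_k^2 + d_1^2) / 2
    a11, a12 = x2 - x1, y2 - y1
    a21, a22 = x3 - x1, y3 - y1
    b1 = (x2 * x2 + y2 * y2 - x1 * x1 - y1 * y1 - d2 * d2 + d1 * d1) / 2.0
    b2 = (x3 * x3 + y3 * y3 - x1 * x1 - y1 * y1 - d3 * d3 + d1 * d1) / 2.0
    det = a11 * a22 - a12 * a21  # zero iff the stops are collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Assumed geometry: three non-collinear transmitter stops, one receiver.
tx = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
true_receiver = (2.0, 1.5)
ranges = [math.hypot(true_receiver[0] - x, true_receiver[1] - y)
          for x, y in tx]
estimate = locate_receiver_2d(tx, ranges)
```

The vanishing determinant for collinear stops is the 2D analogue of the planar ambiguity the paper must resolve when the transmitter moves on a single plane.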
The integrated spatio-temporal approach to real-time machine vision, which has allowed outstanding performance with moderate computing power, is extended to obstacle recognition and relative spatial state estimation using monocular vision. A modular vision system architecture centering around features and objects is discussed. Experimental results with VaMoRs, a 5-ton test vehicle, are given. Stopping in front of obstacles of at least 0.5 m² cross-section has been demonstrated on unmarked two-lane roads at velocities of up to 40 km/h.
In this paper, we address the problem of recovering 3D scene structure from omni-directional cylindrical panoramic images. In our application, the images are obtained from a single omni-directional camera mounted on top of a mobile robot. Depth can be estimated from a single pair of images by a stereo vision technique. However, depth cannot be estimated reliably in the robot’s heading direction. Moreover, depth estimates obtained from a single pair of images tend to be noisy. To overcome these problems we have developed a multi-baseline stereo vision method which uses more than two images obtained from multiple non-co-linear viewpoints. Depth data obtained from different pairs of images is fused in a probabilistic framework. We present our method and experimental results obtained using image data acquired by our robot. The results indicate that robust and reliable depth estimates can be obtained using the proposed method.
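The probabilistic fusion of depth estimates from different image pairs can be sketched as inverse-variance weighting, a common choice for combining independent Gaussian estimates; the paper's exact framework may differ, and the numbers below are made up.

```python
def fuse_depths(estimates):
    """Fuse independent (depth, variance) estimates by inverse-variance
    weighting; the fused variance is always below the smallest input."""
    weights = [1.0 / var for _, var in estimates]
    fused = sum(w * d for w, (d, _) in zip(weights, estimates)) / sum(weights)
    return fused, 1.0 / sum(weights)

# Assumed per-pair depth estimates (metres, variance) for one pixel.
fused_depth, fused_var = fuse_depths([(5.0, 0.5), (5.4, 1.0), (4.8, 0.25)])
```

This captures why adding baselines helps: noisy pairs (e.g. those nearly aligned with the heading direction) carry large variances and are automatically down-weighted.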
This paper presents the hardware design and gait generation of the humanoid soccer robot Stepper-3D. Virtual Slope Walking, inspired by Passive Dynamic Walking, is introduced for gait generation. In Virtual Slope Walking, by actively extending the stance leg and shortening the swing leg, the robot walks on level ground as if it were walking down a virtual slope. In practice, Virtual Slope Walking is generated by connecting three key frames in the sagittal plane with sinusoids. To improve walking stability, a parallel double-crank mechanism is adopted in the leg structure. Experimental results show that Stepper-3D achieves a fast forward walking speed of 0.5 m/s and accomplishes omnidirectional walking. Stepper-3D performed fast and stable walking in the RoboCup 2008 Humanoid competitions.
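Connecting key frames with sinusoids can be sketched for a single joint angle. The half-cosine blend below is one plausible reading of "connecting key frames with sinusoids", and the key-frame values and sample counts are arbitrary assumptions, not values from the paper.

```python
import math

def keyframe_gait(frames, steps_per_segment=50):
    """Connect successive key-frame joint angles with half-cosine blends,
    which start and end each segment with zero joint velocity."""
    trajectory = []
    for a, b in zip(frames, frames[1:]):
        for i in range(steps_per_segment):
            s = 0.5 - 0.5 * math.cos(math.pi * i / steps_per_segment)
            trajectory.append(a + (b - a) * s)
    trajectory.append(frames[-1])
    return trajectory

# Assumed key frames for one joint (radians); not values from the paper.
joint_traj = keyframe_gait([0.0, 0.4, -0.1])
```

The blend passes exactly through each key frame with zero velocity there, which keeps joint torques smooth at the frame transitions.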
A globally consistent solution to the simultaneous localization and mapping (SLAM) problem in 2D with three degrees of freedom (DoF) poses was presented by Lu and Milios [F. Lu, E. Milios, Globally consistent range scan alignment for environment mapping, Autonomous Robots 4 (April) (1997) 333–349]. To create maps suitable for natural environments it is however necessary to consider the 6DoF pose case, namely the three Cartesian coordinates and the roll, pitch and yaw angles. This article describes the extension of the proposed algorithm to deal with these additional DoFs and the resulting non-linearities. Simplifications using Taylor expansion and Cholesky decomposition yield a fast application that handles the massive amount of 3D data and the computational requirements due to the 6DoF. Our experiments demonstrate the functionality of estimating the exact poses and their covariances in all 6DoF, leading to a globally consistent map. The correspondences between scans are found automatically by use of a simple distance heuristic.
This paper presents a method for obtaining the location, size and shape of the main surfaces in an environment from points measured by a laser scanner on board a mobile robot. The most likely orientation of the surface normal is first calculated at every point from the points in an adaptive-radius neighboring region. Then, similar planes are merged under orientation and distance criteria. Finally, points are projected onto the planes they belong to, and the corresponding boundaries are obtained from the Voronoi diagram. A simple representation of the environment is thus obtained, which matches the real topology thanks to the detailed prior analysis of the local geometry at every point.
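Estimating the most likely normal at a point from its neighbors is classically done by taking the eigenvector of the smallest eigenvalue of the local covariance. A 2D closed-form sketch follows (the paper fits 3D planes; the sample neighborhood is an assumption):

```python
import math

def local_normal_2d(points):
    """Most likely normal for a neighborhood of 2D points: the eigenvector
    of the smaller eigenvalue of the 2x2 covariance, in closed form."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    # dominant (tangent) direction of the covariance, then rotate 90 deg
    tangent = 0.5 * math.atan2(2.0 * sxy, sxx - syy)
    return math.cos(tangent + math.pi / 2), math.sin(tangent + math.pi / 2)

# Assumed neighborhood: noiseless samples of the line y = x / 2.
normal = local_normal_2d([(0.0, 0.0), (1.0, 0.5), (2.0, 1.0), (3.0, 1.5)])
```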
This paper presents an innovative and practical strategy for automated leather-surface roughing, using structured-light 3D machine vision for object profile perception and NURBS interpolation for accurate and smooth trajectory generation. As high-pressure grit blasting is used for roughing, and considering the spatial constraints in the blasting chamber, an additional degree of freedom is introduced using a rotary table that supports the workpiece. Cooperative control is implemented between a 6-DOF robot and the rotary table to minimize robot movements while satisfying the requirements of variable velocity control, accurate trajectory tracking and orientation control. Experimental results showing consistent roughing performance demonstrate the efficiency of the proposed method.
We intend to build a vision system that will allow dynamic 3D perception of objects of interest. More specifically, we discuss the idea of using 3D visual cues when tracking a visual target in order to recover some of its 3D characteristics (depth, size, kinematic information). The basic requirements for such a 3D vision module to be embedded in a robotic head are discussed. The experimentation reported here corresponds to an implementation of these general ideas using a calibrated robotic head. We analyse how to make use of such a system for (1) detecting 3D objects of interest, (2) recovering the average depth and size of the tracked objects, and (3) fixating and tracking such objects to facilitate their observation.
This paper provides a genetic-algorithm-based approach to calculating the optimal placement of receivers in a novel 3D position estimation system that uses a single transmitter and multiple receivers. The novelty of the system is the use of the differences in the times of arrival (TOAs) of an ultrasonic wave from the transmitter to the different receivers fixed in 3D space. This differs from traditional systems, which use the actual times of flight (TOFs) from the transmitter to the different receivers and triangulate the position of the transmitter. The new approach makes the system more accurate, makes the transmitter independent of the receivers, and eliminates the need to calculate the time-delay term that is inherent in traditional systems due to delays caused by the electronic circuitry. This paper presents a thorough analysis of receiver configurations in the 2D and 3D systems that lead to singularities, i.e. locations of receivers that lead to formulations that cannot be solved due to a shortage of information. It provides guidelines on where not to place receivers so as to obtain a robust system and, further, presents a detailed analysis of locations that are optimal, i.e. locations that lead to the most accurate estimation of the transmitter positions. The results presented in this paper are applicable not only to ultrasonic systems but to all systems that use wave propagation, e.g. infrared, laser, etc. This work finds applications in virtual-reality cells, robotics, guidance of indoor autonomous vehicles and vibration analysis.
This paper describes a technique for constructing a geometric model of an unknown environment based on data acquired by a laser range finder on board a mobile robot. The geometric model is useful for both navigation and verification purposes. The paper presents all the steps needed for the description of the environment, including range image acquisition and processing, 3D surface reconstruction, and the problem of merging multiple images in order to obtain a complete model.
The problem considered in this paper involves the design of a vision-based autopilot for small and micro Unmanned Aerial Vehicles (UAVs). The proposed autopilot is based on an optic-flow-based vision system for autonomous localization and scene mapping, and a nonlinear control system for flight control and guidance. This paper focuses on the development of a real-time 3D vision algorithm for estimating optic flow, aircraft self-motion and a depth map, using a low-resolution onboard camera and a low-cost Inertial Measurement Unit (IMU). Our implementation is based on 3 Nested Kalman Filters (3NKF) and results in an efficient and robust estimation process. The vision and control algorithms have been implemented on a quadrotor UAV and demonstrated in real-time flight tests. Experimental results show that the proposed vision-based autopilot enabled a small rotorcraft to achieve fully autonomous flight using information extracted from optic flow.
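The brightness-constancy principle behind optic flow estimation can be sketched in one dimension (the paper's 3NKF algorithm is far more involved; this least-squares patch estimate and its synthetic ramp images are purely illustrative):

```python
def flow_1d(frame1, frame2):
    """Least-squares brightness-constancy flow over a 1D patch:
    u = -sum(Ix * It) / sum(Ix * Ix), with central-difference Ix."""
    num = den = 0.0
    for i in range(1, len(frame1) - 1):
        ix = (frame1[i + 1] - frame1[i - 1]) / 2.0  # spatial gradient
        it = frame2[i] - frame1[i]                   # temporal difference
        num += ix * it
        den += ix * ix
    return -num / den

# Synthetic linear ramp shifted by 0.8 pixels between frames (assumed data).
frame1 = [0.5 * i for i in range(12)]
frame2 = [0.5 * (i - 0.8) for i in range(12)]
u = flow_1d(frame1, frame2)
```

On a linear ramp the estimate is exact; on real low-resolution imagery the same least-squares structure is what a Kalman filter would refine over time.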